AI cybersecurity Oracle [shutterstock: 1500238457, Yurchanka Siarhei]
Press Release | Security

Can AI-Based Cybersecurity Render Human Error Extinct?

Numbed by yet another year of data breaches – 2,013 in 2019 alone, by one estimate – it is no surprise that researchers widely acknowledge human error as a top source of cyber insecurity. C-suite executives and policy makers called “human error” their top cybersecurity risk in Oracle’s Security in the Age of AI report.

Without change, the scale and frequency of data breaches will continue to increase. To reverse the trend, we must better understand exactly how people contribute to cybersecurity risk, and whether and how AI can mitigate it.

How human error creates cybersecurity problems

In December 2019, Oracle surveyed 300 enterprise-level IT decision makers nationwide to examine concerns around cybersecurity posture and potential solutions. The survey evaluated the state of cybersecurity preparedness, and trust in employing AI to address security gaps, at organizations with 500 or more employees. It surfaced key red flags, from patching protocols to workforce constraints, that create a cycle of vulnerability and error.

The results demonstrate that the human contribution to IT vulnerability remains undeniable. Most respondents (58 percent) cited human error as a leading reason for IT vulnerabilities. Yet what is often characterized as a “technical” factor – insufficient patching – was judged just as important as human error, with 57 percent of those surveyed citing it as a leading reason for vulnerability.


But insufficient patching – meaning that patches are not applied frequently enough, consistently enough, or both – is a factor firmly rooted in human behavior.

The problem of human error may be even bigger than these statistics suggest. Digging deeper, the survey found that nearly half (49 percent) of respondents cited available workforce as a constraint on patching; 62 percent said they would apply a wider variety of patches more frequently if they had additional personnel to do so. When asked about major contributors to failed patches, 84 percent of respondents pointed to human error.

Not surprisingly, a separate question found that respondents trusted machines more than people to keep organizational IT systems safe.

A role for AI

Enterprises will never be able to hire enough humans to fix problems created by human error. Existing methods that companies and agencies employ leave considerable room for error, with 25 percent of all security patches deployed by organizations failing to reach every device.

With U.S.-based CEOs citing cybersecurity as the number one external concern for their businesses in 2019, the simple act of patching – and failing at it – remains a huge unresolved risk. Why should humans have to fix this problem alone, when autonomous capabilities are available to improve IT security?

Some vendors may blame customers for incorrectly configuring computing resources. Oracle, by contrast, believes customers deserve more and should have the option to do less, which is why it uses autonomous technologies to secure customer systems.

With the cybersecurity workforce gap growing, deploying AI is the best way to supplement human abilities and protect systems more efficiently. Enterprises that adopt autonomous technologies broadly and quickly will help eliminate an entire category of cyber risk – patching – and break the pattern of human-error-induced insecurity for good.

