The vast potential of applying AI to counteract cybercrime 

For many businesses, 2023 was the year of artificial intelligence. ChatGPT became a dinner-table term, billions of dollars were poured into AI ventures and, unfortunately, criminal groups adopted AI as a tool for more frequent and more damaging cyberattacks. That last development cuts both ways, however: the versatility of AI also makes it an asset to cybersecurity, and organizations are increasingly finding ways to strengthen existing methods of cyber defense and to create new ones.

What is AI?

Put in very simple terms, an AI is a computer system that can handle problems without direct human direction, ranging from simple text prediction to drafting essays or generating images. Many AI are “trained” on a dataset, usually with a set goal: the AI makes determinations, humans review and correct those determinations, additional data is fed back in, and the cycle repeats until the AI’s determinations fall within an acceptable range. More recently, advances in machine learning and artificial neural networks have enabled explosive growth in AI capabilities.
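As a rough illustration of that train-review-retrain loop, the sketch below fits a tiny text classifier with the open-source scikit-learn library. The “suspicious message” framing and all of the data are made-up stand-ins, not a real system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hand-labeled examples: 1 = suspicious, 0 = benign (the human-provided goal)
messages = [
    "win a free prize now", "meeting moved to 3pm",
    "claim your reward today", "lunch on thursday?",
]
labels = [1, 0, 1, 0]

X = TfidfVectorizer().fit_transform(messages)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # "training"
accuracy = accuracy_score(y_test, model.predict(X_test))  # humans review results

# If accuracy is unacceptable, add more labeled data and repeat:
# that feedback loop is the "train, review, revise" cycle described above.
print(f"accuracy on held-out examples: {accuracy:.2f}")
```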

There are various levels of AI. AI with a single or limited goal are often called “narrow AI”: for example, a chatbot that can answer certain common questions to free up human users to handle more complex inquiries. AI with a broader task, a “general AI,” may have a variety of goals or tasks that each require multiple determinations, for example modeling climate change. And a few AI are given relative freedom, leading to the famous instance of an early Google AI training itself to recognize cats.

AI in cybersecurity

One of AI’s strengths is pattern recognition, a crucial aspect of cybersecurity. An AI trained on sufficient data sets learns what belongs in a network environment and can augment the industry-standard practice of passively monitoring systems for suspicious files or behavior, often called threat trapping. The same approach can be applied to other strategies, for example examining user behavior, file characteristics or log data for aberrant results.
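A minimal sketch of that idea uses scikit-learn’s IsolationForest to learn a baseline of “normal” activity and flag outliers. The traffic features and numbers are illustrative assumptions, not a real telemetry schema.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes sent, connections per minute, distinct ports touched]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5_000, 20, 3], scale=[500, 4, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)  # learn what "belongs" in the environment

new_events = np.array([
    [5_100, 22, 3],      # looks like routine activity
    [90_000, 300, 45],   # huge transfer over many ports: does not fit baseline
])
print(model.predict(new_events))  # 1 = consistent with baseline, -1 = anomaly
```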

As threat actors become sneakier and expand their methods, however, passive monitoring loses some of its effectiveness. AI trained to actively seek out problems, known as threat hunting, can look for known malware and other threat actor tools; this type of AI threat hunting already extends to email security, sophisticated login requirements and active threat mitigation. Alternatively, an AI can be trained to recognize only normal behavior in an environment, so that it flags bad actor activity simply because the activity does not display known good tendencies, even if the target files are passive or inactive at the time of detection, such as reconnaissance tools or relabeled malware uploaded for later exploitation.
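One low-tech piece of that approach can be shown in a short sketch: comparing file contents, rather than file names, against known-bad and known-good hash sets, so relabeled malware is still caught. The hashes and directory path below are placeholders, not real threat intelligence.

```python
import hashlib
from pathlib import Path

# Placeholders: a real deployment would load these from a threat-intelligence
# feed and a software allowlist, respectively.
KNOWN_BAD = {"0" * 64}
KNOWN_GOOD = {"1" * 64}

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

for path in Path("/srv/uploads").rglob("*"):  # assumed directory to sweep
    if not path.is_file():
        continue
    digest = sha256(path)
    if digest in KNOWN_BAD:
        print(f"known malware: {path}")  # hunting for known threat-actor tools
    elif digest not in KNOWN_GOOD:
        # Hashing ignores file names, so relabeled malware stays unrecognized
        print(f"not on the known-good list: {path}")
```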

Another strength for AI may be at the authentication stage. Single login is out and multifactor authentication (MFA) is in, but with AI it may be possible to create a constantly validating system in which credentials must hold up to frequent checks. While a legitimate login would trigger no alerts, an AI would be more likely to detect MFA bypasses or brute-force attacks by threat actors.
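A minimal sketch of the constantly validating idea, assuming a made-up session format and arbitrary risk thresholds: every request is scored against the user’s usual profile, and only unusual activity triggers a re-challenge.

```python
from dataclasses import dataclass

@dataclass
class SessionEvent:
    user: str
    country: str
    device_id: str
    failed_mfa_attempts: int

# Assumed per-user baseline; a real system would learn this over time.
USUAL_PROFILE = {"alice": {"country": "US", "device_id": "laptop-7"}}

def risk_score(event: SessionEvent) -> int:
    profile = USUAL_PROFILE.get(event.user, {})
    score = 0
    if event.country != profile.get("country"):
        score += 2                        # unfamiliar geography
    if event.device_id != profile.get("device_id"):
        score += 2                        # unfamiliar device
    return score + event.failed_mfa_attempts  # repeated MFA failures raise risk

# A normal login scores 0 and triggers nothing; this one gets re-challenged.
event = SessionEvent("alice", "RO", "unknown-device", failed_mfa_attempts=3)
if risk_score(event) >= 4:
    print("re-challenge or terminate the session")
```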

The potential of AI is seemingly unlimited. AI is systematic, does not get tired or distracted, and can streamline necessary but routine tasks, like finding missing patches, old passwords or misconfigured settings. Cybercriminals have applied AI in various ways, including generative malware and deepfakes, but those same principles may be mirrored to enhance cybersecurity in the near future. With the notoriety of this year’s major hacks, next-generation, AI-augmented cybersecurity is of great interest.
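As one small example of automating such a routine task, the sketch below flags accounts with stale passwords. The record format and the 90-day threshold are illustrative assumptions, not any particular directory schema or policy.

```python
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # illustrative policy threshold

accounts = [
    {"user": "alice", "last_password_change": date(2023, 11, 2)},
    {"user": "bob",   "last_password_change": date(2022, 1, 15)},
]

today = date(2023, 12, 1)
for account in accounts:
    age = today - account["last_password_change"]
    if age > MAX_PASSWORD_AGE:
        print(f"{account['user']}: password is {age.days} days old; rotate it")
```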

Will AI fix everything?

Not so much. Like any other tool, AI has limits. Human expectations for AI are lofty and often ill-informed, and many people simply do not understand how nascent much of the AI industry still is. Most AI are very limited in scope, built on limited data sets and still requiring regular human monitoring. Even an AI that works perfectly is designed for specific tasks and typically cannot perform tasks beyond its programming. Unlike people, most AI cannot be flexible or “think” creatively; they only do what they were set up to do.

Fundamentally, an AI is heavily dependent on the data sets it was trained on: bad or incomplete data will result in a flawed AI. Even AI that work well are expensive and time-consuming to create and require substantial processing power. Data sets must be updated frequently, sometimes constantly, so they do not become obsolete. The data sets themselves must be acquired, put into an acceptable format, checked for inherent bias or "poisoning," and then monitored to make sure the AI does not develop unintended rules from them.
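A minimal sketch of one such pre-training check, looking for duplicate records and label skew before a model ever sees the data; the records and thresholds are made up for illustration.

```python
from collections import Counter

# Made-up training records: (event description, label)
records = [
    ("login from US", 0), ("login from US", 0),
    ("login from TOR exit node", 1), ("login from US", 0),
]

duplicates = len(records) - len(set(records))
label_counts = Counter(label for _, label in records)
minority_share = min(label_counts.values()) / len(records)

print(f"exact duplicate rows: {duplicates}")
if minority_share < 0.10:
    print("label skew: the model may learn to ignore the rare class")
```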

With all of that said, the potential for AI in cyber defense is very promising, and we have certainly seen only the tip of the iceberg.

If you have questions about how AI might impact your company’s cybersecurity, think you might have experienced a cybersecurity incident, or if you want to learn more about steps to take for a proactive cybersecurity defense, contact a member of McDonald Hopkins' national cybersecurity and data privacy team.
