Artificial intelligence has been an integral part of the cybersecurity industry for several years now. However, the widespread public adoption of Large Language Models (LLMs) that took place in 2023 has brought new and unexpected changes to the security landscape.
LLMs like OpenAI’s ChatGPT, Google’s Bard, and others have opened new capabilities — and new threats — across the global economy. Security leaders in every sector and industry will need to change their approach to accommodate this development.
It’s almost certain that new AI-powered tools will increase the volume and impact of cyberattacks over the next few years. However, they will also enhance the capabilities of cybersecurity leaders and product experts. Lumifi’s Research and Development team uses the latest AI tools to refine our MDR capabilities every day.
These developments will likely occur at an uneven pace, typical of a global arms race. Cybercriminals may gain a temporary advantage at some point, only to be subdued by new cybersecurity deployments, and then the cycle will repeat.
This volatile environment should inspire cybersecurity professionals to increase their AI proficiency. Individuals with broad experience, product expertise, and a successful track record will be highly sought after in the industry.
LLMs enable anyone to process large amounts of information, democratizing the ability to leverage AI. This offers significant advantages to people and organizations who want to improve the efficiency, intelligence, and scalability of data-centric workflows.
When the cybersecurity industry was dominated by hardware products, security leaders only changed products when the next version of their preferred hardware was available. Now, AI-powered software can update itself according to each individual use case, requiring security teams to continuously evaluate LLM systems for safety and compliance.
Let’s look more closely at each use case and how it’s likely to evolve as AI technology advances.
There are two major advantages to leveraging LLM capabilities in cybersecurity.
These two benefits will certainly improve over time and lead to new AI capabilities for security teams. SOC analysts may soon be able to read thousands of incident response playbooks at once and identify security gaps and inconsistencies in near real-time.
This will require the creation of a domain-specific cybersecurity LLM capable of contextualizing incident response playbooks at the organizational level. AI-powered SIEM platforms like Exabeam already provide in-depth behavioral analytics for users and assets, and in time we’ll see similar capabilities expanding into threat response and recovery workflows as well.
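To make the idea concrete, here is a minimal sketch of what an LLM-assisted playbook review might look like. It assumes the OpenAI Python SDK, a local playbooks/ directory of Markdown files, and an illustrative model name and prompt; it is not a description of Exabeam’s or Lumifi’s actual tooling.

```python
# Minimal sketch: asking a general-purpose LLM to flag gaps in incident
# response playbooks. The model name, prompt wording, and playbooks/
# directory are assumptions for illustration, not any vendor's workflow.
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "You are reviewing an incident response playbook. "
    "List any missing steps, unclear escalation paths, or inconsistencies "
    "with common frameworks such as NIST SP 800-61. Be concise."
)


def review_playbook(path: Path) -> str:
    """Send one playbook to the model and return its gap analysis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": path.read_text(encoding="utf-8")},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for playbook in sorted(Path("playbooks").glob("*.md")):
        print(f"=== {playbook.name} ===")
        print(review_playbook(playbook))
```

In practice, a domain-specific model would also need organizational context, such as asset inventories and historical incident data, rather than a single generic prompt.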
LLMs are invaluable for threat actors, especially when it comes to gaining initial access to their victims’ assets. By practically eliminating language, cultural, and technical barriers to communication, they’ve made it much harder for people to reliably flag suspicious content.
Cybercriminals are already using AI to enhance and automate operations in four key areas:
According to one report, phishing attacks have surged more than 1200% since ChatGPT was first released in November 2022. Credential phishing attacks have risen by an astonishing 967% in the same time frame.
It’s no secret that influential tech leaders and investors are pouring significant resources into AI. Some thought leaders warn that the emerging technology will change every aspect of our lives — going so far as to say we’re charging headfirst into an AI apocalypse fueled by the development of Artificial General Intelligence (AGI).
AI may be new, but exaggerating the danger of disruptive technology is a familiar cycle. Plato was famously skeptical of writing, and 16th-century Europeans destroyed printing presses out of fear. It’s normal to be anxious about new technology.
Like writing, printing, and every other technology before it, artificial intelligence has limitations. Security leaders who understand those limitations will be able to navigate the challenges of a society increasingly reliant on AI-powered technologies.
Many tech leaders treat contextualization as an engineering problem and believe that LLMs will eventually contextualize information with human-like accuracy.
This may not be true. We still don’t know how the human brain contextualizes information and articulates it into language. Contextualizing insight by combining data with real-world experience remains a task best-suited to human experts.
1. AI-powered workflows are resource-intensive
According to the International Energy Agency, training a single AI model uses more electricity than 100 US homes consume in a year. A typical ChatGPT query consumes 2.9 watt-hours of electricity — about the same amount of energy stored in a typical AA battery.
By comparison, the human brain consumes about 300 watt-hours of energy per day, meaning roughly 100 ChatGPT queries use as much energy as the brain does in an entire day. Yet the brain accomplishes significantly more in that time than even the most efficient LLMs.
This suggests that there’s more to improving neural network performance than simply adding more nodes and introducing more parameters. It also places an upper limit on the feasibility of increasingly energy-intensive AI processes. At some point, the costs will outweigh the benefits.
2. AI models have difficulty contradicting consensus
AI models operate on consensus learned from their training data. If the patterns in that data overwhelmingly point to a particular response, the model will declare it with confidence. If the training data is inaccurate, the answer will be too.
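A toy example shows the failure mode. The snippet below is not an LLM; it is a trivial frequency-based predictor with an invented training set, but it illustrates how any model that mirrors its training distribution will confidently repeat the majority view, whether or not that view is correct.

```python
# Toy illustration of "consensus in, consensus out": a trivial count-based
# predictor (a stand-in for a learned distribution) confidently repeats
# whatever the majority of its training data says, true or not.
from collections import Counter

# Hypothetical training snippets: 9 of 10 assert the same (wrong) claim.
training_answers = ["exposing port 3389 is safe"] * 9 + ["exposing port 3389 is risky"]

counts = Counter(training_answers)
answer, votes = counts.most_common(1)[0]
confidence = votes / sum(counts.values())

print(f"Model answer: {answer!r} (confidence {confidence:.0%})")
# -> Model answer: 'exposing port 3389 is safe' (confidence 90%)
# The prediction mirrors the training consensus, not ground truth.
```

Real LLMs are vastly more sophisticated than this, but their dependence on the consensus embedded in their training data remains.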
When it comes to pure facts, overcoming this limitation may be technically feasible. But when it comes to opinions, values, and judgements, AI-powered tools are not equipped to offer anything but the most basic responses.
This means that even highly advanced future AI tools may not be able to make convincing arguments against popular consensus. It’s easy to see how this can lead to severe security consequences, especially in cases where popular wisdom turns out to be wrong.
3. You can’t credit (or blame) AI models for the decisions they make
AI ethics remains a challenging issue for technology experts, cognitive scientists, and philosophers alike. This problem is deeply connected to our lack of understanding of human consciousness and agency.
Currently, there is no real consensus about the moral status of artificially intelligent algorithms. This makes it impossible to attribute moral decisions to AI-powered tools or claim they know the difference between “right” and “wrong”.
We can’t treat AI algorithms as moral agents without also attributing some form of “personhood” to them. Most people strongly doubt that LLMs like ChatGPT are “people” in that sense, which means someone else must take responsibility for the decisions that AI algorithms make — including their mistakes.
Security leaders are beginning to distinguish between generative AI and predictive AI. While people are understandably excited about generative AI, the true information security workhorse is predictive AI, which is a must-have technology in today’s security operations center environment.
As the stakes of AI-powered cybercrime get higher, leaders will become increasingly risk averse. Few executives or stakeholders will be willing to risk their livelihoods on unproven security solutions and vendors.
In this scenario, security leaders who entrust their detection and response workflows to reputable product experts with proven track records will be rewarded. If your detection and response provider doesn’t leverage proven AI expertise in its blue team operations, it will eventually fall behind.
Positive security incident outcomes may become more difficult to achieve, but delivering them consistently will be crucial. Learn more about how Lumifi achieves this critical goal by combining AI-enriched data with human expertise and best-in-class automation. Secure your spot for our webinar, Unveiling ShieldVision's Future & New Series of Enhancements, taking place on February 14th.
Lumifi is a managed detection and response vendor with years of experience driving consistent results with the world’s most sophisticated AI technologies. Find out how we combine AI-enhanced automation with human expertise through our ShieldVision™ SOC automation service.
We’ve expanded our MDR capabilities with enhanced incident response and security services to better protect against evolving cyber threats.