Large Language Models (LLMs) trained on vast quantities of data can make security operations teams smarter. LLMs provide in-line suggestions and guidance on response, audits, posture management, and more. Most security teams are experimenting with or using LLMs to reduce manual toil in workflows. This can be both for mundane and complex tasks.
For example, an LLM can query an employee via email if they meant to share a document that was proprietary and process the response with a recommendation for a security practitioner. An LLM can also be tasked with translating requests to look for supply chain attacks on open source modules and spinning up agents focused on specific conditions — new contributors to widely used libraries, improper code patterns — with each agent primed for that specific condition.
That said, these powerful AI systems bear significant risks that are unlike other risks facing security teams. Models powering security LLMs can be compromised through prompt injection or data poisoning. Continuous feedback loops and machine learning algorithms without sufficient human guidance can allow bad actors to probe controls and then induce poorly targeted responses. LLMs are prone to hallucinations, even in limited domains. Even the best LLMs make things up when they don’t know the answer.
Security processes and AI policies around LLM use and workflows will become more critical as these systems become more common across cybersecurity operations and research. Making sure those processes are complied with, and are measured and accounted for in governance systems, will prove crucial to ensuring that CISOs can provide sufficient GRC (Governance, Risk and Compliance) coverage to meet new mandates like the Cybersecurity Framework 2.0.
The Huge Promise of LLMs in Cybersecurity
CISOs and their teams constantly struggle to keep up with the rising tide of new cyberattacks. According to Qualys, the number of CVEs reported in 2023 hit a new record of 26,447. That’s up more than 5X from 2013.
This challenge has only become more taxing as the attack surface of the average organization grows larger with each passing year. AppSec teams must secure and monitor many more software applications. Cloud computing, APIs, multi-cloud and virtualization technologies have added additional complexity. With modern CI/CD tooling and processes, application teams can ship more code, faster, and more frequently. Microservices have both splintered monolithic app into numerous APIs and attack surface and also punched many more holes in global firewalls for communication with external services or customer devices.
Advanced LLMs hold tremendous promise to reduce the workload of cybersecurity teams and to improve their capabilities. AI-powered coding tools have widely penetrated software development. Github research found that 92% of developers are using or have used AI tools for code suggestion and completion. Most of these “copilot” tools have some security capabilities. In fact, programmatic disciplines with relatively binary outcomes such as coding (code will either pass or fail unit tests) are well suited for LLMs. Beyond code scanning for software development and in the CI/CD pipeline, AI could be valuable for cybersecurity teams in several other ways:
- Enhanced Analysis: LLMs can process massive amounts of security data (logs, alerts, threat intelligence) to identify patterns and correlations invisible to humans. They can do this across languages, around the clock, and across numerous dimensions simultaneously. This opens new opportunities for security teams. LLMs can burn down a stack of alerts in near real-time, flagging the ones that are most likely to be severe. Through reinforcement learning, the analysis should improve over time.
- Automation: LLMs can automate security team tasks that normally require conversational back and forth. For example, when a security team receives an IoC and needs to ask the owner of an endpoint if they had in fact signed into a device or if they are located somewhere outside their normal work zones, the LLM can perform these simple operations and then follow up with questions as required and links or instructions. This used to be an interaction that an IT or security team member had to conduct themselves. LLMs can also provide more advanced functionality. For example, a Microsoft Copilot for Security can generate incident analysis reports and translate complex malware code into natural language descriptions.
- Continuous Learning and Tuning: Unlike previous machine learning systems for security policies and comprehension, LLMs can learn on the fly by ingesting human ratings of its response and by retuning on newer pools of data that may not be contained in internal log files. In fact, using the same underlying foundational model, cybersecurity LLMs can be tuned for different teams and their needs, workflows, or regional or vertical-specific tasks. This also means that the entire system can instantly be as smart as the model, with changes propagating quickly across all interfaces.
Risk of LLMs for Cybersecurity
As a new technology with a short track record, LLMs have serious risks. Worse, understanding the full extent of those risks is challenging because LLM outputs are not 100% predictable or programmatic. For example, LLMs can “hallucinate” and make up answers or answer questions incorrectly, based on imaginary data. Before adopting LLMs for cybersecurity use cases, one must consider potential risks including:
- Prompt Injection: Attackers can craft malicious prompts specifically to produce misleading or harmful outputs. This type of attack can exploit the LLM’s tendency to generate content based on the prompts it receives. In cybersecurity use cases, prompt injection might be most risky as a form of insider attack or attack by an unauthorized user who uses prompts to permanently alter system outputs by skewing model behavior. This could generate inaccurate or invalid outputs for other users of the system.
- Data Poisoning: The training data LLMs rely on can be intentionally corrupted, compromising their decision-making. In cybersecurity settings, where organizations are likely using models trained by tool providers, data poisoning might occur during the tuning of the model for the specific customer and use case. The risk here could be an unauthorized user adding bad data — for example, corrupted log files — to subvert the training process. An authorized user could also do this inadvertently. The result would be LLM outputs based on bad data.
- Hallucinations: As mentioned previously, LLMs may generate factually incorrect, illogical, or even malicious responses due to misunderstandings of prompts or underlying data flaws. In cybersecurity use cases, hallucinations can result in critical errors that cripple threat intelligence, vulnerability triage and remediation, and more. Because cybersecurity is a mission critical activity, LLMs must be held to a higher standard of managing and preventing hallucinations in these contexts.
As AI systems become more capable, their information security deployments are expanding rapidly. To be clear, many cybersecurity companies have long used pattern matching and machine learning for dynamic filtering. What is new in the generative AI era are interactive LLMs that provide a layer of intelligence atop existing workflows and pools of data, ideally improving the efficiency and enhancing the capabilities of cybersecurity teams. In other words, GenAI can help security engineers do more with less effort and the same resources, yielding better performance and accelerated processes.