Published: 11/06/2025
NATIONAL HARBOR, Md. — Artificial intelligence is poised to transform the work of security operations centers (SOCs), but experts emphasize that human involvement will always be essential. While AI agents can automate many repetitive and complex SOC tasks, they have significant limitations, such as an inability to replicate unique human knowledge or understand bespoke network configurations, according to experts at the Gartner Security and Risk Management Summit.
The promise of AI dominated this year’s Gartner conference, where experts discussed how the technology could make cyber defenders’ jobs much easier, despite its current limitations. “As the speed, the sophistication, [and] the scale of the attacks [go] up, we can use agentic AI to help us tackle those challenges,” Hammad Rajjoub, director of technical product marketing at Microsoft, said during his presentation. “What’s better to defend at machine speed than AI itself?”
A Silent Partner
AI can already assist SOC staffers with several important tasks. Pete Shoard, a vice president analyst at Gartner, explained that AI can help locate information by automating complex search queries, write code without requiring deep knowledge of a programming or query language, and summarize incident reports for non-technical executives. However, automating these activities carries risks if mishandled. SOCs should vet AI-written code with robust testing processes and verify that AI-generated summaries are accurate to avoid sending misinformation to decision-makers.
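To make that review step concrete, here is a minimal Python sketch of what a testing gate for AI-written code might look like. The function name and the environment-variable convention are hypothetical illustrations, not something any vendor demonstrated at the summit:

```python
import os
import subprocess
import tempfile
from pathlib import Path

def gate_ai_code(generated_code: str, test_dir: Path) -> bool:
    """Accept AI-written code only if the SOC's own test suite passes.

    Hypothetical quality gate: the generated module is written to a
    temporary file, and the team's existing pytest suite (which imports
    it via the AI_MODULE_PATH environment variable) must pass before
    the code is allowed anywhere near production.
    """
    with tempfile.TemporaryDirectory() as tmp:
        module = Path(tmp) / "ai_generated.py"
        module.write_text(generated_code)
        env = dict(os.environ, AI_MODULE_PATH=str(module))
        result = subprocess.run(["pytest", str(test_dir)], env=env,
                                capture_output=True)
    return result.returncode == 0  # any failing test rejects the code
```

The point is not the specific tooling but the shape of the workflow: the machine writes the code, and a human-authored test suite decides whether it ships.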
In the future, AI might even automate the investigation and remediation of intrusions. Anton Chuvakin, a senior staff security consultant in the Office of the CISO at Google Cloud, noted that most AI SOC startups currently focus on using AI to analyze alerts and reduce the cognitive burden on humans. “This is very worthwhile,” he added, but “it's also a very narrow take on the problem.” In the far future, he envisions machines that can remediate and resolve certain issues autonomously.
Some IT professionals might be wary of letting AI loose on their customized computer systems, but they should prepare for a future where AI plays a more significant role. “Imagine a future where you have an agent that's working on your behalf, and it's able to protect and defend even before an attack becomes possible in your environment,” Rajjoub said during his presentation. He predicted that within six months, AI agents will be able to reason on their own and automatically deploy various tools on a network to achieve their human operators’ specified goals. Within a year and a half, these agents will be able to improve and modify themselves in pursuit of those goals. And within two years, they will be able to modify the specific instructions they’ve been given to achieve broader goals. “It's not two, three, four, five, six years from now,” he said. “We're literally talking about weeks and months.”
Limitations and Risks
As AI agents take on more tasks, monitoring them will become more complicated. “Do we really think our employees can keep up with the pace of how agents are being built?” Dennis Xu, a research vice president at Gartner, asked. “It’s likely that we are never going to be able to catch up.” He proposed using agents to monitor other agents, although this is further out on the time horizon.
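Xu's agents-watching-agents idea can be pictured as a thin supervisory layer. In this hedged Python sketch (the class and action names are invented for illustration), a second agent reviews each action a worker agent proposes and escalates anything outside an allow-list to a human queue:

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    agent_id: str   # which worker agent proposed this
    action: str     # e.g. "open_ticket", "isolate_host"
    target: str     # asset or account the action touches

@dataclass
class SupervisorAgent:
    """Illustrative monitor agent that reviews other agents' proposals."""
    allowed: set = field(default_factory=lambda: {"open_ticket", "enrich_alert"})
    flagged: list = field(default_factory=list)

    def review(self, proposal: AgentAction) -> bool:
        """Approve routine actions; escalate everything else to humans."""
        if proposal.action in self.allowed:
            return True
        self.flagged.append(proposal)  # lands in a human review queue
        return False

# Usage: a triage agent tries to isolate a host; the supervisor blocks it.
supervisor = SupervisorAgent()
approved = supervisor.review(AgentAction("triage-01", "isolate_host", "db-07"))
# approved is False; the proposal now waits in supervisor.flagged
```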
Many analysts urged caution in deploying AI in the SOC. Chuvakin described several categories of tasks: some are “plausible but risky,” while others he would “flat-out refuse” to believe AI could accomplish in the near- to medium-term future. In the risky category, Chuvakin listed autonomous tasks like patching legacy systems, responding to intrusions, and attesting to regulatory compliance. “I've seen people who use consumer-grade ChatGPT to fill [out] compliance questionnaires,” he said. “I wish them all the luck in the world.” Tasks that Chuvakin said he can’t imagine AI accomplishing anytime soon include strategic risk analysis, crisis communications, and threat hunting against top-tier nation-state adversaries. “Fighting advanced hacker groups is a human task,” he said, “because ultimately, as of today, humans still outsmart machines.”
Gartner’s Shoard noted that using AI to create tabletop exercises could make staffers overly dependent on AI to warn them about evolving threats, while using AI to create threat detection queries might diminish employees’ investigative skills. “You're going to end up with underdeveloped staff,” he said, “staff that over-depend on things like AI.”
Preserving ‘Tribal Knowledge’
AI will never replace humans in a SOC, multiple experts said, because human judgment is an essential part of analyzing and responding to security incidents. “A lot of things we do in a real SOC … involve things that are tribal knowledge,” Chuvakin said, referring to practices that aren’t formally documented. AI will struggle to perform these activities. Chuvakin has seen many models recommend actions that make no sense for the specific networks in which they’re operating. In particular, AI still can’t write threat-detection rules tailored to highly customized legacy IT environments “because of all the peculiarities” in how they’re set up.
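The tribal-knowledge problem is easy to see in a detection rule. In this hedged sketch (the account name and schedule are invented), a perfectly reasonable generic rule would fire every night unless someone encodes an undocumented local quirk that no off-the-shelf model could know:

```python
# Generic rule: flag interactive logins by service accounts outside
# business hours. Tribal knowledge (hypothetical): the legacy
# "svc_tapebkp" account legitimately logs in at 02:00 for a backup
# job that was never documented anywhere an AI could read.
KNOWN_QUIRKS = {"svc_tapebkp"}

def is_suspicious(event: dict) -> bool:
    after_hours = event["hour"] < 6 or event["hour"] >= 20
    service_account = event["user"].startswith("svc_")
    return after_hours and service_account and event["user"] not in KNOWN_QUIRKS
```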
Chuvakin urged companies to ask startups how their AI solutions address these unique challenges. Still, AI can augment SOC analysts’ skills and capabilities. Shoard called it “a massive force multiplier” for a SOC workforce, but he warned companies not to rely on it too heavily. “If you think you can sack your SOC staff just because you've suddenly bought an AI function, I think you're going to be soundly disappointed,” he said. “AI won't replace your security staff, so use it to enhance them [and] make them better in their jobs.”
In AI We (Need to) Trust
In the SOC of the future, humans won’t just work alongside AI agents; they’ll also need to monitor those agents. “We don't want complete autonomy,” said TIAA CISO Upendra Mardikar. “We have to have a human in the loop.” Those humans will need to ensure that AI agents’ actions are auditable and controlled by company policies. Jose Veitia, director of information security at Red Ventures, said businesses should “make sure all the actions are validated.”
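What “validated” and “auditable” might mean in practice can be sketched in a few lines of Python. Everything here is illustrative: the action names, the log path, and the approval callback are assumptions, not anyone's shipping product:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("agent_audit.jsonl")                    # append-only action trail
REQUIRES_APPROVAL = {"isolate_host", "disable_account"}  # destructive actions

def execute_with_oversight(action: str, target: str, human_approves) -> bool:
    """Run an agent action under human-in-the-loop controls.

    Every proposed action is written to an audit log; actions on the
    destructive list execute only if a human approval callback says yes.
    """
    entry = {"ts": time.time(), "action": action, "target": target}
    if action in REQUIRES_APPROVAL and not human_approves(entry):
        entry["status"] = "blocked"
    else:
        entry["status"] = "executed"
        # ... the agent's actual action would run here ...
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")  # auditable trail of every action
    return entry["status"] == "executed"
```

A real deployment would tie the approval callback into a ticketing or chat workflow; the sketch simply shows where policy, audit, and the human sit relative to the agent.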
Designing an automated system requires feeding it the right data. “If we allow a machine simply to make the decisions for us,” Gartner’s Shoard said, “then we've got to trust that it has all of the relevant information to make that decision effectively.” Trust and verification were common themes in AI discussions throughout the Gartner conference this week. “Trust has to be the fabric on which these agents are built,” Rajjoub said. “The more prevalent and capable the agents become, the more critical their security becomes for all of us.”
But as AI agents become more capable, their value in the SOC could increase significantly. “Unfortunately, AI isn't magic. I don't think it ever will be,” Shoard said. “But it is going to improve things for us in the SOC. You should consider it with great care, but consider it experimentally and use it.”
Q: What is the role of AI in security operations centers (SOCs)?
A: AI can automate repetitive and complex tasks in SOCs, such as analyzing alerts, writing code, and summarizing incident reports. However, human oversight is crucial for monitoring and verifying AI actions.
Q: What are the limitations of AI in SOCs?
A: AI has limitations in replicating unique human knowledge, understanding bespoke network configurations, and performing tasks like strategic risk analysis and threat hunting against advanced adversaries.
Q: How can AI enhance SOC analysts' skills?
A: AI can serve as a force multiplier by automating routine tasks, allowing SOC analysts to focus on more complex and strategic activities. However, it should not replace human judgment and expertise.
Q: What are the risks of over-relying on AI in SOCs?
A: Over-relying on AI can lead to underdeveloped staff, over-dependence on AI for decision-making, and potential errors in tasks such as patching legacy systems and responding to intrusions.
Q: Why is human oversight important in AI-driven SOCs?
A: Human oversight ensures that AI actions are auditable, controlled by company policies, and aligned with the unique context of the organization. It also helps maintain the integrity and reliability of AI systems.