Published Date: 11/06/2025
Artificial intelligence (AI) is revolutionizing the way hackers operate, from writing malware to preparing phishing messages. However, the much-touted impact of generative AI has its limitations, a cybersecurity expert noted at an industry conference in National Harbor, Maryland.
Generative AI is being used to improve social engineering and attack automation, but it hasn’t introduced novel attack techniques, according to Peter Firstbrook, a distinguished VP analyst at Gartner. Speaking at the Gartner Security & Risk Management Summit, Firstbrook highlighted the significant role AI plays in enhancing existing attack methods.
Experts have long predicted that AI would revolutionize attackers’ ability to develop custom intrusion tools, reducing the time it takes even novice hackers to build malware capable of stealing information, recording computer activity, or wiping hard drives. “There is no question that AI code assistants are a killer app for generative AI,” Firstbrook said. “We see huge productivity gains.”
In September, HP researchers reported that hackers had used AI to create a remote access Trojan. Referencing this report, Firstbrook noted, “It would be difficult to believe that attackers are not going to take advantage of using generative AI to create new malware. We are starting to see that.”
Attackers are also leveraging AI in more insidious ways, such as creating fake open-source utilities and tricking developers into unknowingly incorporating malicious code into their legitimate applications. “If a developer is not careful and they download the wrong open-source utility, their code could be backdoored before it even hits production,” Firstbrook warned.
Hackers could have done this before AI, but the new technology is allowing them to overwhelm code repositories like GitHub, which can’t take down the malicious packages quickly enough. “It’s a cat-and-mouse game, and the generative AI enables them to be faster at getting these utilities out there,” Firstbrook explained.
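The standard mitigation is to pin dependencies to known-good checksums rather than trusting whatever a repository search returns. As an illustration (not something described in the article), here is a minimal Python sketch that verifies a downloaded package archive against a pinned SHA-256 digest before use; the file name and digest below are placeholders you would take from a real lockfile, such as one produced by pip’s --require-hashes mode:

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large archives never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical pinned digests: in a real project these would come from a
# lockfile (e.g., pip's --require-hashes entries or a package-lock.json).
PINNED = {
    "example-utility-1.2.3.tar.gz": "placeholder-sha256-from-your-lockfile",
}

if __name__ == "__main__":
    path = sys.argv[1]
    name = path.rsplit("/", 1)[-1]
    expected = PINNED.get(name)
    if expected is None or sha256_of(path) != expected:
        sys.exit(f"{name}: unknown package or checksum mismatch; refusing to use")
    print(f"{name}: checksum OK")
```

A check like this does not stop a developer from pinning a malicious package in the first place, but it does block the swap-in of a tampered or typosquatted artifact after the dependency has been vetted once.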
The integration of AI into traditional phishing campaigns is a growing threat, but so far the impact appears limited. In a recent Gartner survey, 28% of organizations said they had experienced a deepfake audio attack, 21% a deepfake video attack, and 19% an attack in which deepfake media bypassed biometric protections. However, only 5% reported deepfake attacks that resulted in the theft of money or intellectual property.
Even so, Firstbrook emphasized, “This is a big new area.” Analysts worry that AI could make certain types of attacks far more profitable simply through the volume it enables. “If I’m a salesperson, and it typically takes me 100 inquiries to get a ‘yes,’ then what do you do? You do 200 and you’ve doubled your sales. The same thing with these guys. If they can automate the full spectrum of the attack, then they can move a lot quicker,” Firstbrook said.
At least one generative AI-related fear appears to be overblown, at least for now: researchers have yet to see it create entirely new attack techniques. “So far, that has not happened, but that’s on the cusp of what we’re worried about,” Firstbrook noted. He pointed to data from the MITRE ATT&CK framework, which catalogs the tactics and techniques attackers use to compromise computer systems. “We only get one or two brand-new attack techniques every year,” he added.
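Firstbrook’s figure can be sanity-checked against public data: MITRE publishes Enterprise ATT&CK as a STIX 2.x JSON bundle in its mitre/cti GitHub repository. The short Python sketch below is an illustration rather than anything cited in the talk; it counts when technique entries were first created. Note that the counts include sub-techniques and later refinements of known methods, so they overstate the number of truly novel techniques:

```python
import json
from collections import Counter
from urllib.request import urlopen

# MITRE distributes Enterprise ATT&CK as a STIX bundle in the mitre/cti repo.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

with urlopen(URL) as resp:
    bundle = json.load(resp)

# In STIX terms, each ATT&CK technique (and sub-technique) is an
# "attack-pattern" object carrying a creation timestamp.
by_year = Counter(
    obj["created"][:4]
    for obj in bundle["objects"]
    if obj["type"] == "attack-pattern" and not obj.get("revoked", False)
)

for year in sorted(by_year):
    print(f"{year}: {by_year[year]} technique entries first added")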
As the cybersecurity landscape continues to evolve, organizations must remain vigilant and adapt their defenses to counter the growing threat posed by AI-enhanced attacks.
Q: What is generative AI in the context of cybersecurity?
A: Generative AI in cybersecurity refers to the use of artificial intelligence to create and enhance malicious activities, such as writing malware, preparing phishing messages, and creating deepfakes.
Q: How is AI being used to improve social engineering attacks?
A: AI improves social engineering attacks by automating the creation of convincing, well-written phishing messages at scale, making it easier to trick users into divulging sensitive information.
Q: What are deepfakes, and how are they used in cyber attacks?
A: Deepfakes are AI-generated videos or audio that mimic real people. In cyber attacks, they can be used to impersonate individuals, bypass biometric security, and carry out fraudulent activities.
Q: What is the MITRE ATT&CK framework?
A: The MITRE ATT&CK framework is a comprehensive knowledge base of adversary tactics and techniques based on real-world observations. It helps organizations understand and defend against various cyber attack methods.
Q: How can organizations protect themselves from AI-enhanced cyber attacks?
A: Organizations can protect themselves from AI-enhanced cyber attacks by implementing robust security measures, training employees to recognize phishing attempts, and staying updated on the latest threat intelligence.