Just when the main characters in “Jurassic Park” believe they have escaped danger and are safe within an office, the velociraptors demonstrate an unexpected capability: They learn how to open doors.
CIOs aren’t up against dinosaurs, but they are facing AI-generated threats that upend the assumptions of traditional security defenses. Case in point: the first-of-its-kind AI attack involving Anthropic’s Claude model, in which attackers exploited the model’s own capabilities in ways its creators did not anticipate.
In that attack, AI performed most of the work — an estimated 80%-90% — autonomously, from reconnaissance to data exfiltration, with minimal human intervention.
By automating tasks that until recently required specialized expertise, AI is putting advanced attack capabilities in the hands of lower-skill attackers, explained Rohan Massey, partner and cybersecurity practice leader at law firm Ropes & Gray.
When AI cuts both ways
The risk cuts both ways, Massey said. Attackers can use AI to breach organizations, but the risk extends beyond external attacks: An organization’s own use of AI, if poorly governed, can open the door to new internal attack surfaces.
That tension is now a central concern for CIOs and CISOs, said Rik Turner, chief analyst for Omdia’s cybersecurity team. According to Turner, as enterprises “rush to ‘AI-ify’ multiple apps,” embedding large language models (LLMs) and GenAI tools to drive business productivity, security leaders are grappling with how to drive adoption and get value from AI without increasing their attack surface in the process.
Top three AI security challenges
The overall security risk from AI can be examined as a threefold problem for CIOs and CISOs, Turner said.
- The data security problem. CIOs need to ensure that training and inference data is clean and that AI models aren’t trained on sensitive or confidential information (a minimal screening sketch follows this list).
- The application security problem. CIOs need to understand the origin of open source and third-party models embedded in organizational applications, including how those models are maintained and the risks they could pose.
- The access problem. Employee misuse of LLMs, especially SaaS-delivered generative AI platforms such as ChatGPT or Claude, can create new vulnerabilities if user access is not well managed.
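To make the data security concern concrete, here is a minimal sketch of the kind of outbound-prompt screening a team might run before text reaches a SaaS LLM. The patterns, function name and blocking policy are illustrative assumptions, not a reference to any specific product or to Turner’s recommendations.

```python
import re

# Hypothetical patterns an organization might flag before a prompt leaves its
# boundary; a real deployment would rely on vetted DLP rules, not this list.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if any pattern matches."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    allowed, findings = screen_prompt(
        "Summarize this contract. Card on file: 4111 1111 1111 1111"
    )
    print("allowed:", allowed, "| flagged:", findings)
    # allowed: False | flagged: ['credit_card']
```

In practice this kind of check sits alongside access controls and logging rather than replacing them; the point is simply that inference data can be screened before it ever reaches an external model.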
Massey echoed Turner’s sentiments about the security concerns AI presents in 2026, forecasting “an increase in the use of AI by attackers. Expect faster phishing, more sophisticated deepfake frauds, and wider automated exploitation.” The speed with which CISOs can respond to AI attacks will be critical, he said.
“No security team can match attackers’ AI speed [with manual processes alone], so think about automated solutions,” he said.
As AI technologies evolve, CIOs and CISOs will also need to prepare for a changing security landscape. CIOs should work closely with their organizations’ CISOs to stay aware of security threats raised by emerging technologies, such as agentic AI, Turner said.
While agentic AI is still in the early stages of development, it poses a greater security risk than GenAI because of the levels of both access and autonomy that agents may be granted, he explained.
“I think companies are only just beginning to wake up to this issue, as most of them are at the very preliminary stage of investigating agentic AI,” Turner said.
The danger of deepfakes
In addition to potential threats from agentic AI, AI-generated deepfakes and the AI vibe coding trend present cybersecurity problems, Turner said. Deepfake technology uses AI to generate convincing fake images, videos and audio.
Deepfakes can lead to executive fraud, in which voice and video spoofing is used to deceive employees into transferring funds or disclosing sensitive information, Massey said.
AI-generated deepfakes are becoming more prevalent and are “extraordinarily dangerous and difficult to guard against,” said Brian Greenberg, CIO of RHR International.
One approach RHR is considering to address deepfakes is implementing out-of-band authentication (OOBA) techniques, which rely on a separate communication channel to verify the user’s identity. The OOBA method can send a push notification or pose a question that only the real person could answer.
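As a rough illustration of how an out-of-band check might work, the sketch below issues a short-lived one-time code over a hypothetical second channel and verifies the response. The deliver_out_of_band function and the 120-second expiry are placeholder assumptions, not details of RHR’s implementation.

```python
import hmac
import secrets
import time

# Minimal sketch of out-of-band verification: a one-time code is delivered
# over a second channel (push, SMS, authenticator app) and must be echoed
# back before a sensitive request proceeds.

CODE_TTL_SECONDS = 120  # assumed validity window for the code

def deliver_out_of_band(user_id: str, code: str) -> None:
    # Placeholder: in practice this would call a push or SMS provider.
    print(f"[second channel] code for {user_id}: {code}")

def issue_challenge(user_id: str) -> dict:
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit one-time code
    deliver_out_of_band(user_id, code)
    return {"user": user_id, "code": code, "expires": time.time() + CODE_TTL_SECONDS}

def verify_challenge(challenge: dict, submitted: str) -> bool:
    if time.time() > challenge["expires"]:
        return False  # code expired
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(challenge["code"], submitted)

if __name__ == "__main__":
    challenge = issue_challenge("finance.approver@example.com")
    print("verified:", verify_challenge(challenge, challenge["code"]))
```

The value against deepfakes is that the verification never travels over the channel the impersonator controls, such as the spoofed video call or email thread.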
Fighting AI with AI
To address AI-generated and more traditional cybersecurity risks such as phishing, CIOs and CISOs will need to continue regular security awareness training for employees, Turner said.
At RHR International, employees are required to participate in quarterly security training. The organization sends fake phishing emails to test whether employees can recognize and report phishing attempts, Greenberg said.
In addition to training exercises, automated responses to AI-based attacks will be critical, Turner said. While the use of AI as a defensive security measure is still in relatively early stages, automated threat detection and response — together with skilled SOC employees — will play a central role in supporting organizations’ ability to thwart AI-based cybersecurity threats, he added.
“CIOs must fight fire with fire and use AI defensively,” Massey said, pointing to behavior-based detection tools that create a baseline for user actions by analyzing and monitoring their activity to identify anomalies.
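The sketch below illustrates the general idea behind such baselining in a deliberately simplified form: a per-user baseline built from historical activity and a z-score-style check that flags outliers. The metric, threshold and function names are assumptions for illustration; commercial behavior-analytics tools model far richer signals.

```python
from statistics import mean, stdev

# Toy sketch of behavior-based detection: build a per-user baseline from
# historical activity counts and flag new observations that fall far outside
# it. The 3-sigma threshold is an illustrative choice, not a recommendation.

def build_baseline(history: list[float]) -> tuple[float, float]:
    return mean(history), stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

if __name__ == "__main__":
    # e.g., megabytes downloaded per day by one user over the past two weeks
    history = [12, 9, 15, 11, 10, 13, 14, 12, 11, 10, 13, 12, 9, 14]
    baseline = build_baseline(history)
    print(is_anomalous(13, baseline))    # False: within the normal range
    print(is_anomalous(480, baseline))   # True: flag for the SOC to review
```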
Greenberg echoed that view, noting that RHR International is implementing identity provider tools to centralize the user authentication process, in addition to deploying a zero-trust framework.
“We’re just trying to make it easier to fill the gaps between platforms that naturally seem to occur,” Greenberg said.