“We’re on the verge of a very interesting revolution — the AI revolution,” said Gil Shwed, founder and CEO of Check Point Software Technologies, during his keynote to kick off his company’s recent conference in New York City.
That set the tone for CPX 360, where the potential and dangers of generative AI took center stage, quite literally. He said that, overall, cyberattacks were becoming more sophisticated, and cited a 38% increase in cyberattacks in 2022. Referring to research from the World Economic Forum, Shwed painted a picture of growing concern about cyber threats. “Ninety-one percent of C-levels believe that we’re on the verge of a catastrophic cyber event,” he said.
Just as AI can be used to drive cyberattacks, it can also be tapped for defense. “More than half of the threat engines in Check Point are AI,” Shwed said, then shared his perspective that 2023 might be a tipping point where AI becomes an important part of everyone’s daily lives. He also introduced a video he described as a CISO’s “firsthand” account of a cybersecurity breach and recovery; the testimonial, however, was actually AI-generated, produced from a real incident.
Optimistic or Cautionary?
Later came a panel discussion, “ChatGPT: A Life-Changing Phenomenon? And What It Means for Cyber Security,” moderated by Dorit Dor, chief product officer with Check Point. The panelists were asked whether ChatGPT represented the optimistic potential of “Star Wars” or the foreboding, cautionary tone of “Black Mirror.”
“Star Wars it is for sure,” said Eyal Manor, vice president of product management at Check Point.
“In its current iteration, ChatGPT is a protocol droid,” said Eric Anderson, chief architect at Atlantic Data Security. “It’s C-3PO; I can converse with it. I can talk to it, I’m not that afraid of it yet. But it has the strong potential to become a Black Mirror story, and that is the challenge.”
Baruch Toledano, vice president and general manager of digital marketing solutions at Similarweb, said it was both but leaned closer to “Star Wars.” As creators of technology, he said, people should recognize ways it can improve lives.
Along with the panelists, ChatGPT audio was piped into the discussion to answer questions about itself. “OpenAI’s technology, including ChatGPT, can be used in both offensive and defensive ways in the realm of cybersecurity,” the AI said.
Despite some lag in live responses as ChatGPT reached capacity, the AI also spoke about the buzz that surrounds it. “Generative AI has become a hot topic due to advancements in deep learning and computing power, a wide range of applications, accessibility, cost effectiveness, and excitement about its potential to revolutionize various industries.” ChatGPT said AI could automate defenses, generate simulated phishing emails to train employees to avoid such attacks, analyze large amounts of data to identify security threats such as malware, and automate incident response procedures.
It also stated that advanced language models could be used to automate and scale phishing and social engineering attacks. ChatGPT said it could likewise be used to develop new natural language processing-based security solutions, including improved email filters.
ChatGPT: Looking for Practical Application
Anderson said he had played with ChatGPT out of curiosity but had not found a practical application for himself. “What ChatGPT has essentially done is combine AI, very sophisticated generative AI, with a massive volume of data — I think it’s like 300 billion words they’ve fed into it — and combine them with some human interaction and training into something the general population can understand,” he said.
For example, ChatGPT can write papers. Anderson also said both sides of cybersecurity, attackers and defenders, could ultimately benefit from ChatGPT. “The question becomes, ‘Which maybe is first, and which leverages it better?’”
As with most things, he said, the offensive use tends to appear first, and then defenders respond. “I don’t know that we’ve seen ChatGPT yet used in cyber effectively, but we’ve seen that same example play out in education,” Anderson said, referring to students using the AI to write papers they submitted for class, which in turn prompted educators to use the same technology to identify AI-generated content.
Toledano said almost anyone who works with content, such as emails or presentations, may have an interest in ChatGPT. “Just don’t take it too seriously,” he said.
There are risks, Toledano said, such as generative AI powering deepfakes that combine audio and video to create false information. That increases the need, he said, to scrutinize the authenticity, authority, and trustworthiness of a content source.
Manor described ChatGPT as more life-changing than the debut of smartphones, with the AI producing better results in less time. The automation that ChatGPT and other generative AI offer, he said, could make it easier to identify potential attackers who try to disguise their infiltrations. “We can train the model to do more work instead of ourselves,” Manor said. “Maybe a lot of tasks that we’re doing could be automated and be done without us.”