The Internet of Things (IoT) and edge computing have vexed enterprise security efforts for years now. Given the added complexities of work-from-home and hybrid work arrangements, the situation has considerably worsened recently. Now comes ChatGPT to sit atop most IoT and edge devices, effectively adding a welcome beacon — or even a helping hand — to threat actors everywhere.
“Existing vulnerabilities, especially in the context of AI and ChatGPT-enabled or assisted attacks against edge devices and users, can be leveraged against businesses in different ways,” says Jim Broome, President and CTO at DirectDefense.
Despite variances in vulnerabilities and diverse efforts to exploit them, threats from the edge originate from one of two IoT realms: home IoT and enterprise IoT.
In many cases, employee home networks and the data therein are the preferred targets for threat actors.
“Once inside the home network, attackers can then pivot back into the corporate network, potentially compromising sensitive business information via a ‘blessed user or home network,’” Broome says.
But that’s not to say that enterprise IoT and edge devices are locked tight against more direct intrusions.
“Ransomware threat actors, for example, can exploit IoT vulnerabilities as a starting point to carry out their malicious campaigns, potentially causing significant damage and disruption to business operations,” Broome adds.
The Evolving Threatscape in Enterprise IoT
IoT and edge computing usage is up, both on the home and enterprise fronts. While IoT is a highly fragmented market, a view of even a few categories underscores the continued and unfettered growth across the board. Gartner pegs spend on IoT in the enterprise space and across key industries at over $268 billion in 2022. Deloitte projects worldwide spending on software and hardware related to IoT to rise to $1.1 trillion this year.
But the challenges aren’t just tied to the growing number of IoT and edge devices being purchased and deployed. An increasing variety in the types of IoT are causing issues, too.
“The diversity of edge and IoT devices, ranging from switches, routers, and sensors to point-of-sale systems, industrial robots, and automation equipment, also adds an additional layer of complexity and security vulnerability due to the variations in protocols, functions, and security capabilities,” explains James Joonhak Lee, a senior manager in Deloitte’s US Cyber & Strategic Risk practice.
If you think vendors and buyers have gotten better at securing these devices after all this time, think again. Botnet armies and DDoS attacks frequently spring from unprotected IoT devices seemingly as innocuous as hotel lobby aquarium thermostats, home smart refrigerators, and company coffee pots in break rooms.
“IoT devices in particular, and edge devices in general, are the most vulnerable within an organization,” says John Gallagher, VP of Viakoo Labs, a research unit focused on IoT and OT security management.
Where Home and Work Dangers Meet
IoT and edge computing spawn vulnerabilities elsewhere, too. For example, an ever-expanding edge-computing space compounds security problems for enterprises — especially on the border between enterprise and consumer usage.
“Modern image archive systems, called PACS, connect scanners like an ultrasound or a CT scanner with patient management systems,” explains Dirk Schrader, VP of Security Research at Netwrix. “Currently, PACS servers become more and more connected to the public internet, so that patients and physicians can access the data. Quite often even basic precautions are not in place for these IT infrastructures. They are not hardened.”
Growing enmeshment between enterprise and consumer IoT and networks blurs the boundaries and sharply defines the opportunities for attackers.
Dangers and damages flow both ways, too.
“At the moment, there are about 200 of such unprotected archives [PACS servers] connected to the public internet within the US alone. Attackers can exploit them, exfiltrate or encrypt the data to extort the organization, use the data to run medical insurance fraud against the patients, or change the medical imagery so the process itself is corrupted,” Schrader says.
But this crossroads between consumer and professional connections is not the only collision point for enterprises. The things themselves have cross-uses to be wrecked. Autonomous vehicles, for example, exist in both commercial and consumer versions. Attacks are easily transported to the enterprise and the user whether the vehicles are a commercial fleet, a vehicle for rent or hire, or owned by a worker begrudgingly returning to work in the office again.
And then there is the steady march of home IoT — from nanny cams to smart meters and kids’ toys — on the occupant’s employer.
“An additional concern lies with the vulnerabilities present in AI-enhanced home devices used by remote employees,” Broome says. “For example, when was the last time individuals from accounting updated their home routers or their home network-attached storage servers, which they use for backing up corporate work while working remotely. This issue further compounds the challenges faced by organizations, as it increases the risk of intellectual property theft,” he says.
How ChatGPT and AI Make IoT Vulnerabilities Worse
ChatGPT and its ilk are rapidly appearing integrated or embedded in commercial and consumer IoT of all types. Many imagine AI models to be the most sophisticated security threat to date. But most of what is imagined is indeed imaginary.
“Now, if an actual AI emerges, be very worried if the kill switch is very far away from humans,” says Jayendra Pathak, chief scientist at SecureIQLab. He, like others in security and AI, agree that the chances of an actual general artificial intelligence developing any time soon are still very low.
But as to the latest AI sensation, ChatGPT, well that’s another kind of scare.
“ChatGPT poses [insider] threats — similar to the way rogue or ‘all-knowing employees’ pose — to IoT. Some of the consumer IoT vulnerabilities pose the same risk as a microcontroller or microprocessor does,” Pathak says.
In essence, ChatGPT’s potential threats spring from its training to be helpful and useful. Such a rosy prime directive can be very harmful, however. Even when a prompt bumps against its safety guardrails, another well-crafted prompt can fool it into doing the very thing the guardrails were designed to prevent.
This type of attack is called a prompt injection because a prompt is used to make the model ignore previous instructions or perform unintended actions. Prompt injections can be used in ChatGPT directly or in applications built on ChatGPT or other large language AI models. In other words, this type of attack can be pumped into the ChatGPT sitting atop IoT and edge devices.
“Moreover, emerging technologies such as AI and ChatGPT in edge devices are introducing new vulnerabilities that companies need to be aware of,” says Harman Singh, director at Cyphere. For example, AI algorithms can be manipulated to generate false data, leading to incorrect decisions and actions. ChatGPT can also be used to conduct social engineering attacks by impersonating trusted individuals or entities.”
Sometimes a prompt can be used to get ChatGPT and similar LLM-based chatbots to reveal back office or proprietary information used to guide its responses and that was never meant to be accessed by anyone outside of the company.
“When confidential data or source code gets shared within tools like ChatGPT, it opens the door for challenges to compliance obligations and puts intellectual property at risk,” says Nathan Hunstad, Deputy CISO at Code42.
While evolving threats are growing and spreading like wildfire, not all is lost.
“Any new technology presents risks, but it doesn’t mean the risks are all brand new. In fact, companies might find that they already have many of the people, processes, and technology in place to mitigate the risks of tools like ChatGPT,” Hunstad says.