When AI designs a virus

A stunt that woke up Washington

It was early 2023 when I heard the story of the little black box. It had been brought into the historic Eisenhower Executive Office Building, which sits alongside the West Wing. Inside were a dozen test tubes holding the ingredients for a mass-casualty event — engineered DNA that, if assembled by the right pair of hands and with the proper equipment, had the potential to cause the next pandemic.

The box was a kind of stunt. The man behind it, Rocco Casagrande, hadn’t come up with the formula for destruction himself. An AI chatbot had described the steps needed to build such a weapon. And Casagrande was determined to show Biden administration officials the kinds of risks that could emerge when two quickly evolving fields — artificial intelligence and synthetic biology — joined forces.

The stunt had a huge impact. Word of the meeting spread, like a game of telephone, across the nation’s security apparatus. It was a wake-up call, sources told me, revealing that the US government was seriously unprepared.

It’s not that today’s AI can design the stuff of sci-fi horror. But the models are always improving, which raises the specter of what those tools might be able to do in the future.

A January report from the RAND Corporation found that while the current generation of AI chatbots doesn’t pose an especially great threat, that may not be the case for long. The field is “advancing faster than governments can keep pace,” the lead author of the report said.

Anna Marie Wagner, who formerly served as Ginkgo Bioworks’ head of AI, acknowledged that the world was caught “flat-footed” when ChatGPT arrived. “We need to prepare for the risks that emerge,” she said. “I’m concerned that the response could look like, let’s slow down, rather than let’s build robust infrastructure.”

She added that it’s naïve to think the pace of innovation in AI or synthetic biology will slow anytime soon — even in the face of regulation. “That train has left the station. As a global society, we need to determine how to build the tools to identify risks and threats to respond quickly.”

That’s exactly why the AI startup Anthropic had hired Casagrande at the outset of 2023 to see what its chatbot Claude was capable of. The company had asked Casagrande to look under the hood and see whether the model could aid a bioterrorist by providing nefarious information. And, well, it could. With Anthropic’s blessing, Casagrande briefed US government officials on the threat.

It took me months to confirm details about these briefings. (You can give my story a read or listen to the Big Take podcast about it.) Casagrande was measured in the way he spoke about the risk of AI-enabled bioweapons. He wasn’t losing sleep over it, but he was worried about the potential for the tools to get even better. Additional checks and balances, he argued, needed to be put in place.

“Self-policing and self-reporting are not sufficient,” he told me. —Riley Griffin
