The Dark Side of AI: Viruses Using Chatbots for Propagation

New Scientist highlighted a worrying development at the intersection of AI and cybersecurity. Researchers David Zollikofer at ETH Zurich and Benjamin Zimmerman at Ohio State University developed a simple computer virus that uses large language models to rewrite its own code to avoid detection and to spread via email attachments.

“We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit,” says Zollikofer. Because the rewritten file no longer matches the original's signature, it can slip past routine antivirus scans even after the original version has been identified and flagged.
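
To make that concrete, here is a minimal sketch of what such a rewriting step could look like, applied to a harmless toy function via OpenAI's Python client. The model name and prompt wording are my assumptions; the researchers' actual prompt isn't given in the article.

```python
# Sketch of an LLM-driven rewrite that preserves behaviour while changing
# surface form. Model name and prompt are assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE = '''
def total(items):
    result = 0
    for item in items:
        result += item
    return result
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Rewrite this Python function so it behaves identically "
                   "but uses different variable names and control flow. "
                   "Reply with code only.\n" + SOURCE,
    }],
)
print(response.choices[0].message.content)  # a semantically equivalent rewrite
```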

Once the virus is rewritten by ChatGPT, the program opens up Outlook in the background of Windows, without the user knowing, and scans the most recent email chains. It then takes the content of those emails and prompts ChatGPT to write a contextually relevant reply, referencing an attachment – the virus – in an innocuous way....

In their experiments, there was around a 50 per cent chance that the AI chatbot’s alterations would cause the virus file to stop working; occasionally, ChatGPT would instead realise it was being put to nefarious use and refuse to follow the instructions. But the researchers suggest that the virus would have a good chance of success if it made five to 10 attempts to replicate itself on each computer.
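
Those figures imply better odds than they might sound. If each rewrite independently produces a working copy with probability 0.5 (a simplifying assumption; the article doesn't say the attempts are independent), the chance of at least one success in n tries is 1 − 0.5^n:

```python
# Back-of-envelope odds, assuming each rewrite attempt independently
# yields a working copy with probability 0.5 (my assumption, not a
# claim from the paper).
for n in (5, 10):
    p = 1 - 0.5 ** n  # P(at least one working copy in n attempts)
    print(f"{n} attempts: {p:.1%}")
# 5 attempts: 96.9%
# 10 attempts: 99.9%
```

So even with a coin-flip failure rate per rewrite, a handful of retries makes successful replication very likely.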

I wonder about the feasibility of implementing such a virus in the real world. LLMs are far too large to bundle with malware, which leaves the system dependent on an outside service: either a public API like OpenAI’s, or a service run by the malware’s creators themselves. Either way, that dependency is a weak point for security experts to exploit. In any case, this looks like a new front in the endless conflict with cybercriminals.