The Dark Side of AI: Viruses Using Chatbots for Propagation

New Scientist highlighted a potential concern at the intersection of AI and cybersecurity. Researchers David Zollikofer at ETH Zurich and Benjamin Zimmerman at Ohio State University developed a simple computer virus that uses large language models to rewrite its code to avoid detection and to spread via email attachments.

“We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit,” says Zollikofer. This adaptation allows the altered virus to evade routine antivirus scans once the original format has been identified.

Once the virus is rewritten by ChatGPT, the program opens up Outlook in the background of Windows, without the user knowing, and scans the most recent email chains. It then takes the content of those emails and prompts ChatGPT to write a contextually relevant reply, referencing an attachment – the virus – in an innocuous way…

In their experiments, there was around a 50 per cent chance that the AI chatbot’s alterations would cause the virus file to stop working, or, occasionally, for it to realise it was being put to nefarious use and refuse to follow the instructions. But the researchers suggest that the virus would have a good chance of success if it made five to 10 attempts to replicate itself in each computer.
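The researchers' "good chance of success" claim is easy to sanity-check: if each replication attempt independently has roughly a 50 per cent chance of producing a working copy, the probability that at least one of n attempts succeeds is 1 − 0.5^n. A quick check (the independence assumption is mine, not the researchers'):

```python
def p_any_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts
    succeeds, given per-attempt success probability p."""
    return 1 - (1 - p) ** n

for n in (5, 10):
    print(n, round(p_any_success(0.5, n), 3))
# 5 attempts  -> ~0.969
# 10 attempts -> ~0.999
```

So even a coin-flip success rate per rewrite leaves the virus very likely to propagate within five to ten tries, which is what makes the result worrying despite the high failure rate.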

I wonder about the feasibility of implementing such a virus in the real world. LLMs are too big to distribute with malware, leaving the system dependent on an outside service. This could be a public API like OpenAI's, or a service run by the malware's creators themselves; either way, it's a weak point for security experts to exploit. In any case, this looks like a new front in the endless conflict with cybercriminals.


AI Assistance and Its Impact on Writers' Pay

Participants could complete tasks in one of three modes: independently, with no AI assistance; human-primary, where ChatGPT could help them edit and polish their own work; or AI-primary, where ChatGPT wrote the first draft and the person then edited it. Some were given the choice between human-primary and independent writing, others between AI-primary and independent writing. Those who worked independently were always paid $3 for completing their task. AI-assisted tasks were offered at a randomly chosen rate between $1.50 and $4.50, in $0.25 increments.

Participants were willing to give up about $0.85 (28% of the $3.00 they would otherwise have been paid) to have a first draft written by AI. They saw no meaningful difference in the quality of the work they produced with and without AI assistance, so the discount seems entirely attributable to the AI taking over part of the writing work.
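The 28% figure follows directly from the payment setup, taking the $3 independent-task rate as the baseline:

```python
base_pay = 3.00  # payment for independent (no-AI) tasks
premium = 0.85   # amount participants would forgo for an AI first draft

share = premium / base_pay
print(f"{share:.1%}")  # -> 28.3%
```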

This result suggests to me that most of any efficiency savings produced by LLM technology will be enjoyed by employers, not workers. Workers can produce more cheaply and quickly, reducing the overall need for labor. What that means for individual workers is less clear; perhaps those more skilled at working with AI assistants will benefit.


DIY Propaganda Machines

Earlier this spring, the Wall Street Journal suggested a fun DIY project for an election-year summer. Jack Brewster of NewsGuard explained,

It took me two days, $105 and no expertise whatsoever to launch a fully automated, AI-generated local news site capable of publishing thousands of articles a day—with the partisan news coverage framing of my choice, nearly all rewritten without credit from legitimate news sources. I created a website specifically designed to support one political candidate against another in a real race for the U.S. Senate. And I made it all happen in a matter of hours.

I'm nervous and morbidly curious to see what effect LLM technology will have on the upcoming US election.


Decreasing Enthusiasm for AI Projects: A Study

"The honeymoon phase of generative AI is over," the company said in its 2024 Generative AI Global Benchmark Study, released on Tuesday. "While leaders remain enthusiastic about its potential to transform businesses, the initial euphoria has given way to a more measured approach."

That's Lucidworks, whose study, "The State of Generative AI in Global Business: 2024 Benchmark Report", cites cost, data security, and safety concerns as the reasons for businesses' growing skepticism about generative AI.

According to the survey results, 63 percent of global companies plan to increase spending on AI in the next twelve months, down from 93 percent in 2023, when Lucidworks conducted its first such survey.

The financial returns of deployed AI projects have been disappointing, with 42 percent of companies seeing no significant benefit from their generative AI initiatives. So far, few companies have managed to move their initiatives beyond the pilot-testing phase.