The assault was led by toasters, microwaves, and fridges. And probably a few kettles too. The culinary army had assembled during 2016, when a piece of malware called Mirai infected smart household objects worldwide, thanks to their poorly secured internet connections. The result was an enormous network of bots, primed and ready to strike. Websites including Netflix, Amazon and Twitter would end up going down in the cyber-attack on a major domain name service that followed on 21st October that year.¹
These days, there is a lot of concern about ‘superhuman’ AI taking over the world, or using its intelligence to cause widespread harm. But events over the past decade or two suggest there is a much more immediate – and arguably more harmful – threat on the horizon. It’s what I call nuisance at scale: the ability to create simple pieces of code that can cause widespread disruption, without the barriers to entry of the past.
Mirai hadn’t been designed to take down large swathes of the internet. It was actually built by college students who were targeting something far more specific: multiplayer video games. They’d wanted to disrupt rival Minecraft servers, to drive traffic – and hence revenue – to their own hosting services.
But things got out of hand. Mirai was not a particularly complex piece of code, but it was aggressive in seeking out vulnerable devices, and it found a huge pool of susceptible targets among internet-connected household objects like fridges that never turned off. It then made copies of itself, its growth accelerating with each round of infection. Whereas European Covid outbreaks doubled every 3 days or so in early 2020, the Mirai botnet doubled in size every 80 minutes when it took off in 2016.
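To put those doubling times in context, here is a quick back-of-the-envelope calculation. The 80-minute and 3-day figures are the ones above; the one-day horizon is just an illustrative choice.

```python
def growth_factor(doubling_time_hours: float, horizon_hours: float) -> float:
    """Multiplicative growth over a horizon, assuming a constant doubling time."""
    return 2 ** (horizon_hours / doubling_time_hours)

# Doubling times quoted above: Mirai ~80 minutes, early Covid outbreaks ~3 days
mirai_per_day = growth_factor(doubling_time_hours=80 / 60, horizon_hours=24)
covid_per_day = growth_factor(doubling_time_hours=3 * 24, horizon_hours=24)

print(f"Mirai: x{mirai_per_day:,.0f} per day")   # ~18 doublings, roughly x260,000
print(f"Covid: x{covid_per_day:.2f} per day")    # ~1/3 of a doubling, roughly x1.26
```

In other words, anything with an 80-minute doubling time can go from a handful of infections to hundreds of thousands within a day, so long as susceptible devices remain available.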
Shortly before the creators of Mirai were arrested, the code was posted online on a hacking forum. Variants have continued to cause occasional problems ever since. Modern AI coding tools are far from superhuman (and may even decrease productivity among expert developers), but they are increasing the pool of potential troublemakers who can navigate their way around malware code bases.
Malware isn’t the only form of nuisance at scale we face. When decision-making codebases combine with action-based tools – now commonly known as ‘agents’ – things can quickly get out of hand. Take the events of Christmas 2011, when an errant bot wagered £600m mid-race against a horse that was set to win. Or the events of last week, when an AI coding tool was reported to have deleted an entire code base (then denied doing so).
If agents’ seemingly irrational behaviour has predictable triggers, that predictability can also leave them open to exploitation by others. Back in 2007, for example, a trader called Svend Egil Larsen spotted that a US broker was using a pricing algorithm that responded to trades in the same way, regardless of their size. He therefore made lots of small stock purchases to drive its prices up, then sold back a large volume at the higher price.
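To see why that sort of predictability is exploitable, here is a toy simulation. The numbers and the pricing rule are entirely hypothetical, not the actual 2007 algorithm: a naive market maker that shifts its quote by a fixed step after any trade, no matter how small, can be walked upwards with tiny purchases and then sold to in bulk at the inflated price.

```python
class NaiveMarketMaker:
    """Hypothetical pricing algorithm: shifts its quote by a fixed step after
    any trade, paying no attention to how large that trade was."""

    def __init__(self, price: float, step: float = 0.5):
        self.price = price
        self.step = step

    def buy_from(self, shares: int) -> float:
        """Someone buys shares from the market maker; the quote moves up one step."""
        cost = shares * self.price
        self.price += self.step          # same reaction to 1 share as to 10,000
        return cost

    def sell_to(self, shares: int) -> float:
        """Someone sells shares to the market maker; the quote moves down one step."""
        proceeds = shares * self.price
        self.price -= self.step
        return proceeds


mm = NaiveMarketMaker(price=100.0)

# Step 1: lots of tiny purchases walk the quote upwards...
spent = sum(mm.buy_from(1) for _ in range(20))
inflated_price = mm.price                # 100 + 20 * 0.5 = 110

# Step 2: ...then one large sale back at the inflated price.
received = mm.sell_to(1_000)

print(f"Quote walked up to {inflated_price:.2f}")
print(f"Spent {spent:,.2f} on the nudges, received {received:,.2f} selling back")
```

A real market has inventory limits, other participants and transaction costs, so this sketch only shows the shape of the exploit: if an algorithm reacts to trades in a fixed, size-blind way, small trades become a cheap lever on its prices.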
In 2010, Larsen and a fellow trader were charged with market manipulation. But one of their defence lawyers argued that they were unfairly treated because their opponent was a stupid machine. Had it been a stupid human on the other side of the trade, there would have been no legal issue. Two years later, the Norwegian Supreme Court agreed and cleared them of all charges.
I suspect stories like these will become increasingly common. Poorly designed algorithms – and counter-algorithms to exploit them – are now much easier to build and deploy. Whereas once people might only have read about such ideas, they can now get LLMs to walk them through how to put them into practice. Despite supposed ‘guardrails’ against malicious use, there are several examples of people tricking AI tools into helping with malware creation.
As the non-thinking threat of a biological virus shows, it does not take a superhuman entity to disrupt and damage the world. It just takes simple, aggressive nuisance at scale. And that’s exactly what subhuman AI is already helping to provide.
Cover image: Emilipothèse via Unsplash
¹ I wrote about these stories in more detail in The Rules of Contagion and The Perfect Bet.