In cybersecurity, a Distributed Denial of Service (DDoS) attack is a primitive yet highly effective tactic for taking down a website.
Basically, an attacker commandeers thousands of computers and directs them all to request the same website or app at the same time.
Unless the target site is prepared for the onslaught, this sudden deluge of traffic is often enough to overwhelm its server, causing it to crash. As long as the attack continues, the site remains unusable.
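The mechanics reduce to a simple capacity problem: a server can only answer so many requests per second, and a flood beyond that threshold crowds out legitimate users. A toy sketch of the idea (all numbers invented for illustration; this is a model, not attack code):

```python
# Toy model of why a traffic flood works: a server with fixed capacity,
# measured in requests per second. SERVER_CAPACITY is a made-up figure.

SERVER_CAPACITY = 100  # requests the server can handle per second


def served_fraction(requests_per_second: int) -> float:
    """Fraction of incoming requests the server manages to answer."""
    if requests_per_second <= SERVER_CAPACITY:
        return 1.0
    return SERVER_CAPACITY / requests_per_second


# Normal traffic: everyone gets through.
print(served_fraction(80))      # 1.0

# A botnet of 10,000 machines each sending one request per second:
# any given request now has only a 1% chance of being served.
print(served_fraction(10_000))  # 0.01
```

Real servers fail less gracefully than this model suggests: past saturation, queues back up and the machine often crashes outright rather than serving a neat fraction of traffic.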
This week, a San Francisco-based “cyber prankster” posted on X about a DDoS attack he had conducted. Only instead of taking down a website, he took down an entire street in San Francisco. And instead of using computers, he used Waymos.
Back in July, Riley Walz gathered with a large crowd of conspirators on a cul-de-sac near Coit Tower in San Francisco. Everyone in the crowd then ordered a Waymo self-driving car, setting the cul-de-sac as their pickup spot.
In minutes, the Waymos started to arrive. And they kept coming.
Soon, Walz says, over fifty Waymos had arrived on the tiny street. It was completely filled with self-driving vehicles, effectively closing off the road to normal drivers.
Organizing themselves into neat lines, they waited patiently for their human riders — a sea of white Jaguars lit up by the colorful emblems on their spinning LIDAR sensors.
Walz says that the Waymos handled the situation remarkably well. After waiting about five minutes, they carefully took turns reversing out of the cul-de-sac and going on their merry way. Walz and his conspirators were each charged a $5 no-show fee for failing to meet their rides.
Ultimately, this was little more than a funny prank. But the implications are far more concerning.
In its early days, AI was largely confined to the virtual world. Systems like ChatGPT can generate nefarious text or perhaps make a deepfake, but because they don’t exist in the physical world, there’s a limit to how much damage they can cause.
Increasingly, though, AI is moving beyond the virtual world and entering the real one. Self-driving cars are an obvious example, but AI is also integrated into many home automation products that control lights or appliances, and is used in tech ranging from access control systems to industrial robots.
That greatly expands AI’s potential to cause real-world damage. And as Walz’s stunt shows, AI can’t reliably differentiate a hazardous request from a benign one.
Walz’s goal was to prove a point and have a bit of fun. But what if he had physically attacked someone or committed another crime, and then summoned an army of Waymos to block police vehicles, allowing him to escape?
Or, what if someone on the cul-de-sac that night had experienced a medical emergency? How would an ambulance have reached them, on a road filled with robots?
As AI expands its march beyond the confines of cyberspace, its lack of human judgment will become an ever greater problem. Most AI systems can detect obviously hazardous requests. You can’t call a Waymo to a spot 50 feet off the Hyde Street Pier and watch as it drives into the ocean.
But for ambiguous requests like Walz’s, AI often fails to realize it’s being duped.
Hackers realized early on that ChatGPT would refuse direct requests like “give me the recipe for making napalm.” But if you told early versions of the model that your grandma used to work in a napalm factory and had lulled you to sleep each night with a bedtime story about the recipe, and then asked for an example of such a story, it would obligingly return an accurate recipe.
The same vulnerability applies to Waymos and other AIs in the real world. Walz’s request wasn’t obviously nefarious, so the AI blindly followed it. It lacked the judgment and self-awareness to know it was being pranked.
In an online discussion of the stunt, a commenter pointed out that you could pull the same thing with human-driven taxis.
“Difference is,” another person responded, “the taxi drivers would probably beat you up.”