By applying a malicious overlay to a regulation STOP sign, academic researchers tricked self-driving cars into thinking it was actually a Speed Limit 45 sign in 100% of test cases.
Attackers could subtly deface road signs with malicious overlays or stickers in order to confuse the deep neural networks of autonomous vehicles, potentially causing accidents by causing the cars to mistake one type of sign for another, academic researchers from four U.S. universities have disclosed in a report.

For instance, by overlaying a STOP sign with a cutout that looked nearly identical but contained subtle yet powerful "perturbations," the researchers tricked self-driving cars into classifying it as a Speed Limit 45 sign in 100% of test cases. The same attack on a RIGHT TURN sign caused the cars to misclassify it as a STOP sign two-thirds of the time.

The researchers also achieved a 100% success rate when applying smaller malicious stickers to a STOP sign, camouflaging the visual disturbances as street art. In another test, they fooled the vehicles two-thirds of the time with malicious stickers disguised as simple graffiti.

The researchers, from the University of Washington, the University of Michigan, Ann Arbor, Stony Brook University and the University of California, Berkeley, were specifically attempting to demonstrate scenarios in which an adversary would try to alter signs discreetly, without calling attention to the attack.

"Our algorithm can create spatially constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer," the report reads. "We show that adversarial examples... achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions."
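The idea of a "spatially constrained perturbation" can be illustrated with a toy sketch. The snippet below is not the researchers' actual algorithm; it is a minimal, made-up linear "classifier" showing how a perturbation confined to a small masked region (a "sticker") can flip a model's decision. All weights, pixel values, and class names are invented for illustration.

```python
import numpy as np

# Toy linear "sign classifier": score > 0 means "speed limit", else "stop".
# The weights and the flattened 16-pixel "sign image" are purely illustrative.
w = np.linspace(0.5, 2.0, 16)          # fixed classifier weights
x = -0.5 * np.ones(16)                  # an input firmly classified as "stop"

def classify(img):
    return "speed limit" if w @ img > 0 else "stop"

# Spatial constraint: only pixels where mask == 1 may change, mimicking a
# small sticker covering part of the sign rather than the whole surface.
mask = np.zeros(16)
mask[-4:] = 1                           # "sticker" covers 4 of 16 pixels

# Gradient-sign step: push the masked pixels in the direction that raises the
# classifier's score (for a linear model, that direction is just sign(w)).
eps = 3.0
x_adv = x + eps * mask * np.sign(w)

print(classify(x))       # "stop"
print(classify(x_adv))   # "speed limit" -- the sticker flipped the decision
```

The key point is that the attacker never touches most of the sign: the sticker region is chosen where the classifier is most sensitive, which is why a small, graffiti-like patch can be enough.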

The report credits the following researchers: Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song.