
MacRumors

macrumors bot
Original poster



Researchers in the United States and China have been performing tests in an effort to demonstrate that "hidden" commands, or those undetectable to human ears, can reach AI assistants like Siri and force them to perform actions their owners never intended. The research was highlighted in a piece today by The New York Times, suggesting that these subliminal commands can dial phone numbers, open websites, and carry out other potentially malicious actions if the technique is placed in the wrong hands.

A group of students from the University of California, Berkeley and Georgetown University published a research paper this month, stating that they could embed commands into music recordings or spoken text. When played near an Amazon Echo or Apple iPhone, a person would just hear the song or someone speaking, while Siri and Alexa "might hear an instruction to add something to your shopping list." Or, more dangerously, an instruction to unlock doors, wire money from your bank, or purchase items online.
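To make the idea concrete, here is a toy sketch of the general "adversarial audio" technique the paper builds on: a small perturbation is added to a clip so that a listener still hears the original, while a recognizer's output flips to an attacker-chosen command. The recognizer below is a stand-in linear model, not any real assistant, and the sizes and labels are illustrative assumptions rather than anything from the paper.

[CODE]
# Toy illustration only: a stand-in "recognizer" (random linear model), not a real
# speech system, and not the researchers' actual method or code.
import torch

torch.manual_seed(0)

SAMPLES = 16_000                    # one second of 16 kHz audio (assumption)
N_COMMANDS = 4                      # toy label set; pretend index 3 = "unlock the door"

recognizer = torch.nn.Linear(SAMPLES, N_COMMANDS)   # frozen stand-in for a speech model
for p in recognizer.parameters():
    p.requires_grad_(False)

music = torch.randn(SAMPLES)        # stand-in for the benign music recording
target = torch.tensor([3])          # the command the attacker wants recognized

delta = torch.zeros(SAMPLES, requires_grad=True)    # the hidden perturbation
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    logits = recognizer((music + delta).unsqueeze(0))
    # Push the recognizer toward the target command; the L2 penalty keeps the
    # perturbation quiet enough that a listener would still just hear the music.
    loss = torch.nn.functional.cross_entropy(logits, target) + 0.1 * delta.pow(2).mean()
    loss.backward()
    opt.step()
[/CODE]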

[Image: siri-iphone-x.jpg]

The method by which the students were able to accomplish the hidden commands shouldn't be a concern for the public at large, but one of the paper's authors, Nicholas Carlini, believes malicious parties could already be making inroads with similar technology.
"We wanted to see if we could make it even more stealthy," said Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper's authors.

Mr. Carlini added that while there was no evidence that these techniques have left the lab, it may only be a matter of time before someone starts exploiting them. "My assumption is that the malicious people already employ people to do what I do," he said.
Last year, researchers based at Princeton University and Zhejiang University in China performed similar tests, demonstrating that AI assistants could be activated through frequencies inaudible to humans. In a technique dubbed "DolphinAttack," the researchers built a transmitter to send a hidden command that dialed a specific phone number, while other tests took pictures and sent text messages. DolphinAttack is said to be limited in terms of range, however, since the transmitter "must be close to the receiving device."

DolphinAttack could inject covert voice commands at 7 state-of-the-art speech recognition systems (e.g., Siri, Alexa) to activate always-on system and achieve various attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to the airplane mode, and even manipulating the navigation system in an Audi automobile.
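The core trick DolphinAttack describes is to amplitude-modulate an ordinary voice command onto an ultrasonic carrier: a person nearby hears nothing, but the slight nonlinearity in a microphone's circuitry demodulates the command back into the audible band, where the assistant's recognizer picks it up. A rough sketch of that modulation step follows; the carrier frequency, sample rate, and stand-in "voice" signal are illustrative assumptions, not values from the paper.

[CODE]
# Illustrative sketch of ultrasonic amplitude modulation (not the researchers' code).
import numpy as np

fs = 192_000          # sample rate high enough to represent an ultrasonic carrier
carrier_hz = 30_000   # example carrier above the ~20 kHz limit of human hearing
t = np.arange(int(fs * 1.0)) / fs   # one second

# Stand-in for a recorded voice command ("Hey Siri ..."), here just two audible tones.
voice = 0.4 * np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

# Classic AM: shift the voice band up around the ultrasonic carrier.
modulated = (1.0 + voice) * np.sin(2 * np.pi * carrier_hz * t)

# Crude model of a microphone's square-law nonlinearity, which recovers a copy of
# the baseband voice signal that the speech recognizer can then hear.
recovered = modulated + 0.1 * modulated ** 2
[/CODE]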
In yet another study, a group at the University of Illinois at Urbana-Champaign showed that this range limitation could be overcome, demonstrating commands received from 25 feet away. As for the most recent work out of Berkeley, Carlini told The New York Times that he was "confident" his team would soon be able to deliver successful commands "against any smart device system on the market." He said the group wants to prove to companies that this flaw is a potential problem, "and then hope that other people will say, 'O.K. this is possible, now let's try and fix it.'"

For security purposes, Apple is stringent with certain HomeKit-related Siri commands, locking them behind device passcodes whenever users have passcodes enabled. For example, if you want to unlock your front door with a connected smart lock, you can ask Siri to do so, but you'll have to enter your passcode on an iPhone or iPad after issuing the command. The HomePod, on the other hand, purposefully lacks this functionality.

Article Link: Researchers Demonstrate Subliminal Smart Device Commands That Have Potential for Malicious Attacks
 
This is really clever. I wouldn’t have thought that the AIs would respond to non-vocal frequencies as they’re intended to listen to humans only. I would think that checking the frequency range of the command would be enough to counteract this problem fairly simply.
 
That's crazy. Although I don't see this being an issue for most people, it demonstrates that where there's a will, there's a way.
 
Isn't this what viruses do?
 
Embarrassing what people have to overcome to instruct Siri, but it must be such fun to see it work...
 
This is really clever. I wouldn’t have thought that the AIs would respond to non-vocal frequencies as they’re intended to listen to humans only. I would think that checking the frequency range of the command would be enough to counteract this problem fairly simply.

Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.
 
I guess when you start having AI or "machine learning" as one's ears, anything can be possible, including hacking :)

But as a company, my face would be red by now for having over-stepped THAT mark. Kinda makes you wince when it all goes to hell
 
"...while there was no evidence that these techniques have left the lab, it may only be a matter of time before someone starts exploiting them."

Sir, you just exploited them yourself!

I have a big secret I cannot tell you. But I'm going to tell you. Don't tell anyone I told you.
 
HomePod directs me to use my phone to unlock my front door or open my garage doors. This potential issue seems to be somewhat under control with iOS.

My HomePods randomly respond to things on the TV speaker (since AirPlay 2 still isn't up and running). What's spooky was last night, when Siri said there was nothing to read; the dialogue on the TV had nothing to do with reading.
 
This can easily be prevented by not having Siri listen, but rather using the home button. As an added bonus, Siri isn't listening to you all the time. Still, very clever.
 
Reminds me of the time (long ago, pre-Siri) my Mac's display was waking randomly, while I was in the same room watching TV. Never knew what was causing it, until one day when the headphones were disconnected from it.

TV: "....we don't have time for that..."
Mac: <wakes display> "It's 7:15"

Feels funny hearing your appliances having a conversation and not being involved in it!
 
What they don't point out is that they must have "trained" Siri on the victim phone with the voice used by the attacking device, since Siri won't respond to anyone saying "Hey Siri" except the owner. That's for the iPhone, of course; the HomePod is a different story.

They really need to show more info. I assume they picked the iPhone for more of a scare factor but left that part out. Alexa and many Android phones, though, still respond to anybody.
 
If the “words” are still being uttered, just sub- or ultrasonically, then software can filter out those frequencies before processing them into commands.

Otherwise, fun security hack!
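For what it's worth, the pre-filter being suggested here is simple to sketch. Something like the following band-pass step (the cutoffs and sample rate are illustrative assumptions) would discard sub- and ultrasonic content before it ever reaches the recognizer, although it would not help against commands hidden inside the audible band of a song.

[CODE]
# Minimal sketch of band-limiting incoming audio to the speech range before recognition.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_limit(audio, fs, low_hz=80.0, high_hz=8000.0):
    """Keep roughly the human speech band; discard sub/ultrasonic content."""
    sos = butter(8, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

# Example: a frame containing a 30 kHz component is stripped before recognition.
fs = 96_000
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 30_000 * t)
clean = band_limit(frame, fs)
[/CODE]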
 
Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.
Not just Apple, they also mentioned Alexa.
 
This is why Google/Android requires training. If the voice is not correct to wake up the assistant, then how would this attack work?

The attack assumes you are listening to the song with Google Assistant/Siri already open and accepting commands.
Normally, music stops when you launch the assistant.

So this threat is akin to going outside, standing on one foot, and catching rain in an upside-down cup...
 
Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.

Because all it would take is for one person with a super high or super low voice to file a discrimination lawsuit.
 
HomePod directs me to use my phone to unlock my front door or open my garage doors. This potential issue seems to be somewhat under control with iOS.
Sounds like a true smart speaker... delegating all the smart work to the phone.
 