
diipii

macrumors 6502a
Dec 6, 2012
618
552
UK
I have used PCs and Macs for 40 years with no security whatsoever and never had a problem.
Maybe because, like most of you, I am just an ordinary bloke, hardly ever involved in international espionage, and with no inclination to imagine myself anything else.
 

doc james

macrumors regular
May 3, 2007
102
91
United Kingdom
Wait, so if I get a dog whistle, Siri will finally understand me?

Balancing my bike at the traffic lights, looking cool AF triple tap AirPod "Set volume to 80%"

..."William Oliver Stone (born September 15, 1946) is an American writer and filmmaker"

Sometimes I think Siri just wants me to die single :(
 

Superhai

macrumors 6502a
Apr 21, 2010
716
523
My thought is that the word should be "inaudible" and NOT "subliminal". As in, "The devices can react to inaudible commands."
There are references to different research studies: one looks at inaudible commands, the other at commands embedded into other sounds or noise. I think the inaudible kind is easier to avoid, unless it exploits lower harmonics interacting with the body where the mic sits to produce local vibrations in the human voice band.

There is an interesting research paper: https://www.usenix.org/node/191969
 

M2M

macrumors 6502
Jan 12, 2009
348
488
Consequence of not using a proper analog anti-aliasing filter before the ADC? If the signal is not analog lowpass filtered to half the sample rate before sampling, then frequencies above half the sample rate 'alias' down into frequencies within the normal sample range. For example, if the sample rate is 40 kHz, then sounds between 20 kHz and 40 kHz (outside the normal hearing range) sound identical to the sampler as sounds between 0 Hz and 20 kHz (inside the normal hearing range). This is the audio equivalent of those moiré patterns you sometimes see in photos with textures finer than the pixel density of the camera or monitor.
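
As a quick illustration of that aliasing, here is a minimal NumPy sketch using the same made-up numbers as the example above (nothing from the actual papers):

[CODE]
import numpy as np

# Sampling a 25 kHz tone at 40 kHz (Nyquist = 20 kHz) with no analog
# lowpass in front produces exactly the same samples as a 15 kHz tone,
# because 40 kHz - 25 kHz = 15 kHz.
fs = 40_000                       # sample rate, Hz
t = np.arange(40) / fs            # 1 ms worth of sample instants

ultrasonic = np.cos(2 * np.pi * 25_000 * t)   # inaudible to humans
audible = np.cos(2 * np.pi * 15_000 * t)      # well within hearing range

print(np.allclose(ultrasonic, audible))       # True: the ADC cannot tell them apart
[/CODE]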

If so, this would imply they tried saving a few cents by either leaving out the front-end analog lowpass filter or using a very cheap first-order filter, which isn't something that could be fixed in firmware. A cheap filter can be tricked just by increasing the volume of the sound, and since you can't hear it anyway, it doesn't matter how loud they make it. This must be the approach they used, since the article states "While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible."

Fixing this requires designing in stronger high-order front-end filtering, which ultimately should only help Siri actually recognize you when you 'do' speak. And we already know she needs all the help she can get ;)
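
To make that concrete, here is a rough Python/SciPy sketch simulating what a stronger front-end lowpass would do to a mixed voice-plus-ultrasonic signal. The 8th-order Butterworth and 8 kHz corner are my own illustrative guesses, not Apple's actual design, and in a real device the filter has to be analog hardware ahead of the ADC; the code only models its frequency selectivity:

[CODE]
import numpy as np
from scipy.signal import butter, sosfilt

fs = 192_000                                  # oversampled capture rate, Hz
t = np.arange(0, 0.02, 1 / fs)

voice = np.sin(2 * np.pi * 1_000 * t)         # in-band "speech" tone
attack = np.sin(2 * np.pi * 30_000 * t)       # ultrasonic carrier

# 8th-order Butterworth lowpass with an 8 kHz corner (roughly the voice band)
sos = butter(8, 8_000, btype="low", fs=fs, output="sos")

def residual_db(sig):
    """Peak level after filtering, skipping the filter's start-up transient."""
    out = sosfilt(sos, sig)[len(t) // 2:]
    return 20 * np.log10(np.max(np.abs(out)))

print(f"voice tone:  {residual_db(voice):6.1f} dB")   # ~0 dB: passes through
print(f"30 kHz tone: {residual_db(attack):6.1f} dB")  # below -90 dB: effectively gone
[/CODE]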

Here is a white paper describing the phenomenon...

http://www.ni.com/white-paper/54448/en/
One of the most useful comments I’ve read in a long time, if not ever, on MacRumors! Thanks for this!
 
  • Like
Reactions: groovyd

groovyd

Suspended
Jun 24, 2013
1,227
621
Atlanta
... an interesting consideration here is that this 'hack' most likely could never come from any of the sound-making devices currently in your home, as most if not all are already bandwidth-limited to under 20 kHz (within normal hearing range) by either a low sample rate or the speaker's achievable frequency response. In other words, for this attack to actually occur requires a very special sound reproduction system you don't just find in your average consumer device. Now perhaps some of the most esoteric audiophiles with $10k stereo systems might need to worry ;)
 
  • Like
Reactions: kdarling

manu chao

macrumors 604
Jul 30, 2003
7,219
3,031
Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.
Sure, but human voice frequencies don't necessarily match human hearing frequencies, meaning the human voice could quite possibly produce frequencies that human ears cannot hear. Even if evolution has tended to get rid of those non-useful frequencies (though who knows, some of them might have been useful when interacting with animals), some might have been impractical to get rid of (i.e., human vocal cords optimised for human hearing might still produce other frequencies as a by-product). Plus there is the whole ageing aspect, in that older people cannot hear higher frequencies.
Because all it would take is for one person with a super high or super low voice to file a discrimination lawsuit.
Sure, because all those people with voices outside the human hearing range can communicate so well with other people already.
Wait, so if I get a dog whistle, Siri will finally understand me?

Balancing my bike at the traffic lights, looking cool AF triple tap AirPod "Set volume to 80%"

..."William Oliver Stone (born September 15, 1946) is an American writer and filmmaker"

Sometimes I think Siri just wants me to die single :(
Speech recognition is one step, but acting on it correctly is another. I got this Siri response:

OK, I found this on the web for 'Set a four minute timer':
 
  • Like
Reactions: doc james

fairuz

macrumors 68020
Aug 27, 2017
2,486
2,589
Silicon Valley
ML is soft logic, very hard to alter for security purposes. I'll bet they didn't have any ultrasonic training samples either!
Reminds me of the mosquito ringtones that kids used for a while to hide calls from older folk :)

As some have already noted, this should prompt makers to filter input to voice frequencies. Easy.
That was a thing? Oh man, maybe my ears were always old. Also, that sounds like torture.
 

groovyd

Suspended
Jun 24, 2013
1,227
621
Atlanta
Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.

It's just considered good design in any sampling system, a fundamental requirement actually. I believe they are limiting the frequency range per the Nyquist theorem, perhaps just not strongly enough: a low-order filter is typically good enough for general-purpose audio processing but not good enough for 'security' purposes. As a privacy- and security-centric company, we can rest assured they will step up their game in the next design. I'd bet a good bit the next round of phones won't have this problem. No doubt a team there is assigned to fixing this as we speak.
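
A back-of-the-envelope comparison shows why filter order matters so much here (the 8 kHz corner and 25 kHz attack tone are illustrative numbers of my own, not Apple's specs):

[CODE]
import numpy as np

# n-th order Butterworth lowpass magnitude: |H(f)| = 1 / sqrt(1 + (f/fc)^(2n))
fc, f_attack = 8e3, 25e3          # hypothetical corner and attack frequencies, Hz

for order in (1, 8):
    mag = 1 / np.sqrt(1 + (f_attack / fc) ** (2 * order))
    print(f"order {order}: {20 * np.log10(mag):6.1f} dB at 25 kHz")

# order 1:  -10.3 dB -> beaten by roughly 3x more ultrasonic drive level
# order 8:  -79.2 dB -> effectively invisible to the ADC as well
[/CODE]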
 

Defactomonkey

macrumors member
May 11, 2012
49
34
Boston, MA



Researchers in the United States and China have been performing tests in an effort to demonstrate that "hidden" commands, or those undetectable to human ears, can reach AI assistants like Siri and force them to perform actions their owners never intended. The research was highlighted in a piece today by The New York Times, suggesting that these subliminal commands can dial phone numbers, open websites, and perform more potentially malicious actions if placed in the wrong hands.

A group of students from the University of California, Berkeley and Georgetown University published a research paper this month, stating that they could embed commands into music recordings or spoken text. When played near an Amazon Echo or Apple iPhone, a person would just hear the song or someone speaking, while Siri and Alexa "might hear an instruction to add something to your shopping list." Or, more dangerously, unlock doors, wire money from your bank, and purchase items online.


The method by which the students were able to accomplish the hidden commands shouldn't be a concern for the public at large, but one of the paper's authors, Nicholas Carlini, believes malicious parties could already be making inroads with similar technology.
Last year, researchers based at Princeton University and Zhejiang University in China performed similar tests, demonstrating that AI assistants could be activated through frequencies not heard by humans. In a technique dubbed "DolphinAttack," the researchers built a transmitter to send the hidden command that dialed a specific phone number, while other tests took pictures and sent text messages. DolphinAttack is said to be limited in terms of range, however, since it "must be close to the receiving device."

In yet another research effort, a group at the University of Illinois at Urbana-Champaign showed this range limitation could be extended, demonstrating commands received from 25 feet away. For the most recent group of researchers from Berkeley, Carlini told The New York Times that he was "confident" his team would soon be able to deliver successful commands "against any smart device system on the market." He said the group wants to prove to companies that this flaw is a potential problem, "and then hope that other people will say, 'O.K. this is possible, now let's try and fix it.'"

For security purposes, Apple is stringent with certain HomeKit-related Siri commands, locking them behind device passcodes whenever users have passcodes enabled. For example, if you want to unlock your front door with a connected smart lock, you can ask Siri to do so, but you'll have to enter your passcode on an iPhone or iPad after issuing the command. The HomePod, on the other hand, purposefully lacks this functionality.

Article Link: Researchers Demonstrate Subliminal Smart Device Commands That Have Potential for Malicious Attacks

If they prevent ultrasound from being heard by a smartphone, that is a major blow to indoor GPS apps such as Sonitor’s Forkbeard app. Hope the fix is more elegant than that.
Agreed. But why wouldn't Apple have foreseen this and limited the frequency range in the first place? There's literally no need for phone mics to detect anything below/above human voice frequencies.

This is not true at all. Ultrasound is a major player in indoor GPS.
 

manu chao

macrumors 604
Jul 30, 2003
7,219
3,031
If they prevent ultrasound from being heard by a smartphone, that is a major blow to indoor GPS apps such as Sonitor’s Forkbeard app. Hope the fix is more elegant than that.

This is not true at all. Ultrasound is a major player in indoor GPS.
Would it be possible for them to split the signal path? First using a high-pass filter that only looks at the ultrasound range and, if the signal contains indoor GPS information, processes it as such? And then using a low-pass filter to look only at the range below ultrasound and process it for any voice commands?
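
Something like that band split is easy to sketch in software. The corner frequencies and the pipeline below are illustrative guesses of mine, not Sonitor's or Apple's actual design:

[CODE]
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96_000  # a capture rate high enough to contain the ultrasonic band

# Voice band for the assistant, ultrasonic band for indoor positioning
sos_voice = butter(8, 8_000, btype="low", fs=fs, output="sos")
sos_ultra = butter(8, 18_000, btype="high", fs=fs, output="sos")

def split(mic_samples: np.ndarray):
    """Return (voice_band, ultrasonic_band) versions of the same capture."""
    return sosfilt(sos_voice, mic_samples), sosfilt(sos_ultra, mic_samples)

# voice_band      -> speech recognizer (Siri never sees ultrasound)
# ultrasonic_band -> indoor-positioning decoder (Forkbeard-style beacons)
[/CODE]

The catch is that this only helps after a clean capture: if the ADC itself has already aliased ultrasound down into the voice band (as discussed earlier in the thread), no amount of digital splitting can undo it.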
 

Defactomonkey

macrumors member
May 11, 2012
49
34
Boston, MA
Would it be possible for them to split the signal path? First using a high-pass filter that only looks at the ultrasound range and, if the signal contains indoor GPS information, processes it as such? And then using a low-pass filter to look only at the range below ultrasound and process it for any voice commands?

The ultrasound contains a 10-bit binary string, so it should be easy to allow this. Also, it can be limited at the app level. Seems more a fix to Siri than to the API.
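
For illustration only, decoding that kind of payload could look roughly like the hypothetical sketch below. The carrier frequency, bit rate, and on-off keying scheme are all my own assumptions, not Sonitor's documented format:

[CODE]
import numpy as np

FS, CARRIER, BIT_MS = 96_000, 20_000, 10   # all hypothetical values

def decode_bits(samples: np.ndarray, n_bits: int = 10) -> list:
    """On-off-keyed decode: measure carrier energy in each bit slot."""
    slot = FS * BIT_MS // 1000                    # samples per bit slot
    t = np.arange(slot) / FS
    probe = np.exp(-2j * np.pi * CARRIER * t)     # single-bin DFT at the carrier
    energies = [abs(np.dot(samples[i * slot:(i + 1) * slot], probe))
                for i in range(n_bits)]
    threshold = max(energies) / 2                 # crude adaptive slicer
    return [int(e > threshold) for e in energies]
[/CODE]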
 

jagooch

macrumors 6502a
Jul 17, 2009
781
238
Denver, CO
This is pretty old news. We have discussed this frequently over the last 4 years, as it was discovered soon after the Amazon Echo was released. It's educational (for them) that some students decided to repeat the research already done by others, and it's definitely something that smart speaker manufacturers should keep in mind when designing products.
 