If it turns out you don't have to open the thing to turn off the beep, that's sort of a big barrier removed.
That's a pretty big "if" however, since at this point the hack requires physical attachment of wires to test points — a process that's far more complicated than simply ripping out the speaker.
At this point, I'm not even sure if an AirTag has a software update mechanism at all. I would expect it probably does, but whether that can be reverse-engineered is another matter entirely.
Agree, and if the threat were a serial murderer, you're right that it's irrelevant. I'm thinking more about the type that starts out "slightly creepy but innocent" and ends with a "Class A Felony" -- and that can take more than 3 days. Apple's beep compensating control was always weak, and it'll turn out to be even weaker if it can be easily flash-disabled.
True, although the only compensating factor for that particular weakness is that you can't track an AirTag in even near-real-time if it's being carried by somebody without an iPhone or iPad to report its location.
Based on my own testing, its location will get reported when it's relatively stationary and near other iOS devices (e.g. in a store, coffee shop, or restaurant), but it doesn't get picked up when you're simply walking by people on the street, much less driving by.
Obviously, the tracking risk is far from negligible, but it's significantly different than when it's being carried by an iPhone user, where the victim's iPhone will be reporting its location every 2-3 minutes.
However, even if the AirTag firmware could be wirelessly flashed outside of Apple's own update mechanisms, it's not something that the average user is going to be equipped to do. Maybe the right kind of app on an Android device could push the code over Bluetooth, but you're not going to be able to pull it off with an iPhone — at least not without having a jailbroken one.
The serial seed / secret could be rotated every wakeup between a "nice" value and a "threat" value to fork the data stream. It would halve the datapoints, but it seems unlikely that would be noticeable.
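Purely to make the idea concrete, the alternating-seed scheme might look something like this. Everything here is hypothetical: the derivation function, seed values, and ID length are made up, and the real AirTag rotation reportedly uses elliptic-curve key rotation rather than a simple hash.

```python
import hashlib

def derive_id(seed: bytes, counter: int) -> bytes:
    # Hypothetical ID derivation; the real AirTag scheme rotates
    # elliptic-curve public keys, not hash outputs.
    return hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()[:6]

NICE_SEED = b"seed-from-normal-pairing"   # assumption
THREAT_SEED = b"second-seed-flashed-in"   # assumption

def broadcast_id(wakeup_count: int) -> bytes:
    # Alternate seeds on each wakeup: even wakeups feed the "nice"
    # stream, odd wakeups the "threat" stream, halving each stream's
    # datapoints as suggested above.
    seed = NICE_SEED if wakeup_count % 2 == 0 else THREAT_SEED
    return derive_id(seed, wakeup_count)
```

The owner's phone would only ever be able to resolve half the broadcasts, which is the whole point of the fork.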
That would require some pretty sophisticated coding — quite possibly more than the AirTag is even capable of. After all, these aren't really "smart" devices per se. Under normal operating conditions they don't need to do all that much except transmit a rotating Bluetooth ID according to a pre-defined algorithm, make a sound when asked to, and maintain an internal clock that sounds an alert at a predefined value and resets every time they come back into proximity of their paired iPhone or iPad. Based on what Apple has said, that predefined value can also be updated remotely — presumably through the paired iPhone.
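For what it's worth, that feature set is small enough to caricature in a few lines. This is a toy model, not Apple's firmware; the away-alert threshold and all internals are assumptions (the threshold is set to the roughly-3-day figure mentioned above).

```python
import time

class ToyAirTag:
    # Toy model of the behaviors listed above; the real firmware,
    # rotation schedule, and away-alert threshold are all assumed.
    AWAY_ALERT_SECONDS = 3 * 24 * 3600  # assumption: ~3 days away from owner

    def __init__(self):
        self.last_seen_owner = time.monotonic()
        self.rotation_counter = 0

    def next_broadcast_id(self) -> int:
        # Rotate the broadcast ID on a predefined schedule.
        self.rotation_counter += 1
        return self.rotation_counter  # stand-in for the real derivation

    def saw_paired_device(self):
        # Reset the away timer when the paired iPhone/iPad is nearby.
        self.last_seen_owner = time.monotonic()

    def should_beep(self) -> bool:
        # Sound the alert once the away timer passes the threshold.
        return time.monotonic() - self.last_seen_owner > self.AWAY_ALERT_SECONDS
```

Nothing in that loop needs much compute or memory, which is presumably why the microcontroller can be so modest.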
Most of the heavy lifting is done by the iPhone, which would almost certainly still pick up the "threat" value in the data stream. You're right that the halved datapoint probably wouldn't be detected, but the nearby iPhone would still see what it thinks are two different AirTags in proximity.
Maybe, unless it was smart enough not to transmit the threat-seeded ID to the mark's phone somehow... more investigation required. I'm sure we'll see where the POC rubber hits the road soon.
Based on what I know of the Bluetooth LE spec, I'm not sure this is even possible. Bluetooth IDs are broadcast by nature. They're readable by anything — you can even get BTLE scanning apps on the App Store that will show you every Bluetooth device in proximity.
The only way to do this — in theory — would be to code the AirTag to stop broadcasting the threat seeded ID entirely when the target's iPhone was detected as being in proximity.
However, this assumes that the AirTag microcontroller can handle code this sophisticated in the first place, and that the AirTag even has the necessary hardware to arbitrarily scan for nearby Bluetooth IDs. The attacker would then also need to know the Bluetooth ID of the target's iPhone in order to add it to an exclusion list.
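As a thought experiment only, the suppression logic being described would amount to something like this. The exclusion-list approach and the address shown are hypothetical, and it assumes the AirTag could scan for nearby devices at all:

```python
# Hypothetical Bluetooth address of the target's iPhone, which the
# attacker would somehow need to know in advance.
EXCLUDED_PHONES = {"aa:bb:cc:dd:ee:ff"}

def should_broadcast_threat_id(nearby_devices: set[str]) -> bool:
    # Suppress the threat-seeded ID whenever any excluded device is in
    # scan range. This is exactly what defeats the purpose: no
    # broadcast near the target means no tracking near the target.
    return EXCLUDED_PHONES.isdisjoint(nearby_devices)
```

In practice, iOS also randomizes its Bluetooth addresses periodically, which would make maintaining such an exclusion list even harder.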
Ultimately, however, since this ID is necessary to actually track the AirTag, it would defeat the purpose of the exercise. Essentially, the AirTag would be completely untrackable via the threat-seeded ID whenever the target's iPhone was in proximity. Kind of makes it useless for stalking in this case, unless you're going after somebody who regularly leaves their iPhone behind.
Unless the attacker had multiple threat IDs and cycled between them to circumvent the tracking control?
That could conceivably work. The target's iPhone would see the multiple threat IDs as different AirTags and likely never consider itself as being followed by a single AirTag.
This would still be tricky, however, as the IDs would have to be associated with a known iPhone in the attacker's possession. Since only 16 AirTags can be associated with a single Apple ID, they wouldn't be able to use more than 16 threat IDs from the same device. That would probably be enough, however, and all of the evidence right now suggests that Apple's threat detection simply looks for unknown IDs — it doesn't associate them back to an Apple ID or an owner. In fact, it probably doesn't communicate back to the Find My network at all (other than the normal location reporting for the AirTag in question, of course).
Again, though, this assumes a level of sophistication and power that Apple's microcontroller may not be capable of. It would also have to be a very targeted attack, since the attacker's individual IDs would have to be specifically planted into the compromised AirTag — and all of these would have to be gleaned from existing AirTags, since the rotation is established using a random seed during the pairing process.
In other words, if I wanted to plant 16 threat IDs onto another AirTag, I'd have to find a way to generate those 16 threat ID sets from my iPhone in the first place, and then figure out how to load those 16 sets into the compromised AirTag, keeping in mind that all of those would have to rotate along the same cycle and timing in order to be valid for tracking. Does the microcontroller even have enough memory to store 17 different sets of IDs? Further, does some aspect of the AirTag hardware form part of the randomization seed? There's some fairly complicated public key cryptography going on in all of these exchanges as well.
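To put the cycling question in concrete terms, round-robining 16 pre-generated ID streams in lockstep might be sketched as follows. This is entirely hypothetical: the stream contents, the 96-slot cycle length, and the selection rule are all made up for illustration.

```python
# Hypothetical: 16 pre-generated rotating-ID streams, one per AirTag
# slot on the attacker's Apple ID, each holding the IDs the network
# would expect at successive rotation ticks (96 slots is made up).
ID_SETS = [[f"set{n}-id{t}" for t in range(96)] for n in range(16)]

def current_threat_id(rotation_tick: int) -> str:
    # Round-robin across the 16 streams, emitting each stream's ID for
    # the current tick; every stream must stay on the same cycle and
    # timing to remain valid for Find My lookups.
    which = rotation_tick % len(ID_SETS)
    return ID_SETS[which][rotation_tick % 96]
```

Even in this toy form, the 16 pre-computed streams have to live somewhere in flash, which is exactly the memory question raised above.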
In fact, the next test I'd like to see one of these researchers attempt is to clone the IDs from one AirTag to another. That may not even be possible at a basic level.
Well, except the cost of the attack. It's usually expensive to follow a mark using trackers because you need a comms network (energy) or a tail (person). These resources are unnecessary when leveraging Apple's Find My network.
Right, but my point is how you would leverage the Find My network to do this in the first place. That strikes me as considerably more complicated than it sounds at first glance.
In fact, I suspect it would be far easier to change up the Bluetooth ID algorithm for more localized, short-range tracking. Plant a threat ID that's meaningless to the Find My network, rotating it with the legitimate ID, and you'd have a way of persistently tracking an AirTag independently of Apple's network. As long as the planted Bluetooth ID wasn't in the same class as an AirTag (likely defined by the standard manufacturer prefix), it would be ignored by nearby iPhones and iPads — viewed as just some other random Bluetooth device they don't need to care about.