Next up from security researchers
“SWALLOWING AIRTAGS COULD COMPROMISE YOUR DIGESTIVE SYSTEM… WHAT YOU NEED TO KNOW”
or
“if you glue your house key to your airtag and then lose it, AIRTAGS COULD ALLOW SOMEONE ENTRY INTO YOUR HOUSE!”

I'm actually surprised that there's no TikTok "Eat your AirTag and let's play hide and seek" challenge going on already.
 
I feel like people are focused on the proof of concept demo which redirects the finder to an arbitrary URL. The bigger point is that nothing in the device should be modifiable, yet something was. It doesn’t operate as intended.
 
The URL thing was a POC of editable firmware, as others have noted. Since the firmware is not cryptographically signed, it's been proven the firmware can be altered much more easily than on, for example, an iOS device. This is extremely unlikely to be resolved via a software update, nor would any potential fix survive scrutiny.

It's possible this might even be done without opening the airtag, in bulk, from information gained via reverse engineering.

Potential threat vectors:
  • Reprogramming a tag not to beep, thus circumventing the anti-stalking features.
    • At $29 this makes it extremely cheap/easy to track a massive number of targets. Say every employee in a target organization, or 100 random encounters in a cafe. Yes you could do this in other ways, but a hacked AirTag means you can leverage Apple's resource investment and tracking network to reduce the cost of attack. Also, found airtags are more or less innocuous looking.
  • Reprogramming a tag to report with two device IDs rather than one.
    • Hijack a user's existing tag which continues to function as expected, but also phones home to the attacker. This could be done by cloning a tag and then swapping it, or by altering a target's tag and replacing it. This allows an attacker to follow you everywhere forever, and you would have no mechanism to even discover the attack.
And for bonus points:
  • Reprogramming a tag as a side-channel vector.
    • Airtags will soon be everywhere, including high security facilities looking for RF devices. In many instances these devices will be discovered and dismissed, or remain undiscovered.
    • An airtag's ID number alone (even if it transmits no other information) can be rotated and used as a side channel information vector to re-transmit information about the local environment to a remote attacker.
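A rough sketch of that last idea, purely hypothetical (the streams, IDs, and encoding are illustrative assumptions, not AirTag internals): if an attacker can control which advertisement ID the tag broadcasts each interval, choosing between two precomputed ID streams encodes one bit per interval.

```python
# Hypothetical sketch: encode one bit per broadcast interval by choosing
# between two precomputed ID streams ("A" and "B"). A remote observer who
# knows both streams recovers the bits from which ID was seen when.

def encode_bits(bits, stream_a, stream_b):
    """Tag side: pick the next ID from stream A for a 0 bit, stream B for a 1."""
    return [stream_b[i] if bit else stream_a[i] for i, bit in enumerate(bits)]

def decode_bits(observed, stream_a, stream_b):
    """Observer side: recover the bits from the sequence of observed IDs."""
    return [1 if oid == stream_b[i] else 0 for i, oid in enumerate(observed)]

# Toy demo with fake 2-byte IDs
a = [bytes([0xA0, i]) for i in range(8)]
b = [bytes([0xB0, i]) for i in range(8)]
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode_bits(encode_bits(msg, a, b), a, b) == msg
```

Bandwidth would be tiny (one bit per rotation period), but for exfiltrating something like "device X is present in this room," tiny is enough.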
 
The URL thing was a POC of editable firmware, as others have noted. Since the firmware is not cryptographically signed, it's been proven the firmware can be altered much more easily than on, for example, an iOS device. This is extremely unlikely to be resolved via a software update, nor would any potential fix survive scrutiny.

It's possible this might even be done without opening the airtag, in bulk, from information gained via reverse engineering.

Potential threat vectors:
  • Reprogramming a tag not to beep, thus circumventing the anti-stalking features.
    • At $29 this makes it extremely cheap/easy to track a massive number of targets. Say every employee in a target organization, or 100 random encounters in a cafe. Yes you could do this in other ways, but a hacked AirTag means you can leverage Apple's resource investment and tracking network to reduce the cost of attack. Also, found airtags are more or less innocuous looking.
  • Reprogramming a tag to report with two device IDs rather than one.
    • Hijack a user's existing tag which continues to function as expected, but also phones home to the attacker. This could be done by cloning a tag and then swapping it, or by altering a target's tag and replacing it. This allows an attacker to follow you everywhere forever, and you would have no mechanism to even discover the attack.
And for bonus points:
  • Reprogramming a tag as a side-channel vector.
    • Airtags will soon be everywhere, including high security facilities looking for RF devices. In many instances these devices will be discovered and dismissed, or remain undiscovered.
    • An airtag's ID number alone (even if it transmits no other information) can be rotated and used as a side channel information vector to re-transmit information about the local environment to a remote attacker.

Re: the beep - far easier to just remove the voice coil or magnet, as has already been demonstrated.

Instead, I'd rewrite the firmware to use the speaker as a rudimentary mic, and exfiltrate data by modulating the radios.
 
If that custom URL happened to contain a zero day vulnerability, then in theory your phone itself just got compromised/hacked by scanning a modified AirTag.
If there is a URL that contains a zero day vulnerability, the AirTag adds absolutely nothing to it. I'd put it into messages, or e-mails, to get it to millions of people, not to the one person who finds an AirTag. That's $29.99 per attack!
 
Reprogramming a tag not to beep, thus circumventing the anti-stalking features.
I think somebody else already said this, but it's really far easier just to remove the speaker. Plus, the three-day delay before it makes even the tiniest sound makes this a non-issue for many would-be stalkers anyway — you've got a 72-hour free pass, which is plenty of time to figure out where a potential victim is going.

Plus, the sound isn't all that loud. As one journalist already discovered, it could be easily muffled by concealing the AirTag properly.



Reprogramming a tag to report with two device IDs rather than one.
I'm not sure this is even possible. The tags work over Bluetooth LE, which normally only reports with a single device ID. I don't think the Bluetooth spec allows any device to transmit more than one ID simultaneously. I suppose maybe if you packed another Bluetooth radio inside, but at that point, you'd have to modify the AirTag hardware such that it would no longer look like a normal AirTag.

Further, even if it were possible to do this, the device IDs that are reported have to make sense to Apple's Find My network, and they rotate on a random basis, which means there's presumably some algorithm that Apple uses to control the rotation, otherwise you wouldn't be able to locate your AirTag once the ID had switched. From everything I've studied, this randomization is synced between the AirTag and the owner's iPhone — it's not even stored on Apple's servers.

So, you wouldn't be able to simply make up an ID. You'd have to figure out how to clone the ID from an existing AirTag already paired with a valid iPhone or iPad. Again, however, this assumes you could even make it transmit a second Bluetooth ID from a single Bluetooth radio in the first place.
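For anyone wondering why a made-up ID is useless, here's a deliberately simplified sketch of a Find My-style rolling identifier scheme, based on the public reverse-engineering research. The real protocol uses P-224 elliptic-curve keys and an X9.63 KDF; HMAC-SHA256 here is just a stand-in, and the labels are illustrative.

```python
# Simplified sketch of a rolling-identifier scheme like the one public
# research describes for Find My. HMAC-SHA256 stands in for the real KDF.
import hashlib
import hmac

def next_secret(sk: bytes) -> bytes:
    # Each period's secret is derived from the previous one, starting from
    # a random seed shared between tag and phone at pairing time.
    return hmac.new(sk, b"update", hashlib.sha256).digest()

def advertised_id(sk: bytes) -> bytes:
    # The broadcast ID for a period is derived from that period's secret.
    return hmac.new(sk, b"diversify", hashlib.sha256).digest()[:6]

# Both the tag and the owner's phone start from the same pairing seed, so
# the phone can predict every future ID. An outsider with a made-up ID
# produces values the network can never match to any owner.
seed = b"\x00" * 32          # established at pairing in the real protocol
tag_sk = phone_sk = seed
for _ in range(3):           # three rotation periods stay in sync
    tag_sk, phone_sk = next_secret(tag_sk), next_secret(phone_sk)
    assert advertised_id(tag_sk) == advertised_id(phone_sk)
```

The point of the sketch: the only way to emit IDs that a real phone will ever resolve is to start from a seed that some paired phone already knows, which is why cloning an existing tag is the only plausible route.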

  • Hijack a user's existing tag which continues to function as expected, but also phones home to the attacker. This could be done by cloning a tag and then swapping it, or by altering a target's tag and replacing it. This allows an attacker to follow you everywhere forever, and you would have no mechanism to even discover the attack.
What's interesting about this is that I'm fairly certain the victim's iPhone would still notify them that an unknown AirTag has been found moving with them.

Even if it were theoretically possible to make the AirTag transmit two separate IDs, the victim's iPhone would also pick up both of them — it would have to in order to report the location of the second "stealth" ID — and in doing so, it would notice that this ID represents an unknown AirTag hanging around. Essentially, it would see the single AirTag with two IDs as if it were two different AirTags — one known to the owner, and one "unknown" AirTag.

Some clever hacking could probably block the user from being able to trigger a locator sound using the second ID, which would undoubtedly confuse them as to where the AirTag is that's supposedly "following" them, but it would still continue to pop up notifications on a regular basis.

As far as I know, the anti-stalking notifications and location reporting are inseparable, and they're managed entirely on the iOS side. The same ID that's used to report a nearby AirTag to the Find My network is also used locally by iOS to determine if it's travelling with them. Disable one of those and you've disabled both of them, rendering the AirTag useless for stalking.

  • Reprogramming a tag as a side-channel vector.
    • Airtags will soon be everywhere, including high security facilities looking for RF devices. In many instances these devices will be discovered and dismissed, or remain undiscovered.
    • An airtag's ID number alone (even if it transmits no other information) can be rotated and used as a side channel information vector to re-transmit information about the local environment to a remote attacker.
This could arguably apply to any Bluetooth device, but the problem is that as soon as you change the algorithm by which the Bluetooth ID is rotated, you've essentially rendered it useless for traditional Find My tracking, and the owner would likely notice that their AirTag no longer works the way it's supposed to.

Now, if it's about simply planting unknown AirTags to be used for other tracking purposes, that's certainly a valid attack vector, but again there's nothing unique about AirTags in this regard, as you're simply tracking by the same Bluetooth ID found in every other Bluetooth device, although granted the potential ubiquity of AirTags might make them a more appealing Trojan horse for this kind of an attack.
 
Re: the beep - far easier to just remove the voice coil or magnet, as has already been demonstrated.

Instead, I'd rewrite the firmware to use the speaker as a rudimentary mic, and exfiltrate data by modulating the radios.
If it turns out you don't have to open the thing to turn off the beep, that's sorta a big barrier removed.

Would have to look at a wiring diagram to see what is possible there re microphone -- I doubt it. The IC on the thing is likely pretty limited in terms of what it could pick up environmentally. These vectors are pretty remote and probably in the realm of tailored access attacks.
I think somebody else already said this, but it's really far easier just to remove the speaker. Plus, the three-day delay before it makes even the tiniest sound makes this a non-issue for many would-be stalkers anyway — you've got a 72-hour free pass, which is plenty of time to figure out where a potential victim is going.

Plus, the sound isn't all that loud. As one journalist already discovered, it could be easily muffled by concealing the AirTag properly.
Agree, and if the threat was a serial murderer you're right, it's irrelevant. I'm thinking more about the "started out slightly creepy and innocent" and ends with "Class A Felony" type -- and that can take more than 3 days. Apple's beep compensating control was always weak, and it'll turn out to be even weaker if it can be easily flash-disabled.

I'm not sure this is even possible. The tags work over Bluetooth LE, which normally only reports with a single device ID. I don't think the Bluetooth spec allows any device to transmit more than one ID simultaneously. I suppose maybe if you packed another Bluetooth radio inside, but at that point, you'd have to modify the AirTag hardware such that it would no longer look like a normal AirTag.

Further, even if it were possible to do this, the device IDs that are reported have to make sense to Apple's Find My network, and they rotate on a random basis, which means there's presumably some algorithm that Apple uses to control the rotation, otherwise you wouldn't be able to locate your AirTag once the ID had switched. From everything I've studied, this randomization is synced between the AirTag and the owner's iPhone — it's not even stored on Apple's servers.

The serial seed / secret could be rotated every wakeup between a "nice" value and a "threat" value to fork the data stream in two. It would halve the datapoints, but it seems unlikely that would be noticeable.
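What I mean, sketched out (all names and the derivation are hypothetical stand-ins, not AirTag internals): alternate each wakeup between the legitimate seed and a planted one, with each stream advancing at half rate.

```python
# Hypothetical sketch of forking the broadcast stream: alternate each
# wake-up between the legitimate ("nice") secret and a planted ("threat")
# secret. Each stream sees half the datapoints but stays internally valid.
import hashlib
import hmac

def derive_id(sk: bytes, counter: int) -> bytes:
    # Stand-in derivation of a broadcast ID from a secret and a counter.
    return hmac.new(sk, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:6]

def broadcast_schedule(nice_sk: bytes, threat_sk: bytes, wakeups: int):
    ids = []
    for n in range(wakeups):
        sk = nice_sk if n % 2 == 0 else threat_sk  # alternate the streams
        ids.append(derive_id(sk, n // 2))          # each stream advances at half rate
    return ids

sched = broadcast_schedule(b"N" * 32, b"T" * 32, 6)
nice, threat = sched[0::2], sched[1::2]
assert len(nice) == len(threat) == 3  # each stream gets half the points
```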

So, you wouldn't be able to simply make up an ID. You'd have to figure out how to clone the ID from an existing AirTag already paired with a valid iPhone or iPad. Again, however, this assumes you could even make it transmit a second Bluetooth ID from a single Bluetooth radio in the first place.


What's interesting about this is that I'm fairly certain the victim's iPhone would still notify them that an unknown AirTag has been found moving with them.

Even if it were theoretically possible to make the AirTag transmit two separate IDs, the victim's iPhone would also pick up both of them — it would have to in order to report the location of the second "stealth" ID — and in doing so, it would notice that this ID represents an unknown AirTag hanging around. Essentially, it would see the single AirTag with two IDs as if it were two different AirTags — one known to the owner, and one "unknown" AirTag.

Maybe, unless it was smart enough not to transmit the threat seeded ID to the mark's phone somehow... more investigation required. I'm sure we'll see where the POC rubber hits the road soon.

Some clever hacking could probably block the user from being able to trigger a locator sound using the second ID, which would undoubtedly confuse them as to where the AirTag is that's supposedly "following" them, but it would still continue to pop up notifications on a regular basis.

As far as I know, the anti-stalking notifications and location reporting are inseparable, and they're managed entirely on the iOS side. The same ID that's used to report a nearby AirTag to the Find My network is also used locally by iOS to determine if it's travelling with them. Disable one of those and you've disabled both of them, rendering the AirTag useless for stalking.

Unless the attacker had multiple threat IDs and cycled between them to circumvent the tracking control?

This could arguably apply to any Bluetooth device, but the problem is that as soon as you change the algorithm by which the Bluetooth ID is rotated, you've essentially rendered it useless for traditional Find My tracking, and the owner would likely notice that their AirTag no longer works the way it's supposed to.

Now, if it's about simply planting unknown AirTags to be used for other tracking purposes, that's certainly a valid attack vector, but again there's nothing unique about AirTags in this regard, as you're simply tracking by the same Bluetooth ID found in every other Bluetooth device, although granted the potential ubiquity of AirTags might make them a more appealing Trojan horse for this kind of an attack.
Well, except the cost of the attack. It's usually expensive to follow a mark using trackers because you need a comms network (energy) or a tail (person). These resources are unnecessary when leveraging the Apple find my network.
 
If it turns out you don't have to open the thing to turn off the beep, that's sorta a big barrier removed.
That's a pretty big "if" however, since at this point the hack requires physical attachment of wires to test points — a process that's far more complicated than simply ripping out the speaker.

At this point, I'm not even sure if an AirTag has a software update mechanism at all. I would expect it probably does, but whether that can be reverse-engineered is another matter entirely.

Agree, and if the threat was a serial murderer you're right, it's irrelevant. I'm thinking more about the "started out slightly creepy and innocent" and ends with "Class A Felony" type -- and that can take more than 3 days. Apple's beep compensating control was always weak, and it'll turn out to be even weaker if it can be easily flash-disabled.
True, although the only compensating factor for that particular weakness is that you can't track an AirTag even in near-real-time if it's being carried by somebody without an iPhone or iPad to report its location.

Based on my own testing, its location will get reported when it's relatively stationary and near other iOS devices (e.g. in a store, coffee shop, or restaurant), but it doesn't get picked up when you're simply walking by people on the street, much less driving by.

Obviously, the tracking risk is far from negligible, but it's significantly different than when it's being carried by an iPhone user, where the victim's iPhone will be reporting its location every 2-3 minutes.

However, even if the AirTag firmware could be wirelessly flashed outside of Apple's own update mechanisms, it's not something that the average user is going to be equipped to do. Maybe the right kind of app on an Android device could push the code over Bluetooth, but you're not going to be able to pull it off with an iPhone — at least not without having a jailbroken one.

The serial seed / secret could be rotated every wakeup between a "nice" value and a "threat" value to fork the data stream in two. It would halve the datapoints, but it seems unlikely that would be noticeable.
That would require some pretty sophisticated coding — quite possibly more than the AirTag is even capable of. After all, these aren't really "smart" devices per se. Under normal operating conditions they don't need to do all that much except transmit a rotating Bluetooth ID according to a pre-defined algorithm, make a sound when asked to, and maintain an internal clock that sounds an alert at a predefined value and resets every time they come back into proximity of their paired iPhone or iPad. Based on what Apple has said, that predefined value can also be updated remotely — presumably through the paired iPhone.

Most of the heavy lifting is done by the iPhone, which would almost certainly still pick up the "threat" value in the data stream. You're right that the halved datapoint probably wouldn't be detected, but the nearby iPhone would still see what it thinks are two different AirTags in proximity.

Maybe, unless it was smart enough not to transmit the threat seeded ID to the mark's phone somehow... more investigation required. I'm sure we'll see where the POC rubber hits the road soon.
Based on what I know of the Bluetooth LE spec, I'm not sure this is even possible. Bluetooth IDs are broadcast by nature. They're readable by anything — you can even get BTLE scanning apps on the App Store that will show you every Bluetooth device in proximity.

The only way to do this — in theory — would be to code the AirTag to stop broadcasting the threat seeded ID entirely when the target's iPhone was detected as being in proximity.

However, this assumes that the AirTag microcontroller can handle code this sophisticated in the first place, and that the AirTag even has the necessary hardware to arbitrarily scan for nearby Bluetooth IDs. The attacker would then also need to know the Bluetooth ID of the target's iPhone in order to add it to an exclusion list.

Ultimately, however, since this ID is necessary to actually track the AirTag, it would defeat the purpose of the exercise. Essentially, the AirTag would be completely untrackable via the "threat seeded ID" whenever the target's iPhone was in proximity. Kind of makes it useless for stalking in this case, unless you're going after somebody who regularly leaves their iPhone behind.
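The self-defeating nature of that "suppress when the target is near" logic is easy to see in a sketch (all names hypothetical):

```python
# Sketch of the self-defeating suppression logic described above. The
# threat ID would only be broadcast when the target's phone is NOT nearby,
# which is exactly when the tag can't be tracked through that phone anyway.
def should_broadcast_threat_id(nearby_bt_ids: set, target_phone_id: str) -> bool:
    return target_phone_id not in nearby_bt_ids

# With the target's phone in range, the stealth stream goes silent:
assert not should_broadcast_threat_id({"phone-123", "earbuds-9"}, "phone-123")
# Only when the phone is absent does the threat ID transmit at all:
assert should_broadcast_threat_id({"earbuds-9"}, "phone-123")
```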

Unless the attacker had multiple threat IDs and cycled between them to circumvent the tracking control?
That could conceivably work. The target's iPhone would see the multiple threat IDs as different AirTags and likely never consider itself as being followed by a single AirTag.

This would still be tricky, however, as the IDs would have to be associated to a known iPhone in the attacker's possession. Since only 16 AirTags can be associated with a single Apple ID, they wouldn't be able to use more than 16 threat IDs from the same device. That would probably be enough, however, and all of the evidence right now suggests that Apple's threat detection simply looks for unknown IDs — it doesn't associate them back to an Apple ID or an owner. In fact, it probably doesn't communicate back to the Find My network at all (other than the normal location reporting for the AirTag in question, of course).

Again, though, this assumes a level of sophistication and power that Apple's microcontroller may not be capable of. It would also have to be a very targeted attack, since the attacker's individual IDs would have to be specifically planted into the compromised AirTag — and all of these would have to be gleaned from existing AirTags, since the rotation is established using a random seed during the pairing process.

In other words, if I wanted to plant 16 threat IDs onto another AirTag, I'd have to find a way to generate those 16 threat ID sets from my iPhone in the first place, and then figure out how to load those 16 sets into the compromised AirTag, keeping in mind that all of those would have to rotate along the same cycle and timing in order to be valid for tracking. Does the microcontroller even have enough memory to store 17 different sets of IDs? Further, does some aspect of the AirTag hardware form part of the randomization seed? There's some fairly complicated public key cryptography going on in all of these exchanges as well.
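On the memory question, a back-of-envelope helps (assumptions: IDs are derived on the fly from a 32-byte seed per set, versus precomputing a year of 28-byte P-224 public keys rotating every 15 minutes, which is roughly what the public research describes):

```python
# Back-of-envelope: storing seeds is cheap, storing precomputed keys is not.
seeds = 17                       # 16 threat ID sets plus the legitimate one
seed_bytes = 32                  # assumed seed size per set
print(seeds * seed_bytes)        # 544 bytes of state -- trivial

# Versus precomputing a year of 28-byte public keys at a 15-minute rotation:
keys_per_year = 365 * 24 * 4     # 35,040 rotation periods
print(keys_per_year * 28)        # 981,120 bytes, ~980 KB -- far beyond the
                                 # spare flash on a small tracker MCU
```

So storage per se probably isn't the blocker if the keys can be derived; the real questions are whether the derivation state can be extracted and replanted at all.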

In fact, the next tests I'd like to see one of these researchers attempt is to clone the IDs from one AirTag to another. That may not even be possible at a basic level.

Well, except the cost of the attack. It's usually expensive to follow a mark using trackers because you need a comms network (energy) or a tail (person). These resources are unnecessary when leveraging the Apple find my network.
Right, but my point is how you would leverage the Find My network to do this in the first place. That strikes me as considerably more complicated than it sounds at first glance.

In fact, I suspect it would be far easier to change up the Bluetooth ID algorithm for more localized, short-range tracking. Plant a threat ID that's meaningless to the Find My network, rotating it with the legitimate ID, and you'd have a way of persistently tracking an AirTag independently from Apple's network, and as long as the planted Bluetooth ID wasn't in the same class as an AirTag (likely defined by the standard manufacturer prefix), it would be ignored by nearby iPhones and iPads — it would be viewed as just some other random Bluetooth device that they don't need to care about.
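Concretely, "not in the same class as an AirTag" plays out at the BLE advertising layer. A manufacturer-specific AD structure leads with the 16-bit company identifier (Apple's registered ID is 0x004C), so a scanner filtering on that prefix would ignore the identical payload sent under a different company ID. A sketch, with made-up payload bytes:

```python
# Sketch of BLE manufacturer-specific advertising data. An iPhone looking
# for Apple offline-finding adverts keys off the company ID (0x004C); the
# same payload under another company ID reads as just some other device.
import struct

def mfg_adv(company_id: int, payload: bytes) -> bytes:
    # AD structure: [length][0xFF = manufacturer-specific][company ID LE][payload]
    body = struct.pack("<BH", 0xFF, company_id) + payload
    return bytes([len(body)]) + body

payload = b"\x12\x19" + b"\x00" * 4       # stand-in bytes, not a real key
apple = mfg_adv(0x004C, payload)          # looks like an Apple device
other = mfg_adv(0xFFFF, payload)          # 0xFFFF is the reserved/test company ID

def looks_like_apple(adv: bytes) -> bool:
    return adv[1] == 0xFF and struct.unpack_from("<H", adv, 2)[0] == 0x004C

assert looks_like_apple(apple) and not looks_like_apple(other)
```

That's why the planted ID would be invisible to the anti-stalking logic while remaining perfectly trackable by the attacker's own scanner.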
 


The inevitable race to hack Apple's AirTag item tracker has reportedly been won by a German security researcher, who managed to break into the device's microcontroller and successfully modify its firmware.


Thomas Roth, aka Stack Smashing, shared his achievement in a tweet and explained that re-flashing the device's microcontroller had enabled him to change the URL for Lost Mode, so that it opens his personal website on a nearby iPhone or other NFC-enabled device instead of directly linking to an official Find My web address.

Managing to break into the microcontroller is a crucial hurdle to overcome if the aim is to further manipulate the device's hardware, as The 8-Bit notes. Roth also shared a video comparing a normal AirTag to his modified device.


How the hack might be exploited in the wild is unclear at this time, but the fact that it can be done may open up avenues for the jailbreaking community to customize the device in ways Apple didn't intend. On a darker note, it could also present opportunities for bad actors to modify the AirTag software for the purposes of phishing and more.

That's assuming Apple isn't able to remotely block such a modified AirTag from communicating with the Find My network. Alternately, Apple might be able to lock down the firmware in a future AirTag software update. Watch this space.

Article Link: AirTag Successfully Hacked to Show Custom URL in Lost Mode
"Yessss!!!! I'm a loser and don't have a life so I pointlessly hack into electronics like Air tags!"
 
Unless you're a high profile target. Then you just need to get the high profile target's kid to scan the airtag, infect his phone with a 0-day drive by, and suddenly you have network access to the high profile target's home network.
If you are a high profile target and let your kids into your network instead of giving them their own, then you are a high profile dumb target.
 
Theoretically someone could hack an AirTag to show up to others phones as "lost", but instead of pointing to the Apple URL that assists in locating the owner of the tag, the URL would be changed to a custom one.
Well, an NFC tag and a small piezo in an AirTag housing would be perfectly sufficient for this dumb attack; there is no need to hack the original hardware to achieve this.

Never trust a URL, no matter whether it comes from a QR code, an NFC tag, or anywhere else.
 
If you are a high profile target and let your kids into your network instead of giving them their own, then you are a high profile dumb target.
High profile doesn't mean CEO. It might mean rank and file employees who have certain access privileges, such as a VPN login.
 