Okay, so I have to have a fake button which can impersonate a legitimate Touch ID sensor paired to a specific phone, plus the phone has to already be powered on, post-legitimate-passcode-entry, in order to break the Touch ID/passcode security? Under what scenarios could this "possibility" (??) occur?

... they had an obligation to warn their customers that the iOS update could trash a phone with a compromised home button/Touch ID ...

As I understand it, the issue is that the phone does not check that the Touch ID sensor is actually paired to the phone until you run the update. It doesn't check on every boot, so you can have a mismatched device and sensor and it will still work.
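To make that concrete, here is a purely speculative sketch in Swift (nobody outside Apple has seen the real validation logic, and every type and name below is invented) of a pairing check that only runs during an update or restore:

[CODE]
// Speculative illustration only; invented names, not Apple's code.
// It models the behaviour described above: the sensor's pairing with the
// Secure Enclave is validated during an update/restore, not on an ordinary
// boot, so a mismatched sensor keeps working day to day.

enum BootContext {
    case normalBoot
    case updateOrRestore
}

struct Device {
    let enclavePairedSensorID: String   // sensor ID recorded when paired at the factory
    let installedSensorID: String       // sensor ID reported by the hardware right now
}

func sensorCheckPasses(on device: Device, during context: BootContext) -> Bool {
    // On a normal boot the pairing is never consulted at all.
    guard context == .updateOrRestore else { return true }

    // Only here does a mismatch surface, reported to iTunes as "Error 53".
    return device.enclavePairedSensorID == device.installedSensorID
}
[/CODE]

Under that model a swapped sensor works fine for months and only causes trouble at the next restore, which is exactly the behaviour people are reporting.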

Yeah, you and I just outlined such a scenario. Let's say you're a Russian anti-Putin campaigner, and the FSB somehow intercepts your device. Maybe they break into your home or something, I don't care; it's not important. They plant a bugged Touch ID sensor in it. Then some time later, you're walking along and get harassed by a couple of big guys who grab your phone while it's still powered on. They have a way to make the sensor they installed replay the last fingerprint.

It's not important to Apple who is trying to break into your device or why; they advertise a secure device. They will try to build a system which is impenetrable to all but the legitimate owner.

As for informing owners: yeah, they probably should have done a better job letting people know that. That said, I own an iPhone 5S and I've always known that only Apple can replace the TouchID sensor. I believe they mentioned it either during the keynote or in online documentation somewhere when it was first introduced.

You can't expect everybody to know that, though: they should have made it clearer to customers that the Touch ID sensor shouldn't be replaced by unauthorised third parties. That's a very fair criticism.

Way to split hairs. My point is that you can store even a malicious OS on a piece of hardware. It sits dormant until it receives power (you plugging it into your Mac), then it becomes a keylogger. At best.

It's not splitting hairs. The iPhone is a fully-encrypted-by-default device with biometric authentication; the Mac is neither of those things. The I/O ports on the iPhone are heavily restricted because it's a next-gen platform which doesn't need legacy hardware support or even the greatest I/O speed. Thunderbolt is an external PCIe cable - the exploit you mentioned is because of a legacy feature of PCIe - and it's like that because it's a pro-oriented cable for those who need maximum speed. Lightning (the only physical port the iPhone has) is very, very different to Thunderbolt or USB - it's built to emphasise safety, hence the chips in the cables (it's a fascinating approach to I/O).

Anyway, what is your point? That exploits exist on other platforms, so Apple should just give up trying to secure their iOS devices? As with the guy above: Apple will try their best to build a system which is absolutely impenetrable to everybody but the owner, and to the owner it will be very easily accessible.

And like I said, even if you could exploit an iOS device and get full privileges, I'm not even sure what information you'd be able to get. I do know, however, that you definitely would not get anything from the Secure Enclave - so you won't be able to grab the raw decryption key, iTunes Store tokens, and whatever else it holds. Those components are isolated in hardware from the main CPU, so no amount of software with any level of privileges will help you read from them.
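For anyone curious what that isolation looks like from the developer side, here's a small Swift example against Apple's public Security framework (the application tag string is just a placeholder, and this only runs on a real device with a Secure Enclave): a key generated inside the enclave can be used for signing, but its raw bytes can never be exported to the application processor.

[CODE]
import Foundation
import Security

// Ask the Secure Enclave to generate a P-256 key. The private key is created
// and kept inside the enclave; the app only ever receives an opaque reference.
let attributes: [String: Any] = [
    kSecAttrKeyType as String:       kSecAttrKeyTypeECSECPrimeRandom,
    kSecAttrKeySizeInBits as String: 256,
    kSecAttrTokenID as String:       kSecAttrTokenIDSecureEnclave,
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String:    true,
        kSecAttrApplicationTag as String: "com.example.enclave-demo".data(using: .utf8)!
    ]
]

var error: Unmanaged<CFError>?
if let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error) {
    // You can sign with the key, but you cannot pull the key material out:
    // SecKeyCopyExternalRepresentation fails for Secure Enclave keys.
    if SecKeyCopyExternalRepresentation(privateKey, &error) == nil {
        print("Raw key material is not extractable, even with full privileges in this process.")
    }
}
[/CODE]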
 
We could nitpick back and forth all decade on yes vs. no. And sorry, but that 801-year-old trial-by-jury law isn't the version we have in the US. ;)
I probably should have limited my comment to US law.


Actually, it is the same law that you have in the US. Granted, it was considered and reaffirmed by the US government around 230 years ago, but it more or less formed the foundation of your Constitution. https://www.loc.gov/exhibits/magna-carta-muse-and-mentor/magna-carta-and-the-us-constitution.html

The point being that age alone doesn't make something obsolete.
 
Egg Freckles did a nice article on this: some history (repair, Apple third-party parts, and device testing) and then the current issue. It's a pretty short article; check it out: http://eggfreckles.net/notes/error53/

I'm surprised at how thoroughly Apple checks iPhones when repairing/replacing them. They've pulled my iPhone apart to check the screen and other components before replacing it for me. I asked, and they said "you'd be surprised" at what parts get replaced in the devices (I pried, but they didn't want to share more).

Gary
I used to work at Apple. I had a phone come in from Asurion where they had manufactured or bought a fake iPhone 5 housing and put genuine iPhone 5 parts from a broken phone in it. That was the craziest thing I have seen. Well, besides the people that buy broken phones and put counterfeit boards in them to prey on people who had AppleCare or AppleCare+ and were not aware of how the warranty worked. People do crazy things.
 
I used to work at Apple. I had a phone come in from Asurion where they had manufactured or bought a fake iPhone 5 housing and put genuine iPhone 5 parts from a broken phone in it. That was the craziest thing I have seen. Well, besides the people that buy broken phones and put counterfeit boards in them to prey on people who had AppleCare or AppleCare+ and were not aware of how the warranty worked. People do crazy things.

Voiding the warranty because someone used non-OEM parts to fix (or even build) a phone is one thing. I would agree anyone trying to get warranty service on non-standard phones should be shown the door (and Apple has the right to refuse to fix such a phone, or charge whatever fee they want to service it).

Intentionally bricking a phone (without warning the user of the possible consequences of iOS updates) because someone used a new (even OEM) home button is quite another, especially for a convenience/optional feature called Touch ID. The primary mode of access to the phone is and should remain the passcode. A bad Touch ID pairing should only result in loss of Touch ID/Apple Pay functionality, not complete loss of the phone.
 
Voiding the warranty because someone used non-OEM parts to fix (or even build) a phone is one thing. I would agree anyone trying to get warranty service on non-standard phones should be shown the door (and Apple has the right to refuse to fix such a phone, or charge whatever fee they want to service it).

Intentionally bricking a phone (without warning the user of the possible consequences of iOS updates) because someone used a new (even OEM) home button is quite another, especially for a convenience/optional feature called Touch ID. The primary mode of access to the phone is and should remain the passcode. A bad Touch ID pairing should only result in loss of Touch ID/Apple Pay functionality, not complete loss of the phone.
I don't know where you got all that from a reply where I talked about a crazy instance of counterfeit parts being used on a phone. I didn't say they were in the right, but thanks for reading really deeply into my post and adding lots of things that were never said in it.
 
Actually, it is the same law that you have in the US. Granted, it was considered and reaffirmed by the US government around 230 years ago, but it more or less formed the foundation of your Constitution. https://www.loc.gov/exhibits/magna-carta-muse-and-mentor/magna-carta-and-the-us-constitution.html

The point being that age alone doesn't make something obsolete.

You are absolutely correct. For many, though, a dust-off and review, or "modernization," is recommended.
 
Why? A security feature could have been added or improved or modified in an update and thus work differently than it did before.
Why? Because it would be useless. A security feature so important, so drastic, that Apple feels it needs to destroy the entire phone, yet which only works when the system is updated, is a pretty bad security feature. And since system updates can be few and far between, not to mention not mandatory to install, it's not only bad, it's completely and utterly useless.
Yeah, you and I just outlined such a scenario. Let's say you're a Russian anti-Putin campaigner, and the FSB somehow intercepts your device. Maybe they break into your home or something, I don't care; it's not important. They plant a bugged Touch ID sensor in it. Then some time later, you're walking along and get harassed by a couple of big guys who grab your phone while it's still powered on. They have a way to make the sensor they installed replay the last fingerprint.
Why not just take the phone and lift the fingerprint right off the glass surface? Or better yet, force him to unlock Touch ID for you. You've already harassed and robbed him, so unless this is some elaborate undercover Ian Fleming scheme, it would be far simpler to just force him to unlock the phone. And if he refuses, then unlike forcing someone to reveal his passcode, you don't actually need the person's cooperation: you have roughly a 50% chance of unlocking the phone if you cut off all 10 of his fingers and try them yourself (Touch ID allows five failed attempts before demanding the passcode, so five tries against ten fingers is a coin flip if only one is enrolled).

Besides, since this person uses Touch ID and changing the sensor disables it, he would notice that something was wrong pretty much right away. And more to the point, if this were as important as Apple claims, the phone shouldn't be able to boot after the change at all, thereby solving the ENTIRE problem.


You can't expect everybody to know that, though: they should have made it clearer to customers that the Touch ID sensor shouldn't be replaced by unauthorised third parties. That's a very fair criticism.
Some repair shops don't even install a new sensor at all, they just repair the home button. Obviously that shop has informed the customer that Touch ID would no longer work, and obviously the customer has okayed that loss. What gives Apple the right to remotely brick this customer's phone six months later during a system update, without any warning of any kind?
Anyway, what is your point? That exploits exist on other platforms, so Apple should just give up trying to secure their iOS devices?
I think the point is that people don't buy Apple's explanation because it doesn't add up. If Apple were this adamant about security, the question isn't whether they should give up trying to secure their iOS devices, but why they don't also implement similar security measures on all their devices.

And seriously, if Apple were this adamant about security on iOS devices, they should make sure the phone is bricked the moment the device is opened, or at least on boot-up if it detects, just like in this case, that the sensor has been replaced. Letting the user walk around with the phone potentially for all eternity just because he doesn't, for some reason, update the system isn't good enough; it just isn't.
 
Why? Because it would be useless. A security feature so important, so drastic, that Apple feels it needs to destroy the entire phone, yet which only works when the system is updated, is a pretty bad security feature. And since system updates can be few and far between, not to mention not mandatory to install, it's not only bad, it's completely and utterly useless.
Why not just take the phone and lift the fingerprint right off the glass surface? Or better yet, force him to unlock Touch ID for you. You've already harassed and robbed him, so unless this is some elaborate undercover Ian Fleming scheme, it would be far simpler to just force him to unlock the phone. And if he refuses, then unlike forcing someone to reveal his passcode, you don't actually need the person's cooperation: you have roughly a 50% chance of unlocking the phone if you cut off all 10 of his fingers and try them yourself (Touch ID allows five failed attempts before demanding the passcode, so five tries against ten fingers is a coin flip if only one is enrolled).

Besides, since this person uses Touch ID and changing the sensor disables it, he would notice that something was wrong pretty much right away. And more to the point, if this were as important as Apple claims, the phone shouldn't be able to boot after the change at all, thereby solving the ENTIRE problem.



Some repair shops don't even install a new sensor at all, they just repair the home button. Obviously that shop has informed the customer that Touch ID would no longer work, and obviously the customer has okayed that loss. What gives Apple the right to remotely brick this customer's phone six months later during a system update, without any warning of any kind?
I think the point is that people don't buy Apple's explanation because it doesn't add up. If Apple were this adamant about security, the question isn't whether they should give up trying to secure their iOS devices, but why they don't also implement similar security measures on all their devices.

And seriously, if Apple were this adamant about security on iOS devices, they should make sure the phone is bricked the moment the device is opened, or at least on boot-up if it detects, just like in this case, that the sensor has been replaced. Letting the user walk around with the phone potentially for all eternity just because he doesn't, for some reason, update the system isn't good enough; it just isn't.
Why would additional security be useless? If you have a lock that works and you add another layer on top of it to make it even more secure, how would that be useless?
 
Why would additional security be useless? If you have a lock that works and you add another layer on top of it to make it even more secure, how would that be useless?
Because it doesn't do what it's supposed to do: brick the device. A security measure that is so draconian, and thus arguably so extraordinarily important that it inexorably destroys the entire device, is useless if it doesn't do anything to the device unless the system is updated, which may be never since updating isn't mandatory.
 
Because it doesn't do what it's supposed to do: brick the device. A security measure that is so draconian, and thus arguably so extraordinarily important that it inexorably destroys the entire device, is useless if it doesn't do anything to the device unless the system is updated, which may be never since updating isn't mandatory.
I was addressing what it is, not the result of what it ends up doing; those are separate pieces of it all.
 
I was addressing what it is, not the result of what it ends up doing; those are separate pieces of it all.
I have no idea what you just said.

Edit: Oh, okay, I think I know what you meant.

Well, it can't very well be more safe if it doesn't actually do anything, can it? It doesn't matter that being and doing are separate pieces of it all when they depend on each other. Updating the system, however, isn't something it should depend on, yet the 'even more' security is triggered by the update rather than by the sensor being swapped out, despite the latter being the reason Apple put it in there.

Does that make any sense to you?
 
Because it doesn't do what it's supposed to do: brick the device. A security measure that is so draconian, and thus arguably so extraordinarily important that it inexorably destroys the entire device, is useless if it doesn't do anything to the device unless the system is updated, which may be never since updating isn't mandatory.

Here's where all of your arguments fall apart, though: I don't think there is any intention for Apple to brick the device. I don't see any evidence of that, and I highly suspect that it would be illegal for Apple to do that under any circumstances whatsoever.

What is more likely is that this is a bug. Unauthorised hardware repairs might have been a poorly tested scenario. They should be checking on every boot, and if the check fails they should fall back to a situation which does not leave the device permanently damaged - either going back to a passcode (since Touch ID is entirely optional) or, at the most extreme, wiping the device. The Touch ID/Secure Enclave system has a very secure design, but as with all software, errors sometimes emerge in the implementation.
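To spell out what that graceful fallback might look like (again just a sketch with invented names, not anything Apple has published), a pairing mismatch would simply drop the phone to passcode-only operation rather than rendering it unusable:

[CODE]
// Sketch of the fallback argued for above; invented names, not Apple's code.

enum SensorStatus {
    case pairedAndTrusted   // sensor matches what the Secure Enclave expects
    case mismatched         // sensor was swapped, or the pairing check failed
}

func availableUnlockMethods(for status: SensorStatus) -> [String] {
    switch status {
    case .pairedAndTrusted:
        return ["passcode", "Touch ID", "Apple Pay"]
    case .mismatched:
        // Graceful degradation: the phone stays usable with the passcode;
        // only the biometric conveniences are withdrawn.
        return ["passcode"]
    }
}
[/CODE]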

Based on the extremely non-descriptive "Error 53" message, you can't say whether or not any part of this was intentional.

The only things which we know are intentional are the facts and policies that Apple has documented. You can see my technical description post for what those facts and policies are.
 
Here's where all of your arguments fall apart, though: I don't think there is any intention for Apple to brick the device. I don't see any evidence of that, and I highly suspect that it would be illegal for Apple to do that under any circumstances whatsoever.

What is more likely is that this is a bug. Unauthorised hardware repairs might have been a poorly tested scenario. They should be checking on every boot, and if the check fails they should fall back to a situation which does not leave the device permanently damaged - either going back to a passcode (since Touch ID is entirely optional) or, at the most extreme, wiping the device. The Touch ID/Secure Enclave system has a very secure design, but as with all software, errors sometimes emerge in the implementation.

Based on the extremely non-descriptive "Error 53" message, you can't say whether or not any part of this was intentional.

The only things which we know are intentional are the facts and policies that Apple has documented. You can see my technical description post for what those facts and policies are.


I agree - I think it was just a poorly coded error on Apple's part and not malicious bricking. They're just really poor at communicating when stuff like this happens, and they're really slow to respond.

I would like to believe that nobody at Apple thinks that completely bricking a phone over a failed Touch ID sensor is a valid, and legal, excuse.

It's like if your house door lock mechanism has been compromised by a fake key: you replace the lock, or you bar the door closed and use the 2nd entrance.
You don't turn around and bloody burn your house down just because your lock is no longer secure.
(yes, crappy analogy)
 
Here's where all of your arguments fall apart, though: I don't think there is any intention for Apple to brick the device.
My arguments are all based on facts, real events that are taking place right as we speak. These don't "fall apart" just because you have an unsubstantiated theory that Apple has made a coding boo-boo.


Based on the extremely non-descriptive "Error 53" message, you can't say whether or not any part of this was intentional.
No, of course not, but that is a seriously flawed argument as you could basically make up any kind of scenario with that amount of theoretical speculation.
I agree - I think it was just a poorly coded error on Apple's part and not malicious bricking. They're just really poor at communicating when stuff like this happens, and they're really slow to respond.
Except this has been going on since the inception of iOS 9, which is, what, five months back, and the latest update is 25 days old. I mean, that's not slow, that's... that's ridiculous.
 
I have no idea what you just said.

Edit: Oh, okay, I think I know what you meant.

Well, it can't very well be more safe if it doesn't actually do anything, can it? It doesn't matter that being and doing are separate pieces of it all when they depend on each other. Updating the system, however, isn't something it should depend on, yet the 'even more' security is triggered by the update rather than by the sensor being swapped out, despite the latter being the reason Apple put it in there.

Does that make any sense to you?
If a security measure wasn't there before and is being added, or was there and is being enhanced, then how would it appear until something (like an update) adds or enhances it?
 
Well, maybe that 3rd-party home button contains a bugging device which is capturing your fingerprint data. That way, somebody can steal your phone while it's powered on and instruct the tampered sensor to replay the last fingerprint.
It's not so much what could happen during a repair, but before you ever receive the phone. There have been reports in the past that some US agency picked up phones in the mail, modified them, and sent them on to the proper recipient. This way, if you buy a phone by mail order, you don't need to fear that the NSA has replaced the fingerprint sensor and will wait a year, steal your phone, and then read anything on it.
I'm going to reiterate/paraphrase my earlier post here. Is Apple (now) admitting that their Touch ID system is hackable and that Apple Pay could also be? Because by bricking/preventing access based on a repair, it sounds like Apple believes there's a way to break inside. Not exactly what everyone was led to believe or what was insisted upon here.
That's the typical reaction that companies always have to fight: that by improving something they supposedly admit some wrongdoing. To most people it looks like Apple believes there is a way to break in unless Apple takes some action, and Apple takes the action. Some people don't like the action (for understandable reasons) and will suggest that Apple is a bit _too secure_ for their taste, but claiming that there is a security problem is totally misguided.
 
I used to work at Apple. I had a phone come in from Asurion where they had manufactured or bought a fake iPhone 5 housing and put genuine iPhone 5 parts from a broken phone in it. That was the craziest thing I have seen. Well, besides the people that buy broken phones and put counterfeit boards in them to prey on people who had AppleCare or AppleCare+ and were not aware of how the warranty worked. People do crazy things.

You worked in repair? And they're trying to get these replaced with AppleCare? I'm not sure how they are preying on people?

Gary
 
If a security measure wasn't there before and is being added, or was there and is being enhanced, then how would it appear until something (like an update) adds or enhances it?
I really don't know what you're on about. A security feature that is so relentless that it has to destroy the phone, but does so only when the device is being updated, which isn't mandatory, cannot possibly be regarded as a good security measure. It just can't. I therefore do not think this has anything to do with security at all, because this is Apple, the world's wealthiest company, and to think they would do something this half-baked, this improper, is just not on my radar. At all.
 
I really don't know what you're on about. A security feature that is so relentless that it has to destroy the phone, but does so only when the device is being updated, which isn't mandatory, cannot possibly be regarded as a good security measure. It just can't. I therefore do not think this has anything to do with security at all, because this is Apple, the world's wealthiest company, and to think they would do something this half-baked, this improper, is just not on my radar. At all.
Again, what it ends up doing and how/when it works are fairly separate items, and I'm addressing the former of the two. You might feel that they are connected and/or don't think it's because of one thing or another, and that's fine, but that doesn't necessarily make it so; hence my thinking about it.
 
It's not so much what could happen during a repair, but before you ever receive the phone. There have been reports in the past that some US agency picked up phones in the mail, modified them, and sent them on to the proper recipient. This way, if you buy a phone by mail order, you don't need to fear that the NSA has replaced the fingerprint sensor and will wait a year, steal your phone, and then read anything on it.

I'd be more worried that the Chinese government has already made sure that every iPhone has a back door of some sort from the moment it's shipped, since they can make their factories do anything.

As for a government agency (or the Russian Mafia, which has pulled tricks like this before) grabbing a phone en route and changing sensors, don't forget that Apple has a tool to resync phones with replacement sensors. That means the interceptors could have it, too. Then the replacement would never trigger the security lock.
 
Again, what it ends up doing and how/when it works are fairly separate items, and I'm addressing the former of the two.
And, again, unless you update your system, it doesn't do anything. What it does and how it works are therefore inherently linked, because the former relies on the latter.

It's like a battle tank: until someone actually presses the pedal and pushes the button, it's not a weapon, it's a lump of steel just sitting there, doing nothing, having no effect whatsoever. Therefore the former relies on the latter to even work at all.
 
And, again, unless you update your system, it doesn't do anything.
How would a new or modified functionality get there without an update? And what if that functionality needs to do some lower level checks that can be performed during an update or restore process but not really during typical boot up or normal use of the phone?
 
How would a new or modified functionality get there without an update?
It obviously can't, but Error 53 isn't new; it's five months old, as it was introduced with iOS 9.


And what if that functionality needs to do some lower level checks that can be performed during an update or restore process but not really during typical boot up or normal use of the phone?
Then it's a bad security measure because of that, instead.

There really is no other way to put it; this is a pretty shoddy security measure if it relies on unrelated, rare and non-mandatory events to function at all.
 
It obviously can't, but Error 53 isn't new; it's five months old, as it was introduced with iOS 9.



Then it's a bad security measure because of that, instead.

There really is no other way to put it; this is a pretty shoddy security measure if it relies on unrelated, rare and non-mandatory events to function at all.
Perhaps it's an additional security measure and not the only one, or the first one, or the main one. A silent alarm or panic button inside a bank safe isn't the only or the main security measure, but it's still there, it can be used if the situation arises, and it can be useful.
 
It's not important to Apple who is trying to break into your device or why; they advertise a secure device. They will try to build a system which is impenetrable to all but the legitimate owner.

Any device that uses a fingerprint sensor... without requiring other factors at the same time... is not impenetrable.

Especially with a relatively easily fooled sensor like Apple uses, the phone can be accessed by anyone who can get hold of good print images from someone they're targeting.

Touch ID was never meant to provide absolute security. It was meant to provide extra convenience.
 