Okay, so I'd have to have a fake button which can mimic a legitimate TouchID sensor paired to a specific phone, plus the phone has to already be powered on, post-legitimate passcode entry, in order to break the TouchID/passcode security? Under what scenarios could this "possibility" (??) occur?
... they had an obligation to warn their customers that the iOS update could trash a phone with compromised home button/touchID ...
As I understand it, the issue is that the phone doesn't check that the TouchID sensor is actually paired to it until you run the update. It doesn't check on every boot, so you can have a mismatched device and sensor and it will still work.
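To make the timing of that check concrete, here's a minimal sketch (hypothetical names and IDs, not Apple's actual implementation) of the behaviour being described: a mismatched sensor boots fine because nothing validates the pairing, and only the firmware update re-checks it and fails with the infamous "error 53".

```python
# Hypothetical model of the Error 53 behaviour: pairing is only
# validated during a firmware update, never on a normal boot.

PAIRED_SENSOR_ID = "sensor-original"  # pairing set at the factory (hypothetical ID)

def boot(sensor_id: str) -> str:
    # Normal boot: the pairing is never checked, so a replaced
    # sensor goes unnoticed and TouchID keeps working.
    return "booted"

def firmware_update(sensor_id: str) -> str:
    # The updater re-validates the sensor pairing; a mismatch
    # leaves the phone unusable, reported as "error 53".
    if sensor_id != PAIRED_SENSOR_ID:
        return "error 53"
    return "updated"

print(boot("sensor-replacement"))             # prints "booted"
print(firmware_update("sensor-replacement"))  # prints "error 53"
```

This is only an illustration of *when* the check runs, which is the whole point of the scenario above: the window between replacing the sensor and the next update is where a bugged sensor could sit undetected.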
Yeah, you and I just outlined such a scenario. Let's say you're a Russian anti-Putin campaigner, and the FSB somehow intercepts your device. Maybe they break into your home or something, I don't care; it's not important. They plant this bugged TouchID sensor there. Then some time later, you're walking along and get harassed by a couple of big guys who grab your phone while it's still powered on. They have a way to make that sensor they installed replay the last fingerprint.
It's not important to Apple who is trying to break into your device or why; they advertise a secure device. They will try to build a system which is impenetrable to all but the legitimate owner.
As for informing owners: yeah, they probably should have done a better job letting people know that. That said, I own an iPhone 5S and I've always known that only Apple can replace the TouchID sensor. I believe they mentioned it either during the keynote or in online documentation somewhere when it was first introduced.
You can't expect everybody to know that, though: they should have made it clearer to customers that the TouchID sensor shouldn't be replaced by unauthorised third parties. That's a very fair criticism.
Way to split hairs. My point is that you can store even a malicious OS on a piece of hardware. It sits dormant until it receives power (you plugging it into your Mac), then it becomes a keylogger. At best.
It's not splitting hairs. The iPhone is a fully-encrypted-by-default device with biometric authentication; the Mac is neither of those things. The I/O ports on the iPhone are heavily restricted because it's a next-gen platform which doesn't need legacy hardware support or even the greatest I/O speed. Thunderbolt is an external PCIe cable - the exploit you mentioned is because of a legacy feature of PCIe - and it's like that because it's a pro-oriented cable for those who need maximum speed. Lightning (the only physical port the iPhone has) is very, very different to Thunderbolt or USB - it's built to emphasise safety, hence the chips in the cables (it's a fascinating approach to I/O).
Anyway, what is your point? That exploits exist on other platforms, so Apple should just give up trying to secure their iOS devices? As with the guy above: Apple will try their best to build a system which is absolutely impenetrable to everybody but the owner, and to the owner it will be very easily accessible.
And like I said, even if you could exploit an iOS device and get full privileges, I'm not even sure what information you'd be able to get. I do know, however, that you definitely would not get anything from the Secure Enclave - so you won't be able to grab the raw decryption key, iTunes Store tokens, and whatever else it has. These components are isolated in hardware from the main CPU, so no amount of software with any level of privileges will help you read from it.
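The isolation model being described can be sketched roughly like this (a toy illustration, not Apple's actual API): the enclave answers narrow requests such as "derive a key for this file", and there simply is no call that returns the root key itself. Python's name mangling stands in for the hardware boundary here, which is obviously far weaker than the real thing - the point is only the shape of the interface.

```python
# Toy model of the isolation described above: the root key never
# leaves the enclave; callers only receive derived, per-file keys.
import hashlib
import hmac

class SecureEnclave:
    def __init__(self) -> None:
        # Stand-in for the device-unique key fused into the enclave.
        # (In the real hardware this is unreadable even to the OS;
        # here, name mangling merely gestures at that boundary.)
        self.__uid_key = b"device-unique-key"

    def derive_file_key(self, salt: bytes) -> bytes:
        # The main CPU only ever sees keys derived from the root key,
        # never the root key itself - there is no getter for it.
        return hmac.new(self.__uid_key, salt, hashlib.sha256).digest()

enclave = SecureEnclave()
k1 = enclave.derive_file_key(b"file-1")
k2 = enclave.derive_file_key(b"file-2")
# Different salts give different keys; the same salt is repeatable.
```

The design choice worth noticing is that compromising software on the main CPU, even with full privileges, only lets you *ask* the enclave to do things - it doesn't let you *read* what the enclave holds.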