It’s not an “issue” yet. It is an unsubstantiated claim. We should wait for independent confirmation. Right now people are just jumping to conclusions.

Sure, all we can do is speculate at the moment. However, the checkra1n exploit is well known and has been demonstrated extensively, and I trust the Pangu Team to know what they are doing. If they say the T2 chip is based on the A10 processor and that they have a chainable attack vector for this chip based on checkra1n, then I have no reason to doubt that.

Regardless, if the exploit is later proven, then my post still stands, and everyone here trying to pass off the exploit as "nothing to worry about" is greatly downplaying the situation.

This, all being written on a T2-based MacBook Pro...
 
I think everyone is downplaying the severity of this issue. The T2 exploit, if proven, is persistent until a full reboot of the system occurs, not just until the external hardware is removed.

Imagine the situation where you're sitting in a coffee shop using your computer and you get up to get something from the café. An attacker only needs 5 to 10 seconds with your machine to exploit it whilst you're not there looking at it, and once they've done that, the entire machine has been compromised. When you return and enter the password for your account, a keylogger in the T2 can capture your password and later use it for FileVault and Keychain decryption. And because the network connection is also accessible to the T2 chip, they can then use this to exfiltrate any file, login credential, fingerprint data, or anything else they're interested in off of your computer, via the internet, without a single sign that your machine has been tampered with. All this from 5-10 seconds of unmonitored access to your machine.

Sure, the machine can be un-compromised with a reboot, but how often do we actually shut down or reboot our macOS-based machines rather than just shutting the lid? Further, the T2 chip can continue to operate even whilst the lid is closed, potentially waking up the machine via ACPI commands, connecting to a WiFi network, and downloading further malware onto the machine whilst you think it's asleep in your backpack.

What's worse is that this exploit, as alluded to in the article, is likely present in the boot ROM, an area that is burnt into the chip at the factory and cannot be modified. Think of it like the NVIDIA Tegra X1 exploit that gave the first batch of Nintendo Switch units a permanent "jailbreak" that couldn't be patched.



The issue here is that the T2 is the first thing to run in the entire system. It is literally the root of trust in any T2-based Mac. Once macOS has booted, it's too late. There is nothing they can do to mitigate the exploit in the operating system.




This is not true, at least for Windows and (most) Linux-based operating systems. Both Spectre and Meltdown were mitigated via microcode updates; these are delivered directly from Intel via Windows Update or packages from distribution repositories, and are applied directly to the CPU from there. They do not need vendor intervention to deploy these patches.

What vendors can do, however, is ship a patched version of the microcode with their BIOS/firmware, so that even before Windows or Linux is installed and updated, the microcode in the CPU is patched. Like you said, this is up to each vendor to deploy; however, the vast majority of users will have received the patched microcode via Windows Update or other OS update mechanisms.
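If you want to check what's actually live, here's a minimal sketch (Linux x86 only, assuming the usual /proc/cpuinfo layout; the function name is my own) that reads out the microcode revision the kernel reports for each logical CPU:

```python
# Minimal sketch (Linux x86 only): report the microcode revision the
# kernel sees for each logical CPU, as exposed in /proc/cpuinfo. If
# the OS loaded a newer microcode blob at boot, the revision shown
# here already reflects it, with no vendor BIOS update involved.

def microcode_revisions(path="/proc/cpuinfo"):
    revisions = {}
    cpu = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = (part.strip() for part in line.split(":", 1))
            if key == "processor":
                cpu = int(value)
            elif key == "microcode" and cpu is not None:
                revisions[cpu] = value   # e.g. "0xde" on many Intel parts
    return revisions

if __name__ == "__main__":
    for cpu, rev in sorted(microcode_revisions().items()):
        print(f"cpu{cpu}: microcode revision {rev}")
```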



This is also only partially accurate these days. The original exploit relied on the extremely precise timestamp resolution available via specific APIs in browsers to do cache-timing analysis. When the exploit was originally revealed, these APIs were patched by browser vendors to intentionally "kneecap" the available resolution of these timestamps to prevent their use for timing analysis, while still providing more than enough resolution for web developers' needs.

For Meltdown and Spectre to be effective from "simply visiting a compromised website", you'd need to be running a browser version from before the era of these exploits, as well as an unpatched CPU, kernel, and operating system. Internet Explorer and Microsoft Edge did not expose an API for these high-resolution timers, so they are not affected; Chrome and Firefox both have aggressive update mechanisms, so most users will be on "patched" versions. I'm not sure whether Safari exposed these high-resolution timer APIs, but if it did, I suspect it has been patched as well.
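To illustrate why coarsening the timer works, here's a toy Python simulation (the latencies and the 100 µs quantum are illustrative values, not real browser numbers):

```python
# Toy model of the cache-timing mitigation: a cache hit (~40 ns) vs a
# miss (~300 ns), observed through a timer of a given resolution.
import random

HIT_NS, MISS_NS = 40, 300

def timed(latency_ns, quantum_ns):
    """One measurement through a timer that rounds down to quantum_ns."""
    start = random.randrange(10**9)       # arbitrary wall-clock start
    end = start + latency_ns
    return (end // quantum_ns - start // quantum_ns) * quantum_ns

for quantum_ns, label in [(1, "1 ns timer"), (100_000, "100 us timer")]:
    hits = [timed(HIT_NS, quantum_ns) for _ in range(8)]
    misses = [timed(MISS_NS, quantum_ns) for _ in range(8)]
    print(f"{label}: hit samples  {hits}")
    print(f"{label}: miss samples {misses}")

# With the 1 ns timer every hit reads 40 and every miss 300: trivially
# distinguishable. With the 100 us timer nearly every sample reads 0,
# so a single measurement tells the attacker nothing. (Averaging many
# samples can still leak, which is why browsers also added jitter and
# pulled features like SharedArrayBuffer.)
```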

DON'T use a system that you are that concerned about in public. HIPAA rules are the same way. If I have medical data and I am at a coffee shop, I need to take my computer with me to the counter, NOT leave it on the desk.
 
That just means the code in the chip can’t be changed. The original Mac had code in ROM and errors in that code were patched at runtime. We don’t know if this is patchable or not until Apple speaks.
It’s not patchable in iOS, and as the other user said, since it’s the first thing to boot, it can’t be patched.
 
How many exploits and hacks have we seen on Intel/AMD chips? How many on non-Apple ARM? How many on support chips (SSD-controllers, WIFI/4G-modems)?

How many in Win/Android vs macOS/iOS?

In the end nothing is ever gonna be 100% safe forever, but so far Apple's track record is quite good.

Sorry to disagree, but I spent 28 years in IT, from DOS 2 to data center design and implementation, and Apple is not better than Linux or Microsoft (Intel). Apple does have a pretty tolerant fan/user base, and that same group (we) allows them to turn out patches every week or two (sometimes only a day apart). I agree with the emotional perception that Apple is doing a better job, but if you matched them up by equal machine instances (per capita), Apple far outpaces the Microsoft instances. I was in the first group of Platinum Beta Testers for Microsoft, as well as a long-term MSDN developer, and I know we (as well as many home users) were regularly used as alpha and beta testers. I'm sure a large number of Microsoft users would stand in line to throw stones at the Apple store's glass houses, but those rocks would bounce back and smack them in the melons.
It's a challenging argument to suggest that Apple is better than Linux, largely because Apple is Linux (with a whole lot of proprietary changes). Apple's roots are in BSD, OSF/1 and NeXT, all variants of UNIX/Linux, and most recently the code is more Linux than Unix. So I'm not going to stand on that ant hill.
I was not a first-wave Mac 128k user (I was in the Commodore 64 camp), so no Macintosh, no Apple II, although I did spark up interest with the iMac G3. I'm glad Apple survived the Windows 95 stranglehold on the computing market, and as such I am now fully platformed on Apple (iPhone 11, Mac mini, and MacBook Pro (2020)). I am also waiting with bated breath for the new MacBook with (hopefully) the A14Z/not X chip.
The reason I include the history is to show I'm not an anti-Apple guy (on the contrary), but that I am invested in the development of their architecture, clean distributions, and solid hardware.
This being the case, I wanted to make sure the field was level before we start bashing, or praising, the big two.

P.S. I don't see Google making end-runs on either Microsoft/Intel or Apple/Intel/AXX. The scope and abilities of the Chromebook are so limited unless you live in the Google office world and never venture into the real world of Microsoft Office or OpenOffice.
 
And yet those exploits get fixed or patched, because if they didn't bother with it there is always a competitor nipping at their heels, not to mention a bunch of lawsuits waiting for them.

Man, they didn't bother with it.

Those two security flaws are "worked around" by OS and compiler-tool vendors, to mitigate the possibility of executable code triggering these problematic schemes.

And yes, Intel has faced 32 class action lawsuits for this, and the "competitors" HAD ALREADY nipped at their heels.

You've literally been living under a rock for the past few years, haven't you?
 
Another reason why Apple Silicon is a horrible idea. Apple isn't ready, willing, or able to do the groundwork necessary to keep their chips secure. Get used to the Mac going from one of the most secure platforms out there to being riddled with horrible, unpatchable bugs and security exploits.

It's one thing when you can make the OS a walled garden, like with iOS. When you can control the software, you don't need to worry about the hardware being buggy. But unless we're going to have the Mac App Store be the only source for Mac apps, get used to having your computer pwned on a daily basis once Apple Silicon is a reality.

Pwned on a DAILY basis? Lol, doubtful
 
Let's revisit that statement in a year or two, once Apple Silicon becomes a reality on Macs. Most likely it's going to be more like “Apple devices are trivial to crack, unlike a PC or Android phone”.

Lol at your pure speculation. “Most likely”? Why do people assume so much? That’s like me saying Apple will most likely never have exploitable hardware in the future.
 
...
It's a challenging argument to suggest that Apple is better than Linux, largely because Apple is Linux (with a whole lot of proprietary changes). Apple's roots are in BSD, OSF/1 and NeXT, all variants of UNIX/Linux, and most recently the code is more Linux than Unix. So I'm not going to stand on that ant hill.
...
This is patently absurd and shows you have zero idea of what you're talking about. macOS is based on Darwin, which is built around a Mach-based microkernel. The idea is that as little as possible runs in the kernel.
Linux is a monolithic, modular kernel where as much as possible runs in the kernel.
The design concepts are just about as diametrically opposed as you can get. Linux and Darwin have absolutely nothing to do with each other.

Yes, you can run GNU and other code on both systems with the proper compiler, linker, libraries, and flags (most of what the ill-informed call Linux is actually GNU software; Linus pulled a marketing coup here), but that doesn't in any way make the systems compatible with each other, or similar in just about any way from an architecture standpoint.
 
Did you miss the part where I said this is fixed in the A12, and that Intel chips have even worse security issues?

How can this security issue be fixed in a CPU that does NOT manage keyboard entry or system encryption, on a different platform managed by a different architecture and chip?

Did you read the article at all, or did you just skim through it without understanding what is being discussed in it?
Am I missing something here, people?
 
Two things:
1) This only affects the hardware-level T2 encryption. It does not affect FileVault software-level encryption.

On a T2 system, FileVault is T2 encryption. The data encryption/decryption is passed to the T2 to handle. The x86-64 CPU has nothing to do with decryption. If the CPU makes an authorized request for data, then it gets the data.

Data on a T2 drive is always encrypted, even with FileVault off. With FileVault off it is just decrypted without authentication. With FileVault on, the user has to unlock a key (with their password) that in turn unlocks the hidden key that the T2 used to encrypt the drive.
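What's being described is the standard envelope-encryption pattern. Here's a toy sketch of it in Python (my own illustration using the third-party pyca/cryptography package, NOT Apple's actual T2/SEP implementation):

```python
# Toy sketch of the key hierarchy described above, using the common
# envelope-encryption pattern. This is my own illustration, not
# Apple's actual implementation. Requires the third-party
# 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# The drive is *always* encrypted with a random volume key (the DEK).
volume_key = AESGCM.generate_key(bit_length=256)

def wrap_volume_key(password: bytes, dek: bytes):
    """'FileVault on': wrap the DEK under a password-derived KEK."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kek = PBKDF2HMAC(hashes.SHA256(), 32, salt, 600_000).derive(password)
    return salt, nonce, AESGCM(kek).encrypt(nonce, dek, None)

def unwrap_volume_key(password: bytes, salt, nonce, wrapped):
    """Unlocking at login: re-derive the KEK and recover the DEK.
    Raises InvalidTag if the password is wrong."""
    kek = PBKDF2HMAC(hashes.SHA256(), 32, salt, 600_000).derive(password)
    return AESGCM(kek).decrypt(nonce, wrapped, None)

salt, nonce, wrapped = wrap_volume_key(b"hunter2", volume_key)
assert unwrap_volume_key(b"hunter2", salt, nonce, wrapped) == volume_key
# Note: steal the password (e.g. with a keylogger) and the wrap hands
# you the volume key; the drive encryption itself is never broken.
```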




On a non-T2 system the CPU does the encrypting/decrypting. On an external drive attached to a T2 system things get more complicated, but the T2 isn't doing it "under the covers".


Hijack the encryption process on a T2 system and FileVault is 'done'.

2) It requires someone to have physical access to your machine to connect a device, and a simple restart seems like it would resolve this until the hardware attack was repeated.
The risk of data loss seems to hover around zero unless you are known to have classified information on your machine and are not taking physical security measures to protect it.

It can be bad if you don't know you need to restart the system, or if something bad got shoveled into BridgeOS that the boot validation process doesn't catch. If the user's system password is stolen (keylogger) and then, on a repeated attack, that password is passed off, then you're pretty much toast as far as data loss/exposure goes.

BridgeOS should be trying to stop stuff like that from happening after a fresh reboot. Apple was asleep at the wheel to open the door for the other two exploits. After two screw-ups, if you're betting the farm on them doing the right thing, you're on thin ice; there's a decent chance they messed that up too (although that could be correctable).
 
I hope this can somehow be fixed. If not, I literally have no choice but to switch to a PC. Due to the compliance requirements of my profession, I need reliable hardware-based encryption. The T2 is perfect for my needs, barring a massive exploit.

If you allow someone direct, unfettered access to your PC, it is hardly immune to an inserted-hardware "man in the middle" attack along similar vectors. If the boot drive is physically removable, getting into the middle is doable. It's not as trivial as a physical connection to start the exploit, but it is still exploitable if you open the door to doing anything physical to the device.

The drive is still encrypted here. It is the keys and/or passwords that decrypt it that get loose.
 
I think everyone is downplaying the severity of this issue. The T2 exploit, if proven, is persistent until a full reboot of the system occurs, not just until the external hardware is removed.

Imagine the situation where you're sitting in a coffee shop using your computer and you get up to get something from the café. An attacker only needs 5 to 10 seconds with your machine to exploit it whilst you're not there looking at it, and once they've done that, the entire machine has been compromised. When you return and enter the password for your account, a keylogger in the T2 can capture your password and later use it for FileVault and Keychain decryption. And because the network connection is also accessible to the T2 chip, they can then use this to exfiltrate any file, login credential, fingerprint data, or anything else they're interested in off of your computer, via the internet, without a single sign that your machine has been tampered with. All this from 5-10 seconds of unmonitored access to your machine.

Errrr, no. That isn't the situation. They have to get the Mac into a DFU state and then run the hack. That isn't going to be complete in 5-10 seconds at all. And when they completed it, the system would have been rebooted, or they would have left it in a 'whacked' DFU state.




The issue here is that the T2 is the first thing to run in the entire system. It is literally the root of trust in any T2-based Mac. Once macOS has booted, it's too late. There is nothing they can do to mitigate the exploit in the operating system.

This is done when macOS is not running. It is the secure processor OS and BridgeOS (or iOS on the iPhone) that are primarily being messed up. That has potential downstream bad outcomes for macOS but that isn't the root cause of the problem.
 
What about T1?

The T1 isn't hooked to any drive. It is basically an Apple Watch SoC (S2) that is just hooked to the Touch ID sensor and the Touch Bar. If you go back to the A7 era this stuff doesn't work (and it stopped after the A10). It is a different set of cores than Apple was using on the A-series, so it probably doesn't have the same quirks in the ROM. I'm not sure there's much overlap between the Phone/T2 DFU mode and the Watch's DFU mode.

The T2 is where Apple tossed in the kitchen sink, under the control of an A10. They should have put substantially more effort into separating the operations of the Secure Enclave processor from the rest of that kitchen sink of stuff.



It’s not patchable. It’s in the read-only part of the chip.

While the ROM isn't patchable, they could try to keep things from going further down the drain by limiting the exploits afterward. They could put code into the signed part of the BridgeOS/iOS kernel to look out for compromised systems.

Apple's updatable boot code doesn't have to blindly trust the stack it finds itself running on. Apple put tons of faith into the root of trust being secure, so they probably are not doing much checking of the foundation that did the kick-starting once things get going. It could help to go back and double-check some things before completing the boot process. If they can abort and say "this system looks whacked ... do a deep reset", then at least folks would get an overt notice after the fact.
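As a rough illustration of that "double-check before completing boot" idea, here's a toy measured-boot sketch in Python (stage names, images, and digests are all made up; and a compromised earlier stage could of course lie to a check like this, which is why this is mitigation, not a fix):

```python
# Toy sketch: a later, signed, updatable boot stage measures the
# components beneath it against known-good digests before continuing.
import hashlib
import sys

# Known-good SHA-256 digests carried inside the signed stage.
EXPECTED = {
    "sep_firmware": hashlib.sha256(b"sep firmware image").hexdigest(),
    "bridgeos_kernel": hashlib.sha256(b"bridgeos kernel image").hexdigest(),
}

def measure(stage: str, image: bytes) -> bool:
    """Hash a stage image and compare against its expected digest."""
    digest = hashlib.sha256(image).hexdigest()
    if digest != EXPECTED.get(stage):
        print(f"ABORT: {stage} looks whacked ({digest[:12]}...), deep reset needed")
        return False
    return True

loaded = {
    "sep_firmware": b"sep firmware image",
    "bridgeos_kernel": b"bridgeos kernel image",  # try tampering with this
}
if all(measure(name, image) for name, image in loaded.items()):
    print("measurements OK, continuing boot")
else:
    sys.exit(1)
```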

There is no good reason for anything to be logging keys in BridgeOS, so if the kernel found something like that running later, it should/could 'freak out'.

Apple can't close the hole, but they can mitigate it somewhat. Even if they only make the hack a bit harder, that will cut down on attempted exploits.
 
Good, this chip is a nightmare for consumers who want their data back.

Users who have their password get their data back in the vast majority of T2 operations.

The two core problems here are that, if anything, the T2 is a bit too loose. It should have been tighter and more "paranoid". First, it ships with ROM code that has a debug mode left on when booted into DFU mode; that is just a silly backdoor that was bound to turn into a problem later. The second problem is that the Secure Enclave Processor and its OS are too trusting of the ARM application processor. (It should have its own private, non-cache-coherent scratch space to get its memory key sorted out before loading the base secure-processor OS.)

Not really a T2 problem; more of an A10 problem that all of the other Apple A10, A9, and A8 devices have too. The T2 inherited the flaws because Apple was relatively sloppy and copied the problems over. Not sure who runs their security reviews, but whoever that is/was missed some bonehead moves in the code that should have been obvious (unless they were deliberately trying to leave some 'backdoors' in the system).

This is also another "Apple can't walk and chew gum at the same time" event, in that they couldn't turn out an A11/A12-based 'T3' that would get rid of the problems.

The nightmare here is more sloppy Apple execution than the chip itself.
 
Another reason why Apple Silicon is a horrible idea. Apple isn't ready, willing, or able to do the groundwork necessary to keep their chips secure. Get used to the Mac going from one of the most secure platforms out there to being riddled with horrible, unpatchable bugs and security exploits.

It's one thing when you can make the OS a walled garden, like with iOS. When you can control the software, you don't need to worry about the hardware being buggy. But unless we're going to have the Mac App Store be the only source for Mac apps, get used to having your computer pwned on a daily basis once Apple Silicon is a reality.
Sounds like you don't understand anything Apple is doing ... or how technology works in general.
 
The number of commenters here that profess having expertise yet clearly did not even bother reading the article is astounding.
 
How many exploits and hacks have we seen on Intel/AMD chips? How many on non-Apple ARM? How many on support chips (SSD-controllers, WIFI/4G-modems)?

How many in Win/Android vs macOS/iOS?

In the end nothing is ever gonna be 100% safe forever, but so far Apple's track record is quite good.
Whataboutism at its finest. This is a security chip. Its one single purpose is security. Apparently it has some major flaws there, which appear to be unpatchable.

Also, you've got to have your own standards. The market is not really something you should compare your security to.
Security is measured on an absolute scale.
 
I need to get a new MacBook Air, as my 2012 model died recently and is beyond repair. Do you think I should hold off buying a new one until they address this issue, or do I have no choice but to buy one right now with this flaw? I was thinking of getting the new Apple Silicon MBA, but many on this site advised against it as there are bound to be teething problems with the new silicon chips, so I was eyeing up the 2020 Intel model.
 
Also, I find it extremely disappointing that every time a security researcher voluntarily notifies Apple of such threats (be they patchable or otherwise), after no doubt spending a lot of their own uncompensated time researching them, the response from Apple is always silence, or (if you’re lucky) a long, protracted delay before they even acknowledge your effort with a reply.

That’s quite pathetic really. The least they could do is get someone to call this guy personally to thank him and assure him that it’s being looked into. Perhaps even keep him in the loop on progress. From a public relations perspective that is the right thing to do. And Apple wonders why some people just go straight to the media instead! I don’t feel motivated to bug-test for Apple because of this.

That's idealistic; the reality is that if Apple responded, you would have tens of thousands of people sending in what they thought were vulnerabilities. Apple would have to take 2 or 3 of their 10 good software engineers off products just to respond to these submissions. That would totally overwhelm Apple and reduce software quality even further.

On second thought, I am not sure Apple's software quality can get much lower. So hmm, maybe you have a point after all.
 