
What are you doing with your MacPro to mitigate MDS-style CPU attacks?

  • Nothing (discuss)

    Votes: 56 60.9%
  • Avoiding browsing the Internet

    Votes: 3 3.3%
  • Retiring my Mac Pro

    Votes: 6 6.5%
  • Disabling Hyper-threading https://support.apple.com/en-us/HT210107

    Votes: 21 22.8%
  • Wait, what?

    Votes: 5 5.4%
  • Other (post in comments)

    Votes: 1 1.1%

  • Total voters
    92

Lycestra

macrumors member
Original poster
Oct 1, 2018
56
38
Cheesy Midwest
Apple and Intel have come out to say that the Mac Pro 2010 and older won't be updated to mitigate CPU bugs like ZombieLoad, which are considered much more easily exploitable than Meltdown or Spectre.

A lot of my understanding of this comes from tsialex's postings in MP5,1: BootROM thread | 144.0.0.0.0 with 10.14.5 final

Figured it might be worth asking what people are planning to do about this, since the 2010 and 2009 Mac Pros can still run the latest OS.
 
Apple provided most mitigations with Safari. If you only use Safari, have good secure browsing habits, and restrict which apps you install, you are reasonably safe even with SMT enabled.

Until a worm like Conficker is developed that can use the MDS vulnerabilities to exfiltrate data, probably via ad networks, ordinary people (as opposed to targeted persons, like human-rights activists around the globe) should not worry too much. But if you are a targeted person, you shouldn't be using macOS anyway.

There are lots of easier vectors than the MDS vulnerabilities. Remember that the most successful attacks are the social-engineering ones. Don't click on everything, don't install everything. Use your best behaviour when on the internet, etc.
 
I've switched off Hyper-Threading and go on using my old Mac Pros
For most people, SMT being disabled will be almost imperceptible, but it's a serious problem for people who have to do CPU-intensive tasks like compiling and encoding video, or who run heavy multithreaded apps. No one who needs SMT will want to lose the speedup.
 
According to an AppleInsider article, Apple is saying the 2010 Mac Pro is not vulnerable to ZombieLoad, but could remain vulnerable to security attacks that are similar to ZombieLoad. So apparently there's not much to worry about until some future attack surfaces?

https://appleinsider.com/articles/1...-future-vulnerabilities-similar-to-zombieload

"While 'ZombieLand' itself will not affect these machines, because of the particular attack vector, Apple cannot fully patch against other such "speculative execution vulnerabilities" without Intel's help."
 
According to an AppleInsider article, Apple is saying the 2010 Mac Pro is not vulnerable to ZombieLoad, but could remain vulnerable to security attacks that are similar to ZombieLoad. So apparently there's not much to worry about until some future attack surfaces?

https://appleinsider.com/articles/1...-future-vulnerabilities-similar-to-zombieload

"While 'ZombieLand' itself will not affect these machines, because of the particular attack vector, Apple cannot fully patch against other such "speculative execution vulnerabilities" without Intel's help."

This AppleInsider article is ********. ZombieLoad is just one of the MDS class of vulnerabilities; four different ones were disclosed yesterday.

Btw, if Mac Pro Xeon processors were not vulnerable, why does Intel show example kernel code to mitigate the vulnerabilities?

Screen Shot 2019-05-15 at 17.16.09.png


https://software.intel.com/security...tel-analysis-microarchitectural-data-sampling
 
And more SMT-related vulnerabilities will surface - it's just a matter of time. Also, it really doesn't matter whether you are an expert or novice computer user.

Always address the root cause, in this case it's Hyper-threading's flawed design, so disable it and move on. Follow OpenBSD's lead.
 
This AppleInsider article is ********. ZombieLoad is just one of the MDS class of vulnerabilities; four different ones were disclosed yesterday.

AppleInsider is apparently just reporting what they were told by Apple. It sounds like Apple is not fully acknowledging the problem if more vulnerabilities that apply to the 2010 Mac Pro have already been divulged. Apple should have plenty of leverage with Intel to get microcode released to fix this issue if they wanted it fixed. It shouldn't be up to Mac Pro users to lobby Intel.
 
AppleInsider is apparently just reporting what they were told by Apple. It sounds like Apple is not fully acknowledging the problem if more vulnerabilities that apply to the 2010 Mac Pro have already been divulged. Apple should have plenty of leverage with Intel to get microcode released to fix this issue if they wanted it fixed. It shouldn't be up to Mac Pro users to lobby Intel.
I doubt that AppleInsider did more than read the Apple support page Additional mitigations for speculative execution vulnerabilities in Intel CPUs; it's almost ipsis litteris, including Apple's error: there is no "Mac Pro (Late 2010)".

If you read the Apple Security Updates support documents for macOS Mojave 10.14.5, Security Update 2019-003 High Sierra, and Security Update 2019-003 Sierra, you will see that Apple already knew about the vulnerabilities; a lot of CVEs were closed that cryptically refer to the MDS vulnerabilities.

Anyway, I agree with your statement that Apple should actively engage Intel.
Apple just added this:

Screen Shot 2019-05-16 at 14.53.31.png
 
Well, my Mac Pros no longer touch the internet.

I have my HP Z210 test bench up and running, and I am looking at migration strategies.
 
Hi @itadampf , have you successfully disabled HT on a 5,1 MP? Could you show us a screenshot of it disabled? I'm having trouble getting it disabled on 10.13.6.
Disabling Hyper-Threading works perfectly with 10.13.6 + Security Update 2019-003:

This is from Recovery, grabbed manually via screencapture command:

SMTDisable.Recovery.png


Now, after disabling Hyper-Threading and shutting down:

Code:
system_profiler SPHardwareDataType; sysctl machdep.cpu.brand_string; sysctl hw.physicalcpu hw.logicalcpu

SMT_disabled_terminal.png



Edit:

BootROM 144.0.0.0.0 is a prerequisite for SMTDisable, plus 10.12.6 + Security Update 2019-003, 10.13.6 + Security Update 2019-003, or 10.14.5.

Since it's a NVRAM setting, you do it once, all your macOS installs that have the mitigation (10.12.6 + 2019-003, or 10.13.6 + 2019-003 or 10.14.5) will respect that.
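If you want to check the result programmatically, the two counts from `sysctl hw.physicalcpu hw.logicalcpu` above are all you need. A minimal sketch; `smt_enabled` is an illustrative helper, not any Apple API:

```python
# Infer the SMT state from the CPU counts that
# `sysctl hw.physicalcpu hw.logicalcpu` reports.
# smt_enabled() is a hypothetical helper, not an Apple API.

def smt_enabled(physical: int, logical: int) -> bool:
    """SMT is on when the OS sees more logical CPUs than physical cores."""
    return logical > physical

# A 12-core MP5,1 with Hyper-Threading on reports 12/24;
# after SMTDisable, both counts match.
print(smt_enabled(12, 24))  # True
print(smt_enabled(12, 12))  # False
```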
 
For most people, SMT being disabled will be almost imperceptible, but it's a serious problem for people who have to do CPU-intensive tasks like compiling and encoding video, or who run heavy multithreaded apps. No one who needs SMT will want to lose the speedup.

I'm a little skeptical here, because I would only expect it to be a big deal in these areas if you're bottlenecked by something like stall cycles induced by cache misses, where sharing the core itself wouldn't result in even more cache misses. Hyperthreading can give the cpu something to do when it would otherwise be waiting. It's just that on the tightly optimized code paths, it shouldn't be that much of a gain.


This AppleInsider article is ********. ZombieLoad is just one of the MDS class of vulnerabilities; four different ones were disclosed yesterday.

Btw, if Mac Pro Xeon processors were not vulnerable, why does Intel show example kernel code to mitigate the vulnerabilities?

View attachment 837360

https://software.intel.com/security...tel-analysis-microarchitectural-data-sampling

Apple isn't generally forthcoming on these things, and they tend to be optimistic in their own favor. There are other possibilities, like that they already do something similar in kernel code or handle a portion of this at the compiler level (noting that they use a different version of clang than the current open source one).

Also a lot of the instructions there are just limiting runtime reordering and speculative execution of branches. If this is only used at the kernel level, this may not have a significant impact on compiling or video encoding anyway. The people who write those applications tend to treat memory allocation as an expensive operation, which it is, and may try to amortize that cost by using buffers for a long time. Memory pooling interfaces such as glibc's malloc implementation will also not necessarily unmap memory whenever it's freed, which limits kernel interaction.
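The buffer-reuse pattern described above can be sketched like this (a toy pool under illustrative names, not glibc's actual malloc implementation):

```python
# Toy buffer pool illustrating the amortization pattern: reuse long-lived
# buffers instead of asking the allocator (and, eventually, the kernel)
# for fresh memory on every operation. All names here are illustrative.

class BufferPool:
    def __init__(self, size: int):
        self.size = size
        self._free = []  # released buffers, kept instead of freed

    def acquire(self) -> bytearray:
        # Reuse a previously released buffer when one is available.
        return self._free.pop() if self._free else bytearray(self.size)

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)

pool = BufferPool(4096)
a = pool.acquire()   # first call allocates a fresh buffer
pool.release(a)
b = pool.acquire()   # second call reuses the same buffer object
print(a is b)        # True: no new allocation, no new kernel interaction
```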
 
I'm a little skeptical here, because I would only expect it to be a big deal in these areas if you're bottlenecked by something like stall cycles induced by cache misses, where sharing the core itself wouldn't result in even more cache misses. Hyperthreading can give the cpu something to do when it would otherwise be waiting. It's just that on the tightly optimized code paths, it shouldn't be that much of a gain.




Apple isn't generally forthcoming on these things, and they tend to be optimistic in their own favor. There are other possibilities, like that they already do something similar in kernel code or handle a portion of this at the compiler level (noting that they use a different version of clang than the current open source one).

Also a lot of the instructions there are just limiting runtime reordering and speculative execution of branches. If this is only used at the kernel level, this may not have a significant impact on compiling or video encoding anyway. The people who write those applications tend to treat memory allocation as an expensive operation, which it is, and may try to amortize that cost by using buffers for a long time. Memory pooling interfaces such as glibc's malloc implementation will also not necessarily unmap memory whenever it's freed, which limits kernel interaction.

I don't believe Intel either; in the Side Channel Vulnerability Microarchitectural Data Sampling article they show that the worst scenario is just a 14% drop in performance for a Xeon E5-2699 with SMT disabled running storage benchmarks. I bet that this was carefully chosen and that older processors like Westmere will be hit harder.

The new reality is that a good part of the x86 ISA performance gains of the last decade will go away when everyone has to disable branch prediction.
 
Tried everything. Reset NVRAM. Still can't get it to disable. Looks like my BootROM version is hindering me. Unfortunately, I don't have a Mojave-compatible GPU to upgrade the BootROM to 144.
 

Attachments

  • Screen Shot 2019-05-16 at 11.00.22 PM.png
Tried everything. Reset NVRAM. Still can't get it to disable. Looks like my BootROM version is hindering me. Unfortunately, I don't have a Mojave-compatible GPU to upgrade the BootROM to 144.
138.0.0.0.0 is a firmware released months before the definitive Meltdown/Spectre corrections. It already has the current microcodes, but a lot of things changed in the 10 months between 138.0.0.0.0 and 144.0.0.0.0.
 
I don't believe Intel either; in the Side Channel Vulnerability Microarchitectural Data Sampling article they show that the worst scenario is just a 14% drop in performance for a Xeon E5-2699 with SMT disabled running storage benchmarks. I bet that this was carefully chosen and that older processors like Westmere will be hit harder.

The new reality is that a good part of the x86 ISA performance gains of the last decade will go away when everyone has to disable branch prediction.

I'm not so sure of that. First, we're talking about disabling out-of-order execution, but this doesn't necessarily limit instruction prefetching or disable macro-fusion of compare-and-branch instructions.

A lot of the gains over the past decade have been on particular types of workloads, particularly anything that can be reduced to dense arithmetic sequences. We gained FMA3 with independent issuing on ports that previously issued standalone multiply or add instructions. We also saw 256-bit operations added with AVX, up from 128-bit (AVX-512 comes in too many variants and is still too immature). Newer CPUs (Haswell and later) also issue independent loads on up to 2 ports in a given cycle, and allow folding loads that are not guaranteed to be aligned.

There are a few vtest instruction variants, which overlap with the ones I mentioned, but I'm not sure that Intel really supports speculative execution based on those. If a more recent compiler is targeting anything SIMD-based, it's likely to apply partial unrolling anyway.

I could see the low-level code taking more of a dive, since you have more branchiness. I would still expect it to be highly dependent on your workflow and how much time is spent in the kernel. I mean, I wouldn't expect the need to put up a few memory barriers to wipe out a decade of performance gains in compute-bound, performance-critical cases. The scenarios that look like they hurt, based on your link, are those with a lot of memory mapping and unmapping, or possibly driver code?
 
Forgive my ignorance, but is this a simple fix that Apple and/or Intel is simply refusing to address? Is there a way to protect 2009 - 2012 Mac Pros *without* disabling hyper threading that just isn't available yet?

How can Apple or Intel possibly stand by the position of "not fixing" a gaping security hole in one of their products - especially products aimed at professionals? If enough of us make noise, maybe we can change their mind... But again, I have no real knowledge on this subject and defer to the wisdom of Alex and others.
 
I'm not so sure of that. First, we're talking about disabling out-of-order execution, but this doesn't necessarily limit instruction prefetching or disable macro-fusion of compare-and-branch instructions.

A lot of the gains over the past decade have been on particular types of workloads, particularly anything that can be reduced to dense arithmetic sequences. We gained FMA3 with independent issuing on ports that previously issued standalone multiply or add instructions. We also saw 256-bit operations added with AVX, up from 128-bit (AVX-512 comes in too many variants and is still too immature). Newer CPUs (Haswell and later) also issue independent loads on up to 2 ports in a given cycle, and allow folding loads that are not guaranteed to be aligned.

There are a few vtest instruction variants, which overlap with the ones I mentioned, but I'm not sure that Intel really supports speculative execution based on those. If a more recent compiler is targeting anything SIMD-based, it's likely to apply partial unrolling anyway.

I could see the low-level code taking more of a dive, since you have more branchiness. I would still expect it to be highly dependent on your workflow and how much time is spent in the kernel. I mean, I wouldn't expect the need to put up a few memory barriers to wipe out a decade of performance gains in compute-bound, performance-critical cases. The scenarios that look like they hurt, based on your link, are those with a lot of memory mapping and unmapping, or possibly driver code?
You make good points; I didn't take into consideration the amount of kernel/driver code time versus the total application time.

If the performance hit mostly affects kernel and driver code, it's not so horrible when you account for everything. Storage benchmarks depend a lot on kernel and driver code, which is probably why the performance hit is bigger there.

After re-thinking this, I'll probably wait for real-world tests outside of Intel's lab conditions to reassess my expectations, and see which real-world scenarios take the biggest performance hit and with which processor models. Maybe it's not so bad after all…
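The kernel-time argument above can be put into rough numbers with an Amdahl-style estimate (the figures below are made up for illustration, not measurements):

```python
# Back-of-the-envelope sketch: if a mitigation only slows down time spent
# in kernel/driver code, the overall impact scales with the fraction of
# runtime spent there. Illustrative numbers only, not measured penalties.

def overall_slowdown(kernel_fraction: float, kernel_penalty: float) -> float:
    """Return the whole-workload slowdown factor.

    kernel_fraction: share of runtime spent in kernel/driver code (0..1)
    kernel_penalty:  relative slowdown of that code, e.g. 0.30 for 30%
    """
    return (1 - kernel_fraction) + kernel_fraction * (1 + kernel_penalty)

# A compute-bound encode spending 5% of its time in the kernel barely
# notices a hypothetical 30% kernel-side penalty...
print(round(overall_slowdown(0.05, 0.30), 3))  # 1.015
# ...while a storage benchmark spending 60% in the kernel takes a real hit.
print(round(overall_slowdown(0.60, 0.30), 3))  # 1.18
```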
Forgive my ignorance, but is this a simple fix that Apple and/or Intel is simply refusing to address? Is there a way to protect 2009 - 2012 Mac Pros *without* disabling hyper threading that just isn't available yet?

How can Apple or Intel possibly stand by the position of "not fixing" a gaping security hole in one of their products - especially products aimed at professionals? If enough of us make noise, maybe we can change their mind... But again, I have no real knowledge on this subject and defer to the wisdom of Alex and others.

One thing Intel is doing with this announcement of no longer supporting Nehalem and Westmere microcode corrections is to indirectly force the replacement of Nehalem and Westmere processors still in use in enterprise environments. Mac Pros are not Intel's focus, so they will probably easily absorb the flak from this commercial decision.

Microcode is needed to implement buffer cleaning between rings directly on the processor, instead of depending on kernel mitigations that carry a bigger performance penalty. Some processors need both the microcode updates and the kernel code, so it's not a solution that works for all generations of processors. Without the microcode updates, the only full mitigation for the MP5,1 is to disable Hyper-Threading.
 
I'm not so sure of that. First we're talking about disabling out of order execution, but this doesn't necessarily limit instruction prefetching or disable macrofusion of compare and branch type instructions.

A lot of the gains over the past decade have been on particular types of workloads, particularly anything that can be reduced to dense arithmetic sequences. We gained implementation of fma3 with independent issuing on ports that previously issued standalone multiply or add instructions. We also saw 256 bit operations added with AVX, up from 128 bit (AVX512 comes in too many variants and it's still too immature). Newer cpus (Haswell and newer) also issue independent loads on up to 2 ports in a given cycle. Newer cpus also allow folding loads that are not guaranteed to be aligned.

There are a few vtest instruction variants, which overlap with the ones I mentioned, but I'm not sure that intel really supports speculative execution based on that. If a more recent compiler is targeting that anything simd based, it's likely to apply partial unrolling anyway.

I could see the low level code taking more of a dive, since you have more branchiness. I would still expect it to be highly dependent on your workflow and how much time is spent in the kernel. I mean I wouldn't expect the need to put up a few memory barriers to wipe out a decade of performance gains in compute bound performance critical cases. The scnearios that look like they hurt based on your link are those with a lot of memory mapping and unmapping or possibly driver code?
I was checking the VMware Fusion support documentation for the security corrections, and VMware published a table of the performance hit for ESXi in the support article: VMware Performance Impact Statement for 'L1 Terminal Fault - VMM' (L1TF - VMM) mitigations: CVE-2018-3646 (55767)

"this scheduler provides the Hyper-Threading-aware mitigation by scheduling on only one Hyper-Thread of a Hyper-Thread-enabled core."

It's a 22 to 32% performance hit to mitigate side channel attacks:

Screen Shot 2019-05-17 at 08.02.43.png
 