Glad I got a practically maxed out 2019 5K iMac instead. To get something faster, I’d need to spend $8099, and even then the GPU would still be slower than my Vega 48, and it doesn’t even come with a display. That’s quite the disappointment for those who were waiting. My 2019 5K iMac cost half of that.

Let's be real here: The RAM on the 5K iMac is upgradable and you can use eGPUs. When I go to upgrade my machine in 5-6 years, it will probably be faster than a maxed-out 2019 Mac Pro, and I'll be spending in total about as much as a single base Mac Pro, less if you consider resale value, and even less if you factor in needing to buy a 5K display.

The base model Mac Pro, and even the step up from that, is a bad deal no matter which way you look at it. The 5K iMac and iMac Pro are much better deals and have reasonable upgradability. The only reason to get a Mac Pro is if you need crazy GPUs, or dozens of cores and a thousand-plus gigs of RAM, because you work for a Hollywood studio or some well-funded medical research lab, or you're a millionaire YouTuber who needs to flex and records their videos in 8K for no good reason.
 
I work in the music business and spend a lot of my time in tracking and mixing studios, sometimes crossing over into the area of film scoring and dubbing (which is by far the most lucrative part of the industry).

Bar one studio which seems to be talking about buying a new Mac Pro in order to keep a client happy (at a loss), everybody has started looking towards Hackintoshes. Most of the film mixing studios have already made the move. Even the businesses which have the money to pay for one of these machines (which is virtually none) are turning their backs.

All you people who say 'Pros will see it as an investment' are also forgetting that the ~£10k price difference between a well-specced-out Mac Pro and a Hackintosh will also buy you a very nice set of monitors.
I'm a full-time audio professional myself, and while I see the value in Hackintoshes, I'd much rather pay the $3k in Apple tax, because to me their tactic of limiting outside hardware works. As much as I hate to pay the tax, it is 40 hours of our cheapest local client rate, which is more enjoyable to me than 40 (or more) hours of troubleshooting.

But I also have spent WAY more on my audio gear than $15k. Heck, I've spent more than that on a single mic, and I already have Barefoots in the studio. I'd definitely agree with your point and say get the Barefoots (or whatever monitors you like), pres, and a great mic before a new Mac Pro.

In any recording/production environment there is a balance of fidelity and logistics. A base model with 2-8TB of storage is a pretty good deal for me to buy now and upgrade later. It'll be way faster than my upgraded 2010 Mac Pro! So yeah, while there is marketing strategy and overpricing there, I'm glad Apple left base-model options.
 
I'm seeing single-core scores for the 32-core Threadripper of 1086-1384, and multicore scores from 9925 to 30585.

What's up with these ranges?
(FYI, I am building a 32-core Threadripper system to replace my trashcan, just waiting on the processor to come in stock)

I don't have a current gen TR, but I have a 1950x build at work and a 3900x at home.

What's your workload? These chips are immensely powerful - but only for certain workloads. If you can't take advantage of lots of threads, you may actually be better with a lower thread count chip.
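To make that concrete, here is a small sketch of Amdahl's law, which caps the speedup extra cores can deliver. The 80% figure below is a hypothetical workload mix, not a measurement of any real chip:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A workload that is only 80% parallel barely benefits past a dozen or so cores:
for cores in (4, 8, 16, 32, 64):
    print(f"{cores:>2} cores -> {speedup(0.8, cores):.2f}x")
```

For heavily parallel rendering (Cinema 4D) p is close to 1 and high core counts pay off; for lightly threaded app workflows, a higher-clocked chip with fewer cores can win.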
 
TB is PCIe.

Please show some actual data on a Thunderbolt audio interface having so much more latency than an internal PCIe card that it makes a meaningful difference in audio.
TB3 is limited to 4 PCIe lanes of the 16 available.
TB3 sits behind an additional layer of abstraction.
PCIe has more than 3x the total throughput.

They are not the same.
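Rough back-of-the-envelope numbers (idealized line rates, ignoring protocol overhead and Thunderbolt's real-world PCIe tunneling cap, which is lower still) illustrate the bandwidth gap:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> usable Gbps per lane.
PCIE3_LANE_GBPS = 8 * 128 / 130

tb3_gbps = 4 * PCIE3_LANE_GBPS   # Thunderbolt 3 tunnels at most 4 lanes
x16_gbps = 16 * PCIE3_LANE_GBPS  # a full internal x16 slot

print(f"TB3 ~{tb3_gbps:.1f} Gbps, x16 slot ~{x16_gbps:.1f} Gbps, "
      f"ratio {x16_gbps / tb3_gbps:.0f}x")
```

Note that neither figure says anything about latency per se; audible round-trip latency on an audio interface is dominated by driver and buffer size, not raw bandwidth.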
 
Gaming PCs are generally very good at sustained performance: better than iMacs, and at least as good as the Mac Pro if not better, as there are people who water-cool their PCs, for example.
Also, synthetic benchmarks aside, the faster PC or iMac (the i9-9900K variant) will win in real-world applications as well. The Mac Pro doesn't use any special CPUs or GPUs; there's no secret sauce, just standard computer parts put together in a nice case.
Nope.
They are not using "standard" computer parts.
They are using workstation-grade parts, mostly designed by Apple with sustained performance in mind.
Playing a game for 2-3 hours isn't the same scenario I depicted above (8 hrs a day, 5 days a week or more, for years).
 
Nope.
They are not using "standard" computer parts.
They are using workstation-grade parts, mostly designed by Apple with sustained performance in mind.
Playing a game for 2-3 hours isn't the same scenario I depicted above (8 hrs a day, 5 days a week or more, for years).
There's nothing wrong with what I said.
A Xeon CPU is a standard computer part. I can go and buy one right now. I can buy ECC RAM, a FirePro or Quadro GPU no problem.
And I don't know what Apple designed, but regarding the Mac Pro they just changed the shape and size of the PCB and the cooling system; nothing special or even necessary.

People can easily play games for more than 3 hours; they can play for more than 8 hours. It doesn't matter, and I don't know why you insist on this. A typical gaming PC can sustain max GPU and CPU performance indefinitely.
The stock Ryzen 3700X cooler lets the CPU run at its typical max boost clock, no problem, for as long as it needs to, and so on.
I don't see anything special about the Mac Pro's sustained performance. This has been normal in PC land for as long as I can remember.
 
Almost no one out there seems to have discussed or reviewed the new Mac Pro's performance when it comes to compiling code, other than unhelpful Geekbench reports that don't reflect real workflows where perhaps only Xcode and similar build tools are used. All such reviews seem to focus on its GPU / VFX / audio capabilities. Here's my recent experience using the Mac Pro as a developer.

I just received a 16-core Mac Pro yesterday, with 32GB RAM. I'm a developer and was hoping to cut down the time it takes to compile code that takes roughly 2 minutes on a 2019 iMac i9 3.6GHz 8-core with 128GB RAM. I was hoping to bring this down to at least 1 minute with the 16-core, but was surprised that after numerous tests on numerous different projects, the new Mac Pro either took exactly the same amount of time to compile the same code as the iMac, or was at times only 0-5 seconds faster. The projects involved a mix of C / C++ / Obj-C and Swift code. Surprisingly, almost zero gains with the new 16-core.

I then compared this with a MacBook Pro 16-inch (2.4GHz 8-core i9 with 64GB RAM), and the MacBook Pro only took 3-10 seconds longer to compile the same projects.

This was a pretty shocking discovery after having spent $9,000+ on the new Mac Pro. After a day of using the Mac Pro, I'm unimpressed with its performance and have called Apple to return it. It seems the much cheaper iMac gives value for money, and I'll wait for the 10th-generation Intel chips for the iMac instead, however long it takes Intel and Apple to come out with these.

I've been a Mac Pro user for the past 12 years and have had the last two or three iterations along with a MacBook Pro. My Mac Pros have always blown away other hardware Apple produced, until now. This was also my first iMac, and I'm pleasantly surprised to see it performing so well. This may well reflect how poorly Xcode utilizes multiple cores, but the monitoring I performed suggested otherwise: it was using all 16 cores at peak times during compilation, yet it didn't do that well compared to the new 8-core i9s.

For comparison, the same code takes around 2:50 to 3 minutes to compile on a MacBook Pro 15-inch 4-core i7 (2015 model). So there's a big jump in performance between the 4-core i7 and the 8-core i9, but negligible or none between the 8-core i9 and the 16-core Xeon W. Sad.
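Putting the reported times side by side (figures quoted from the post above; the Mac Pro number takes the best case of a 5-second win, and the 2015 machine's 2:50-3:00 range is taken at its midpoint) shows how flat the scaling was:

```python
# Build times in seconds, as reported above.
times = {
    "MacBook Pro 15\" 4-core i7 (2015)": 175,  # midpoint of ~2:50-3:00
    "iMac 8-core i9 (2019)":             120,  # ~2 minutes
    "Mac Pro 16-core Xeon W":            115,  # best case: 5 s faster than iMac
}

base = times["iMac 8-core i9 (2019)"]
for name, t in times.items():
    print(f"{name}: {t}s ({base / t:.2f}x vs the iMac)")
```

Doubling the core count bought at most ~1.04x, where near-perfect scaling would approach 2x.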
 
Last edited:
There's nothing wrong with what I said.
A Xeon CPU is a standard computer part. I can go and buy one right now. I can buy ECC RAM, a FirePro or Quadro GPU no problem.
And I don't know what Apple designed, but regarding the Mac Pro they just changed the shape and size of the PCB and the cooling system; nothing special or even necessary.

People can easily play games for more than 3 hours; they can play for more than 8 hours. It doesn't matter, and I don't know why you insist on this. A typical gaming PC can sustain max GPU and CPU performance indefinitely.
The stock Ryzen 3700X cooler lets the CPU run at its typical max boost clock, no problem, for as long as it needs to, and so on.
I don't see anything special about the Mac Pro's sustained performance. This has been normal in PC land for as long as I can remember.
I don't know why you insist on this nonsense, so I think it is a waste of my time to keep arguing.
A gaming computer IS NOT a workstation. No matter what you think.
 
I don't know why you insist on this nonsense, so I think it is a waste of my time to keep arguing.
A gaming computer IS NOT a workstation. No matter what you think.
Workstation or gaming PC, it doesn't matter what it's called; in the end it's just a computer.
So I don't see the nonsense in what I wrote. Maybe you could enlighten me.
If I built a PC with a Threadripper 3960X and a Titan RTX GPU and considered it both a workstation and a gaming PC, would I be wrong?
 
Almost no one out there seems to have discussed or reviewed the new Mac Pro's performance when it comes to compiling code, other than unhelpful Geekbench reports that don't reflect real workflows where perhaps only Xcode and similar build tools are used. All such reviews seem to focus on its GPU / VFX / audio capabilities. Here's my recent experience using the Mac Pro as a developer.

I just received a 16-core Mac Pro yesterday, with 32GB RAM. I'm a developer and was hoping to cut down the time it takes to compile code that takes roughly 2 minutes on a 2019 iMac i9 3.6GHz 8-core with 128GB RAM. I was hoping to bring this down to at least 1 minute with the 16-core, but was surprised that after numerous tests on numerous different projects, the new Mac Pro either took exactly the same amount of time to compile the same code as the iMac, or was at times only 0-5 seconds faster. The projects involved a mix of C / C++ / Obj-C and Swift code. Surprisingly, almost zero gains with the new 16-core.

I then compared this with a MacBook Pro 16-inch (2.4GHz 8-core i9 with 64GB RAM), and the MacBook Pro only took 3-10 seconds longer to compile the same projects.

This was a pretty shocking discovery after having spent $9,000+ on the new Mac Pro. After a day of using the Mac Pro, I'm unimpressed with its performance and have called Apple to return it. It seems the much cheaper iMac gives value for money, and I'll wait for the 10th-generation Intel chips for the iMac instead, however long it takes Intel and Apple to come out with these.

The Mac Pro will only be of benefit to very select workloads. An iMac Pro already offers plenty of cores, and even the regular iMacs now do with Coffee Lake Refresh, as you've discovered.

If Apple were to offer a Coffee Lake Refresh version of the Mac mini, it would probably be quite a good deal for a compiling build host. But they don't, so the iMac may be the best bang for the buck.

If Apple ships a Comet Lake-S update for the iMac ("10th generation" for this particular profile), that should bump it to up to ten cores. There may also be some slight single-threaded improvements. For example, Coffee Lake-SR only goes up to DDR4-2666, and Comet Lake-S might (might!) bump that up to 2933. So, slightly faster memory. At the same time, packing even more cores in the same process inevitably means that a 10-core version of the same CPU will actually have lower clock rates than your current chip.

We probably won't see real single-perf bumps for another two generations: not with Comet Lake-S in 2020, and not with Rocket Lake-S in 2021, which is still 14nm, but with the generation after that. But for multicore, there's likely to be a ten-core Comet Lake-S-based iMac next summer.
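The memory bump would be modest; theoretical per-channel bandwidth (transfer rate times 8 bytes per 64-bit transfer) works out to roughly a 10% gain. The 2933 figure is the speculative Comet Lake-S number from above:

```python
# Theoretical DDR4 per-channel bandwidth: MT/s x 8 bytes per transfer.
def ddr4_gb_per_s(mts: int) -> float:
    return mts * 8 / 1000

current  = ddr4_gb_per_s(2666)  # Coffee Lake-S Refresh
possible = ddr4_gb_per_s(2933)  # Comet Lake-S (speculative)
print(f"{current:.1f} -> {possible:.1f} GB/s per channel "
      f"(+{possible / current - 1:.0%})")
```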

This may well reflect how poorly Xcode utilizes multiple cores, but the monitoring I performed suggested otherwise: it was using all 16 cores at peak times during compilation, yet it didn't do that well compared to the new 8-core i9s.

That's interesting, actually.

My understanding is that the Swift compiler is 1) very slow, but 2) fairly good at taking advantage of many cores. This is quite unlike, say, C#'s Roslyn compiler, which I've found to scale poorly to multiple cores (to be fair, it's a bit hard to compare JITC and AOTC compilers). However, as you've said, I've seen very little feedback on how the Mac Pro fares for developers. I have seen plenty of people praise the iMac Pro in that regard, but it's getting a little long in the tooth, and I'd be wary of choosing it over the iMac at this point. (The iMac Pro does have some non-obvious advantages, such as a better cooling system.)

I wouldn't entirely rule out that your Mac Pro is hampered by initial setup (Spotlight indexing, whathaveyou) and that your tests aren't reflective of long-term performance. Regardless, my personal recommendation, especially if you're happy to save multiple thousands of dollars, is to forego 'pro' altogether and just spec out a high-end iMac.
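One hedged explanation (a toy model, not a measurement of Xcode's actual scheduler) is a build with a long serial tail: all 16 cores can look busy during the parallel compile phase, yet total time is dominated by a serial linking or module-merging step that extra cores cannot shorten:

```python
import math

# Toy build: n_units parallel compile jobs packed onto `cores` workers,
# followed by a serial tail (linking, module merging) no extra core helps.
def build_time(n_units: int, unit_s: float, serial_s: float, cores: int) -> float:
    return math.ceil(n_units / cores) * unit_s + serial_s

# 64 one-second compile units plus a 60-second serial tail:
for cores in (8, 16):
    print(f"{cores} cores -> {build_time(64, 1.0, 60.0, cores):.0f}s")
```

In this sketch, doubling from 8 to 16 cores shaves only a few seconds off the total, which is roughly the pattern reported above.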
 
Workstation or gaming PC, it doesn't matter what it's called; in the end it's just a computer.

That's exactly the reason why it is a waste of time to argue about this.
You should call HP, Dell, and Lenovo to tell them to stop producing workstations, which in many cases sell at higher prices than a Mac Pro. Those are "just a computer," and a gaming PC is what everyone needs.
Almost no one out there seems to have discussed or reviewed the new Mac Pro's performance when it comes to compiling code, other than unhelpful Geekbench reports that don't reflect real workflows where perhaps only Xcode and similar build tools are used. All such reviews seem to focus on its GPU / VFX / audio capabilities. Here's my recent experience using the Mac Pro as a developer.

I just received a 16-core Mac Pro yesterday, with 32GB RAM. I'm a developer and was hoping to cut down the time it takes to compile code that takes roughly 2 minutes on a 2019 iMac i9 3.6GHz 8-core with 128GB RAM. I was hoping to bring this down to at least 1 minute with the 16-core, but was surprised that after numerous tests on numerous different projects, the new Mac Pro either took exactly the same amount of time to compile the same code as the iMac, or was at times only 0-5 seconds faster. The projects involved a mix of C / C++ / Obj-C and Swift code. Surprisingly, almost zero gains with the new 16-core.

I then compared this with a MacBook Pro 16-inch (2.4GHz 8-core i9 with 64GB RAM), and the MacBook Pro only took 3-10 seconds longer to compile the same projects.


32 GB Mac Pro vs 128 GB iMac i9 vs 64 GB MBP 16". 🤔

Is that making any difference?
 
Last edited:
I don't know why you insist on this nonsense, so I think it is a waste of my time to keep arguing.
A gaming computer IS NOT a workstation. No matter what you think.

Sorry friend... But you're the one who's actually wrong about a lot of this.


From a components standpoint, they're fundamentally the same. What workstations tend to offer that gaming computers don't is professional support levels and more dedicated drivers to handle more specific workloads. In addition, they may have some additional redundancies built in to maximize uptime.

But the fundamental architectures of the two are extremely similar.

They both focus around the same overall trifecta of compute.

Where SOME differences arise is more out of Intel's current production lineup and marketing. Intel's CPUs, whether Core i series or Xeons, share the same architecture and often the same performance per clock, since they're essentially the same die. Where Xeons differ is that Intel has decided ONLY the Xeon lineup is allowed to have multi-CPU interconnects and ECC RAM support. Otherwise, they're pretty much identical.

This also becomes a moot point when you go into AMD's camp. AMD supports ECC natively on every Ryzen chip.

As for GPUs: again, the workstation-class cards are not fundamentally different in architecture from gaming cards. Same GPU chips. They tend to be more expensive because they have dedicated driver support for specific workloads, and they often come with more VRAM. Gaming cards save the end user a few bucks by scaling down how much VRAM is on the card, because gaming doesn't necessarily need as much as a full computational workload might.

But other than that, a "workstation" and a gaming PC are basically the same fundamental architecture.

Again, the bulk of the difference between the two is "quality": higher-binned parts that are supposedly rated for longer lifespans and are less prone to failure.

But in regards to how the computers actually work? Identical.

I have 20+ years experience working in IT, buying workstations for anything from developers, number crunchers, all the way down to office desktop workers. I have been building personal computers for 30+ years.

I HIGHLY recommend, before you start trying to act all high and mighty that everyone else must be wrong, that you actually research what you're claiming.
 
Being an IT guy with almost 30 years of experience in the field, I don’t have anything else to add.
Nobody wrote about different architecture.
Quality and reliability are exactly what I’m speaking about. Uptime is exactly what I’m pointing out.
A gaming PC is not a workstation, and it is not just a question of “drivers”.
I HIGHLY recommend reading the thread before jumping in and patronizing.
 
Last edited:
That's exactly the reason why it is a waste of time to argue about this.
You should call HP, Dell, and Lenovo to tell them to stop producing workstations, which in many cases sell at higher prices than a Mac Pro. Those are "just a computer," and a gaming PC is what everyone needs.
Actually I'm wasting my time.
Also I don't have to call anybody.
 
Being an IT guy with almost 30 years of experience in the field, I don’t have anything else to add.
Nobody wrote about different architecture.
Quality and reliability are exactly what I’m speaking about. Uptime is exactly what I’m pointing out.
A gaming PC is not a workstation, and it is not just a question of “drivers”.
I HIGHLY recommend reading the thread before jumping in.

The point you're making is irrelevant.

While binned differently, the binning process essentially just says, "We have tested this and believe it to be more reliable."

That does not necessarily translate to real-world reliability. You pay extra for that when you pay for workstations, but it in no way guarantees it (parts are not generally guaranteed by manufacturers such as Intel or NVIDIA against that sort of failure).

The workstations are so expensive because, alongside those higher-binned parts, you also tend to get extremely robust manufacturer warranties to cover failure. This is because, as we know in business, downtime = loss of money.

Gaming computers, while fundamentally the same from a hardware perspective, don't tend to come with the same warranty support. Dell will come onsite within 4 hours to replace hardware on workstations; they will not do that for a standard desktop.

It's the business side of the delivery that really defines the variance between workstations and standard desktop PCs (which gaming PCs are part of).

The other exception to the above is if you are getting server parts in your workstation (which you can do). Again, not architecturally different (until you get into multi-CPU units, but that's mostly because the consumer side stopped pushing multi-CPU machines once core counts climbed in single-chip units).

I've been trying to follow the meandering of this conversation, but you seem to want to take an oppositional stance toward anyone just to continue your claim that a workstation is somehow fundamentally different from a desktop or gaming computer.

From a technological standpoint, the differences are minor or nonexistent. The bulk of the difference is in business support, software support, and warranty support, plus slightly nicer "quality," which you pay for.
 
The point you're making is irrelevant.

While binned differently, the binning process essentially just says, "We have tested this and believe it to be more reliable."

That does not necessarily translate to real-world reliability. You pay extra for that when you pay for workstations, but it in no way guarantees it (parts are not generally guaranteed by manufacturers such as Intel or NVIDIA against that sort of failure).

"We have designed it, we have tested this, and believe it to be more reliable" is exactly the core of the business when a workstation is involved.
Because downtime is sometimes not only undesirable: it is something you cannot allow.

That's why you pay 5 times the money for a workstation that you'd pay for a "regular" desktop with similar performance on a benchmark.
 
"We have designed it, we have tested this, and believe it to be more reliable" is exactly the core of the business when a workstation is involved.
Because downtime is sometimes not only undesirable: it is something you cannot allow.

Yes, on that we're in agreement. Hence why we are willing to pay a lot more for these workstation machines: it's the support, warranty, and reliability.

Typically nothing to do with performance or the internal parts themselves.
 
Yes, on that we're in agreement. Hence why we are willing to pay a lot more for these workstation machines: it's the support, warranty, and reliability.

Typically nothing to do with performance or the internal parts themselves.
So you really didn’t read my comments.

Post #196:

Reliability and sustained performance.
Those are the “magic” words behind a workstation.
Most of the users here just aren't aware of that, and they are trying to judge a workstation by enthusiast computer standards.
All I see in this thread are benchmarks. But what about the capability of maintaining the same performance 8 hrs a day, 5 days a week?


English isn't my first language, even though I speak it every single day for most of my day, but I have been saying that since the beginning.
People here focus on benchmarks, but performance is not the point in judging a workstation (not the ONLY point, at least).
 
So you really didn’t read my comments.

Post #196:

Reliability and sustained performance.
Those are the “magic” words behind a workstation.
Most of the users here just aren't aware of that, and they are trying to judge a workstation by enthusiast computer standards.
All I see in this thread are benchmarks. But what about the capability of maintaining the same performance 8 hrs a day, 5 days a week?


English isn't my first language, even though I speak it every single day for most of my day, but I have been saying that since the beginning.
People here focus on benchmarks, but performance is not the point in judging a workstation (not the ONLY point, at least).

There are a lot of posts; I apologize if I missed one or two or misunderstood.
 
I don't have a current gen TR, but I have a 1950x build at work and a 3900x at home.

What's your workload? These chips are immensely powerful - but only for certain workloads. If you can't take advantage of lots of threads, you may actually be better with a lower thread count chip.
Cinema 4D and Adobe Apps
 
The funny thing is that the silicon in the 8-core 9900K is of higher quality than the silicon in the 8-core Xeon. Yeah, it's just a detail.
The top-quality Xeon silicon is found in the 28-core part.
 
Last edited: