I work in the music business and spend a lot of my time in tracking and mixing studios, sometimes crossing over into the area of film scoring and dubbing (which is by far the most lucrative part of the industry).

Bar one studio, which seems to be talking about buying a new Mac Pro in order to keep a client happy (at a loss), everybody has started looking towards Hackintoshes. Most of the film mixing studios have already made the move. Even the businesses which have the money to pay for one of these machines (and there are virtually none) are turning their backs.

All you people who say ‘Pros will see it as an investment’ are also forgetting that the ~£10k price difference between a well-specced-out Mac Pro and a Hackintosh will also buy you a very nice set of monitors.

Look, I'm not sure what country you're in or what scope of project you're working on - but outside of single-seat freelancers, no reliable studio is going to gamble on running Hackintoshes. Or if they are, they've got a lot more stomach for needless risk than anyone I know running any kind of post house.

*IF* it was just me doing freelance work, *AND* I was comfortable knowing that at any time an OS, app, or driver update could bork things and send me scrambling to forums for help, *AND* I was okay with pirating an OS for commercial work - maybe. But I don't know a single studio that would make that trade for a couple of thousand dollars in savings. For anyone doing any kind of volume of work, the price of hardware is kind of incidental. I mean, it's not nothing - I'd always prefer to spend less money whenever possible - but over the lifespan of a studio machine, we're talking savings of a few tens of dollars a month for a tremendous amount of "total downtime" risk.
 
Giving a cursory look at these scores? How come my 2015 iMac 5K beats these in all numbers? Did Geekbench come out with a new version where what was formerly benching 17,000+ is now benching 2,000, just to keep people with high-end iMac Pro and Mac Pro machines from seeing too many digits in their bench scores?
 
Ryzen in a future Mac Pro is inevitable.

Intel could turn it around. Who knows what the future holds.
For most people, a 5K iMac offers the best value for money in the Mac lineup. The iMac Pro and Mac Pro are unnecessary and offer much lower value propositions. If you need more power than a 5K iMac, then offload the work to a server.

What server can you offload a Mac app to?
 
Giving a cursory look at these scores? How come my 2015 iMac 5K beats these in all numbers? Did Geekbench come out with a new version where what was formerly benching 17,000+ is now benching 2,000, just to keep people with high-end iMac Pro and Mac Pro machines from seeing too many digits in their bench scores?

lol, yes. I believe Geekbench 5 changed its scoring system to a new baseline.
 
No offense, but AMD is so far ahead of Intel right now that the Athlon 64 days seem like a joke.
The Ryzen 3950X is AMD's mainstream consumer part, and it can compete with Intel's HEDT parts no problem.

A $749 CPU is "mainstream"? (AMD actually categorizes it as "enthusiast". Right below HEDT, but well above "mainstream" and "performance".)

The Athlon 64 days were fascinating. Zen is pretty good as well. It's quite a stretch to call AMD "far ahead", though.

So yeah, it's a shame Apple doesn't offer a Mac Pro variant with an AMD CPU. The 7nm Zen 2 architecture is really impressive.

Maybe.

It's not gonna change, though. AMD didn't have its act together for an entire decade, and Apple isn't gonna move based on a brief resurgence. The real win for everyone is that AMD is forcing Intel (or at least being perceived as forcing Intel) to reduce pricing and increase core count. That's about it.

We'll see how Tiger Lake, etc. pan out.
Currently, I don't see any reason for them not to jump ship completely to AMD.

And I can't think of a reason to do so. To what end? To appease some MacRumors forums poster who was never going to buy that product anyway?

Billing them for what exactly? What could this Mac Pro do faster than a *nix box in a data center? FileMaker Server? I'm not aware of any way to manually allocate cores or threads in FileMaker Server.

I mean, you're not going to do rendering or post-production in a data center. I suppose you could virtualize and assign cores that way, but again, what Mac-specific application would these customers need so badly which couldn't be delivered more powerfully and less expensively by hosting on another OS?

Xcode.

And, uh, rendering in a data center is definitely a thing, and has been for a long time? Why wouldn't you do that? (Before you say 'GPU': too imprecise.)
 
Upgradeability: Most machines will never see a huge CPU upgrade, because by the time it matters the state of the art will no longer fit in that socket. RAM is possible, but my laptop can do that. GPU: this is probably the most practical upgrade, so OK, I concede the Mac Pro has better practicality for GPU upgrades. Is it worth $3,100 over a mobile workstation that also gives portability, comparable processing power, a longer warranty, and CUDA cores? I can buy a second workstation in 2-3 years for the money I saved, with a better GPU, and still break even.

Yes, congratulations: a laptop is more portable, and a tower is more upgradeable. You've figured out that your machine is something completely different from a Mac Pro.
Giving a cursory look at these scores? How come my 2015 iMac 5K beats these in all numbers? Did Geekbench come out with a new version where what was formerly benching 17,000+ is now benching 2,000, just to keep people with high-end iMac Pro and Mac Pro machines from seeing too many digits in their bench scores?

Geekbench 5 has a different scoring baseline than Geekbench 4.
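To see why the raw numbers aren't comparable across versions, here is a minimal sketch; the reference scores below are my own assumptions about how the two versions are calibrated, not official Primate Labs figures:

```python
# Rough sketch: why Geekbench 4 and Geekbench 5 numbers can't be compared directly.
# Each version is calibrated so that a chosen reference machine scores a fixed
# baseline value, and both the reference machine and the baseline changed between
# versions. The values below are assumptions for illustration only.

GB4_BASELINE = 4000   # assumed Geekbench 4 calibration score
GB5_BASELINE = 1000   # assumed Geekbench 5 calibration score

def as_multiple_of_baseline(score: float, baseline: float) -> float:
    """Express a score relative to that version's own calibration point."""
    return score / baseline

# A 17,000+ Geekbench 4 result and a ~2,000 Geekbench 5 result sit on different
# scales (and the workloads changed too), so the smaller number does not mean
# the machine got slower.
print(as_multiple_of_baseline(17_000, GB4_BASELINE))  # 4.25
print(as_multiple_of_baseline(2_000, GB5_BASELINE))   # 2.0
```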
 
And you don’t have to buy a $5,000 monitor or $1,000 stand when you buy an iMac Pro.

The Mac Pro can run whatever monitor you want to plug into it. You don't "have to" buy Apple's new monitor. It's for high color accuracy for specific types of use (video, high end photography), and is meant to compete at the very high end of the display market where 4- or even 5-figure prices are the norm. Mac Pro users who don't need that kind of accuracy and need the thing more for raw processing power may just plug in a "good enough" Dell or whatever and call it a day.

I like the iMac form factor too, but decoupling the display from the rest of the machine allows users to tailor their desktop machine to their needs -- and to upgrade the monitor and CPU on totally separate timelines.
 
I am just amazed at how little all (well, most) of the supposed Mac-heads here know about Xeon-level workstations. Wow. All of Apple's competitors in the workstation space are peeing their pants right now. Y'all really have no concept of the performance level of this thing (and what you're getting for the $). I'll just leave it at that.
 
After looking at this page: https://www.cpubenchmark.net/singleThread.html, I don't quite understand how the Mac Pro with the latest Xeon gets such a low number compared to other Xeon processors in that list.


Oh, I thought Adobe had upped their game with multi-core support by now.

More cores = lower clocks. And Intel has been sitting on its ass with regard to instructions per clock for pretty much a decade.
 
I work in the music business and spend a lot of my time in tracking and mixing studios, sometimes crossing over into the area of film scoring and dubbing (which is by far the most lucrative part of the industry).

Bar one studio, which seems to be talking about buying a new Mac Pro in order to keep a client happy (at a loss), everybody has started looking towards Hackintoshes. Most of the film mixing studios have already made the move. Even the businesses which have the money to pay for one of these machines (and there are virtually none) are turning their backs.

All you people who say ‘Pros will see it as an investment’ are also forgetting that the ~£10k price difference between a well-specced-out Mac Pro and a Hackintosh will also buy you a very nice set of monitors.
"Everybody is building a Hackintosh" is such BS. Any studio worth its salt is going to prioritize reliability and workflow over saving a few bucks. As soon as you run into your first problem with a Hackintosh (and believe me, you will), it completely kills both reliability and your workflow. Hackintoshes are not for professionals trying to get work done at a studio. They can be great tinkering machines, but I wouldn't want to depend on one when I'm under a deadline.
 
The love-fest for the AMD processors is amusing. I am old enough to remember when AMD was "going to crush" Intel with their 486s, until they didn't. Then it was the K6 as a Pentium MMX killer, even competing with Pentium IIs, until they didn't. After that it was the Athlon 64 and Opteron beating the Pentium III/P4, and AMD was going to dominate, until they didn't. And now here we are today: once again, supposedly, AMD is going to kill Intel with their processors. History tells me that they won't, yet again. They lack the capital to compete over the long haul, and always have. They play a great role disrupting Intel and forcing them to move faster than they would like, but the promises of AMD overturning Intel's hegemony ring hollow.

I applaud what they do, and have owned and used many AMD processors, but in my nearly 20-year career in high tech, at only one point did any of the data center operations I managed use any AMD processors, and that was only for a brief period of time; their ecosystem is just never as robust and well-developed at the high end and in the enterprise. Things like chipsets matter, for example. Stability and reliability matter more than raw performance and price when there's big money on the line. The Intel processors may lag for some functions right now, but not for others, and the price delta isn't that dramatic at the high end either.

What will be interesting is to see if the processors will be upgradeable in the Mac Pro, as I'm sure Intel will respond to AMD's recent successes, as they always have.
 
The love-fest for the AMD processors is amusing. I am old enough to remember when AMD was "going to crush" Intel with their 486s, until they didn't. Then it was the K6 as a Pentium MMX killer, even competing with Pentium IIs, until they didn't. After that it was the Athlon 64 and Opteron beating the Pentium III/P4, and AMD was going to dominate, until they didn't. And now here we are today: once again, supposedly, AMD is going to kill Intel with their processors. History tells me that they won't, yet again. They lack the capital to compete over the long haul, and always have. They play a great role disrupting Intel and forcing them to move faster than they would like, but the promises of AMD overturning Intel's hegemony ring hollow.

The big difference this time is that Intel already lost one antitrust case, and doesn't want another [1]. In the past when AMD was delivering better performance for less money, Intel forced PC makers to delay or not use AMD. Intel is actually having to compete now. There's a reason Intel had to slash prices over the summer.

With Threadripper 3 parts released and being released (a 64-core version is apparently coming soon), Intel is really behind. I don't think we have seen performance jumps like this in CPUs in a decade. Reviews are using words like 'embarrassing' [2] and 'bloodbath' [3]. AMD's biggest problem now is keeping up with demand.

[1] https://www.cnet.com/news/intel-to-pay-amd-1-25-billion-in-antitrust-settlement/
[2] https://www.digitaltrends.com/computing/amd-threadripper-3-news-rumors-price-release-date-specs/
[3] https://www.anandtech.com/show/1504...0x-and-3970x-review-24-and-32-cores-on-7nm/15
 
Giving a cursory look at these scores? How come my 2015 iMac 5K beats these in all numbers? Did Geekbench come out with a new version where what was formerly benching 17,000+ is now benching 2,000, just to keep people with high-end iMac Pro and Mac Pro machines from seeing too many digits in their bench scores?


I just ran Geekbench 5 on my late 2015 iMac i7... Intel kinda sucks these days.
 

The big difference this time is that Intel already lost one antitrust case, and doesn't want another [1]. In the past when AMD was delivering better performance for less money, Intel forced PC makers to delay or not use AMD. Intel is actually having to compete now. There's a reason Intel had to slash prices over the summer.

With Threadripper 3 parts released and being released (a 64-core version is apparently coming soon), Intel is really behind. I don't think we have seen performance jumps like this in CPUs in a decade. Reviews are using words like 'embarrassing' [2] and 'bloodbath' [3]. AMD's biggest problem now is keeping up with demand.

[1] https://www.cnet.com/news/intel-to-pay-amd-1-25-billion-in-antitrust-settlement/
[2] https://www.digitaltrends.com/computing/amd-threadripper-3-news-rumors-price-release-date-specs/
[3] https://www.anandtech.com/show/1504...0x-and-3970x-review-24-and-32-cores-on-7nm/15

That antitrust settlement is ten years old and many of the limitations included in it dated back even further. It's irrelevant.

The keeping up with demand issue isn't new; AMD couldn't keep up with demand back when they were making 386 clones, either. Performance? The original Athlons and Opterons crushed competing Intel processors.

Look at 2018 revenue: Intel is over 10x the size. One decent-performing processor family is not going to change that calculus.

Reviews use terms like that to get clicks. Reading over that review, I didn't find that the data supported such a dramatic claim. The performance on many of the benchmarks was close enough to be negligible, and while the Threadripper performed extremely well on some tests, so did the Intel, especially on tasks that take advantage of AVX-512, where there are huge performance benefits to be realized.

Has AMD helped push Intel? Yes, but that's been the case since the 1970s. It's been 50 years and they haven't supplanted Intel yet. I wouldn't hold my breath that somehow the latest processor will change everything.
 
I was considering a 12-core, even though I'd rather have a 16-core or better. But the 8-core scores lower in the single-core benchmark than my 6-core 2018 i5 Mac mini. And both are terrible compared to my Threadripper 2950X. Of course, a sustained workload is a far better measure than a benchmark.
 
So much love for Intel in this thread. If AMD was still stuck on Bulldozer, we'd still have 4-core/8-thread as the top-of-the-line CPU choice in iMacs in 2019...

I may hate that Apple is in a love fest with AMD GPUs instead of offering Nvidia, but Intel has purposely held desktop CPUs back at 4 cores for the last 5+ years to milk the cash cow. The only reason Intel is having major shortages is that they're still stuck on 14nm and are now being forced to build much bigger dies than they were originally planning, since they're now actually making 8-core CPUs in the consumer space.

Also, Intel is not catching up anytime soon. All of next year's 10nm Intel CPUs are low-power chips... When Intel finally releases 10nm desktop CPUs, AMD will be on 5nm...
 
If you're using a simplistic CPU benchmark to evaluate a server or workstation, you're making a mistake, or at least probably shouldn't be considering a server or workstation.

Server-class CPUs like Xeon and EPYC are not optimized for peak single-thread performance, in part because that's not what the target market runs, and because there are many points in CPU and system design where the trade-offs are between single-core performance, multi-core performance, and reliability.

Memory bandwidth vs. latency is an example. A bigger cache increases latency while also increasing bandwidth, but only for applications which can take advantage of the larger cache. Similarly, more memory controllers increase bandwidth at the expense of additional on-chip routing which adds latency. For memory-bound workloads, that’s an easy choice—and GPUs require a lot of both memory and PCIe bandwidth. (At least, for work involving lots of data—but not games or small benchmarks.)
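As a rough sketch of the bandwidth side of that trade-off, here is how theoretical peak memory bandwidth falls out of channel count and transfer rate; the configurations below (a 6-channel DDR4-2933 setup vs. a dual-channel desktop) are illustrative assumptions, not measurements of any specific machine:

```python
# Theoretical peak memory bandwidth = channels x transfers/s x bytes per transfer.
# Real, sustained bandwidth is lower, and latency is a separate question entirely.

def peak_bandwidth_gbs(channels: int, transfers_per_sec: float, bytes_per_transfer: int = 8) -> float:
    """Peak bandwidth in GB/s for a given memory configuration (illustrative)."""
    return channels * transfers_per_sec * bytes_per_transfer / 1e9

# Assumed workstation-style config: 6 channels of DDR4-2933 (64 bits per channel).
print(peak_bandwidth_gbs(6, 2.933e9))  # ~140 GB/s
# Assumed desktop-style config: 2 channels of DDR4-2666.
print(peak_bandwidth_gbs(2, 2.666e9))  # ~43 GB/s
```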

There are more specialized benchmarks suitable for evaluating servers and workstations, for instance those published by SPEC. Obviously the best benchmark is your own application, if you can find a system to test on. Key software companies have likely been optimizing their code for some months already, but probably have more months of optimization ahead, and smaller developers may not have had early access at all. It will take some time before enough users have their hands on the Mac Pro to evaluate how well it serves their needs.

As a software developer, I know the Mac Pro is not for me: compilation doesn’t tax the CPU or memory subsystem enough. An iMac Pro is much cheaper and will serve just as well, and even the Core i9 is fine if I don’t mind the lack of ECC RAM. But if I were working with high resolution video, and had the need to apply filters, transcoding, color adjustments, etc. then the memory bandwidth in the Mac Pro could make a huge difference. If I were training AI models, it’d work well, but I might prefer a more traditional server with multiple Xeons. For photography? I’d have to measure it, or learn from others doing so.

A side note: Desktop applications, and even some server applications, tend not to scale well to multiple sockets; NUMA introduces varying latencies which are hard to work around, so it’s preferable to stick to one socket. AMD‘s first EPYC suffered from a NUMA design as well, one reason why it wasn’t adopted more broadly.

(I don’t work for Apple, but I do work for a large computer manufacturer. I’m speaking purely for myself here.)
 
A $749 CPU is "mainstream"? (AMD actually categorizes it as "enthusiast". Right below HEDT, but well above "mainstream" and "performance".)

The Athlon 64 days were fascinating. Zen is pretty good as well. It's quite a stretch to call AMD "far ahead", though.
It's not an exaggeration at all to say that AMD is far ahead.
The Ryzen 3950X is still a mainstream part: AMD's top mainstream part, but a mainstream part nonetheless.
AMD's HEDT line is called Threadripper, and their datacenter/server line is called EPYC.
There's an obvious distinction between these CPU product lines.
Sure, the 3950X isn't exactly cheap, but how much would you ask for your CPU if it could output almost double the performance at the same power vs. the competition? The 3950X can even go toe to toe with the i9-10980XE. Intel was basically forced to slash the prices of their HEDT CPUs in half just to stay relevant.

Maybe.

It's not gonna change, though. AMD didn't have its act together for an entire decade, and Apple isn't gonna move based on a brief resurgence. The real win for everyone is that AMD is forcing Intel (or at least being perceived as forcing Intel) to reduce pricing and increase core count. That's about it.

We'll see how Tiger Lake, etc. pan out.
Intel hasn't had its act together for the last 3 years. According to you, that must be better.
Tiger Lake? Intel has scheduled the launch of its first 10nm server CPUs for the end of 2020, and I don't think we will see anything in volume until 2021. Next-gen desktop parts from Intel will still be on 14nm.
By 2021 AMD will launch their 5nm Zen 4 architecture. So yeah, it's ridiculous how far ahead AMD is.
If Intel gets its act together, maybe it will start recovering with the launch of its 7nm process, but that's in 2022.
 
I didn't say a workstation is defined by benchmarks, so calling someone 'totally wrong' for something they didn't say is illogical.

A workstation is defined by this: can you get your work done as efficiently and happily as possible? Not benchmarks - real-world apps. In many use cases, a top-spec iMac or a decent gaming PC will indeed beat the new Mac Pros. That's down to some apps really liking single-core performance. It's also down to the fact that PCs (Windows or Linux) have much higher GPU performance, even with the same graphics cards.

It's tough living on macOS, in a highly controlled ecosystem that is entirely dependent on what Apple allows you to have in your system. If that's your choice, fine. Don't judge others for wanting more options, affordability and openness. Their computers are workstations if that's what they choose to call them, even if it's a $1,000 Linux system just for Blender.
Wrong, again.
Real-world apps are different from benchmarks.
Having higher numbers on a benchmark doesn't mean the work will be done more efficiently.
GPU performance isn't everything.
In many cases a top-spec iMac or a gaming PC beats a Mac Pro ON SOME USELESS BENCHMARKS, but what about sustained, stable performance?
Living on macOS and its controlled ecosystem is exactly what many users want and love.
"Workstation" is a nebulous term. There used to be a big difference between a workstation and a home PC. In the early 90s, a workstation ran an OS with preemptive multitasking - Unix or Windows NT. It had ethernet. All that stuff made it into regular PCs.

What makes a workstation today? ECC RAM? All Ryzen CPUs support ECC now. An enterprise support contract? Apple will sell you that even for an iMac, so is an iMac a workstation?

People want to believe there's some magic secret workstation sauce in the Xeon, but there's very little in it to justify its existence today. It's just a higher priced SKU for Intel.
Reliability and sustained performance.
Those are the "magic" words behind a workstation.
Most of the users here just aren't aware of that, and they are trying to judge a workstation by enthusiast-computer standards.
All I see in this thread are benchmarks. But what about the ability to maintain the same performance 8 hours a day, 5 days a week?
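If you want to check that yourself rather than trust a one-shot score, a crude approach is to run the same CPU-bound job repeatedly and watch whether throughput drops as the machine heats up. The sketch below is a minimal, single-threaded illustration with arbitrary parameters; a real test would load every core and run far longer:

```python
# Crude sustained-performance check: repeat a fixed chunk of CPU work and log
# how many chunks complete per interval. Falling counts over time suggest
# thermal throttling. Illustrative only; parameters are arbitrary assumptions.
import hashlib
import time

def work_unit() -> None:
    """Hash a ~1 MB buffer repeatedly; a fixed, CPU-bound amount of work."""
    data = b"x" * 1_000_000
    for _ in range(200):
        data = hashlib.sha256(data).digest() * 31_250  # keep the buffer at ~1 MB

def monitor(minutes: float = 10.0, interval_s: float = 30.0) -> None:
    end = time.time() + minutes * 60
    while time.time() < end:
        start, completed = time.time(), 0
        while time.time() - start < interval_s:
            work_unit()
            completed += 1
        print(f"{completed} work units in the last {interval_s:.0f} s")

if __name__ == "__main__":
    monitor()
```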
 
Wrong, again.
Real-world apps are different from benchmarks.
Having higher numbers on a benchmark doesn't mean the work will be done more efficiently.
GPU performance isn't everything.
In many cases a top-spec iMac or a gaming PC beats a Mac Pro ON SOME USELESS BENCHMARKS, but what about sustained, stable performance?
Living on macOS and its controlled ecosystem is exactly what many users want and love.
Gaming PCs are generally very good at sustained performance: better than iMacs and at least as good as the Mac Pro, if not better, since some people water-cool their PCs, for example.
Also, synthetic benchmarks aside, the faster PC or iMac (the i9-9900K variant) will win in real-world applications as well. The Mac Pro doesn't use any special CPUs or GPUs; there's no secret sauce, just standard computer parts put together in a nice case.
 
I wonder who still uses internal audio interfaces these days. Most expensive gear, say an Apogee Symphony or a UAD Apollo x8p, is an external unit. A Mac mini is probably a better choice... less space on the desk, portability if needed, and enough processing power for tons of plugins.
Incorrect: it is ideal to have the interface in the computer and the converters as external units.

The Apollo and Symphony products are both converters and interfaces. The converter takes an analog audio signal and turns it into ones and zeros; the interface bridges the digitized audio into the computer (or takes it from the computer to be converted back to analog and sent to speakers, headphones, more gear, etc.).

A good converter takes up space, but a good interface does not, and the primary desired quality in an interface is low latency. Latency is best over PCIe; TB3 has limitations.
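For a rough sense of scale, here is how round-trip monitoring latency adds up from buffer size, converters, and transport/driver overhead; all values below are generic assumptions for illustration, not measurements of any PCIe or Thunderbolt interface:

```python
# Round-trip (input-to-output) latency through a DAW, roughly:
#   2 x I/O buffer + converter latency (A/D + D/A) + transport/driver overhead.
# All values below are generic assumptions, purely for illustration.

def round_trip_ms(buffer_samples: int, sample_rate_hz: int,
                  converter_ms: float = 0.5, transport_ms: float = 0.2) -> float:
    buffer_ms = buffer_samples / sample_rate_hz * 1000.0
    return 2 * buffer_ms + 2 * converter_ms + transport_ms

print(round_trip_ms(64, 96_000))   # ~2.5 ms at a 64-sample buffer, 96 kHz
print(round_trip_ms(256, 48_000))  # ~11.9 ms at a 256-sample buffer, 48 kHz
```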

For an audio professional like me who works full-time, having a Mac with both native PCIe and Thunderbolt is kind of required. Some audio pros wouldn't need it, but I run a decently sized recording studio, and this is great news for us.

We DO need CPU for virtual instruments, but mostly we need a CPU that doesn't throttle, which is where the Mac Pro is ideal.
 
Latency is best over PCIe; TB3 has limitations.

TB is PCIe.

Please show some actual data on a Thunderbolt audio interface having so much more latency than an internal PCIe card that it makes a meaningful difference in audio.
 