Ursadorable (original poster)
I purchased a Razer Core X as well as an AMD Radeon RX 5500 XT for my Mac Mini.

I'm extremely disappointed in the 3D performance of the eGPU... so for laughs, I grabbed a Lenovo mini PC with a Ryzen 7 Pro CPU with integrated Radeon graphics.

According to Passmark, the integrated Radeon GPU has a performance rating of 1990, and my 5500 XT has a rating of 9278. Over four times faster.

In the game "Surviving Mars", for example, I get about 12 FPS on my eGPU. On the integrated Radeon GPU, 38 FPS.

The eGPU is on its own Thunderbolt bus off the Mini. System Information even shows a 40 Gbps link for the Razer Core X.

Is performance supposed to be this terrible?

As a side note, all my monitors are plugged into the eGPU, so the programs are not running on the Mini's integrated graphics (verified with Activity Monitor).
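Rough numbers on just how far off this is, using the figures above:

Code:
# Quick sanity check using the numbers above: what the Passmark ratings
# predict vs. what Surviving Mars actually delivers.
passmark_egpu, passmark_igpu = 9278, 1990   # 5500 XT vs. integrated Radeon
fps_egpu, fps_igpu = 12, 38                 # observed frame rates

print(f"Expected speedup from ratings: {passmark_egpu / passmark_igpu:.1f}x")  # ~4.7x
print(f"Observed FPS ratio:            {fps_egpu / fps_igpu:.2f}x")            # ~0.32x
# On paper the eGPU should be ~4.7x faster; in practice it's ~3x slower,
# so raw GPU power clearly isn't the limiting factor here.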
 
In my experience with GPUs, it is far, far better to have them in a PC.

On the Mac with an eGPU W5700, it was a pretty average experience.
Boot Camp with an eGPU 2080 Super was good.
A PC with the same 2080 Super was a revelation.

I don't use my Macs for 3D anymore after wasting a lot of money on the above.

Unfortunately you have to experience all this yourself, as so many people [especially on this forum] defend the Mac and think the 3D performance is good. It certainly isn't in comparison to a PC using the exact same apps in macOS. Even the same apps in Boot Camp on the same computer run better.

I am hopeful Apple shows us something amazing at WWDC in regard to GPU performance for 3D. They need it if they are keen on developing AR and VR.
 
Unfortunately you have to experience all this yourself, as so many people [especially on this forum] defend the Mac and think the 3D performance is good. It certainly isn't in comparison to a PC using the exact same apps in macOS. Even the same apps in Boot Camp on the same computer run better.

I am hopeful Apple shows us something amazing at WWDC in regard to GPU performance for 3D. They need it if they are keen on developing AR and VR.

Yeah, the M1 came out literally two days after I received my 2018 Mac Mini, but due to memory restrictions and only two Thunderbolt ports, I decided to wait for the M2 Mini and see what it has.

I'm suspicious that with AR and VR, Apple is going to offload most of the graphics processing to the device rather than the computer. Macs have been notorious for lower-performing GPUs, which is why I thought the Mini and an eGPU would help substantially... obviously I was mistaken.
 
I've had great results pairing the Blackmagic RX 580 with a maxed-out 2018 Intel Mini.

I rarely use it for gaming, but I was able to run DOTA 2 at full settings with a good frame rate at the Pro Display XDR's native resolution.

I haven’t tried that game since I picked up the XDR, but for limited use cases it does accelerate encoding to specific formats.

The Blackmagic eGPU is so good at evening out the capability of the 2018 Mini that I have kept an eye on Vega 56 pricing on eBay. It was in a decent place, but presumably because of crypto it is now selling for more than I'd invest in this architecture at this point.

I do think an Apple Silicon discrete GPU, including a potential eGPU, and the software support it would require, is the most interesting question about upcoming hardware from Apple.
 
It's not just Surviving Mars, sadly... it's every game that utilizes 3D graphics, even ones with low texture resolutions.
Can't say I've had the same experience. With a Radeon VII, I got a nice boost in performance for the apps I'm using. Sure, not as much as via a direct PCIe connection, but it's there. With all the work from home going on right now, I took the Radeon VII, put it in a Gigabyte mainboard with a 10700K, and installed macOS on that. Much better, as expected.

Nvidia vs AMD comparisons make no sense. Of course the Titan RTX I have is better than the Radeon VII, but since Nvidia doesn't work with modern macOS, there's no point in comparison. I'm running the Titan with Linux since I need CUDA. In any case, the advent of AS means the death of eGPU. We'll have to wait and see what's next from Apple when it comes to GPUs.
 
Nvidia vs AMD comparisons make no sense. Of course the Titan RTX I have is better than the Radeon VII, but since Nvidia doesn't work with modern macOS, there's no point in comparison. I'm running the Titan with Linux since I need CUDA. In any case, the advent of AS means the death of eGPU. We'll have to wait and see what's next from Apple when it comes to GPUs.
Given the lack of NVIDIA support, I'm not sure how AS can hope to become relevant unless Apple releases GPUs that are open and able to run AI/ML workloads.

I'm not yet convinced eGPUs are dead with AS. My hope is Apple will just release eGPUs that only work with AS products.

This would allow any Mac user, including those running a MacBook Air or a Mini, to add massive amounts of GPU compute.

I also presume an AS eGPU could just daisy-chain to an XDR or other Thunderbolt monitor, and that it would be fanless, with the smallest possibly the size of the Apple TV puck.

Using my imagination here, but it would be a bummer to otherwise have to wait on this huge CPU compute jump with a lagging internal GPU for each product, particularly with the Mini.
 
Hmm, I'm not entirely sure how eGPU support works on the Mac.
I was doing some hi-res scans earlier, and on a 1.7 GB TIFF I was maxing out the 8 GB of VRAM in Preview, but Photoshop CS6 had no issue with it.
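For a rough sense of scale: a 1.7 GB TIFF can expand to several times that once decompressed. A sketch with made-up scan dimensions (the actual file specs aren't given):

Code:
# Rough estimate of the decompressed footprint of a large scan.
# Dimensions and bit depth here are hypothetical, just for scale.
width, height = 20_000, 30_000   # pixels
channels = 4                     # RGBA
bytes_per_channel = 2            # 16 bits per channel

size_gb = width * height * channels * bytes_per_channel / 1024**3
print(f"~{size_gb:.1f} GB for one uncompressed copy")  # ~4.5 GB
# Add zoom previews, tiles, and undo history and it's easy to see
# how a viewer could exhaust 8 GB of VRAM on a single image.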

I think maybe the issue you're having is that macOS just isn't eGPU-optimized.
 

Attachments
• Screen Shot 2021-04-05 at 9.45.47 PM.png (14.4 KB)
Just an update, I did some benchmarking with my eGPU, along with Intel integrated graphics and the M1 Mini.

The Intel integrated graphics scored 4940, the M1 scored 22246, and the 2018 Mac Mini w/eGPU (5500 XT) scored 42717.

Pretty insightful.

Thunderbolt 3 seems to be the bottleneck here btw.
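On that: the 40 Gbps is the link rate, not what the GPU gets. The commonly cited usable PCIe tunnel over TB3 is about 22 Gbps. A rough comparison (approximate figures, not anything System Information reports):

Code:
# Rough host-to-GPU bandwidth comparison (commonly cited approximations).
# Thunderbolt 3 reserves part of its 40 Gbps link for DisplayPort and
# protocol overhead; roughly 22 Gbps is usable for PCIe data.
tb3_pcie_gbps = 22
pcie3_x16_gbps = 16 * 7.88       # PCIe 3.0: ~7.88 Gbps usable per lane

print(f"eGPU over TB3:     ~{tb3_pcie_gbps / 8:.1f} GB/s")   # ~2.8 GB/s
print(f"Internal x16 slot: ~{pcie3_x16_gbps / 8:.1f} GB/s")  # ~15.8 GB/s
# Roughly a 5-6x gap in host bandwidth, which games (lots of per-frame
# CPU-GPU traffic) feel much more than pure compute workloads do.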
 
Given the lack of NVIDIA support, I'm not sure how AS can hope to become relevant unless Apple releases GPUs that are open and able to run AI/ML workloads.
That ship has sailed. Developing an app for macOS/iOS is one thing, training a model another. Apple is fine for running inference, but I don't think there's any hope for training and optimization. While they do have their own TensorFlow fork now, it's pretty much broken. This might change over the years, but it's not only the framework, it's also the tools needed. Nvidia is ruling that market; there's no way around them. I've given up on Apple in that area right now. For mobile use, in addition to my MBP16, I'm running a Razer 15" with an RTX 5000. For my home office desktop, a Titan RTX; in the office, a Dell workstation with a Xeon Platinum + RTX 8000; and for the heavy lifting, Nvidia GPU clusters. Given the software tools alone, Nvidia is way more than a decade ahead of everyone else, including AMD. I don't like Nvidia, but nothing is going to change that in the next 5-10 years. If anyone could change this it would be Apple, but they're not even trying. It would be nice to see a stack of 1000 Mac minis including the software tools for these tasks. Don't get your hopes up.
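For the curious, device selection in that fork (the early-2021 tensorflow_macos releases) went through the mlcompute module; a minimal sketch, assuming the fork is installed rather than stock TensorFlow:

Code:
# Minimal sketch using Apple's tensorflow_macos fork (early-2021 API).
# Assumes the fork is installed; stock TensorFlow has no mlcompute module.
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

# Route training to the M1 GPU instead of the CPU.
mlcompute.set_mlc_device(device_name="gpu")

# Tiny toy model just to exercise the ML Compute backend.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")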

I'm not yet convinced eGPUs are dead with AS. My hope is Apple will just release eGPUs that only work with AS products.
We have SoCs now, which include GPUs in the smallest systems available. They're also based on unified memory, which is now a programming paradigm in macOS. I doubt they're going to change this. Besides, there's no reason to if every SoC comes with a GPU. Power is just a question of which SoC to buy, and with the rumors of the "Mac" coming back (call it Mac mini Pro if you like), there's something for everyone as soon as they complete the transition. The rest is just software, and that is a mess right now; even then, I'm not sure Macs will be for everyone in the future.

The Intel integrated graphics scored 4940, the M1 scored 22246, and the 2018 Mac Mini w/eGPU (5500 XT) scored 42717.
Not sure about the specific score, but that sounds ok to me so far.
 
We have SoCs now, which include GPUs in the smallest systems available. They're also based on unified memory, which is now a programming paradigm in macOS. I doubt they're going to change this. Besides, there's no reason to if every SoC comes with a GPU. Power is just a question of which SoC to buy, and with the rumors of the "Mac" coming back (call it Mac mini Pro if you like), there's something for everyone as soon as they complete the transition. The rest is just software, and that is a mess right now; even then, I'm not sure Macs will be for everyone in the future.
Maybe I’m missing something, but how then does Apple make an AS Mac Pro?

The machine relies on GPU cards. Apple would have to build its own cards to support some AS SoC, right?
 
Maybe I’m missing something, but how then does Apple make an AS Mac Pro?

The machine relies on GPU cards. Apple would have to build its own cards to support some AS SoC, right?
We don't know how Apple is going to do it. I'd guess a "larger" SoC with more power. The question is what they would offload to a potential GPU, or whether the graphics power in the SoC is enough. There's always a need for additional compute power, and for that they might introduce a modular external card (probably not USB-C/TB). We've already seen that with the current Mac Pro and the Afterburner card, which is not a regular GPU. The Nvidia ship has long sailed, and my guess is AMD is not far off, which leaves Apple's own GPUs and potential compute cards.
 
I know the feeling, when you've done the math beforehand and spent months planning to devote your time and money to a project, and then commit and spend it, and acquire all the pieces, and put them together and... it's garbage. And then your only way out is to lose a ton of money selling it all off, with absolutely nothing to show in the end but a shrunken bank account and some gray hair.

This is why people take up fishing instead. Low investment, low risk/reward, just fresh air and pretty reflections in the water.
 
We don't know how Apple is going to do it. I'd guess a "larger" SoC with more power. The question is what they would offload to a potential GPU, or whether the graphics power in the SoC is enough. There's always a need for additional compute power, and for that they might introduce a modular external card (probably not USB-C/TB). We've already seen that with the current Mac Pro and the Afterburner card, which is not a regular GPU. The Nvidia ship has long sailed, and my guess is AMD is not far off, which leaves Apple's own GPUs and potential compute cards.
I don’t see them dumping the relatively new Mac Pro enclosure, or backing out of the recent commitment they made to pro users.

So a modular card or series of AS-based card choices seems the only likely way to move forward.

And if they are already going to manufacture those, I don’t see why Apple wouldn’t also offer an eGPU.

The SoC graphics are not so much better than Intel's integrated graphics that they really moved the needle on the laptop line.

None of them beat the Apple/Blackmagic collaboration using AMD cards, IIRC.

So now that all these products that had an officially supported eGPU, and official macOS eGPU support, no longer have one, it seems obvious Apple would fill in the gap.

Another way to think about it: the Pro Display XDR is to the LG UltraFine 5K as a new Apple eGPU would be to the Blackmagic Vega 56.
 
The SoC graphics are not so much better than Intel's integrated graphics that they really moved the needle on the laptop line.
The M1 SoC outperforms the Nvidia 1050 Ti and AMD 560. So I'd disagree that it isn't much better than the Intel iGPU. I guess it depends on the definition of "much better". When you take the Neural Engine into account, it outperforms an Nvidia 2080 Ti in cases where memory isn't an issue. For models/datasets that require a lot of memory, the 2080 Ti is still the better choice, as it comes with 11 GB on the device (+ whatever you have in RAM) vs 16 GB shared memory on the SoC. All of that with much lower power requirements; the 1050 Ti alone is 75 W. So imagine what Apple can do with high-performance SoCs that have higher power budgets and better cooling.
 
The M1 SoC outperforms the Nvidia 1050 Ti and AMD 560. So I'd disagree that it isn't much better than the Intel iGPU. I guess it depends on the definition of "much better".
Hey now, I'm talking about a strict comparison to the only officially supported eGPUs that exist.

For a long time, people have said the Blackmagic eGPUs are overpriced and underpowered.

However, from what I can tell the M1 vastly underperforms the RX 580.

The difference against the Vega 56 Blackmagic is even bigger!

Read any review or comments about the Apple/Blackmagic eGPUs over the past few years and you'll see people talking about how underpowered and old these solutions are.

To people who value compute, the Intel-compatible Blackmagic eGPUs are barely even credible in comparison to late-generation graphics from either AMD or NVIDIA.

So I acknowledge the M1's SoC is "much better" compared with the junk Intel attached to everything 2018 and prior, but it is not in the same ballpark as what the machines would be able to do with a discrete GPU.

Even if the M2 gen doubled the performance of the SoC graphics, it would not beat the long-derided RX580.

So my point is that if Apple decided there was a market for GPU-heavy operations or gaming before, such that they collaborated with Blackmagic and went to great pains to make those hardware devices work flawlessly with macOS, then they are in a bind until they come up with something for the Mini and laptop lines.

I realize the GFLOPS figure is not the complete story of performance, but I think the broader points still stand.

Either Apple was wrong about a graphics-hungry market in laptops and the Mini before*, or there is a big hole with no apparent near-term pathway via pure SoC.

Anecdotally, many people here on MacRumors and other sites have said they are not going to buy the M1 specifically because it doesn’t offer eGPU compatibility.

Most importantly, without an eGPU for Mac laptops and/or discrete GPU options for the Mini and Mac Pro, Apple's entire line is behind past-generation graphics capabilities and will not beat them with the SoC.

So I return to my main point: Apple has no choice but to offer some kind of Apple Silicon-compatible discrete graphics product.

* I personally don't think this is the case, both from my own experience needing to drive the Pro Display XDR (plus occasional GPU-heavy operations), and because I've been watching eBay's deal flow for these products for over a year and they remain popular.
 
For a long time, people have said the Blackmagic eGPUs are overpriced and underpowered.
Well, that depends on how you look at it. They are very expensive in comparison to buying a GPU + eGPU enclosure yourself. I did this with a Radeon VII and a Razer enclosure, and Blackmagic has nothing that can beat that combo. Sure, the Blackmagic looks much better, but I personally don't care about that. Just hide it under the desk in that case.
However, from what I can tell the M1 vastly underperforms the RX 580.

The difference against the Vega 56 Blackmagic is even bigger!
Well, this isn't really a benchmark. It just tells you how many FP32 operations can be executed per second. You can't really compare this, as the architectures are so different. This is also more of a specification than a benchmark. Ideally you want a benchmark in your area of application, or as many different ones as possible, averaged. Look at other benchmarks and you'll see the M1 is faster than an RX 580:
https://technical.city/en/video/Apple-M1-8-Core-GPU-vs-Radeon-RX-580
https://www.techspot.com/news/87605-apple-m1-chips-outperforms-gtx-1050-ti-radeon.html
https://forum.affinity.serif.com/index.php?/topic/124022-benchmark-1900-results/
https://medium.com/analytics-vidhya...-2080ti-in-tensorflow-speed-test-9f3db2b02d74
Many more out there.

If you want to know why GFLOPS says nothing about actual performance, start here: https://www.kdnuggets.com/2020/05/tops-just-hype-dark-ai-silicon-disguise.html
It's common for systems with fewer OPS to perform better than those with more. It really depends on the architecture; the only cases where it can be used as a measurement are within the same chip generation from the same manufacturer. You can't even compare AMD vs Nvidia GPUs that way.
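To make that concrete: peak FP32 throughput is just ALU count x 2 (an FMA counts as two operations) x clock. Plugging in approximate public specs (treat the exact figures as assumptions):

Code:
# Theoretical peak FP32 throughput: ALUs x 2 ops (FMA) x clock.
# ALU counts and clocks are approximate public specs.
def peak_gflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz

print(f"M1 8-core GPU: ~{peak_gflops(1024, 1.278):,.0f} GFLOPS")  # ~2,618
print(f"RX 580:        ~{peak_gflops(2304, 1.340):,.0f} GFLOPS")  # ~6,175
# On paper the RX 580 is ~2.4x ahead, yet the benchmarks linked above
# show the M1 matching or beating it. The raw spec ignores architecture.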
Even if the M2 gen doubled the performance of the SoC graphics, it would not beat the long-derided RX580.
Well, the M1 already does. ;)
Anecdotally, many people here on MacRumors and other sites have said they are not going to buy the M1 specifically because it doesn’t offer eGPU compatibility.
Their loss, then they probably won't buy Apple in the future.
 
Their loss, then they probably won't buy Apple in the future.
I know I'm probably on my last Mac because of the absolute disaster that is trying to do anything remotely different from Apple's weirdo core vision. All I want is the legendary xMac and access to my 32-bit software.
Assembling this eGPU and RAM upgrade in my Mini already proved to me that Apple doesn't want my business going forward.
 
I know I'm probably on my last Mac because of the absolute disaster that is trying to do anything remotely different from Apple's weirdo core vision. All I want is the legendary xMac and access to my 32-bit software.
Assembling this eGPU and RAM upgrade in my Mini already proved to me that Apple doesn't want my business going forward.
Just use something else. 🤷‍♂️
I do the same. macOS has become my tool for "daily" tasks. Email, browsing, some photo/video editing, reading, writing, etc.

When Apple ditched Nvidia, things became difficult but manageable for me on AMD GPUs. The software zoo changed a lot recently, to the point where I have to spend so much time fixing other people's mess that it's not feasible anymore for me. I'm now running a Linux box with a Titan RTX in the home office, a Dell Linux box with an RTX 8000 under the desk in the office, and Dell/Nvidia GPU clusters for the heavy lifting in our datacenter (well, the IT guys are running those). In addition to my fully loaded MBP16, I also have a Razer laptop now with an RTX 5000 in case I need to be mobile for CUDA.

I don't know what's going to happen in the future. But with the current software ecosystem, I'd sure like to see an MBA 15/16" for my daily tasks, or even a 15/16" iPad Pro that offers full macOS when docked to the Magic Keyboard and iOS when used as a tablet, and could be used in a multi-monitor environment, maybe with a docking station. I'd happily carry such an iPad together with a Razer/Lenovo laptop for my CUDA needs. Or maybe we'll see the software support on macOS for such cases... which I doubt. I'm keeping a very close eye on Apple's TensorFlow fork right now.
 