Nvidia does support OpenCL, and I think Metal too. If we're pointing fingers, Apple is definitely the biggest offender in the GPU space. They've had ridiculously tight control forever, and for that reason nobody uses Macs for graphics-intensive tasks, so of course Nvidia won't bother making drivers. And Apple still doesn't support OpenGL properly; they want everyone to use their proprietary Metal API.

They don’t support them well, though.
 
They support the same or higher OpenCL and OpenGL versions as macOS does, especially higher for OpenGL. Apple's the one that doesn't support the open standards.
But their consumer cards are gimped for compute.
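If anyone wants to see what a given driver actually claims, here's a rough sketch using pyopencl (assuming it's installed and an OpenCL runtime is present) that lists the OpenCL version string each platform and device reports; it's only what the driver advertises, not proof of feature completeness:

[CODE]
# Rough check of what OpenCL version each platform/device driver reports.
# Assumes pyopencl is installed and an OpenCL runtime is present.
import pyopencl as cl

for platform in cl.get_platforms():
    print(platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("   ", device.name, "-", device.version)
[/CODE]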
 
They support the same or higher OpenCL and OpenGL versions as macOS does, especially higher for OpenGL. Apple's the one that doesn't support the open standards.

Okay, you got me on OpenCL. Turns out Apple is trying to push that out in favor of Metal.
 
  • Like
Reactions: sd70mac
But their consumer cards are gimped for compute.
Do you have numbers for that? I'm actually not sure because my info is likely outdated. When I was doing ML research a year ago, everyone used GTX cards, and they were winning the benchmarks. But many (like me) had no choice because they needed CUDA, and for the same reason fair comparisons between vendors are hard to find.
 
Last edited:
  • Like
Reactions: sd70mac
I'm not fond of AMD either, but the RX 580 is still way faster than what you have in the MBP. If you don't need Nvidia-specific features, it's a decent card.
[doublepost=1522436147][/doublepost]
Do you have a source for the speed comparison? I've always seen that Nvidia cards are faster for machine learning computations too. I always thought Apple's reason for using AMD was to fight against CUDA and the rest of Nvidia's anti-open stance.

I need Nvidia to run CUDA stuff. Looks like I need to forget about buying a laptop from Apple for a few years.
 
I need Nvidia to run CUDA stuff. Looks like I need to forget about buying a laptop from Apple for a few years.
CUDA is such a PITA under macOS anyway that it's not worth the trouble, whether the lack of support is Nvidia's, Apple's, or the community's fault. I've gone down that annoying path with both TensorFlow and PyTorch on a Mac Pro with a GTX 1060. I'd only do it on a dedicated Linux box; it was a piece of cake to set everything up on my custom build.
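If anyone does go the Linux route (or wants to verify a macOS install), a quick sanity check that the frameworks actually see the GPU looks something like this. It assumes CUDA-enabled builds of both frameworks are installed, and the TensorFlow call is the 1.x-era API that was current around this thread:

[CODE]
# Quick sanity check: do PyTorch and TensorFlow actually see a CUDA GPU?
# Assumes CUDA-enabled builds of both frameworks are installed.
import torch
import tensorflow as tf

print("PyTorch sees CUDA:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))

# TensorFlow 1.x-era check (deprecated in TF 2)
print("TensorFlow sees GPU:", tf.test.is_gpu_available())
[/CODE]

If either line comes back false, the problem is almost always the driver/CUDA/cuDNN install rather than the card itself.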
 
Do you have numbers for that? I'm actually not sure because my info is likely outdated. When I was doing ML research a year ago, everyone used GTX cards, and they were winning the benchmarks. But many (like me) had no choice because they needed CUDA, and for the same reason fair comparisons between vendors are hard to find.

Way off topic, but since you seem to know about ML I figured I’d ask. What do you think of CoreML?
 
  • Like
Reactions: sd70mac
Do you have numbers for that? I'm actually not sure because my info is likely outdated. When I was doing ML research a year ago, everyone used GTX cards, and they were winning the benchmarks. But many (like me) had no choice because they needed CUDA, and for the same reason fair comparisons between vendors are hard to find.
You can look up the chip tables on Wikipedia.
 
I don't give a **** what Apple does with AMD or Nvidia, so long as there is a "pro" setup. All professionals care about is that the damn thing "just works" for what they need. Nvidia or AMD, nobody gives a ****!! So long as it is powerful enough, we don't care; it's not our money!! I have 2x 1080 Ti in my own personal Linux rig and that works fine. In my PC I have 2x WX 9100 working. I also use my 10-core iMac Pro for normal day-to-day work. My laptop is the only one that struggles to keep up with everything else, and even that's not Apple's fault; it's called progress. Everyone always has to complain about something, so while they're having a go at Apple they're forgetting how bad the others are!!!
 
Latency. Both will exhibit some form of latency, but streaming will suffer more. Neither will be too bad in single-player games, but multiplayer is where you will really notice a difference.

How does that lag manifest itself? Is it like input latency when looking around in a first-person game? I'm wondering whether it's something that can be fixed (or at least improved) in software, or whether it will take the next version of Thunderbolt to get better.
 
  • Like
Reactions: sd70mac
I don't give a **** what Apple does with AMD or Nvidia, so long as there is a "pro" setup. All professionals care about is that the damn thing "just works" for what they need. Nvidia or AMD, nobody gives a ****!! So long as it is powerful enough, we don't care; it's not our money!! I have 2x 1080 Ti in my own personal Linux rig and that works fine. In my PC I have 2x WX 9100 working. I also use my 10-core iMac Pro for normal day-to-day work. My laptop is the only one that struggles to keep up with everything else, and even that's not Apple's fault; it's called progress.

Agreed. Whatever card a person gets for this will work just fine.
 
  • Like
Reactions: sd70mac
You can look up the chip tables on Wikipedia.
If you're referring to FLOPS and memory bandwidth, which are about the only things you'll find on Wikipedia, those don't mean much without an application benchmark. The drivers alone are going to throw complications in.
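To make that concrete, here's a rough sketch (PyTorch, purely illustrative) of how you'd compare a card's paper FLOPS to what a framework actually delivers on a big matrix multiply; the gap between the measured number and the Wikipedia figure is exactly the driver/software part of the story:

[CODE]
# Rough sketch: compare paper FLOPS against what a framework actually
# delivers on a large matrix multiply. Illustrative only; results depend
# on drivers, framework build, precision, and clocks.
import time
import torch

n = 4096
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(n, n, device=device)
b = torch.randn(n, n, device=device)

torch.matmul(a, b)                      # warm-up
if device == "cuda":
    torch.cuda.synchronize()

iters = 10
start = time.time()
for _ in range(iters):
    torch.matmul(a, b)
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.time() - start

flops = 2 * n ** 3 * iters              # multiply-adds in an n x n matmul
print("Measured: %.2f TFLOPS" % (flops / elapsed / 1e12))
[/CODE]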
[doublepost=1522449673][/doublepost]
I don't give a **** what Apple does with AMD or Nvidia, so long as there is a "pro" setup. All professionals care about is that the damn thing "just works" for what they need. Nvidia or AMD, nobody gives a ****!! So long as it is powerful enough, we don't care; it's not our money!! I have 2x 1080 Ti in my own personal Linux rig and that works fine. In my PC I have 2x WX 9100 working. I also use my 10-core iMac Pro for normal day-to-day work. My laptop is the only one that struggles to keep up with everything else, and even that's not Apple's fault; it's called progress. Everyone always has to complain about something, so while they're having a go at Apple they're forgetting how bad the others are!!!
Problem is that whether something "just works" often depends on the vendor, the drivers, and other GPU nonsense. It's hard to make sure GPU acceleration works for all your pro apps.
 
If you're referring to FLOPS and memory bandwidth, which are about the only things you'll find on Wikipedia, those don't mean much without an application benchmark. The drivers alone are going to throw complications in.
It wouldn't mean much if the differences between comparable cards weren't so big.
 
  • Like
Reactions: sd70mac
Yes it does. Not officially supported by Apple, of course.

I have the eGPU dev kit from Apple, the one that comes with the Pulse 580.

To make it work I bought Apple's TB2-to-TB3 adapter, which is bidirectional. I plugged the male end of the adapter into the eGPU breakout box, then used a standard Thunderbolt cable from my Mac mini 2012 to the female end of the adapter.
[doublepost=1522450973][/doublepost]
I have the same setup. It's been working perfectly since November with a Mac mini 2012 over TB1. Can you confirm that it still works with the official 10.13.4 release? I'm holding off on the install until I know for sure.
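For what it's worth, a quick way to double-check after the update that macOS is actually picking up the box is to dump the graphics section of System Information. Here's a rough sketch; the "Radeon RX 580" match string is just a guess based on the Pulse 580 mentioned above:

[CODE]
# Check whether macOS sees the card by scanning the output of
# `system_profiler SPDisplaysDataType`. The "Radeon RX 580" string is
# an assumption based on the Pulse 580 mentioned above.
import subprocess

out = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True,
).stdout
print(out)
print("eGPU card listed:", "Radeon RX 580" in out)
[/CODE]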
 
  • Like
Reactions: sd70mac
If you're referring to FLOPS and memory bandwidth, which are about the only things you'll find on Wikipedia, those don't mean much without an application benchmark. The drivers alone are going to throw complications in.
[doublepost=1522449673][/doublepost]
Problem is that whether something "just works" often depends on the vendor, the drivers, and other GPU nonsense. It's hard to make sure GPU acceleration works for all your pro apps.
Yes, I agree with you. Whatever works works!! I use Avid, DaVinci and FCP and nothing else. For my photography I use Phase One software. Each to their own!! I'm not here to make enemies; I'm a lover, not a fighter!!!
 
  • Like
Reactions: fairuz