Your "short answer" is to a different question.Short answer - NO DRIVER'S YET! Check out this thread:
I thought of another "pro". If the cMPs with Pascal GPUs continue to destroy the MP6,1 on compute benchmarks, maybe Apple will be shamed into offering an Nvidia option on the MP7,1....
I wonder if Tim Cook's equivalents at Nvidia (you know, the bean counters) have run the numbers on the pros and cons of creating Pascal drivers for Apple OSX.
The "pro" would be that for a modest cost Nvidia could help MVC sell some cards to cMP users.
The "con" would be that dropping Apple OSX support might tip more people into realizing that Apple is abandoning higher performance systems for iToys with big screens. Those people might leave the Apple ecosystem (and the "Objective-C/OpenGL/OpenCL/Metal/Vulkan/Swift/whatever-the-API-of-the-week-is" ecosystem) and jump to the CUDA ecosystem.
Not much of a "pro", and a pretty convincing "con".
Your "short answer" is to a different question.
If the cMPs with Pascal GPUs continue to destroy the MP6,1 on compute benchmarks, maybe Apple will be shamed into offering an Nvidia option on the MP7,1.
As we know, per watt and per clock, Fiji beats Pascal at most of the OpenCL operations that Apple and Adobe care about. If it's about CUDA rendering, then just attach a dedicated rendering system or farm to the workflow.
If we get Pascal, it will be in the MacBook Pro and the midrange iMac.
> 1920 CCs x 2 x 1.693 GHz / 154 W = 43.3 GFLOPS/W

Perf per watt per clock is not a meaningful metric. Perf per watt absolutely is, and Pascal continues the trend of NVIDIA being miles ahead of AMD on that front.
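As a sanity check on the quoted arithmetic, here's a minimal sketch (the helper name is mine) of the usual theoretical FP32 perf-per-watt calculation; note that 1920 x 2 x 1.693 / 154 actually comes out closer to 42.2 than to the quoted 43.3:

```python
# Theoretical FP32 throughput per watt: cores * 2 FLOPs per clock (FMA)
# * clock (GHz) / board power (W). Helper name is illustrative only.

def gflops_per_watt(cores: int, clock_ghz: float, tdp_watts: float) -> float:
    """Peak FP32 GFLOPS per watt, counting an FMA as 2 FLOPs."""
    return cores * 2 * clock_ghz / tdp_watts

# The figures from the quote above (a 1920-core Pascal part):
print(round(gflops_per_watt(1920, 1.693, 154), 1))  # prints 42.2
```

This is a peak-rate calculation; sustained perf/watt in real kernels is lower and workload-dependent, which is the whole point of the benchmark argument that follows.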
The 1080 loses badly in OpenCL against Fiji-based chips, including the power-efficient Nano, when running specific OpenCL kernels that are a good match for the GCN architecture, despite the fact that the Radeon cards run at almost half the clock speed.
> You do not understand what Luxmark does and what CompuBench does with Optical Flow and FaceDetection?

Fixed that for you. Clock speed has nothing to do with it; I'm not sure why you keep bringing it up as something that is bad for NVIDIA (why wasn't AMD able to raise their clocks more when moving to a new process?). Attaching other OpenCL results that contradict yours, since I can cherry-pick results too.
Do you not understand what Luxmark does and what CompuBench does with Optical Flow and Face Detection?
Luxmark is a benchmark for rendering in OpenCL; Optical Flow and Face Detection are image-analysis benchmarks.
Two different things. Nvidia is better at one, AMD is better at the other. You choose which is better for your needs.
Just because a benchmark uses OpenCL does not mean it tells you everything you need to know.
P.S. Has none of you noticed that the GTX 1080 is slower than the Titan X in compute rendering?
> Proofs. Not your logic based on encyclopaedia of optimising compute algorithms for GCN or CUDA architectures.

Luxmark is a ray-tracing benchmark that uses a particular algorithm that suits GCN more than the NVIDIA architectures. Those two CompuBench tests use OpenCL algorithms that suit the NVIDIA architecture more than GCN. That's the whole point I've been trying to make to you guys for months now. You can tune an OpenCL kernel so that it will work very well on one architecture (or even one specific GPU) at the expense of running poorly on other architectures or GPUs. There is no such thing as an OpenCL kernel that runs optimally on every single GPU from every single vendor.
A reasonable conclusion to draw is this: GCN runs Luxmark well, so if all I need is to run Luxmark or LuxRender, then I should buy a GCN GPU. Same applies to any one specific benchmark, if that's all you care about then by all means buy the GPU that runs that one benchmark or application the best.
An unreasonable conclusion to draw is this: GCN runs Luxmark well, so it's a superior compute architecture in all cases to Pascal.
There are plenty of CUDA-based ray tracing solutions that perform better than Luxmark on NVIDIA GPUs, probably because they've been tuned to run well on NVIDIA GPUs by definition.
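The tuning point can be illustrated with a toy sketch in plain Python (not real OpenCL; all names here are mine): the same reduction written with two different memory-access patterns. Both return identical results, but on a real GPU the contiguous (coalesced) version and the strided version perform very differently depending on the architecture's memory system:

```python
# Toy illustration (plain Python, not OpenCL) of architecture-dependent
# tuning: the same sum computed with two access patterns. The results are
# identical, but on real GPUs coalesced vs. strided memory access can
# favour different architectures. Names are made up for the sketch.

def reduce_contiguous(data, width):
    """Walk the buffer row by row: adjacent iterations touch adjacent memory."""
    total = 0.0
    for row_start in range(0, len(data), width):
        for i in range(row_start, row_start + width):
            total += data[i]
    return total

def reduce_strided(data, width):
    """Walk the buffer column by column: each step jumps by `width`."""
    total = 0.0
    for col in range(width):
        for i in range(col, len(data), width):
            total += data[i]
    return total

buffer = [float(i % 7) for i in range(1024)]  # a 32x32 buffer, flattened
assert reduce_contiguous(buffer, 32) == reduce_strided(buffer, 32)
print(reduce_contiguous(buffer, 32))  # prints 3067.0
```

Which of the two patterns wins on a given GPU depends on cache-line size, SIMD width, and scheduler behaviour, which is exactly why a kernel tuned for GCN can look slow on Pascal and vice versa.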
Proofs, please. Not your logic based on an encyclopaedia of optimising compute algorithms for GCN or CUDA architectures.
Luxmark 2.0 was better optimised for Nvidia hardware. The later Luxmark renderer went for a streamlined benchmark that would be brand-agnostic.
All compute benchmarks should be unbiased across architectures. At least in today's world.
OpenCL is all that matters if we are talking about Apple.
The Optical Flow, Face Detection, etc. benchmarks mean nothing to the macOS platform, Final Cut, the Creative Suite, and the APIs that Mac developers code for.
> Because both are different jobs.

What more proof do you need, aside from the massive lead GCN has in Luxmark versus the massive lead the NVIDIA GPUs have in the CompuBench tests I attached above? I'm not even suggesting that the benchmark authors are going out of their way to make their tests favour one GPU or architecture over the others; it's just really hard to write something that runs efficiently on all GPUs. Have you written and tuned the performance of OpenCL code?
Again, all I'm saying is that you can't reasonably claim that GCN is a better compute architecture in all cases. Period. Nothing more, nothing less. There are plenty of examples, both with OpenCL and CUDA, where NVIDIA performs extremely well.
> That is because AMD architecture is better at using compute for rendering...

Actually, I'd argue that Metal is all that matters going forward, but sure.
Per my post above, if all you care about is running Final Cut Pro or the Adobe Creative Suite on macOS, and you find that GCN products perform best at those workloads, then you obviously should buy a GCN GPU, or an Apple system that features a GCN GPU such as the 2013 Mac Pro.
That is because the AMD architecture is better at using compute for rendering...
May I ask why you compare a proprietary, closed-ecosystem API against an open, multiplatform API?
AMD is better at compute rendering in OpenCL. Is that better for you? CUDA is out of the question here, because it is Nvidia-only, and Nvidia's own product.
At last we are home. The only thing you get on the Apple platform is either OpenCL or Metal. Why even bring CUDA into this?
> If your benchmark is multiplatform and open, and uses open standards, it has to be taken that way.

CUDA absolutely runs on macOS, and lots of people use multiple big NVIDIA GPUs in external PCIe enclosures (e.g. Cubix Xpander) for CUDA applications.
The only thing I'm objecting to is you guys making generic conclusions based on one benchmark like Luxmark. AMD is better at Luxmark than NVIDIA is, no argument there.