That is not bad at all.

It is all due to software. It has nothing to do with hardware, only with software optimization. You do not need to optimize your software for CUDA; you just put CUDA into your software for GPU acceleration. With OpenCL, you have to optimize your software for the hardware.

If AMD GPUs were able to run CUDA code (they cannot, because it is a proprietary API), they would run it comparably, because there is nothing in Nvidia hardware that makes it faster than AMD at compute. Compute is just mathematical algorithms executed on hardware, exposed as TFLOPs performance levels.

Right - there are some things that AMD does well, but I am aware that Nvidia has been the industry preference overall.

With the upcoming release of AMD's new chips, it should be interesting :)

(I am not really partial to either unless given a choice.)

It is the same with AMD vs. Intel CPUs. The two went in different directions on many aspects of core design, with emphasis on different things. Intel is clearly the favorite - and I'd agree - but it's just another example. The reasons were different as well; you are correct that the GPU is very compute-intensive, which makes software a big factor, while on the CPU side AMD's problems really came down to their design and architecture.
 
One thing - the new 15" MBP renders the screen at a higher resolution than the panel's native resolution before downsampling in scaled modes - likely adding a bit of stress to the GPU under load.
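To put rough numbers on that (a sketch, assuming the 2880x1800 panel and macOS's usual 2x HiDPI rendering; the "looks like 1920x1200" mode is just an example):

```swift
// Rough pixel math for a scaled mode on the 15" MBP.
// Assumption: macOS renders HiDPI at 2x the logical resolution,
// then downsamples to the 2880x1800 panel.
let nativePixels = 2880 * 1800                // 5,184,000 pixels
let scaledPixels = (1920 * 2) * (1200 * 2)    // 9,216,000 pixels rendered
print(Double(scaledPixels) / Double(nativePixels))  // ~1.78x the per-frame work
```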
 
Two completely different things. AMD CPUs suck, as simple as that. AMD's focus on multithreaded performance wasn't going anywhere, whereas Intel's focus on single-threaded performance was adding up.

On the GPU side, AMD had much more advanced hardware, but their software sucked. Right now, in the mainstream market, AMD is on par with Nvidia in terms of hardware and has much better software than before. At the high end there is no competition, and Nvidia is enjoying its monopoly. However, by the looks of things, a lot of the industry is turning to AMD for GPUs; Alibaba and Google Cloud deep-learning projects are the two biggest wins for AMD from the last few weeks.

What I wrote had nothing to do with preferences. It was a factual description of what the problem with AMD has been for years: they did not offer good software. Nvidia did, and that is why they were winning. Apple will focus on the Metal API, which pretty much resembles CUDA-style optimization (extremely low-level compute optimization). It's a shame that it will only be on Apple platforms.

For example, this is the best possible description of what will happen with Metal. Take two GPUs: AMD's Radeon Pro 460 and Nvidia's mobile GTX 1050 Ti, both underclocked to a 35 W TDP and both with exactly the same performance - 1.86 TFLOPs of compute power. Which one will win? Neither. They will get exactly the same results in software, because they have exactly the same level of performance at the hardware level. Which one will end up in the MBP? The one with better pricing.
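For reference, those TFLOPs figures fall out of a simple peak-throughput formula; here's a sketch, where the 1024-shader count and ~907 MHz clock for the Radeon Pro 460 are assumptions on my part:

```swift
// Peak FP32 throughput: 2 ops per FMA x shader count x clock (GHz).
func teraflops(shaders: Int, clockGHz: Double) -> Double {
    2.0 * Double(shaders) * clockGHz / 1000.0
}
// Assumed Radeon Pro 460 figures: 1024 shaders at ~0.907 GHz.
print(teraflops(shaders: 1024, clockGHz: 0.907))  // ~1.86 TFLOPs
```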
 
@fs454 when you have a chance, can you save the Radeon Pro 460 vBIOS and upload it? This task has to be done in Windows with a utility such as GPU-Z. I'm sure there are people who would be able to make use of it for flashing desktop Polaris GPUs.
 
The one with better pricing.

That's key - and AMD is in that position. I'm not disagreeing with you. Software makes a massive difference, which is why these tests should be read carefully when they compare two different software states, especially with low-level GPU primitives handled in the OS by Apple. Very little was known about these chips prior to this launch.

AMD's Vega GPUs should be a real jump for them, from what I've seen and read about the architectural changes and the integrations planned for the future.

It is one of the reasons I buy Apple hardware - I strongly prefer when software and hardware are integrated as they should be. It's why Google is now taking that approach with their Pixel phone, but Apple's whole history is built on it.
 
Is pricing a bad thing? If the end result is exactly the same between two brands, which one will you always pick? The cheaper one.
 
Yes, of course they want the best value and brand image - that is business. Plus there's the marketing AMD has put out (their page dedicated to the new GPUs in the MBP); that hype will also help Apple's overall market (not just power users). Having a "new" or "custom" chip is attractive when you are building a brand around Beats headphones and Apple Music, as well as the ecosystem as a whole. Nothing wrong with that - it makes complete sense, and as mentioned, Tim is all about business and brand image. Not something I would personally jump on full-time, but I'm not going to deny that it's a huge motivation for these decisions.
 
The AMD 460 is 3-5% better than the Radeon R9 M290X, and the M290X's power consumption was 100 W back in 2014.
 
One thing I have not seen talked about at all: who is writing the drivers - AMD, Apple, or a combination?

If it is AMD, do they borrow code from their workstation Polaris line, i.e. the WX 4100 that is the 460's sibling? It would be very interesting to know whether the drivers are optimized for pro apps or for a more all-round consumer-plus-games workload. But I assume nobody knows yet?
 
All of the drivers are done on Apple's side. That's why we have not seen driver updates from AMD for, for example... Boot Camp with Windows. It's Apple that requests and controls them, even on the Windows side, which is hilarious.

AMD's architectures are very well documented, as are Intel's, so it's easy for Apple to write the drivers. What's funnier, Apple's macOS drivers for Intel iGPUs have always been better than Intel's own drivers for Windows!

Nvidia, on the other hand, does not document their architectures that well, so it's hard for Apple to write the drivers - one of the reasons Nvidia was phased out of Apple hardware. Nvidia is like Apple in this respect: they like to control the experience, whether it is good or bad (they are famous for stopping driver optimization for previous GPU generations, and their GPUs age much worse than AMD's, at least on the gaming side).
 
Most of it would be AMD, if you look at the GPU itself and mean Polaris as an architecture type. For this one they would have worked closely with Apple, and Apple has a ton of low-level software implementations (Metal, Quartz, etc.) - these make a big difference. It's why visualization algorithms, for example, are a lot easier to design on macOS, given the amount of highly optimized code sitting barely a layer above the machine code - and in some cases directly at that layer.

That's why I look at these tests and use them as a crude benchmark; the conditions are never the same. With this laptop - the 15" with the 450-460 dGPU that you don't get a choice about - I am waiting for software publishers like Adobe to push out major updates that utilize the growing and changing low-level codebases, and then I'll look at the results. I am not interested in purchasing one, but I am interested in the results with the LG/Apple monitor. Why? It will likely be one of the first "real" native TB3 displays for the MBP, and I am sure Apple did internal testing before claiming that the 460 could drive two 5K monitors, with the assumption that they're usable for the kind of user who would buy that hardware.
 
Thanks, very interesting. I always assumed that part of the reason Apple decided to stay with AMD over Nvidia was that Nvidia wasn't willing to write Metal drivers for macOS. The two also had a fight between CUDA and OpenCL. But if Apple writes the drivers, this theory goes out the window.

The good part is that Metal should be well supported on the 460, then.
 
If someone has the 455 or 460 GPU model and owns Civilization 6, how does the game run? Smoothly?
Would a 460 be noticeably better for a game like that? Good graphics and nice 3D models, but fairly static. Trying to decide if the 460 is worth the price and the 5-week wait vs. in-store pickup, which may come sooner. Thanks.
 
There were a number of reports that the drivers were written by AMD, obviously with some Apple support. Because Apple uses very different driver interfaces, the driver needs to be written from scratch, but I assume they can at least reuse some of the codebase from their Windows drivers.
 
Nvidia has been writing Metal support into their drivers for a year. It's just not in great shape, because neither is Metal itself.
 
Nvidia wants to stay with Apple, and they have no choice but to write Metal drivers. There has been an ongoing job posting from Nvidia for Metal driver engineers for the last... 12 months or so. Nvidia sees Metal as a threat to CUDA, for obvious reasons, but they do not have a choice. It's a win-win scenario.

Apple gets an API that is very good at optimization and uses the hardware efficiently. Every piece of hardware will perform as its particular TFLOPs level says it should (so GPUs from both brands with equal TFLOPs will perform exactly the same). Which will win in the end? The one that offers better performance per watt and better pricing. For Apple it's the best-case scenario, and we can also expect the best possible hardware, for a particular thermal envelope, in Macs.

Apple has gotten a lot of criticism lately about their design choices. For me, as a geek and hardware/tech enthusiast, there are actually two game changers Apple has brought lately: Metal, and the Touch Bar in the MacBook Pro.

But it's like everything with Apple lately: they brought it TOO EARLY.

Mac Pro 6,1 - released too early for that design.
Getting rid of the headphone jack - too early.
Thinning the MBPs - too early.
Metal - there is no software that can use it fully yet.
Touch Bar - again, it's so early that people cannot understand how it's supposed to work. We have seen this before with the iPhone and iPad. It is THAT big of a deal.
 
Metal is not a threat to CUDA, because CUDA was never intended to drive a graphical user interface the way Metal does, or to build apps and front ends. Any development Nvidia does outside of their own code is just to make sure Nvidia has support across platforms and codebases - same as their support for OpenCL.

CUDA has specific applications that don't live behind a graphical user interface, so Metal and CUDA can overlap, but they are not in competition with each other for that reason.

I have worked on CUDA farms for raw data processing that have no graphical interface, and that can't be said for Metal.
 
They're pushing Metal, making it easier to adopt, and optimizing it. Many companies state they will update their products by the end of this year (such as Adobe). It will likely be a bigger improvement than any benchmark would show. Apple gives guidelines to many large software companies like Adobe as it pushes for tight integration. Also worth keeping an eye on: the Apple File System.
 
Metal can be and WILL BE used by compute applications, such as Final Cut Pro X. Is that gaming?

Metal unifies graphics and compute into a single command queue. That is why it is a threat to both the DirectX and CUDA APIs.

As I have said, direct access to GPU compute capabilities is the secret sauce of CUDA. It's the same thing with Metal.
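To make the "compute without graphics" point concrete, here is a minimal sketch of a headless Metal compute dispatch in Swift - the double_values kernel name is hypothetical, assumed to be compiled into the app's default library:

```swift
import Metal

// Minimal headless Metal compute dispatch - no window, no graphics pass.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let kernel = library.makeFunction(name: "double_values") else {
    fatalError("Metal setup failed")
}
let pipeline = try! device.makeComputePipelineState(function: kernel)

var input: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: [])!

// Compute work is encoded onto the same command queue a renderer would use.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 4, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```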
 
Like you said, though, Metal is only for macOS/iOS. It becomes more and more of a threat as Apple builds out the framework for its own GPU, which I think will eventually happen. They already do that heavily on the iPhone, and macOS/iOS integration is a key goal for them, so I am sure they are looking at it. CUDA has other applications in sectors Apple doesn't touch - large clusters, supercomputers, etc. - and a lot of effort goes into that. Given Apple's shift away from Nvidia, I don't anticipate them going back. There's not much of a point, as the two companies have different end goals and ultimately recognize it.
 
Yeah, Metal sounds great, but raw GPU power is still going to be the deciding factor in any speed increase; the best, cleanest Metal code will still be slow as molasses on a slow GPU. The CUDA farms I worked on for Maya, Maxwell, and 3ds Max rendering were so blazingly fast that even sloppy code couldn't slow them down.
 
Of course, I should specify: it is a threat on the APPLE platform. Nowhere else.
The best part of Metal is that it does not care about brand, only about the raw horsepower of your GPUs. If you have two GPUs from two brands, AMD and Nvidia, and both have the same level of compute power - for example, 2 TFLOPs - you will get exactly the same results on them. Exporting a 30-minute film to a particular format will take X minutes/seconds regardless of brand. That even applies to Intel GPUs, or to GPUs Apple designs and manufactures itself.

Apple finally has software that diminishes the advantage Nvidia has always had with CUDA. Now the situation is as it always should have been: compute horsepower wins.
 