Are you saying you don't have control over what the CUs do with low level APIs?

I know you can check how many you have available and dynamically allocate.

A card is not going to report that it has compute units available that have been disabled at the factory. This is a good thing, as these are often disabled because of binning and are non-functional.
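(On the quoted point about checking how many compute units you have available: a minimal sketch of one way to do that, using OpenCL's clGetDeviceInfo with CL_DEVICE_MAX_COMPUTE_UNITS. This assumes an OpenCL runtime and headers are installed, and is only an illustration, not how any particular game does it. The query only reports enabled, functional CUs, so factory-disabled units never show up.)

```c
/* Sketch: query how many compute units the first GPU device exposes.
   Build with something like: cc cu_query.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_uint num_cus = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU device found\n");
        return 1;
    }

    /* Only enabled, working CUs are counted here; units fused off at the
       factory (binning) are invisible to the application. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(num_cus), &num_cus, NULL);
    printf("Compute units available: %u\n", num_cus);
    return 0;
}
```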
 
A card is not going to report that it has compute units available that have been disabled at the factory. This is a good thing, as these are often disabled because of binning and are non-functional.
You missed the point. I am saying it depends on whether the game has been programmed to dynamically allocate or has been tuned to known cards.
 
You missed the point. I am saying it depends on whether the game has been programmed to dynamically allocate or has been tuned to known cards.
Overwatch at 1080p Ultra settings went from 84 FPS on average with the RX 460 to 90 FPS at the same settings on the RX 560.

Drivers.
 
Overwatch at 1080p Ultra settings went from 84 FPS on average with the RX 460 to 90 FPS at the same settings on the RX 560.

Drivers.
Low-level API drivers do basically nothing; it is the game that has to be tuned.
 
Low-level API drivers do basically nothing; it is the game that has to be tuned.

My point is that it's impossible for an application to tell the driver/GPU "hey I want to run like ****, so please disable N execution units so my shaders run much slower". The API doesn't work like that.
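(To make the "the API doesn't work like that" point concrete: a rough sketch of a generic compute dispatch, using OpenCL here purely as an example API, with a made-up "scale" kernel and most error handling omitted. The application hands over a buffer, a kernel, and a global work size; nowhere in the call sequence is there a knob for "use only N compute units". Spreading the work across CUs is the driver's and hardware's job.)

```c
/* Sketch: a minimal OpenCL compute dispatch. Nothing in this sequence lets
   the app choose or disable specific compute units.
   Build with something like: cc dispatch.c -lOpenCL (error checks trimmed). */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *x) {"
    "    size_t i = get_global_id(0);"
    "    x[i] *= 2.0f;"
    "}";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    float data[1024] = {0};
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    clSetKernelArg(kern, 0, sizeof(buf), &buf);

    /* The app specifies only the problem size (and optionally a work-group
       size); how the work spreads across CUs is decided by the driver/GPU. */
    size_t global_size = 1024;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global_size, NULL, 0, NULL, NULL);
    clFinish(queue);

    clReleaseMemObject(buf);
    clReleaseKernel(kern);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```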
Overwatch at 1080p Ultra settings went from 84 FPS on average with the RX 460 to 90 FPS at the same settings on the RX 560.

Drivers.

You're comparing 2 different GPUs, one of which is basically just an overclocked version of the other. Are you sure it's not a hardware difference here? What does the RX 460 get with the latest drivers?
 
My point is that it's impossible for an application to tell the driver/GPU "hey I want to run like ****, so please disable N execution units so my shaders run much slower". The API doesn't work like that.
GCN GPUs can run concurrent independent tasks on the CUs, for example physics.
 
http://www.anandtech.com/show/11280/amd-announces-the-radeon-rx-500-series-polaris/2

RX 560 is supposed to be up to 22% faster than an RX 460 on average:

By the numbers, the additional two CUs give the RX 560 a 14% boost in shader and texture throughput. Coupled with that are some modest clockspeed increases for both the boost clock and the base clock. The boost clock is being bumped up from 1200MHz to 1275MHz (6%) and the base clock from 1090MHz to 1175MHz (8%). Coupled with the increased CU count, and we’re looking at a performance improvement on paper of around 22%. That said, the ROP count isn’t changing, so the actual performance improvement will likely be in the middle of those values.
GCN GPUs can run concurrent independent tasks on the CUs, for example physics.

Right, but again, this will not automagically leave CUs idle because the app doesn't know that the RX 560 has more CUs than the RX 460 did, or whatever you are claiming.
 
You're comparing 2 different GPUs, one of which is basically just an overclocked version of the other. Are you sure it's not a hardware difference here? What does the RX 460 get with the latest drivers?
Well, Overwatch went from a 70 FPS average to 84 FPS, so a 20% increase.

The RX 560 at $99 is a fabulous offer. At $115, not as much. On the other hand, the cheapest monitor with IPS and FreeSync costs around $150 (https://www.newegg.com/Product/Prod...5515&cm_re=LG_Freesync-_-24-025-515-_-Product). And that actually makes the combination very, very good value.
 
Right, but again, this will not automagically leave CUs idle because the app doesn't know that the RX 560 has more CUs than the RX 460 did, or whatever you are claiming.
I am claiming that the low-level API games might run faster, but maybe not as well as they could on the new configuration.
 
Well, Overwatch went from a 70 FPS average to 84 FPS, so a 20% increase.

That's not what you originally posted, though. You said RX 460 @ 84 to RX 560 @ 90, and claimed drivers were responsible (which I disagree with). If you're now claiming Overwatch went from 70 FPS to 84 FPS on the RX 460, then sure, that would be from driver improvements.
 
That's not what you originally posted, though. You said RX 460 @ 84 to RX 560 @ 90, and claimed drivers were responsible (which I disagree with). If you're now claiming Overwatch went from 70 FPS to 84 FPS on the RX 460, then sure, that would be from driver improvements.
This is correct.

I used the word "Drivers" in a different context. The performance difference that Overwatch gets from the RX 560 vs. the RX 460 is smaller than what should be possible from the increased core count and core clock.
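(A quick sketch of the gap being described, using the FPS numbers from this exchange and the CU counts and boost clocks from the AnandTech article linked earlier: 14 CUs at 1200 MHz for the RX 460 versus 16 CUs at 1275 MHz for the RX 560.)

```c
/* Sketch: observed Overwatch uplift vs. the on-paper uplift from the extra
   CUs and higher boost clock (RX 460: 14 CUs @ 1200 MHz, RX 560: 16 CUs @ 1275 MHz). */
#include <stdio.h>

int main(void) {
    double observed = 90.0 / 84.0 - 1.0;                       /* ~7.1%  */
    double on_paper = (16.0 / 14.0) * (1275.0 / 1200.0) - 1.0; /* ~21.4% */
    printf("Observed: %.1f%%  On paper: %.1f%%\n",
           observed * 100.0, on_paper * 100.0);
    return 0;
}
```

As the AnandTech excerpt notes, the ROP count is unchanged, so the realistic ceiling sits somewhere below that ~21%, but the observed ~7% is still well short of it.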
 
According to AMD, in DeepBench the Vega GPU is 30-35% faster than the GP100 chip in machine learning.

They also launched the RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB of HBM2, and 13 TFLOPS of FP32 performance.
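(For reference, a quick sketch of where a 13 TFLOPS FP32 figure comes from, assuming the usual convention of counting a fused multiply-add as two FLOPs per core per clock.)

```c
/* Sketch: peak FP32 throughput from core count and clock.
   4096 cores x 2 FLOPs per clock (FMA) x 1.6 GHz ~= 13.1 TFLOPS. */
#include <stdio.h>

int main(void) {
    double cores = 4096.0;
    double clock_hz = 1.6e9;
    double flops_per_core_per_clock = 2.0; /* fused multiply-add = 2 FLOPs */
    printf("Peak FP32: %.1f TFLOPS\n",
           cores * flops_per_core_per_clock * clock_hz / 1e12);
    return 0;
}
```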
 
According to AMD, in DeepBench the Vega GPU is 30-35% faster than the GP100 chip in machine learning.

They also launched the RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB of HBM2, and 13 TFLOPS of FP32 performance.

Very impressive. The compute performance, along with the integrated SSD support, seems like something Apple would be interested in. They demoed real-time editing of 8K footage in Adobe Premiere with the help of the high-bandwidth memory controller.
 
According to AMD, in DeepBench the Vega GPU is 30-35% faster than the GP100 chip in machine learning.

They also launched the RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB of HBM2, and 13 TFLOPS of FP32 performance.

So they're about 9 months too late, and about to be absolutely crushed by GV100? Cool.
 
So they're about 9 months too late, and about to be absolutely crushed by GV100? Cool.

Oh relax. This is a big deal. A 13 TFLOP card for a fraction of the price is a good thing for the industry. We know so little about the performance of both Vega and GV100 that it's unfair to say either will crush the other.
 
Oh relax. This is a big deal. A 13 TFLOP card for a fraction of the price is a good thing for the industry. We know so little about the performance of both Vega and GV100 that it's unfair to say either will crush the other.

Where did they say it'd be a fraction of the price of a GP100/GV100?
 
According to AMD, in DeepBench the Vega GPU is 30-35% faster than the GP100 chip in machine learning.

They also launched the RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB of HBM2, and 13 TFLOPS of FP32 performance.
LOL - they're comparing the latest unreleased ATI GPU chips to the Nvidia chips released and shipping a year ago.

I guess they'd be red-faced if they compared to the Nvidia Volta chips announced a week ago.
 