> Are you saying you don't have control over what the CUs do with low level APIs?
> I know you can check how many you have available and dynamically allocate.

It doesn't work like that.
Overwatch in 1080p Ultra settings went from 84 FPS on average with RX 460 to 90 FPS in the same settings on RX 560.

Drivers.

> Drivers.

Low level API drivers do basically nothing; it is the game that has to be tuned.

> Overwatch in 1080p Ultra settings went from 84 FPS on average with RX 460 to 90 FPS in the same settings on RX 560.

You missed the point. I am saying it depends on whether the game has been programmed to dynamically allocate or is tuned to known cards.

> You missed the point. I am saying it depends on whether the game has been programmed to dynamically allocate or is tuned to known cards.

A card is not going to report that it has compute units available that have been disabled in the factory. This is a good thing, as oftentimes these are disabled because of binning and are non-functional.
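A rough sketch of the "tuned to known cards" versus "programmed to dynamically allocate" distinction being argued here (all names and numbers below are made up purely for illustration):

```python
# Hypothetical: two ways a game engine might size its GPU work.
KNOWN_CARDS = {"RX 460": 14, "RX 560": 16}  # per-card tuning table (illustrative)

def workgroups_tuned(card_name: str) -> int:
    # Tuned path: unknown cards fall back to a conservative default of 8 CUs.
    return KNOWN_CARDS.get(card_name, 8) * 4

def workgroups_dynamic(reported_cus: int) -> int:
    # Dynamic path: scale with whatever CU count the driver reports.
    return reported_cus * 4

print(workgroups_tuned("RX 560"))   # 64
print(workgroups_dynamic(16))       # 64
print(workgroups_tuned("RX 570"))   # 32 - unknown card, GPU left underused
```

The tuned engine only fully exploits hardware it was shipped knowing about, which is the scenario being claimed for "low level" titles.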
> GCN GPUs can run concurrent independent tasks on the CUs, for example physics.

My point is that it's impossible for an application to tell the driver/GPU "hey I want to run like ****, so please disable N execution units so my shaders run much slower". The API doesn't work like that.
By the numbers, the additional two CUs give the RX 560 a 14% boost in shader and texture throughput. Coupled with that are some modest clockspeed increases for both the boost clock and the base clock. The boost clock is being bumped up from 1200MHz to 1275MHz (6%) and the base clock from 1090MHz to 1175MHz (8%). Combined with the increased CU count, we're looking at a performance improvement on paper of around 22%. That said, the ROP count isn't changing, so the actual performance improvement will likely be in the middle of those values.
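Sanity-checking those paper numbers (this assumes 14 CUs for the RX 460 versus 16 for the RX 560, which is what "the additional two CUs" and the 14% figure imply):

```python
# RX 460 vs RX 560 paper specs (CU counts inferred from the 14% figure).
rx460_cus, rx560_cus = 14, 16
rx460_boost, rx560_boost = 1200, 1275  # MHz
rx460_base, rx560_base = 1090, 1175    # MHz

cu_gain = rx560_cus / rx460_cus - 1         # about 14% more shader/texture throughput
boost_gain = rx560_boost / rx460_boost - 1  # about 6%
base_gain = rx560_base / rx460_base - 1     # about 8%

# Shader throughput scales with CUs x clock, so paper gains multiply.
combined_boost = (1 + cu_gain) * (1 + boost_gain) - 1  # about 21%
combined_base = (1 + cu_gain) * (1 + base_gain) - 1    # about 23%

print(f"CUs: +{cu_gain:.1%}, boost: +{boost_gain:.1%}, base: +{base_gain:.1%}")
print(f"Combined paper gain: +{combined_boost:.1%} to +{combined_base:.1%}")
```

So "around 22%" is the multiplicative CU-and-clock gain, and the unchanged ROPs are why actual games should land somewhere below it.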
> Well, OV went from 70 FPS average to 84 so 20% increase.

You're comparing two different GPUs, one of which is basically just an overclocked version of the other. Are you sure it's not a hardware difference here? What does the RX 460 get with the latest drivers?
> Doom sees increase in performance on new GPU vs RX 460.

I am claiming that the low level games might run faster but maybe not as well as they could on the new configuration.

> I am claiming that the low level games might run faster but maybe not as well as they could on the new configuration.

Right, but again, this will not automagically leave CUs idle, because the app doesn't know that the RX 560 has more CUs than the RX 460 did, or whatever you are claiming.

What did I say? That the increase may not be total vs. the amount of cores and core clock difference. Or that the graphics are fast enough and the physics can be improved.
> Well, OV went from 70 FPS average to 84 so 20% increase.

This is correct. That's not what you originally posted, though. You said RX 460 @ 84 to RX 560 @ 90, and claimed drivers were responsible (which I disagree with). If you're now claiming Overwatch went from 70 FPS to 84 FPS on the RX 460, then sure, that would be from driver improvements.
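For what it's worth, the two deltas being argued about work out to quite different percentages:

```python
# FPS figures quoted in the thread.
driver_gain = 84 / 70 - 1    # RX 460 before vs after driver updates (the 70 -> 84 claim)
hardware_gain = 90 / 84 - 1  # RX 460 @ 84 FPS vs RX 560 @ 90 FPS

print(f"70 -> 84 FPS: +{driver_gain:.0%}")    # +20%
print(f"84 -> 90 FPS: +{hardware_gain:.1%}")  # +7.1%
```

A 20% same-card gain and a 7% cross-card gain are very different claims, which is the crux of the disagreement.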
According to AMD, in DeepBench the AMD Vega GPU is 30-35% faster than the GP100 chip in machine learning.

They also launched RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB HBM2, 13 TFLOPs of FP32 performance.
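The 13 TFLOPs figure follows directly from the core count and clock, assuming the usual 2 FLOPs per core per clock (a fused multiply-add counted as two operations):

```python
# Peak FP32 = shader cores x 2 FLOPs per clock (FMA) x clock rate.
cores = 4096
clock_ghz = 1.6
tflops = cores * 2 * clock_ghz / 1000  # GFLOPs -> TFLOPs
print(f"{tflops:.1f} TFLOPs FP32")     # 13.1 TFLOPs FP32
```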
> They also launched RX Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB HBM2, 13 TFLOPs of FP32 performance.

Frontier edition... really?

> According to AMD, in DeepBench the AMD Vega GPU is 30-35% faster than the GP100 chip in machine learning.
So they're about 9 months too late, and about to be absolutely crushed by GV100? Cool.
Oh relax. This is a big deal. A 13 TFLOP card for a fraction of the price is a good thing for the industry. We know so little about the performance of both Vega and GV100 that it's unfair to say either will crush the other.
> Overwatch in 1080p Ultra settings went from 84 FPS on average with RX 460 to 90 FPS in the same settings on RX 560.
> Drivers.

Link?

> Doom sees increase in performance on new GPU vs RX 460.

Link?
Frontier seems to be a graphics card, not an accelerator. Where did they say it'd be a fraction of the price of a GP100/GV100?
> According to AMD, in DeepBench the AMD Vega GPU is 30-35% faster than the GP100 chip in machine learning.

LOL - they're comparing the latest unreleased ATI GPU chips to the Nvidia chips released and shipping a year ago.