Nobody currently cares about OpenGL. Everybody has moved on to modern APIs.
That is BS. Not everybody programs for proprietary APIs. And developing for Vulkan and DX12 is very complex and expensive (unless you're just using a third-party engine).
It is unfortunately inevitable, still.
It is not inevitable, because many people just can't afford it.
OpenGL Next is Vulkan. Vulkan will also be the only API that combines graphics and compute at the same time on all platforms, and this alone will make Vulkan the go-to API for all vendors.
What can't they afford? Currently it is much more expensive to develop software for a dead platform than for a platform that is on the rise.
It is out of place for graphics hardware vendors to push the burden of the driver onto application developers, and it would make for a lot of poorly optimized programs.
OpenGL is not a dead platform. DX11 might eventually be, because it is a Microsoft gaming API.
Poorly optimized programs? Have you seen the latest iteration of the Doom franchise? One of the best-optimized games ever made, and it does not lack graphics fidelity or detail.
Why do you believe that it is SO MUCH harder to develop Vulkan applications than OpenGL ones?
Not everybody has id Software's ability to properly optimize their Vulkan programs.
Because the OpenGL driver takes care of a lot of stuff that Vulkan just leaves up to the application programmer.
The driver gives up a lot of control this way, handing it over to developers. They have been asking for control over the hardware, and this is what they have got.
It is not about abandoning OpenGL, but about making OpenGL more multithreaded.
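The tradeoff in the posts above can be illustrated with a toy model. This is a conceptual sketch in Python, not real OpenGL or Vulkan bindings: an OpenGL-style implicit context serializes every call through one lock inside the driver, while Vulkan-style command buffers let each thread record work without contention, paying a synchronization cost only once, at submit time.

```python
import threading

# Conceptual sketch only -- these classes model driver behavior,
# they are NOT real OpenGL or Vulkan bindings.

class ImplicitContext:
    """OpenGL-style: one context; the driver serializes every call."""
    def __init__(self):
        self._lock = threading.Lock()
        self.queue = []

    def draw(self, cmd):
        with self._lock:           # all threads funnel through one lock
            self.queue.append(cmd)

class CommandBuffer:
    """Vulkan-style: each thread records into its own buffer, lock-free."""
    def __init__(self):
        self.commands = []

    def record(self, cmd):
        self.commands.append(cmd)  # thread-local, no contention

def submit(gpu_queue, buffers):
    """Submission is the only synchronized step in the explicit model."""
    for buf in buffers:
        gpu_queue.extend(buf.commands)

# Four threads record 1000 draws each into their own command buffers.
buffers = [CommandBuffer() for _ in range(4)]
threads = [threading.Thread(target=lambda b=b: [b.record("draw") for _ in range(1000)])
           for b in buffers]
for t in threads:
    t.start()
for t in threads:
    t.join()

gpu_queue = []
submit(gpu_queue, buffers)
print(len(gpu_queue))  # 4000: all recorded work reached the queue
```

The same split explains the control argument: in the implicit model the driver owns the lock and the ordering; in the explicit model the application does, which is exactly the responsibility Vulkan hands to developers.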
But the graphics workloads in professional applications are run on compute kernels, not geometry kernels.
When we considered how much we wanted this iMac to be capable of, it was clear that only one graphics chip would do — but that chip didn’t exist yet. So iMac Pro is debuting a new one. The Radeon Pro Vega is over three times faster than any previous iMac GPU, packing the power of a double-wide graphics card into a single chip. All of which translates to higher frame rates for VR, real-time 3D rendering, more lifelike special effects, and gameplay at max settings. It’s one huge reason iMac Pro is power incarnate.
The 295W GPU is on the level of a GTX 1080, at least according to AMD and in the current state of gaming drivers, and it will still be faster than a Titan Xp in compute-oriented applications, which I think is what matters most to professionals.
I'm sometimes dumbfounded by the level of knowledge on this forum.
Big studios have been asking for control over the hardware, not indies or engineering/research developers.
Soooo... Koyoot, let me help you out with Apple's intended use of the Vega chip in the iMac Pro:
So please stop with this crap that no one should care about Vega's poor graphics performance. It's literally being marketed by Apple for its graphics capabilities.
This all depends on what your professional application is. If you are working with VR, you are far better off with Nvidia. As for machine learning, despite all the marketing by AMD, I have not seen a single benchmark or other demonstration outside of AMD showing Vega FE doing machine learning. Every popular library supports CUDA, not OpenCL. Even in the "professional" benchmarks we have seen for Vega FE, at best it trades blows with the Titan Xp/1080 Ti.
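The library-support point shows up directly in everyday ML code: CUDA is a first-class backend in the popular frameworks, while OpenCL generally is not. Below is a minimal, hedged sketch of the usual device-selection idiom; PyTorch's `torch.cuda.is_available()` is a real call, and the ImportError fallback is only there so the sketch runs even without the framework installed.

```python
def pick_device():
    """Return the compute device an ML workload would run on."""
    try:
        import torch  # the dominant ML frameworks ship CUDA backends
        if torch.cuda.is_available():
            return "cuda"  # Nvidia GPU path; there is no equivalent OpenCL switch
        return "cpu"
    except ImportError:
        return "cpu"       # no framework installed at all

print(pick_device())
```

There is simply no OpenCL branch to write here, which is the practical meaning of "every popular library supports CUDA."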
I'm sure they will adapt to the situation.
OpenGL will adapt.
You call the architecture a failure because the software is not ready for it? You call the hardware not useful because there has been no software (so far) for this GPU?
What will happen when RX Vega, with properly optimized software (gaming), turns out to be, for example, 20% faster than a Titan Xp? What will be the failure then?
Reserve your judgement on the hardware till the software has matured.
No, we're calling the product a failure; a good product can't just be good architecture. From what I've read, Vega won't see more than a 10% improvement from driver optimizations, which is not nearly enough.
It is not only drivers. Your software has to be rewritten to utilize the hardware features: Primitive Shaders, for example, to bring a performance uplift.
Here is a summary of Koyoot's argument:
When Vega FE has poor gaming performance: "Vega FE is not a gaming product!" Then, when RX Vega mirrors Vega FE's performance: "No one should care about gaming!" Then, when we point out that graphics workloads are important: "The performance will be better in the future!"
Sure dude. I can't wait to see how this argument keeps evolving.
No one is going to look at the mediocre results of the last few AMD GPUs and think that it is going to magically get better in the future. You can cite all the architectural nuances you want, but when RX Vega's reviews hit and directly compare it to Nvidia's offerings, that is what consumers and professionals will base their decisions on.
It's funny that you can spin my words this way, when I was saying that the software was immature for Vega FE.
This sounds very good: let the users take control. But again, use the supercar as an example. Take out all the advanced software assists and market the car as "completely controlled manually by the driver"; that doesn't sound very impressive, does it?
AMD can allow programmers to take over the low-level stuff; however, that should be optional, not compulsory. A GPU that cannot be utilized by current software, where everything must be rewritten to unlock the GPU's power, doesn't sound like a good option for most developers. How about the next-generation GPU? Rewrite everything again? AMD is the manufacturer providing this hardware; they should provide an easy way to utilize it as well (like CUDA).
Allowing programmers to control the low-level stuff is just an excuse not to provide any proper software. Nvidia is selling a fully automatic guided missile. AMD is marketing that their bomb can be more powerful (if you can hit the target), but you have to drop the bomb manually, and you have to do all the calculation and planning about how and where to drop it. No matter how powerful the bomb is, that doesn't sound impressive to me.
This is a bad analogy. Nvidia has to play by the same Vulkan rules as AMD. It is developers who control hardware performance in Vulkan and DX12, not the hardware vendors.
So you admit that you, as a professional, base your choice of GPU on gaming performance?
I base my professional choice on my specific needs. For me, this is real-time physics simulations in game engines and machine learning. Nvidia is the better choice here.
What is the reason that Nvidia is better for your needs?
I am not diminishing anything. Why do you refuse to see the other side as well, apart from only your own?
Thank you. I have no more questions, then.
I keep following this post. I am not going to start (or join) a war or anything, but I just want to express my point of view.
I am sure quite a lot of Mac Pro users are also professionals; maybe not professionals in your area, but still professionals in the computer/software/multimedia industry...
If they care about gaming performance, then it means gaming performance is important to them. And I don't know how you can represent them and say gaming performance is not important for any professional. Since when can you represent ALL professionals?
If they say graphics performance is important to them, then it's important to them. It's that simple.
How about a game developer who needs a GPU to test their game? Are you going to say a game developer is not a professional at all? How about an architect using VR to design a building? An architect is not a professional? How about if I use my computer as a flight sim? Am I, as a professional pilot, not a professional?
Professional has its own definition, but it is not defined by you. And I don't know why a professional must only care about compute performance. You are the person who only cares about potential compute performance, not all professionals.
I must say that I have learned a lot from your posts, especially about GPU architecture. However, I can't agree that AMD GPUs are comparable to Nvidia's at this moment. Yes, maybe Vega is comparable to a Titan Xp in some specific areas (compute). However, at most it wins by a little, and in other areas (gaming, graphics performance, ease of software development, power efficiency, etc.) it loses big, which makes it clearly not a same-level product.
I know your original post was not addressed to me. But I am not refusing to see the other side. In fact, what I see is that most people are comparing things across all areas, and you are the one who only sees things from your side. Why does only compute matter? Why can't gaming benchmarks be important? Why is graphics performance meaningless for all professionals? Why should those few compute wins override everything? I just can't understand.
I was going to respond to the "graphics workloads in professional applications are actually compute" comment, but it's so laughably false that I won't bother. If you are rendering triangles, it's the graphics pipeline. If you are running a compute kernel, it's compute. This is not rocket science.
If it is a graphics workload, and in this workload Vega is faster than a Titan Xp, why does that not translate into gaming performance?
Then let me express what I am seeing.
"Vega is on par or faster in gaming but uses quite a lot more power - rubbish"
"Vega is faster or on par with Titan Xp in compute - I don't care, it is not a gaming card, its a failure".
Vega in DX11 will unfortunately behave just like a GTX 1080. There is nothing AMD can do here. I have to wait and see the effect of the Primitive Shaders implementation in game engines to draw a conclusion about which GPU it will compete with in future DX12 and Vulkan games, but I think it will easily tie with a Titan Xp.
You can claim it's all software, but AMD is not doing much better in DX12/Vulkan performance. Even in AMD's own marketing material, they show the GTX 1080 averaging a higher frame rate in their hand-selected DX12 and Vulkan games.
Erm, they show MINIMUMS, not higher average framerates...