Heh, it looks like Apple has taken its own approach with this API. Not as explicit as DX12 or Vulkan, but still low-level enough to deliver the performance and explicit hardware control that future software (VR, games, entertainment, etc.) needs.
I am optimistic right now. It makes devs do a bit of work to get an application running on OS X, but it is not as huge an amount of work as with the other explicit APIs, which require direct optimization for specific hardware. Apple offers only Intel and AMD graphics in its computers, so there is not that much to cover.
Also, from this video, https://developer.apple.com/videos/play/wwdc2016/604/, I understand that tessellation driven by compute shaders runs asynchronously with (concurrently to) the graphics pipeline. So this is a direct use of the Asynchronous Compute feature of AMD GPUs.
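For anyone curious what that looks like in code, here is a minimal sketch of Metal's compute-based tessellation flow: a compute pass writes per-patch tessellation factors into a buffer, and the render pass consumes them with drawPatches (Metal has no separate hull/domain shader stages). The function and buffer names (factorPipeline, tessPipeline, factorBuffer, etc.) are my own placeholders, the pipeline states and buffers are assumed to be created elsewhere, and whether the compute and graphics work actually overlap on the GPU is up to the driver and hardware, not something this encoding guarantees.

```swift
import Metal

// Sketch: one frame that computes tessellation factors in a kernel,
// then draws tessellated patches that read those factors.
func encodeFrame(queue: MTLCommandQueue,
                 factorPipeline: MTLComputePipelineState,   // assumed: kernel writing tess factors
                 tessPipeline: MTLRenderPipelineState,      // assumed: pipeline with tessellation enabled
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 controlPointBuffer: MTLBuffer,             // assumed: patch control points
                 factorBuffer: MTLBuffer,                   // assumed: MTLQuadTessellationFactorsHalf per patch
                 patchCount: Int) {
    guard let commandBuffer = queue.makeCommandBuffer() else { return }

    // Compute pass: fill factorBuffer with one set of tessellation factors per patch.
    if let compute = commandBuffer.makeComputeCommandEncoder() {
        compute.setComputePipelineState(factorPipeline)
        compute.setBuffer(factorBuffer, offset: 0, index: 0)
        let threadsPerGroup = MTLSize(width: 64, height: 1, depth: 1)
        let groups = MTLSize(width: (patchCount + 63) / 64, height: 1, depth: 1)
        compute.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
        compute.endEncoding()
    }

    // Render pass: the fixed-function tessellator reads the factors the kernel wrote.
    if let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
        render.setRenderPipelineState(tessPipeline)
        render.setVertexBuffer(controlPointBuffer, offset: 0, index: 0)
        render.setTessellationFactorBuffer(factorBuffer, offset: 0, instanceStride: 0)
        render.drawPatches(numberOfPatchControlPoints: 4,
                           patchStart: 0,
                           patchCount: patchCount,
                           patchIndexBuffer: nil,
                           patchIndexBufferOffset: 0,
                           instanceCount: 1,
                           baseInstance: 0)
        render.endEncoding()
    }

    commandBuffer.commit()
}
```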
Too bad that the only GPUs that will see a performance benefit from this are those based on the Tonga architecture. Tahiti, Pitcairn, and Cape Verde will not benefit from it.