
PowerBook-G5 (original poster):
I have a mid-2012 cMBP 13" with the integrated Intel HD 4000 graphics and 1536MB VRAM, which I have reason to believe will support Metal. With Metal, will you be able to use higher graphics settings in games than before while maintaining performance? I don't have the beta yet (hurry up with that public beta, Apple!) so could someone fill me in on this? Thanks.
 
I haven't gotten around to testing whether Metal is actually integrated and provides a marked improvement yet. But *eventually*, yes, that should be the case.
 
It will take game companies updating or releasing their games with Metal support. So far Blizzard and Epic (Unreal) have come out to say they will (they likely already support it on iOS); I'm sure Unity is in the same boat.

It should give the same kind of performance increase that other low-overhead APIs are claiming, which is a 30-50% increase in performance (FPS) combined with much lower CPU usage. Systems with slower CPUs and GPUs should see a bigger improvement than extremely high-end systems.
 
My understanding is that Metal replaces OpenGL under the hood automatically. Maybe I'm incorrect.
 

For things that use CoreGraphics/CoreImage/etc., the system will use Metal instead of OpenGL. Games, however, are written to use OpenGL and not CoreGraphics; they'll have to be updated to use Metal (while keeping OpenGL for older systems). OpenGL is still there in 10.11 for apps/games/etc. that need it.

It's why the 10.11 DP feels so fast: all the animations and such use CoreGraphics/etc.
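For anyone who wants to check their own machine, a minimal sketch (using current Swift/Metal API names, which differ slightly from the original 2015 spellings): MTLCreateSystemDefaultDevice() simply returns nil on systems without a Metal-capable GPU.

```swift
import Metal

// Returns nil on Macs whose GPU (or OS version) lacks Metal support.
if let device = MTLCreateSystemDefaultDevice() {
    print("Metal is supported on: \(device.name)")
} else {
    print("No Metal-capable GPU found; this system stays on OpenGL.")
}
```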
 
It will also depend on the game. And as all integrated GPUs are bandwidth-limited, the benefits of a low-overhead API might not be as big as with dedicated GPUs (at the same time, Metal includes hints that allow the GPU to greatly optimise its memory access in certain cases). But in general, you should see a healthy boost, yes.
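Those "hints" are resource options the application passes at creation time. A minimal sketch, assuming the current Swift spelling of the Metal API (buffer sizes and usage here are made up for illustration):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Private storage: GPU-only memory the CPU never touches, so the
// driver is free to place it in the fastest memory available.
let gpuOnly = device.makeBuffer(length: 4096,
                                options: .storageModePrivate)

// Write-combined CPU cache mode: a hint that the CPU only streams
// data into this buffer and never reads it back.
let streaming = device.makeBuffer(length: 4096,
                                  options: .cpuCacheModeWriteCombined)
```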
 
An HD 4000 would not benefit much, because Metal mostly reduces CPU overhead and CAN offer GPU benefits if the low-level access is used. Nobody will do much low-level programming for Intel GPUs, and CPU load is not a problem for HD 4000-equipped devices.
HD 5000/6000 and Iris benefit from lower CPU load because more TDP is left over for the GPU. But the HD 4000 can already run at full speed while the CPU does its thing. Metal by itself does not help GPU performance, only CPU performance, and low-level optimizations aren't very likely on OSX.

Generally I doubt you will see many Metal games. Some that are ported over from iOS, but no high-graphical-fidelity titles that require a lot of GPU performance. Not many will spend the effort optimizing anything worthwhile just for the Mac platform. They will only do it for the iOS platform, and that won't be stuff that pushes notebook graphics very hard. Everything else will run on OpenGL and Vulkan to support Steam boxes, Windows, and the PS4. Even Mac owners play games under Windows; actual OSX gamers are a small group, and that shows in the quality of past ports. That won't get any better with Metal.
 

Game engines and emulators will certainly pick Metal. Not many games nowadays use graphics APIs directly.
 
Metal is basically AMD's Mantle with OpenGL and OpenCL mixed into one thing. Mantle defines HOW the API and the core of the system work; that's why everything is extremely responsive and fast, and the libraries are built on top of it.

Also, Metal is already supported in graphics engines by the biggest names in the industry: Epic, EA, Crytek, Unity. Blizzard also. I think people are still really underestimating how big a revolution Metal brings to OSX from a "mechanical" point of view.


Edit: I forgot. I wish we could see some benchmarks comparing "before" and "after" gaming performance ;).
 
Metal and OpenGL are two separate things. They run beside each other; otherwise OpenGL would have been rewritten to work on top of Metal, and Apple didn't state that on any slide. Apple is pulling the plug on OpenGL. It will stay only for legacy purposes, so that old software can run. Perhaps in the future they'll write an OpenGL emulator on top of Metal so they don't have to support it directly anymore.
 
Also, Metal is already supported in graphics engines by the biggest names in the industry: Epic, EA, Crytek, Unity. Blizzard also.
So far only one engine supports it; the rest only claim that they are going to support it, which says nothing about how much time they will spend on low-level optimizations. Without those, Metal helps the integrated GPUs, which are always happy to reallocate every saved watt, but it does not do much for any GPU where CPU load is not an issue.
With Blizzard I would not expect anything anytime soon. They are usually snails in such matters.
I think people hope for too much. It will help Iris (Pro) and Intel HD 5000/6000 GPUs the most. The dGPU won't be nearly as happy.
 
With Blizzard I would not expect anything anytime soon. They are usually snails in such matters.

Writing a Metal renderer is actually easier than writing an OpenGL one. Time will tell.

I think people hope for too much. It will help Iris (Pro) and Intel HD 5000/6000 GPUs the most. The dGPU won't be nearly as happy.

No, it's the dGPU that will benefit the most. The iGPUs are usually bandwidth-starved.
 
Writing a Metal renderer is actually easier than writing an OpenGL one. Time will tell.

What makes you think so? From what I've read about those "low level APIs" (Mantle, Metal, Vulkan, DirectX12, ...), they basically dispose of much of the semantics and guarantees that e.g. OpenGL (and previous DirectX versions) made.

For instance, the "trouble" with OpenGL (drivers) is that they are supposed to do so-and-so many "sanity checks" even when you do a "simple" GL state change ("is the texture loaded", "is the texture supporting that request", "is the right texture unit selected", etc. etc.). And they guarantee you the correct drawing order, that is, the order in which you issued the drawing commands (they might still shuffle GL commands around in the command pipeline, but the result has to be guaranteed! As per the OpenGL specs...).

Those "guarantees" (checks) are now all dropped, and given into the responsibility of the application!

Which is great! The application itself usually has a much much bigger knowledge about e.g. the texture it is about to modify, so going along with that example there is simply no need for that "is the texture format correct"-check! The application already knows - hopefully. Because if not - bang! There goes the application!

Also, because those "low level APIs" are much simpler (in terms of assumptions they have to make), multi-threading is much easier. From their point of view!

From an application point of view that means that (command queue) synchronisation is now up to the application! And there's another big performance potential: if the application knows that two given command queues are not conflicting - because they are dealing with mutual exclusive data - it can fill them up in two separate threads -> full CPU usage!
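For illustration, a minimal sketch of that "two threads, two non-conflicting command buffers" idea in Metal (current Swift API names; the encoding details are omitted, and the split into a "static" and a "dynamic" buffer is just an assumed example):

```swift
import Metal
import Dispatch

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let staticCB = queue.makeCommandBuffer()!   // e.g. terrain, buildings
let dynamicCB = queue.makeCommandBuffer()!  // e.g. characters

// Reserve the execution order on the queue up front...
staticCB.enqueue()
dynamicCB.enqueue()

// ...then encode both buffers concurrently; each command buffer is
// only ever touched by one thread, so no locking is needed.
DispatchQueue.concurrentPerform(iterations: 2) { index in
    let cb = (index == 0) ? staticCB : dynamicCB
    // ... create a render command encoder on `cb`, encode the draws,
    //     call endEncoding() ...
    cb.commit()
}
```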


But...! And there's the big "but": an application can only profit from those performance gains like the promised "10 times draw call count" if it is willing to optimise on that low level! By exploiting the knowledge about the scene graph, the used resources etc.

And that is also the big question mark "?" for me when it comes to "Game Engines" (Unity, Unreal, ...): can those game engines exploit that "knowledge" and profit from those low level optimisations enough that it is noticeable in real world situations? Or are they simply "filling the gap" that an OpenGL driver left?

Sure: for "pre-defined effects" like "god rays" I can very well imagine that those Game Engines can optimise their own routines.

But what about changing texture data etc.? Doesn't the Game Engine also have to guarantee certain things, which again would be superfluous for certain games (but not for others!)? In other words: are we back at the overhead of "OpenGL"?


We'll see how far Game Engine developers are also willing to "optimise for Metal", such that we not only see improvements for demos like "Zen Garden", but really for any game...

P.S. I have no experience with programming Game Engines, so let's hope they can really improve the "game situation" on Macs ;) But on the other hand, I am always very cautious when big companies like Adobe claim to support Metal and seeing "big performance improvements": they might simply optimise certain "blur" and "whatnot" filters, put a "Metal sticker" on top of it and be done with it...
 
What makes you think so?

Because I have glanced at the API ;) It is much more streamlined than OpenGL because it lacks all the historical baggage. The rendering code is just cleaner and the logic is 'automatically' modularised (because of state immutability etc.)

For instance, the "trouble" with OpenGL (drivers) is that they are supposed to do so-and-so many "sanity checks" even when you do a "simple" GL state change ("is the texture loaded", "is the texture supporting that request", "is the right texture unit selected", etc. etc.). And they guarantee you the correct drawing order, that is, the order in which you issued the drawing commands (they might still shuffle GL commands around in the command pipeline, but the result has to be guaranteed! As per the OpenGL specs...)

Well, with Metal and other APIs the sanity checks are usually just moved somewhere else (the object-creation phase).
Also, the API itself gives less opportunity for error, because most state is immutable. And of course you have superior debugging capabilities. I am unsure though what happens if you mismatch the resources and the shader declaration (e.g. bind a 2D texture to a slot where a cube texture is expected); I haven't had that much time to look through it. I guess you'd get undefined behaviour.
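A minimal sketch of that object-creation-phase validation (current Swift API names; the shader function names are assumed): the expensive checks happen once, in makeRenderPipelineState(descriptor:), which throws if the pieces don't fit together, and the resulting state object is immutable.

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!   // compiled .metal sources

let desc = MTLRenderPipelineDescriptor()
desc.vertexFunction = library.makeFunction(name: "vertex_main")     // assumed names
desc.fragmentFunction = library.makeFunction(name: "fragment_main")
desc.colorAttachments[0].pixelFormat = .bgra8Unorm

do {
    // All validation happens here, once - not on every draw call.
    let pipeline = try device.makeRenderPipelineState(descriptor: desc)
    _ = pipeline  // later: encoder.setRenderPipelineState(pipeline)
} catch {
    print("Pipeline rejected at creation time: \(error)")
}
```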

Which is great! The application itself usually has much, much more knowledge about e.g. the texture it is about to modify, so going along with that example there is simply no need for that "is the texture format correct" check! The application already knows - hopefully. Because if not - bang! There goes the application!

But this is not that different in OpenGL. You can still make a mistake and bind a wrong texture/uniform etc. And you might get undefined behaviour or a cryptic GL error. And debugging GL is a mess.

From an application point of view that means that (command queue) synchronisation is now up to the application! And there's another big performance potential: if the application knows that two given command queues are not conflicting - because they are dealing with mutual exclusive data - it can fill them up in two separate threads -> full CPU usage!

For simple applications, nothing changes. You would use one command buffer and submit your drawing commands just as you would usually do in OpenGL. But you also have the choice to go one step beyond and have multiple command buffers. And this can actually simplify your drawing logic a lot! Imagine splitting your rendering into distinct phases where you have individual characters, buildings, vegetation etc. The command buffers allow you to split your logic neatly, which IMO leads to cleaner and more maintainable rendering code.

BTW, Metal also guarantees correct drawing order! You can just be much more flexible about it. You can say, for instance, that buffer 2 needs to be drawn after buffer 1, but submit buffer 2 before buffer 1. The API is very simple and easy to use in this regard and gives you a lot of flexibility.
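In API terms (a sketch, using the current Swift spelling): enqueue() fixes the execution order on the queue, while commit() may happen later and in any order.

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let buffer1 = queue.makeCommandBuffer()!
let buffer2 = queue.makeCommandBuffer()!

buffer1.enqueue()   // will execute first...
buffer2.enqueue()   // ...then this one,

buffer2.commit()    // ...even though it is submitted first.
buffer1.commit()
```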

But...! And there's the big "but": an application can only profit from those performance gains like the promised "10 times draw call count" if it is willing to optimise on that low level! By exploiting the knowledge about the scene graph, the used resources etc.

I have partially commented on this above. But also, don't forget that the batch setup with Metal is just so much faster than with a traditional rendering API. So your application might get a healthy speedup even if you just use a single command buffer. And if you have OpenGL rendering code that is well written, converting it to Metal is not that difficult. The biggest issue is the shading language; we need tools that would allow us to convert between GLSL, the Metal SL, etc. (maybe they even exist, no idea). It would of course be best if Metal understood SPIR-V.
 
Because I have glanced at the API ;) It is much more streamlined than OpenGL because it lacks all the historical baggage. The rendering code is just cleaner and the logic is 'automatically' modularised (because of state immutability etc.)

Hmmm, I must admit I have just "scrolled over" the Metal API (so even less of a "glance" than you did, obviously ;)), so I don't have much knowledge there.

I based my previous answer basically on

http://renderingpipeline.com/2014/06/whats-the-big-deal-with-apples-metal-api/

which explains - in easy terms - what those "closer to the Metal" APIs do differently from traditional OpenGL (or DirectX 11 and older, for that matter).


Well, with Metal and other APIs the sanity checks are usually just moved somewhere else (the object-creation phase). Also, the API itself gives less opportunity for error, because most state is immutable.

Yes, that is one aspect which was also mentioned on the site linked above. In OpenGL, "shaders" might need to be "patched" by the OpenGL driver. For instance, if one shader stage expects "int" data but the previous stage delivers "float" (or vice versa), the OpenGL driver needs to implicitly add conversion code that converts "float" to "int".

As I understand it, in Metal that simply doesn't (can't) happen: shaders are validated "at construction time", and that's that. It's then up to the application to ensure that the shader stages are correctly wired.

And of course you have superior debugging capabilities. I am unsure though what happens if you mismatch the resources and the shader declaration (e.g. bind a 2D texture to a slot where a cube texture is expected); I haven't had that much time to look through it. I guess you'd get undefined behaviour.

Yes, the lack of (OpenGL) debugging tools is something I have also read a lot about elsewhere. But that actually uncovers another problem: the unpredictability of a given OpenGL driver. And I don't mean "bugs" in the sense that the driver crashes or the resulting graphic is wrong, but how/when the driver decides to actually send the commands to the GPU - or when to patch (and re-load) the shader. So you end up with "command queue bubbles" (idle times) and don't really know why, because it's all "hidden" in the OpenGL driver.

So from the above article: "The Metal API reduces the time spent on the CPU by making state handling simpler and thus reducing error checks by the driver if the state combination is valid. Precomputing states also helps: not only can the error check be done at state build time, the state change itself requires fewer API calls."

And earlier in that article: "this way the application has full control over when the work is sent to the GPU and how many frames delay it is willing to add (thus adding latency but increasing GPU utilisation). Buffering GPU commands and sending them asynchronously in the next frame has to be implemented by the application itself."

And that "direct control" also brings more responsibility - and complexity, which was previously hidden in the "driver" - to the application!

But by carefully exploiting the knowledge an application has of its scene graph - the number of "different objects" which require a state change, and so on - the application can decide for itself when to buffer which command, and when to finally send them to the GPU!

E.g. you have one huge mesh with the same texture or colour? Send the mesh and the draw command right away to the GPU! Thousands of little objects with different textures etc.? Buffer them first, batch them, and only send them to the GPU at "the beginning of the new frame".

(The above is just a simplified illustration, as I understand how those "closer to the Metal" APIs differ from OpenGL)
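A rough sketch of what that per-frame batching looks like on the Metal side (current Swift API names; pipeline and vertex buffer setup omitted, vertex counts made up): the application encodes as many draws as it likes and decides itself when the whole batch is committed.

```swift
import Metal

// Assumes `queue` and `pass` (a render pass descriptor) already exist.
func renderFrame(queue: MTLCommandQueue, pass: MTLRenderPassDescriptor) {
    let cb = queue.makeCommandBuffer()!
    let encoder = cb.makeRenderCommandEncoder(descriptor: pass)!

    // (pipeline state and vertex buffers would be set here)
    // One draw for the huge mesh...
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 36)
    // ...and a loop of draws for the thousands of little objects.

    encoder.endEncoding()
    cb.commit()   // the whole frame goes to the GPU as one batch
}
```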


But this is not that different in OpenGL. You can still make a mistake and bind a wrong texture/uniform etc. And you might get undefined behaviour or a cryptic GL error. And debugging GL is a mess.

You're always supposed to get a "clean" GL error (which might or might not make sense to you). But the point here really is: it is the duty of the OpenGL driver to guarantee that you get such a GL error, because it has to make those consistency checks.

With Metal you are on your own (my understanding): your synchronisation of command queues not done properly? Your bad! No one is going to tell you (unless you check the output on your screen yourself, please). In the worst case your application simply crashes, because you have deallocated a buffer that was still in use, or you have overwritten data that is still in use by the previous frame's rendering pass.
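For what it's worth, Metal does give the application a hook for exactly this: a completion handler per command buffer. A minimal sketch of the common "frames in flight" pattern built on it (current Swift API names; the rotating uniform buffers are only hinted at in a comment):

```swift
import Metal
import Dispatch

// Allow at most 3 frames to be in flight at once.
let frameSemaphore = DispatchSemaphore(value: 3)

func drawFrame(queue: MTLCommandQueue) {
    frameSemaphore.wait()                // block if 3 frames are in flight
    let cb = queue.makeCommandBuffer()!
    // ... encode the frame, writing into one of 3 rotating
    //     uniform buffers so the GPU never reads what we overwrite ...
    cb.addCompletedHandler { _ in
        frameSemaphore.signal()          // GPU is done with this frame's data
    }
    cb.commit()
}
```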

For simple applications, nothing changes. You would use one command buffer and submit your drawing commands just as you would usually do in OpenGL. But you also have the choice to go one step beyond and have multiple command buffers.

Of course OpenGL has also advanced; e.g. you have "multi draw indirect": you first store all your draw commands, desired state changes and data in buffers, and then with a single draw command you can basically draw your entire scene - including objects with different textures (which are all pre-loaded in VRAM, just like the draw commands themselves).

But we are talking OpenGL >= 4.3 here, which, as we all know, is not available on OS X...

And it is also possible to access OpenGL from multiple threads, as long as the GL context is made "current" for the given thread that accesses it. There is a free chapter in a book which describes and compares various techniques, also with regard to "pinning memory" (for architectures which support that), but the conclusion was the usual: "depends on the GPU/driver/platform".

So yes, since those "closer to the Metal" APIs are simpler and designed "from the ground up" to be used in a multi-threaded way, the performance is much more predictable (especially since the application itself is in control).

BTW, Metal also guarantees correct drawing order! You can just be much more flexible about it. You can say, for instance, that buffer 2 needs to be drawn after buffer 1, but submit buffer 2 before buffer 1. The API is very simple and easy to use in this regard and gives you a lot of flexibility.

I guess you need to replace "can" with "must" if you really want to enforce that drawing order. And that's exactly the point where "closer to the Metal" becomes more complex than a "driver which does it all for you": you need to think about the correct order of buffers, commands etc., whereas with OpenGL the order of GL commands is the order you get. You don't even get access to the actual command buffer(s), and it's at the discretion of the OpenGL driver when it processes those buffers - but when it does (finally), it does so in order.

So: more responsibility for the application with "closer to the Metal", but much, much more potential for (CPU) optimisation, more predictable performance, and a simpler API ("no state changes allowed after creation", "only one way to set uniforms", etc.).


Whether Game Engines can exploit that potential, while providing the same flexibility ("shader languages"???) as when accessing those APIs directly, remains to be seen...

Because it's just not feasible for small game studios to invest that much into a "proprietary API" themselves (let alone "3D enthusiasts" like me, who are most likely interested in cross-platform solutions).
 
Generally I doubt you will see many Metal games. Some that are ported over from iOS, but no high-graphical-fidelity titles that require a lot of GPU performance. Not many will spend the effort optimizing anything worthwhile just for the Mac platform. They will only do it for the iOS platform, and that won't be stuff that pushes notebook graphics very hard. Everything else will run on OpenGL and Vulkan to support Steam boxes, Windows, and the PS4. Even Mac owners play games under Windows; actual OSX gamers are a small group, and that shows in the quality of past ports. That won't get any better with Metal.

If Apple dumps OpenGL in the long run (i.e. as soon as a version of OSX is released that doesn't support GPUs without Metal support), you could actually find that all your older games break with that release and that NO new games are EVER released for OSX again, save perhaps something from Aspyr, which is dedicated to Mac support. The reason is, as you say, that right now they get easy Linux/Steambox support along with the Mac by doing OpenGL. If OSX goes to something else, it will likely be OSX that gets dumped by developers as not worth the effort. This is why Apple should push Metal as a STANDARD API to replace OpenGL and open it up to everyone. Now is a good time, because OpenGL itself is set for a complete rewrite from the ground up. But Apple loves to do its own proprietary thing, and we the users suffer the consequences every single time.
 
They won't dump OpenGL; there are too many apps that support it. They haven't even deprecated it yet, and even after they deprecate it, it will live on for a long time. By then we'll likely have the Vulkan API, which is basically the next version of OpenGL, but that could be years away.
 
They cannot promote Metal as some kind of standard. Vulkan is already that standard, and Apple is even part of the group making it.
Apple has tons of money; maybe they just hand out nice fat checks to all the right people and Metal gets supported where it needs to be. If they really want it to succeed, they have the resources. The thing with Apple is that they never really cared much about games or anything outside their own world.
I have my doubts that Metal will change much about how atrocious Aspyr ports are/will be.

I also don't think they can just drop OpenGL. For compatibility reasons alone they have to keep it around. And as Khronos says, OpenGL still has its place; not everyone wants to work with a lower-level API like Vulkan. Apple will probably not get any better at supporting OpenGL, though. Likely worse, but they will still support it.
I just hope they will support Vulkan too.
 
Dusk007, you forget that Vulkan is simply OpenGL with the functionality of Mantle.

Which is, in everything, Metal itself! It's OpenGL mixed with OpenCL, with the low-level functionality of Mantle.
Mantle is in every single API right now: Vulkan, Metal, DirectX.

Why did Apple go their own way? Because programming from scratch for OSX will be way better than porting games from Windows/Linux to it. And if we see that the base is Mantle, the programming will not take a long time to learn, get used to, and master. What if Metal brings better gaming performance on OSX than similar games get on Windows? Have you thought of that?
 
It is not quite as simple as that. There are a few changed programming paradigms between OpenGL and Vulkan. You could put OpenGL on top of Vulkan, but they aren't the same thing, even if they can be used to do the same things.
There is also quite a bit of difference between DirectX 11 and 12.
See here if you are interested.

Mantle is just what set this whole API revolution in motion, but the new APIs aren't just OpenGL with extra functions; they differ quite a bit.
You can do things with Metal that you'd have needed OpenGL and OpenCL for, but they aren't really the same. It is a more stripped-down, more multithreading-oriented, more modern API.
 
For things that use CoreGraphics/CoreImage/etc., the system will use Metal instead of OpenGL. Games, however, are written to use OpenGL and not CoreGraphics; they'll have to be updated to use Metal (while keeping OpenGL for older systems). OpenGL is still there in 10.11 for apps/games/etc. that need it.

It's why the 10.11 DP feels so fast: all the animations and such use CoreGraphics/etc.
Apparently it's not due to that, since the 10.11 DP feels faster even on unsupported GPUs, and Netkas has shown that the WindowServer and most OS X apps don't appear to use Metal yet.
 
Netkas only tested one GPU, and it was a non-Apple-approved one at that. No one really knows anything at this point.
 
I have my doubts that Metal will change much about how atrocious Aspyr ports are/will be.

What's atrocious about Aspyr ports? I think that's a pretty damn rotten thing to say about the ONLY real company doing true Mac ports of major games, and they have been since the PowerPC days, when almost no other games were made for the Mac. I have a ton of Aspyr games (still playing Borderlands 2 and Borderlands: The Pre-Sequel, and they both run smooth, hardly EVER crash, AND they're 100% compatible with the Windows versions for online play through Steam). I don't see anything atrocious about any of the Aspyr ports I have (the Call of Duty series, Star Wars series, etc.).
 