
izzy0242mr (original poster)
Basically what the title says: if we have all this power in our Apple Silicon Macs, with really powerful GPUs in particular…is there any way to force apps to use the GPU more to speed things up? Is that even possible?

If not, would app developers consider coding apps to use the GPU more as a sort of performance booster? Or does that just not make sense? I'm thinking for like everyday normal tasks that aren't traditionally GPU-heavy (gaming and video processing/editing).

It seems weird to have a computer that has all this latent power that doesn't get to be used if you don't do a lot of gaming or video editing.
 
In your first paragraph, you're essentially asking if you can simply change a setting and get a purely CPU-based app to use the GPU to do some of its computations. The answer is no, because apps need to be written specifically to enable this. That's known as GPU hardware acceleration, and it can be used even for non-graphical computations.

In your second paragraph, you're asking whether apps can be recoded to implement GPU hardware acceleration. There are certainly apps that could benefit from this, like some operations in Mathematica. But it only works for specific types of tasks (those that can be heavily parallelized), and it takes some very sophisticated programming to achieve. Thus most developers have decided the benefit is not worth the effort and expense.
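To make "heavily parallelized" concrete, here's a toy Swift sketch with made-up data: the first computation is data-parallel, so a GPU could in principle handle every element at once; the second has a dependency chain, so thousands of GPU cores wouldn't help.

```swift
// Made-up data for illustration.
let pixels: [Double] = [0.2, 0.9, 0.5, 0.7]
let dailyRates: [Double] = [0.010, -0.002, 0.005]

// Data-parallel: every output element is independent of the others,
// so thousands of GPU threads could each compute one element at the same time.
let brightened = pixels.map { min($0 * 1.2, 1.0) }

// Inherently serial: each step depends on the previous result, so extra
// GPU cores can't speed this up no matter how many of them there are.
var balance = 100.0
for rate in dailyRates {
    balance *= 1.0 + rate
}
print(brightened, balance)
```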

Having said that, if you do have an app that has been programmed to take advantage of GPU hardware acceleration on Apple Silicon then, yes, by all means, you can make use of that.
 
Many tasks aren't easily parallelizable. A GPU needs work that can be split into really small pieces and run simultaneously across 1024 cores or so to get good performance, so it's not an automatic win. Plus there is latency and overhead in launching a task on the GPU (less on Apple Silicon, because there is no need to copy data from RAM to VRAM).

What would you like to see run on the GPU?
 
Already, the CPU cores in Apple Silicon are insanely fast for most of the work they do. And they do that work with minimal overhead and very little juice, most of the time. A GPU-oriented task requires significant preparation before it can run (prep done by CPU logic): unless the work is very heavy and extensive, setting it up for the GPU would almost certainly cost as much as or more than what you gain by running it on the GPU.

It is kind of like having a vehicle with several engines: together they might get you going really fast, but once you are at speed, you can shut most of them off because you can cruise fine on just one. The GPU is excellent for when you need its power, but a lot of the time you just leave it shut off so as not to waste power needlessly.
 
What things would you want to speed up? GPUs aren't usually used (or useful) for general computing tasks, though there are some tasks that are often run on GPUs to take advantage of their speed. Bitcoin mining and password cracking are two examples that spring to mind, though there are many more. There are frameworks for writing and running code on GPUs, such as CUDA, OpenCL and Metal.
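To give a flavour of what that GPU-side code involves, here's a minimal, untested sketch of dispatching a compute kernel with Metal from Swift. The kernel name add_arrays and the data are placeholders, not taken from any particular app, and error handling is reduced to force-unwraps for brevity.

```swift
import Metal

// The GPU kernel itself is written in the Metal Shading Language.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void add_arrays(device const float *a   [[buffer(0)]],
                       device const float *b   [[buffer(1)]],
                       device float       *out [[buffer(2)]],
                       uint i [[thread_position_in_grid]]) {
    out[i] = a[i] + b[i];   // each GPU thread handles one element
}
"""

let device   = MTLCreateSystemDefaultDevice()!
let library  = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "add_arrays")!)

let n = 1_000_000
let a = [Float](repeating: 1, count: n)
let b = [Float](repeating: 2, count: n)
let byteCount = n * MemoryLayout<Float>.stride

// Unified memory on Apple Silicon: these buffers are visible to CPU and GPU alike.
let bufA   = device.makeBuffer(bytes: a, length: byteCount, options: [])!
let bufB   = device.makeBuffer(bytes: b, length: byteCount, options: [])!
let bufOut = device.makeBuffer(length: byteCount, options: [])!

// All of this setup is the CPU-side overhead mentioned above.
let queue         = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder       = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(bufA,   offset: 0, index: 0)
encoder.setBuffer(bufB,   offset: 0, index: 1)
encoder.setBuffer(bufOut, offset: 0, index: 2)
encoder.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: pipeline.maxTotalThreadsPerThreadgroup,
                                                       height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

let result = bufOut.contents().bindMemory(to: Float.self, capacity: n)
print(result[0], result[n - 1])   // 3.0 3.0
```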
 
Basically what the title says: if we have all this power in our Apple Silicon Macs, with really powerful GPUs in particular…is there any way to force apps to use the GPU more to speed things up? Is that even possible?

If not, would app developers consider coding apps to use the GPU more as a sort of performance booster? Or does that just not make sense? I'm thinking for like everyday normal tasks that aren't traditionally GPU-heavy (gaming and video processing/editing).

It seems weird to have a computer that has all this latent power that doesn't get to be used if you don't do a lot of gaming or video editing.
The problem isn't the hardware, it's the developers, who don't want to waste their time developing for hardware that will have limited exposure.

In games, for example, this is the decision you have to make as a developer: how can I make the most money with the least amount of work?

Coding games for Mac (limited hardware and a small user base for this purpose)
Coding games for Windows (where a lot of people have a dedicated Windows gaming machine)
Coding games for console (you're either stuck on a single platform, or limited by slower consoles and different coding styles)

Since Windows gaming has been around the longest and has the biggest reach, it's in developers' interest to code for Windows and not Mac.

The Mac is moving in the right direction: Apple is removing the hurdles to making compatible games, and I suspect that by the end of the M4 lifecycle Apple will be a serious contender in the gaming space.

If Apple had a brain, they would sign a contract with Nintendo to provide the hardware for its next-gen console in exchange for making Macs compatible with Nintendo games. This would be easy for Apple to accomplish and would be a huge victory for both Nintendo and Apple, but I'm not holding my breath for this to happen, sadly :(
 
If Apple had a brain, they would sign a contract with Nintendo to provide the hardware for its next-gen console in exchange for making Macs compatible with Nintendo games. This would be easy for Apple to accomplish and would be a huge victory for both Nintendo and Apple, but I'm not holding my breath for this to happen, sadly :(

Apple could be willing to do that, but the Japanese mindset could prove to be an obstacle.

 
The problem isn't the hardware, it's the developers, who don't want to waste their time developing for hardware that will have limited exposure.

In games, for example, this is the decision you have to make as a developer: how can I make the most money with the least amount of work?

Coding games for Mac (limited hardware and a small user base for this purpose)
Coding games for Windows (where a lot of people have a dedicated Windows gaming machine)
Coding games for console (you're either stuck on a single platform, or limited by slower consoles and different coding styles)

Since Windows gaming has been around the longest and has the biggest reach, it's in developers' interest to code for Windows and not Mac.

The Mac is moving in the right direction: Apple is removing the hurdles to making compatible games, and I suspect that by the end of the M4 lifecycle Apple will be a serious contender in the gaming space.

If Apple had a brain, they would sign a contract with Nintendo to provide the hardware for its next-gen console in exchange for making Macs compatible with Nintendo games. This would be easy for Apple to accomplish and would be a huge victory for both Nintendo and Apple, but I'm not holding my breath for this to happen, sadly :(
The original question was not about games. And the issue is not that developers don't want to develop for specific hardware; it's that GPU acceleration isn't applicable to everything.
 
As mentioned in earlier replies, it is possible to code some things to run on either the CPU or the GPU. Concerning numerics, for instance, Apple provides the Accelerate libraries for performing different types of calculations on the CPU; I use them to solve sets of linear equations. On the GPU side, Apple provides the Metal Performance Shaders framework, which can also be used (in a more restricted sense) to solve similar problems. More broadly speaking, the languages most used for the CPU and GPU are Swift and the Metal Shading Language, respectively; each is used to code for its own side.
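As a concrete illustration of the CPU side, here is a rough sketch of solving a small linear system with LAPACK's dgesv routine via Accelerate. The matrix and right-hand side are made-up values; note that the classic CLAPACK entry points are marked deprecated in newer SDKs in favour of an updated LAPACK interface, though they still compile.

```swift
import Accelerate

// Solve A·x = b on the CPU with LAPACK's dgesv via Accelerate.
// LAPACK expects column-major storage; this example matrix is symmetric,
// so the layout doesn't matter here.
//
//   A = | 3  1 |      b = | 9 |      expected x = | 2 |
//       | 1  2 |          | 8 |                   | 3 |
var a: [Double] = [3, 1,    // column 0
                   1, 2]    // column 1
var b: [Double] = [9, 8]    // right-hand side; overwritten with the solution

var n: __CLPK_integer    = 2     // order of A
var nrhs: __CLPK_integer = 1     // number of right-hand sides
var lda: __CLPK_integer  = 2
var ldb: __CLPK_integer  = 2
var ipiv = [__CLPK_integer](repeating: 0, count: Int(n))
var info: __CLPK_integer = 0

dgesv_(&n, &nrhs, &a, &lda, &ipiv, &b, &ldb, &info)

print(info == 0 ? "x = \(b)" : "solve failed, info = \(info)")   // x = [2.0, 3.0]
```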
 
More broadly speaking, the languages most used for the CPU and GPU are Swift and the Metal Shading Language, respectively; each is used to code for its own side.
FWIW, the Accelerate and Metal frameworks are also exposed to developers through their C/C++ header files (.h), so you aren't limited to Swift. In fact, only a small part of the published Darwin and Cocoa libraries is exposed as Swift-only.
 
FWIW, the Accelerate and Metal frameworks are also exposed to developers through their C/C++ header files (.h), so you aren't limited to Swift. In fact, only a small part of the published Darwin and Cocoa libraries is exposed as Swift-only.
I'm trying to do as much as possible in Swift. I use Accelerate for vDSP transforms, and Metal Performance Shaders when I have much larger sets of equations for linear algebra. I am not an application developer, and I only use a very small fraction of the libraries for specific numerical calculations, but I am still amazed at what the iMac can accomplish. I am from the slide rule times!
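For anyone curious, here's a tiny CPU-side sketch of the kind of thing the vDSP part of Accelerate does, using the Swift overlay. The signal is made up, and a real transform would go through vDSP's FFT/DFT setup objects in the same style.

```swift
import Accelerate
import Foundation

// A synthetic 4-cycle sine wave as input data.
let count = 1024
let signal = (0..<count).map { sin(2.0 * .pi * 4.0 * Double($0) / Double(count)) }

// Generate a Hann window and apply it, all vectorized on the CPU's SIMD units.
let window = vDSP.window(ofType: Double.self,
                         usingSequence: .hanningDenormalized,
                         count: count,
                         isHalfWindow: false)
let windowed = vDSP.multiply(signal, window)

// A simple vectorized reduction: total energy of the windowed signal.
let energy = vDSP.sumOfSquares(windowed)
print(energy)
```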
 
The original question was not about games. And the issue is not that developers don't want to develop for specific hardware; it's that GPU acceleration isn't applicable to everything.
Both things might be true. Certain tasks are difficult to parallelize, though "difficult" does not always mean impossible. And if the developer's ROI is low, they are not going to code for the GPU (it also requires special skills).
 
It is fun to think about calculations on the GPU vs. the CPU ... but the speed differs with the structure of the input when performing numerical calculations. On the CPU side, Accelerate provides sparse matrix calculations, but I don't see an equivalent on the GPU side ... presumably because either sparsity doesn't lend itself to parallel calculations or the set-up 'costs' would be prohibitive.
 
I love this. I am not quite from those times, but my grandfather taught me how to use one when I was little and I still have it somewhere. He gave me his!
When I tried writing my first, little code ... there were no screens (except window screens) and no mice (except real ones).
 
When I tried writing my first, little code ... there were no screens (except window screens) and no mice (except real ones).
My grandfather (the same one who gave me the slide rule) designed war ammunition using punch card-based systems. Crazy, right?
 
I know this is not related ... but back 'then', of course, there was no GPU, and the arithmetic logic unit was optional. The machine I first 'encountered' ... was a GE 235 mainframe.
 
My grandfather (the same one who gave me the slide rule) designed war ammunition using punch card-based systems. Crazy, right?
The mainframe I first experienced calculated ICBM trajectories. Pretty sad, but motivated by the missile competition.
 
Talking about all that power just sitting there, I’m still trying to get any use out of that second D700 GPU in the 6,1 ;)
 
GPU means Graphics Processing Unit. Hence the acceleration for those graphics-heavy tasks you mentioned, OP.
 
Basically what the title says: if we have all this power in our Apple Silicon Macs, with really powerful GPUs in particular…is there any way to force apps to use the GPU more to speed things up? Is that even possible?

If not, would app developers consider coding apps to use the GPU more as a sort of performance booster? Or does that just not make sense? I'm thinking for like everyday normal tasks that aren't traditionally GPU-heavy (gaming and video processing/editing).

It seems weird to have a computer that has all this latent power that doesn't get to be used if you don't do a lot of gaming or video editing.

Yes, you DO use the GPU when you are just running an email app. It takes a huge amount of computation to smoothly scroll text. Before we had mouse-driven user interfaces, computers mainly worked on the actual problem at hand, but today the most computationally intense thing computers do is run the screen and move and scroll windows. You can't do that well without a GPU. So the GPU is constantly in use even if you are just doing online shopping and reading email.
 
Basically what the title says: if we have all this power in our Apple Silicon Macs, with really powerful GPUs in particular…is there any way to force apps to use the GPU more to speed things up? Is that even possible?

If not, would app developers consider coding apps to use the GPU more as a sort of performance booster? Or does that just not make sense? I'm thinking for like everyday normal tasks that aren't traditionally GPU-heavy (gaming and video processing/editing).

It seems weird to have a computer that has all this latent power that doesn't get to be used if you don't do a lot of gaming or video editing.
Wouldn't it be easier to buy an 18-, 19-, or 20-inch MacBook Pro with an M3 Ultra and double the GPU cores?
 