Basic physics… The more transistors you switch in your GPU, the more wattage you pull, and the higher the TF numbers you can hit for both FP16/FP32. This is independent of memory power consumption (which is already very low for GDDR6) and of SIMD architecture.

- What do you mean by "Apple has a process advantage", and which processes exactly are you talking about?

- Both AMD and NVIDIA have more experience than Apple when it comes to developing high-performance GPU architectures, and their power consumption is tied to their maximum performance and node sizes. They are designed to deliver the industry's maximum FP32 performance, including superior ray-tracing performance (which Apple lacks), GPU-accelerated Tensor cores for ML applications (DLSS, for example), INT4/INT8 operations, and mesh shading. And all these features are actually accessible to developers, while Apple offers none of these capabilities yet…
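The "basic physics" point and the process question above can both be sketched with the standard CMOS dynamic-power relation, P ≈ α·C·V²·f: a newer fabrication node lowers switched capacitance and operating voltage, which is exactly where a "process advantage" would show up. The numbers below are purely illustrative, not measured Apple/NVIDIA figures.

```python
# Rough sketch of CMOS dynamic (switching) power: P ~= alpha * C * V^2 * f.
# alpha = activity factor, C = switched capacitance, V = voltage, f = clock.
# All values below are illustrative placeholders, not real chip specs.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic power of a switching CMOS block, in watts."""
    return alpha * c_farads * v_volts**2 * f_hz

# Same logic block on a newer node: lower capacitance, lower voltage.
older = dynamic_power(alpha=0.2, c_farads=1.0e-9, v_volts=1.00, f_hz=1.5e9)
newer = dynamic_power(alpha=0.2, c_farads=0.7e-9, v_volts=0.85, f_hz=1.5e9)

print(f"older node: {older:.3f} W per block")
print(f"newer node: {newer:.3f} W per block")
print(f"savings:    {1 - newer/older:.0%}")
```

Because voltage enters squared, even a modest voltage drop on a newer node cuts power substantially at the same clock, which is why node size matters independently of architecture.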

Ray-tracing is supported in Metal, but maybe it's not as performant as what AMD and Nvidia offer.

Metal for Accelerating Ray Tracing

 
Basic physics… The more transistors you switch in your GPU, the more wattage you pull, and the higher the TF numbers you can hit for both FP16/FP32. This is independent of memory power consumption (which is already very low for GDDR6) and of SIMD architecture.

- What do you mean by "Apple has a process advantage", and which processes exactly are you talking about?

- Both AMD and NVIDIA have more experience than Apple when it comes to developing high-performance GPU architectures, and their power consumption is tied to their maximum performance and node sizes. They are designed to deliver the industry's maximum FP32 performance, including superior ray-tracing performance (which Apple lacks), GPU-accelerated Tensor cores for ML applications (DLSS, for example), INT4/INT8 operations, and mesh shading. And all these features are actually accessible to developers, while Apple offers none of these capabilities yet…
I think he/she is referring to the fabrication process: 5nm vs 7nm.
 
Give me the M1 Pro or Max chips in an iMac or Mac Mini and maybe, just maybe, I'll be tempted…
 
Can’t wait to see how well the new MBP GPUs handle Fusion projects within DaVinci Resolve so I can get a better idea of which model to save up for.

Gaming would be nice on these, but the MMO I mainly play (Guild Wars 2) already dropped support for Mac when the M1 models came out. I plan on playing New World by Amazon Game Studios, but that is staying Windows-only, so that would not work either.
 
Not how that works. Apple Silicon chips are a lot more efficient, so they are getting 10.4TF at the power draw they advertise (60W). The numbers aren’t directly comparable to Nvidia / Sony for that reason.

Their CPUs are efficient because of ARM64.

But explain to me how their GPUs are more efficient? How do they magically pull more FP32 performance? Are they magically switching fewer transistors but getting more TFlops? Please don’t confuse GPU performance with ARM64 vs. x86 efficiency.
 
Apple really has hit the ground running… I can see a gaming-focused Apple TV owning gaming in a couple of years.

Having a PS5, all I can say is that apart from the faster storage and the lack of noise, I can’t honestly say it’s a drastic improvement over the PS4. I spend most of my time playing PS4 games on it via the 2TB USB external HDD too 😝
 
I am no hardware product designer, but I sure think Apple should build a Nintendo Switch-sized gaming console; with these chips it would be faster than anything the market has seen.
I think they have that in the iPad mini… they just need Apple first-party controllers, and to allow games in the App Store to require a controller. Developers reject porting to iOS because Apple requires touch controls.
 
Ray-tracing is supported in Metal, but maybe it's not as performant as what AMD and Nvidia offer.

Metal for Accelerating Ray Tracing


It’s indeed supported in their API, but it’s not hardware-accelerated. Apple chips don’t have dedicated hardware to accelerate BVH traversal for ray-tracing.
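To make the hardware-acceleration point concrete: the hot inner loop of BVH traversal is a ray-vs-bounding-box "slab test" run millions of times per frame. Dedicated RT hardware executes this in fixed-function units; without it, the GPU runs the equivalent in shader code. A minimal sketch of that test (illustrative Python, not Metal or RTX code):

```python
# Illustrative sketch of the ray-vs-AABB "slab test" at the heart of
# BVH traversal. inv_dir holds 1/direction per axis (use a huge value
# as a stand-in for infinity on axes where the direction is zero).

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Return True if a ray starting at `origin` intersects the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))  # latest entry across slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across slabs
    return t_near <= t_far

# Ray along +x from the origin, box directly ahead of it:
print(ray_hits_aabb((0, 0, 0), (1.0, 1e9, 1e9), (2, -1, -1), (4, 1, 1)))
```

Running this per-node, per-ray in shader ALUs is what costs software ray tracing its performance relative to GPUs with dedicated traversal units.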
 
If people are looking for a game, EVE Online has a native Mac client now. It runs beautifully on M1.
 
What’s most interesting about all of this is what is possible in the future. Imagine a MacBook Air, or at least a cheaper MacBook, or an inexpensive Mac Mini with high-performance graphics. We could see it down the road. These could become solid gaming machines. But also imagine an Apple TV with this sort of graphics power. It could truly become a AAA gaming device hooked to a TV, as long as developers get on board. The possibilities have suddenly become astounding with what Apple could do in the future. Remember, we’re still at the beginning, with another year of switching Apple’s computer line-up to Apple Silicon. What will they give us in the spring with a Mac Pro or a boosted Mac Mini? Heck, we still have some missing iMacs. The next year will really be interesting.
 
There's no BS here, the M1 Max really is going to compete with the top-end discrete laptop GPUs at a fraction of the power.

But not at a fraction of the cost. And you have decades of hostility between Apple and game developers over Apple’s hardware and software choices, including the Apple/NVIDIA disputes (whatever they are and whoever caused them), Apple’s insistence on Metal as the primary Mac graphics API, and Apple’s lukewarm support for secondary graphics stacks (usually AMD-based) instead of DirectX.
 
If you actually believe Apple’s charts, you might as well believe the marketing charts from all the other companies…

Good luck believing a 60W laptop can actually do a sustained 10.4TF.

I can imagine they could lie a little in their marketing claims, but I can’t imagine they could lie by 70%. I believe this will be one of the first things every tech blog measures when they get a chance. So we will see...
 
They are coming. WoW by Blizzard runs natively on M1, and EVE Online is adding native M1 support soon. Unreal Engine 5 is still in early access but has native M1 support. Big game studios appear to be spending some energy here. Good things to come.

If those are the two benchmarks being used to gauge "spending energy", then we shouldn't expect much progress for years to come.
 
Their CPUs are efficient because of ARM64.

But explain to me how their GPUs are more efficient? How do they magically pull more FP32 performance? Are they magically switching fewer transistors but getting more TFlops? Please don’t confuse GPU performance with ARM64 vs. x86 efficiency.
Their GPUs are also custom-designed and based on licensed ARM tech. I wouldn’t doubt they are more efficient. But the entire chip is an SoC, so all the resources the GPU needs to keep going are right there, and it performs better too. Don’t forget Apple also put a powerful cooling system on it.
 
Basic physics… The more transistors you switch in your GPU, the more wattage you pull, and the higher the TF numbers you can hit for both FP16/FP32. This is independent of memory power consumption (which is already very low for GDDR6) and of SIMD architecture.

- What do you mean by "Apple has a process advantage", and which processes exactly are you talking about?

- Both AMD and NVIDIA have more experience than Apple when it comes to developing high-performance GPU architectures, and their power consumption is tied to their maximum performance and node sizes. They are designed to deliver the industry's maximum FP32 performance, including superior ray-tracing performance (which Apple lacks), GPU-accelerated Tensor cores for ML applications (DLSS, for example), INT4/INT8 operations, and mesh shading. And all these features are actually accessible to developers, while Apple offers none of these capabilities yet…

I have replied to your post here: #48

If you actually believe Apple’s charts, you might as well believe the marketing charts from all the other companies…

Good luck believing a 60W laptop can actually do a sustained 10.4TF.

Well, a 10W laptop can do sustained 2.6TF. I don't understand why this is such a difficult thing to believe.
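The advertised TF figures are also just arithmetic, not a claim that needs trusting: peak FP32 TFLOPS = ALU count × 2 (a fused multiply-add counts as two FLOPs) × clock. A quick sketch, assuming the commonly cited estimates of 128 FP32 ALUs per Apple GPU core and a roughly 1.27 GHz GPU clock (public estimates, not official Apple specs):

```python
# Back-of-envelope peak FP32 throughput for Apple Silicon GPUs.
# Assumptions (estimates, not official specs): 128 FP32 ALUs per GPU
# core, ~1.27 GHz GPU clock; FMA counts as 2 FLOPs per cycle.

def fp32_tflops(gpu_cores, alus_per_core=128, clock_ghz=1.27):
    """Theoretical peak FP32 TFLOPS for a given GPU core count."""
    return gpu_cores * alus_per_core * 2 * clock_ghz / 1000

print(f"M1 (8 cores):      {fp32_tflops(8):.1f} TF")   # ~2.6
print(f"M1 Pro (16 cores): {fp32_tflops(16):.1f} TF")  # ~5.2
print(f"M1 Max (32 cores): {fp32_tflops(32):.1f} TF")  # ~10.4
```

Under those assumptions the 10.4TF figure for the 32-core part falls straight out of scaling the same cores that already sustain 2.6TF at around 10W, which is the point being made above.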
 
Can’t wait to see how well the new MBP GPUs handle Fusion projects within DaVinci Resolve so I can get a better idea of which model to save up for.

Gaming would be nice on these, but the MMO I mainly play (Guild Wars 2) already dropped support for Mac when the M1 models came out. I plan on playing New World by Amazon Game Studios, but that is staying Windows-only, so that would not work either.
On Apple's overview page it shows a DaVinci Resolve test comparing a 16" MBP with i9, 5600M, 8GB VRAM and 64GB RAM against the maxed-out Pro and Max.

Test was a lens flare effect on a 10-second UHD project at 24 fps.

M1 Pro 16-core GPU - 1.4x speed
M1 Max 32-core GPU - 1.9x speed

Not quite as fast as I had hoped/wished for, but plenty enough to replace my old 16" with eGPU and ensure less noise.
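Those two numbers also hint at why doubling the GPU cores didn't double the result: going from 16 to 32 cores only improved the speedup from 1.4x to 1.9x, i.e. about 1.36x from 2x the cores. A quick Amdahl's-law fit estimates how much of the runtime actually scales with GPU cores (a simplification, since clocks, bandwidth, and media engines differ between the chips too):

```python
# Amdahl's-law sketch: doubling GPU cores (16 -> 32) improved the Resolve
# result from 1.4x to 1.9x over the Intel baseline. Solving
#   speedup = 1 / ((1 - p) + p / 2)
# for p, the fraction of the 16-core runtime that scales with GPU cores.
# This ignores clock, bandwidth, and encoder differences between chips.

observed = 1.9 / 1.4                 # speedup gained from doubling cores
p = 2 * (1 - 1 / observed)           # rearranged Amdahl's law for n = 2
print(f"~{p:.0%} of the 16-core runtime scales with GPU cores")
```

Under that rough model, only about half of this particular workload is limited by GPU core count, which would explain why the 32-core result is "plenty" but not dramatic.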
 
I've been waiting for this announcement to see how fast the GPU is, in conjunction with eyeing up the Steam Deck and Valve pushing for more Linux gaming, and trying to push cross-compatibility. Vulkan and MoltenVK look interesting for potential cross-platform Mac gaming in the not too distant future from what I understand.

I know very little about it to be fair, and it needs to get games companies fully on board (and Apple too really), but I know they are things that look interesting potentially!

If you know more about these, the pros and cons of MoltenVK, etc, and the likelihood of more cross-platform gaming which includes the Mac in future, let me know more interesting stuff or your opinion on it... :)
 
I don't need the better graphics performance of the Max, but do people think it's worth moving to the 24-core variant (+$180) to get both the better graphics and the improved 400GB/s memory bandwidth vs the M1 Pro's 200GB/s?

I'm not sure if that memory speed mostly helps with graphics, or if I'd actually see any significant impact in other daily usage:

75% web browsing,
15% Xcode/software compiling,
maybe very minimal (10%) video editing or Lightroom work in the future.
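Whether the extra bandwidth matters depends on whether a workload is memory-bound at all, which a simple roofline-model check sketches. Using Apple's advertised TF figures and bandwidths (the per-workload conclusion is illustrative, not a measurement):

```python
# Roofline-model sketch: the "ridge point" is the arithmetic intensity
# (FLOPs per byte of memory traffic) below which a kernel is limited by
# memory bandwidth rather than compute. TFLOPS/bandwidth figures are
# Apple's advertised numbers; the interpretation is a rough guide only.

def ridge_point(peak_tflops, bandwidth_gbs):
    """FLOPs/byte needed to saturate compute instead of memory."""
    return peak_tflops * 1000 / bandwidth_gbs

print(f"M1 Pro (5.2 TF, 200 GB/s):  {ridge_point(5.2, 200):.0f} FLOPs/byte")
print(f"M1 Max (10.4 TF, 400 GB/s): {ridge_point(10.4, 400):.0f} FLOPs/byte")
```

The ridge point is the same for both chips because bandwidth doubled along with compute; the takeaway is that the 400GB/s mainly feeds the bigger GPU, so CPU-bound daily work like browsing and compiling is unlikely to feel the difference.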
 