Apple forgot about Hardware Raytracing Cores again... 😩

According to some rumors / reports we should have gotten them, but the engineers discovered too late in the dev cycle that the raytracing GPU drew too much power, so they scrapped that design and went with a backup plan to deliver an improved GPU without it.

Had the raytracing GPUs not drawn the extra power, we probably would have gotten M2s with that feature, and earlier than we did; it appears plans were in place to release the M2 Pro/Max last October.
 
Are they good enough to play games? I mean, how do they compare with a PC with a graphics card? For productivity the numbers are good, I guess? How do they compare in similar tasks with a fast PC?
Does Word open faster than on a PC? Do they boot faster? And so on. Can I play Doom 1 at 5K res? ;)
 
Apple forgot about Hardware Raytracing Cores again... 😩
I love my Macs - M1 Max laptop and M1 Ultra desktop. The unified RAM and advanced multiple video encode/decode engines do wonders for video editing.

But my 4090-based PC just absolutely destroys it for 3D workflows or AI/tensor work. It’s like 7-8x faster. It’s also massive, loud, and sucks down like 600-800W…

These are machines built for some very narrow use cases.
 
"Don't Believe the Hype: Apple's M2 GPU is No Game Changer"

When did Apple even mention that the M2 is a game changer? People should stop lying.
Tim Cook said so on Jan 17, 2023 @ 7:01 AM

[attached image: cook.png]
 
Shocking that newer chips are faster 🤔
That's not the point. The point is *how much* faster.

Another interesting datapoint is this tweet, which gives AMX performance.
In theory you might think AMX hasn't changed much, and for the largest matrices this is true. (The maximum possible performance is only about 7% or so faster, in line with the frequency boost).
But what HAS changed is that the design has been tweaked so that it's rather easier to approach the maximum performance without losing cycles to overhead. Compare M1, which tops out at about 80% of peak and drops further when multiple clients try to use AMX simultaneously, with M2, which doesn't suffer from multiple clients and can get very close to 100% of peak once you have a few independent clients that can all interleave execution:


I suspect less dense AMX tasks (eg complex filters on long vectors? FFTs?) will show even more of an improvement, but people never show benchmarks for those :-(
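If you want to probe the dense-GEMM side yourself, here's a minimal sketch of the usual approach: time large single-precision GEMMs through Accelerate, which (as far as is publicly understood, Apple doesn't document it) get routed to the AMX units on Apple Silicon. The matrix size, iteration count, and the "assumed peak" figure below are placeholders, not measured specs.

```swift
import Accelerate
import Foundation

// Rough single-precision GEMM throughput probe. Assumes Accelerate sends
// large sgemm calls to the AMX units; the peak figure is a placeholder.
let n = 2048
let a = [Float](repeating: 1.0, count: n * n)
let b = [Float](repeating: 1.0, count: n * n)
var c = [Float](repeating: 0.0, count: n * n)

let iterations = 20
let start = Date()
for _ in 0..<iterations {
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                Int32(n), Int32(n), Int32(n),
                1.0, a, Int32(n), b, Int32(n),
                0.0, &c, Int32(n))
}
let seconds = Date().timeIntervalSince(start)

// Each GEMM is roughly 2*n^3 floating-point operations.
let gflops = Double(iterations) * 2.0 * Double(n) * Double(n) * Double(n) / seconds / 1e9
let assumedPeakGflops = 1500.0  // placeholder; substitute your chip's figure
print(String(format: "%.0f GFLOPS (~%.0f%% of the assumed peak)",
             gflops, 100.0 * gflops / assumedPeakGflops))
```

Running a few copies of this from independent processes is one way to see the multi-client behaviour described above.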
 
Isn't the consensus

mini < OLED < micro?
MiniLED (LCD with small LEDs for variable backlighting) is good for HDR brightness and is not prone to burn-in. The dimming zones for backlighting can cause blooming with dark backgrounds when the brightness is turned up too high.

OLED (individual LEDs using organic dyes to emit light without a backlight) is good for high contrast and dark blacks. Off-angle viewing can shift colors and burn-in is a concern.


MicroLED (individual LEDs using silicon to emit light without a backlight) has many of the advantages of both OLED and MiniLED. It has black backgrounds and high brightness. Off-angle viewing should be more accurate than OLED. Burn-in is not the problem that it can be with OLED. The problem currently with MicroLED is that no one has production techniques to build high resolution displays at the sizes needed for phones and laptops. Apple has been working on MicroLED for several years but rumors suggest that at best they may be ready to do a Watch sized screen by about 2024/2025.

OLED is good for some things and not for others. Same with MiniLED. Rumors have Apple using OLED in some iPads and MacBooks in another year or so, but that is probably only a transition prior to MicroLED, and they may keep MiniLED for some models.
 
Reality check. The GPU performance is better than Intel integrated graphics, but not significantly better.



In the world of performance graphics, games, etc, 60 fps at 1080p sucks badly. Gaming consoles blow this out of the water. The M2 is stellar for what it does. But speedy high-resolution graphics ain’t it.

You do realize the ENTIRE POINT of this thread is to discuss the M2 Pro (2x graphics of M2) and M2 Max (4x graphics of M2), don't you?

Bringing up the supposed deficiencies of the i3 of the Apple line seems kinda scraping the bottom of the barrel: "Oh yeah, well my server has more CPU power than your Apple Watch; so take that, Apple sux!!!"
 
There’s been some discussion about this on the Mac Mini forum, and the conclusion I came to is:

Once a Mac Mini configuration enters Mac Studio pricing, the best choice is a Mac Studio — unless you need the smaller size of the mini
The above point tends to be true.
But of course the ideal solution is not to buy today's Studio but to wait till the next one is announced. The next obvious date is WWDC, so if you can wait till then...
 
There’s been some discussion about this on the Mac Mini forum, and the conclusion I came to is:

Once a Mac Mini configuration enters Mac Studio pricing, the best choice is a Mac Studio — unless you need the smaller size of the mini
Or you prefer the higher CPU performance of the M2 Pro in the Mini over the M1 Max in the Studio.

The Max brings more GPU and the Studio brings a couple more ports, but those may not be needed by everyone. It really isn't a simple dividing line.
 
Why isn't the M1 Ultra twice as fast as the M1 Max? I thought the Ultra was 2 Max chips stuck together.

This is an interesting question, and depends on the exact aspect of performance.
If you have INDEPENDENT pieces of code running on the CPUs, for example, the Ultra is essentially twice as fast as the Max/Pro (look at eg the GB5 multicore score).

But when the code running on different cores (CPU or GPU) needs to interact (eg one core needs to wait until another core has finished its work) multiple issues kick in.

- The most obvious is that everyone has to wait for the slowest piece of work (you may have 16 tailors, but the suit isn't ready until the slowest tailor working on the most complicated part is finished); there's a small sketch after this list that illustrates it.

- More technical (but probably more important) is that it's not easy for one core to communicate with another at high speed. There is a lot of protocol overhead (exactly what info needs to be communicated, in what order), and a lot of HW overhead (to get from a GPU core on one chiplet to a GPU core on the other, a transaction has to pass through multiple routers that decide where next to send the transaction, along with delays in buffers that match different voltages or different frequencies between different IP blocks).
The LOCAL connections between the GPU cores also start to get clogged once you have too many cores all trying to talk to each other, and you need to build a "second freeway" to prevent these sorts of traffic jams.
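To make the "slowest tailor" point concrete, here's a toy sketch in Swift (nothing Apple-chip-specific; the chunk sizes are made up for illustration): split a job into 16 chunks, give one chunk 4x the work, and the wall-clock time ends up set by that one chunk no matter how many cores you throw at it.

```swift
import Foundation

// Toy illustration: wall-clock time for a parallel job is bounded by its
// slowest chunk. Chunk 7 is given 4x the work of the others, so the whole
// run takes roughly as long as that chunk alone.
let chunkWork = (0..<16).map { $0 == 7 ? 4_000_000 : 1_000_000 }

let start = Date()
DispatchQueue.concurrentPerform(iterations: 16) { i in
    var acc = 0.0
    for k in 0..<chunkWork[i] { acc += sin(Double(k)) }  // dummy work
    precondition(acc.isFinite)  // keep the loop from being optimized away
}
print("Wall time: \(Date().timeIntervalSince(start)) s")
```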

Even Max starts to suffer from this. M1 and M2 Pro Metal scores are about 2x M1 and M2, but the Max score is only about 1.65x the Pro score. Then Ultra is about 1.45x the Max score.
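Putting those rough ratios together (just the approximate Metal figures quoted above, nothing more precise) shows how far the compounding lands from ideal:

```swift
import Foundation

// Compounding the approximate Metal ratios from the post:
// Pro ≈ 2x the base chip, Max ≈ 1.65x Pro, Ultra ≈ 1.45x Max.
let proOverBase  = 2.0
let maxOverPro   = 1.65
let ultraOverMax = 1.45

let maxOverBase   = proOverBase * maxOverPro     // ≈ 3.3x (ideal: 4x)
let ultraOverBase = maxOverBase * ultraOverMax   // ≈ 4.8x (ideal: 8x)

print(String(format: "Max ≈ %.1fx base, Ultra ≈ %.1fx base", maxOverBase, ultraOverBase))
```

So Max ends up around 3.3x the base GPU where perfect scaling would give 4x, and Ultra around 4.8x where it would give 8x.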

Apple is well aware that scaling on Ultra (and even Max) was sub-optimal, and I think they shipped Ultra essentially as an experiment (even more so than the rest of the M1 line) to see what the most serious pain points were. If you look at the patent record, they have already patented both a new cache protocol and a new Network on Chip that are designed for scalability across multiple chiplets, and are informed by what they learned from Ultra.

Of course who knows when patents will turn into products, but I suspect that we will see rather better scaling at the high end (both Max and Ultra) in the next generation.
 
Oy looks like the GPU scaling issue is still there.
Maybe Apple will get that sorted out for M3.
 
At least we can rest assured that future versions of Mac OS will taketh away what M chips giveth. It’s the way things work.
The biggest gain has been with AS Macs having faster unified memory bandwidth for both RAM and SSD, and not relying on an older discrete GPU that falls behind in performance. Additionally, SSD read/write speed is what affects your macOS loading speed, besides available RAM. So if you purchase an AS Mac with more than the baseline RAM/SSD, you are extending its useful life in terms of future macOS loading, application, and shutdown speeds.

Also, it should be noted that Ventura is a lot faster on an M1 Mac than the first Big Sur release that supported those machines. Ventura seems so fast shutting a Mac down compared to Monterey on an M1 Mac.
 
Curious to know the source?

People are making that speculation because the 14" M2 Pro and Max now have different weights listed in Apple's specs. With M1, the 14" Pro and Max had the same weight, but the 16" Pro and Max had different weights.
 
Apple forgot about Hardware Raytracing Cores again... 😩
The M2 (and Pro and Max) are basically updated A15s. No one expected anything different.

The new CPU and GPU designs were created for N3, but COVID delayed N3, and so...
 