My M3 Max 16c/40c/64GB is a monster. I love pushing it every day and having it never push back. I can't even comprehend an M3 Extreme...

When my house is paid off around 2030 I'm going to buy an M9 Extreme just because I can and probably only use 20% of its power on my wildest days. I will be the poster child for Mac fanboy with money to burn and finally achieve my dreams!

Probably still won't buy a Vision Pro, though. I have standards.
 
TLDR: It appears that, if Apple were to produce a monolithic M3 Ultra, it would need to be smaller than two M3 Max's, even if they ditched the efficiency cores (which, being so small, wouldn't help much):

[...]

However, if my extrapolation above is roughly correct, it would not be possible to produce a monolithic M3 Ultra that retained all the elements of two M3 Max's.

Also, pushing the reticle limit is probably expensive, because the larger area gives a higher chance of fatal defects.

Thus it doesn't seem like it would make sense for Apple to offer a monolithic Ultra in place of a 2x Max, since it would be less capable, and also require substantially more development costs.
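To put a rough number on the defect-rate point, here is a minimal sketch using the textbook Poisson yield model. The defect density and die areas are assumptions purely for illustration, not measured or published figures:

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Probability that a die of the given area has zero fatal defects."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

D0 = 0.1            # assumed defects per cm^2 (illustrative only)
max_area = 450.0    # assumed M3 Max-class die, mm^2
mono_area = 900.0   # hypothetical near-reticle monolithic Ultra, mm^2

for name, area in [("Max-class die", max_area), ("monolithic Ultra", mono_area)]:
    y = poisson_yield(area, D0)
    print(f"{name:>17}: {area:4.0f} mm^2 -> {y:.1%} of dies defect-free")

# exp(-0.45) ~ 64% vs exp(-0.90) ~ 41%: every fatal defect on the big die
# scraps twice the silicon, and unlike two smaller dies there is no option
# to salvage the good half as a lower-binned part.
```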
I agree with your reasoning but I think your facts are slightly off the mark. (I am not certain though and would welcome hard data.)

First, as best I can recall, the reticle limit is around 860mm^2. (As you point out, it's obviously over 800 since nvidia has a chip that large.) Second, I'm not aware of anyone giving a precise measure of the M3 Max's size, but I remember an estimate of around 400mm^2. I think that that was based on photos, so it may not be accurate if Apple doctored the photos, as they did when they cropped the Ultrafusion out of the M1 Max die shots.

That would suggest that a full 2xMax single chip could be made. But I agree with you that it's unlikely - there would be a significant impact from yields, and such a chip would have a lot of extra hardware that's really not necessary.

While I don't think it's likely that Apple will make a new mask for a monolithic Ultra, there is one argument in its favor - that Apple did exactly that for the M3 Pro. In previous generations, the Pro was just a cut-down Max. Now it's not. It seems that Apple found a cost benefit in making separate masks for the Pro. Could they find a similar benefit for the Ultra? I doubt it. But I hope to be wrong!

If I am, then the rest of your analysis (not quoted) seems on the mark - they can drop some stuff from the new Ultra (display controllers, for example) in favor of more cores.
 
While I don't think it's likely that Apple will make a new mask for a monolithic Ultra, there is one argument in its favor - that Apple did exactly that for the M3 Pro. In previous generations, the Pro was just a cut-down Max. Now it's not. It seems that Apple found a cost benefit in making separate masks for the Pro. Could they find a similar benefit for the Ultra? I doubt it. But I hope to be wrong!
Ah, that's a good point! Though the Pro is a high-volume chip, and Apple saved money by turning it into a cut-down version. Alas, none of that would apply to offering monolithic M3 Ultra and 2x M3 Ultra options for the Studio.
I agree with your reasoning but I think your facts are slightly off the mark. (I am not certain though and would welcome hard data.)

First, as best I can recall, the reticle limit is around 860mm^2. (As you point out, it's obviously over 800 since nvidia has a chip that large.) Second, I'm not aware of anyone giving a precise measure of the M3 Max's size, but I remember an estimate of around 400mm^2. I think that that was based on photos, so it may not be accurate if Apple doctored the photos, as they did when they cropped the Ultrafusion out of the M1 Max die shots.
1) Reticle limit: I gave Anton Shilov's article as a reference for the 858 mm^2 figure, and you're saying you think that's "slightly off the mark" because you have a recollection that it's "around 860 mm^2"? That doesn't make sense—858 mm^2 is "around 860 mm^2"! In fact, 860 mm^2 is simply 858 mm^2 rounded to 2 significant figures.

Might you have possibly misread my 858 figure as a different number?

2) M3 Max die size: My extrapolation to >500 mm^2 could certainly be wrong—but I don't think simply recalling a 400 mm^2 estimate is, by itself, a basis for thinking it's wrong. That's because I provided the reasoning behind my calculation (Apple's picture of the M3 wafer, Shilov's estimate of the M3 die size, and my extrapolation based on transistor nos.)—and thus to have a basis for thinking my figure is off, you'd first need to recall the calculation behind the 400 mm^2 figure, and then have a basis for judging its approach to be superior.

If you could provide a reference for that 400 mm^2 figure I'd be happy to go over the person's argument with you. You might be thinking of Ryan Smith's estimate (https://www.anandtech.com/show/2111...-family-m3-m3-pro-and-m3-max-make-their-marks), in which he wrote: "The N3B-built M3 Max should be significantly smaller (under 400mm2?)"—but that didn't make use of Apple's picture of the TSMC M3 wafer, which Shilov in turn used to estimate the die size.

Of course, there is a danger the pic (see below) is purely illustrative, and doesn't show the actual number of M3 dies. But taking it as real does result in a plausible value for the M3's size (146 mm^2).

We can check this by doing an independent extrapolation of the M3's size, from M1. M1 has 16B transistors, and is reportedly 120 mm^2 on N5. According to https://www.anandtech.com/show/1883...n-schedule-n3p-n3x-deliver-five-percent-gains , N3B's size (compared to N5) is 58% for logic, and 95% for SRAM. Using the standard 50% logic/30% SRAM/20% analog mix, and assuming a 91% size for the analog portion (source: https://fuse.wikichip.org/news/7048/n3e-replaces-n3-comes-in-many-flavors/ ), we get an average size reduction of 0.58 x 0.5 + 0.95 x 0.3 + 0.91 x 0.2 = 0.757.

Based on the above, we obtain that the M3 should be 120 mm^2 x 25B/16B x 0.757 = 142 mm^2. That's pretty damn close to my other estimate of 146 mm^2!
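For anyone who wants to poke at the arithmetic, here it is as a few lines of Python. Every input is a figure already quoted above (M1: 16B transistors, 120 mm^2 on N5; M3: 25B transistors; N3B-vs-N5 area ratios of 0.58 logic / 0.95 SRAM / 0.91 analog; 50/30/20 mix):

```python
m1_transistors = 16e9
m1_area_mm2 = 120.0     # reported M1 die size on N5
m3_transistors = 25e9

# Weighted average N5 -> N3B area scaling (50% logic, 30% SRAM, 20% analog)
scale = 0.58 * 0.5 + 0.95 * 0.3 + 0.91 * 0.2
print(f"Average N3B area scaling vs N5: {scale:.3f}")    # 0.757

m3_area = m1_area_mm2 * (m3_transistors / m1_transistors) * scale
print(f"Extrapolated M3 die size: {m3_area:.0f} mm^2")   # ~142, vs ~146 from the wafer photo
```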




[Attached image: Apple's photo of a TSMC wafer of M3 dies]
 
All this is great but until MacOS fixes its broken HiDPI scaling model I don't see the point. I've got a Mac Studio with a bazillion megaflops and frames/second. So what if I can't read text on the current generation of ultra-res monitors like the 57" G9 Neo?
 
Plausible new theory from Max Tech.
Yeah right... Max Tech is usually over-the-top clickbait when it's reporting on anything Apple-related.
It’s true. I’ve legit thought of starting a YouTube channel just doing reactions to MaxTech videos and ridiculing their nonsense. I can’t stand how they made such a big deal of single NAND chips being used on the entry level models as if people were buying Macs with 256 gig drives with the intent of using them for Lightroom.
 
All this is great but until MacOS fixes its broken HiDPI scaling model I don't see the point. I've got a Mac Studio with a bazillion megaflops and frames/second. So what if I can't read text on the current generation of ultra-res monitors like the 57" G9 Neo?
I agree MacOS is subpar for text on non-Retina-type monitors. But that hardly means there's no point to MacOS. It just means that, if you want a great experience using a Mac with text on an external display, you unfortunately need an ASD (5k@27"), Samsung Viewfinity S9 (5k@27"), XDR (6k@32"), or Dell U3224KB (6k@32").

Since the G9 is a gaming monitor, I'm guessing what you want is a single large monitor that works beautifully for both gaming and text. You can't get that with MacOS. It sounds like you decided the gaming was more important than text; otherwise you would have returned the G9 and gotten an ASD or some other 220 PPI display.
 
Plot twist - this new M3 Ultra Chip will itself have an interconnect so Apple can make an M3 Extreme for the Mac Pro.
 
It’s true. I’ve legit thought of starting a YouTube channel just doing reactions to MaxTech videos and ridiculing their nonsense. I can’t stand how they made such a big deal of single NAND chips being used on the entry level models as if people were buying Macs with 256 gig drives with the intent of using them for Lightroom.
I also loved how he made a whole “Apple FINALLY solved this” video about this non-issue after it existed for all of *checks notes* one generation. Yeah… FINALLY.
 
Someone should tag MaxTech here and ask them to explain in more detail. Wouldn’t be surprised if they just evade scrutiny. Last time they posted on the thread that had their single NAND video, they conveniently stopped posting when directly challenged.
 
The difference is that iJustine doesn't pretend to be something she isn't. I haven't seen anyone on these forums cite iJustine to support their wingnut theories, but MaxTech gets brought up over and over again by people duped by the word "tech" in their name.
Never said she pretended to be something she wasn't. I was implying that people put credibility in her reviews simply because she's popular. Try not to read between lines I don't write, especially since it was one sentence.
 
it's Linux + Nvidia RTX or pro accelerator cards, up to the $20K+ H100 and the new Blackwell (OMG, look at Blackwell). Also in the 2D areas (motion animation, image processing), AI gets more and more important. So today I use the Mac to open a remote shell to my Linux workstation.

But look at the fireworks Nvidia is setting off in many key areas, while Apple adds a new camera and new color options to the next iPhone...

The M series is great. But many top-level chip designers have left Apple, and the M-series design has come to an end.
I agree.
Apple really needs to understand that its dominant position is being seriously challenged.
Missing the AI race could prove catastrophic for the company, and I'm absolutely not sure partnering with Google would be a good choice... especially since I have been de-coupling from Google for years (the only service I still use from them is YouTube).
It would be a low blow, after all these years, to have Google creep into macOS in such a profound way...
 
Never said she pretended to be something she wasn't. I was implying that people put credibility in her reviews simply because she's popular. Try not to read between lines I don't write, especially since it was one sentence.

Don't read between lines I don't write. I haven't put any words in your mouth.

You aren't the first person to bring her into the conversation. iJustine, someone who peddles consumer opinions rather than pretending to have knowledge or technical understanding, has been mentioned a surprising number of times in this thread about MaxTech making specious technical claims, given that this has nothing to do with her. My read on that, somewhat reinforced by your response to me, is that people view her as some sort of foil in an argument about MaxTech. It comes across as a whataboutist response.

I don't watch iJustine because I find it vacuous fluff. I don't watch MaxTech for the same reason. I don't mind iJustine because she's just giving opinions. What bothers me about MaxTech is that so many people confuse their breathless nonsense for actual information.

As I said, I've never once seen a thread started here linking to a bogus iJustine technical claim, but MaxTech links pop up like mushrooms. I've never once seen "iJustine presented a plausible theory after reposting a blurry die plot".
 
This isn't complicated.
A render that takes 4 times longer than a render on a machine drawing 4 times the power isn't saving the planet or money.

You're (mostly) right - it's not that complicated.

But you've missed something: how many people have to "render"?

How many people just waste kilowatt hours per day because they've been trained to be profligate?

And how much software is just bloatware, not optimized for a platform?

Much of what people argue about on the internet, with regard to tech, amounts to luxuries.

I'm asserting that in years to come, people will be more aware of the preciousness of the energy they consume.
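For what it's worth, here is the arithmetic behind the quoted render example, with hypothetical wattages (energy = power x time):

```python
# Hypothetical numbers: a fast machine drawing 4x the power of a slow one,
# finishing the same render in 1/4 of the time.
fast_power_w, fast_hours = 400.0, 1.0
slow_power_w, slow_hours = 100.0, 4.0

fast_kwh = fast_power_w * fast_hours / 1000.0
slow_kwh = slow_power_w * slow_hours / 1000.0

print(f"Fast render: {fast_kwh:.2f} kWh")   # 0.40 kWh
print(f"Slow render: {slow_kwh:.2f} kWh")   # 0.40 kWh -- same energy, no savings
```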
 
Ah, that's a good point! Though the Pro is a high-volume chip, and Apple saved money by turning it into a cut-down version. Alas, none of that would apply to offering monolithic M3 Ultra and 2x M3 Ultra options for the Studio.

1) Reticle limit: I gave Anton Shilov's article as a reference for the 858 mm^2 figure, and you're saying you think that's "slightly off the mark" because you have a recollection that it's "around 860 mm^2"? That doesn't make sense—858 mm^2 is "around 860 mm^2"! In fact, 860 mm^2 is simply 858 mm^2 rounded to 2 significant figures.

Might you have possibly misread my 858 figure as a different number?
No, this is just my brain damage. When I reviewed your message while posting, I somehow missed that section and just remembered the reference to Nvidia's chip. Sorry.
2) M3 Max die size: My extrapolation to >500 mm^2 could certainly be wrong—but I don't think simply recalling a 400 mm^2 estimate is, by itself, a basis for thinking it's wrong.
I agree, that's why I asked for more data... which you kindly supplied. Between that and the @treehuggerpro post, I accept your estimate pending any better data. (Which honestly I'm surprised we don't have... I thought I remembered someone delidding an M3 Max and posting stuff about it, but the only thing I found on that just now was not useful.)

As I said initially, I think your analysis is accurate regardless of the fine details. In particular, we agree that they could do something pretty close to an integrated Ultra if they dropped some blocks they don't need, but that it would be huge and very expensive.

Actually I have one possible argument in favor of them doing that. I can imagine Apple thinking that they want to build experience with larger designs because they think that more massively parallel GPUs, and possibly CPUs, will be important in the future for better xR. If so they could see building a monolithic ultra as an investment in the future for the Vision line, for a time when the power and silicon budget can handle that many cores (<2nm). I don't think this is likely, but it's not crazy either.
 
Given that the M4 generation will likely have big modifications to the Neural Engine to support AI, and likely Thunderbolt 5 / USB4v2, I would be very wary of an M3 Ultra chip if it doesn't have those features. In fact, I would skip it.
Why do you think that is likely?

If you're thinking of Apple doing this to complement the work they are doing to catch up on AI in their next *OS, that's unlikely - these chips were designed a long time ago. Of course, it's likely to feature improvements to the Neural Engine anyway, as it does every year.
 
Since the G9 is a gaming monitor, I'm guessing what you want is a single large monitor that works beautifully for both gaming and text. You can't get that with MacOS.
Well yes, of course. It's easily fixable too if Apple really wanted to. That a Mac Studio can't allocate a larger frame buffer for scaling the UI than a base Mini is ridiculous. If it's MacOS limiting this and not the graphics hardware then a fix would be trivial.
 
Well yes, of course. It's easily fixable too if Apple really wanted to. That a Mac Studio can't allocate a larger frame buffer for scaling the UI than a base Mini is ridiculous. If it's MacOS limiting this and not the graphics hardware then a fix would be trivial.
I've never been able to get a sufficiently thorough explanation of this to enable me to really understand the difference between how Windows and MacOS handle UI scaling, so I can't speak authoritatively. But here's my understanding:

1) We know it's not the hardware, because macOS had the same UI scaling issue back when it was on Intel.

2) Thus the issue is MacOS. And adjusting the UI so it can scale perfectly to any display resolution would require a major rework of the OS, which would be highly non-trivial.

3) Apple is not going to do #2 because they unfortunately don't care about how well MacOS displays text on sub-Retina monitors (e.g., 4k@27", which is 163 ppi). We know this because they actually used to have a way to make text look good on such displays (subpixel text rendering), and they abandoned it after High Sierra. One Apple engineer reportedly said it was hard to maintain, but the fact is it was already in place, and they were able to maintain it, such that it worked well on the overwhelming majority of displays. Thus they already had a working solution in place, and they gave that up. If they're not even going to put in the effort to maintain a working solution that allows sub-Retina monitors to display text nicely, then they're certainly not going to rework the entire OS to do so.
 
I've never been able to get a sufficiently thorough explanation of this to enable me to really understand the difference between how Windows and MacOS handle UI scaling,

Windows, since 98 or so, has supported a notion of changing the DPI from 96 to something else. Windows 8.1 added the ability for different displays to have different DPIs (which became critical when laptops started shipping much higher DPIs and external displays did not). The DPI can be basically any fractional value, and recent versions display it as a percentage, so "200%" is 192 dpi, 125% is 120 dpi, and so on.

macOS briefly offered a developer preview of roughly the same thing. This was discarded in favor of only allowing integer values. macOS starts at 72 dpi, and supports 144 dpi, which it calls "Retina 2x". Some iPhones also do 216 dpi ("3x").

In Windows, you pick a screen resolution (on LCDs, generally the "native" resolution) and then apply a DPI to it. The application knows what the DPI is, and also gets notified when it changes (though I find this to be extremely unreliable). Thus, it can take a button that's supposed to be 100 pixels wide at 96 dpi, and make it 125 pixels wide at 120 dpi, or 200 pixels wide at 192 dpi. Or it can choose not to do that, and leave the button a fixed size. For example, it might make sense to scale a bitmap image to 96 dpi, and to 192 dpi, but not to anything in between, because that would look blurry. Depending on a litany of compatibility modes, if applications don't do that explicitly, the system may instead perform the scaling itself, which most of the time results in blurry rendering.

In macOS, the DPI is a property of the screen resolution. It'll try to guess for you "that resolution looks high enough that you probably want to render everything at 2x", or "that resolution is fairly low, let's stick to 72 dpi". The simplicity of integer values means that you don't run into layout problems: if only a bitmap is available, you can simply scale it up to a 2x2 grid of the same pixels, or use a better upscaling algorithm, but either way, it'll look normal. Thus, for the most part, application developers don't have much to do other than to supply higher-resolution bitmap images (or avoid them altogether). Similarly, when moving between screens that are or aren't Retina, the OS will simply either double the size or not. But that does of course mean that "I'd like it to be a little bigger" like on Windows won't work with this approach. There's no 125% or 150% or 250%, unlike on Windows. Instead, what Apple proposes for this is to change your screen resolution. The entire screen gets rendered to a virtual display of a different size, and then the result is scaled back by the GPU. This approach does introduce blur, whereas on Windows, an application that handles higher DPIs properly will look perfectly sharp at any of those scale factors.
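To make that concrete, here is a rough sketch of the two models just described. The monitor and "looks like" numbers are hypothetical examples, not anything from Apple's or Microsoft's documentation:

```python
def windows_size_px(size_at_96dpi: float, dpi: float) -> float:
    """Windows-style: the app scales its own layout to the current DPI."""
    return size_at_96dpi * dpi / 96.0

# The 100 px (at 96 dpi) button from the example above, at common scale factors:
for pct in (100, 125, 150, 200):
    dpi = 96 * pct / 100
    print(f"{pct:3d}% ({dpi:.0f} dpi): button = {windows_size_px(100, dpi):.0f} px")

def macos_gpu_resample(panel_width_px: int, looks_like_width_pt: int) -> float:
    """macOS-style: render everything at an integer 2x into a virtual
    framebuffer, then let the GPU resample to the panel's native width."""
    virtual_width = looks_like_width_pt * 2   # always an integer 2x
    return panel_width_px / virtual_width     # GPU downscale factor

# Hypothetical case: a 3840 px wide (4K) panel set to "looks like 2560".
# The desktop is rendered 5120 px wide, then squeezed to 3840 px (0.75x),
# which is where the blur on non-integer "scaled" modes comes from.
print(f"GPU resample factor: {macos_gpu_resample(3840, 2560):.2f}")
```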

This is all, by the way, separate from scaling the font size, which adds even more complexity. macOS doesn't currently have a global notion of a scalable font size, but Windows does have one. As does iOS.

2) Thus the issue is MacOS. And adjusting the UI so it can scale perfectly to any display resolution would require a major rework of the OS, which would be highly non-trivial.

Yeah, that ain't happening.

One Apple engineer reportedly said it was hard to maintain, but the fact is it was already in place, and they were able to maintain it, such that it worked well on the overwhelming majority of displays. Thus they already had a working solution in place, and they gave that up. If they're not even going to put in the effort to maintain a working solution that allows sub-Retina monitors to display text nicely, then they're certainly not going to rework the entire OS to do so.

I'll note that Windows, too, has essentially deprecated subpixel rendering. The short answer is that it is hard to implement that on the GPU level, and if you don't, then you eschew GPU acceleration. So it only really worked for pieces of text that didn't need it, and as UIs move more and more to a view hierarchy where underlying portions are GPU-rendered, that became less and less feasible.
 
Windows, since 98 or so, has supported a notion of changing the DPI from 96 to something else. Windows 8.1 added the ability for different displays to have different DPIs (which became critical when laptops started shipping much higher DPIs and external displays did not). The DPI can be basically any fractional value, and recent versions display it as a percentage, so "200%" is 192 dpi, 125% is 120 dpi, and so on.

macOS briefly offered a developer preview of roughly the same thing. This was discarded in favor of only allowing integer values. macOS starts at 72 dpi, and supports 144 dpi, which it calls "Retina 2x". Some iPhones also do 216 dpi ("3x").

In Windows, you pick a screen resolution (on LCDs, generally the "native" resolution) and then apply a DPI to it. The application knows what the DPI is, and also gets notified when it changes (though I find this to be extremely unreliable). Thus, it can take a button that's supposed to be 100 pixels wide at 96 dpi, and make it 125 pixels wide at 120 dpi, or 200 pixels wide at 192 dpi. Or it can choose not to do that, and leave the button a fixed size. For example, it might make sense to scale a bitmap image to 96 dpi, and to 192 dpi, but not to anything in between, because that would look blurry. Depending on a litany of compatibility modes, if applications don't do that explicitly, the system may instead perform the scaling itself, which most of the time results in blurry rendering.

In macOS, the DPI is a property of the screen resolution. It'll try to guess for you "that resolution looks high enough that you probably want to render everything at 2x", or "that resolution is fairly low, let's stick to 72 dpi". The simplicity of integer values means that you don't run into layout problems: if only a bitmap is available, you can simply scale it up to a 2x2 grid of the same pixels, or use a better upscaling algorithm, but either way, it'll look normal. Thus, for the most part, application developers don't have much to do other than to supply higher-resolution bitmap images (or avoid them altogether). Similarly, when moving between screens that are or aren't Retina, the OS will simply either double the size or not. But that does of course mean that "I'd like it to be a little bigger" like on Windows won't work with this approach. There's no 125% or 150% or 250%, unlike on Windows. Instead, what Apple proposes for this is to change your screen resolution. The entire screen gets rendered to a virtual display of a different size, and then the result is scaled back by the GPU. This approach does introduce blur, whereas on Windows, an application that handles higher DPIs properly will look perfectly sharp at any of those scale factors.

This is all, by the way, separate from scaling the font size, which adds even more complexity. macOS doesn't currently have a global notion of a scalable font size, but Windows does have one. As does iOS.



Yeah, that ain't happening.



I'll note that Windows, too, has essentially deprecated subpixel rendering. The short answer is that it is hard to implement that on the GPU level, and if you don't, then you eschew GPU acceleration. So it only really worked for pieces of text that didn't need it, and as UIs move more and more to a view hierarchy where underlying portions are GPU-rendered, that became less and less feasible.
Thanks for taking the time to write that summary. So, essentially, MacOS works great with all applications, but only at integer scale factors (though you still need a Retina monitor for text to look good). Windows, with well-written applications, is better than MacOS: it works great at all scale factors. But with poorly-written applications, it's worse.

What you wrote is mostly the part I more-or-less understood from previous discussions here on MR. [But still, your summary was very nice.] But what I meant when I said "I've never been able to get a sufficiently thorough explanation of this to enable me to really understand the difference between how Windows and MacOS handle UI scaling" is that I don't understand the underlying technical differences in how they implement these (and it would probably take a very well-written textbook chapter for me to do so).
macOS starts at 72 dpi, and supports 144 dpi, which it calls "Retina 2x". Some iPhones also do 216 dpi ("3x").
I still don't understand this, but I thought they started with an internal bitmap of 127 points/inch (logical pixels), rendered at 254 ppi (rendered pixels), and then resampled to the display's actual resolution (hardware pixels); see https://forums.macrumors.com/thread...ies-mbps.2345322/?post=31114755#post-31114755
 
Two examples via John Siracusa of how things can break if you allow arbitrary scale factors:

[Image: safari-scaled-2x.jpg, a Safari window at 2x scaling]


Here, Safari probably customizes its toolbar height, and Mac OS X gets confused by it. When doubling the entire window, it also then doubles the toolbar height, leaving a lot of empty room.

[Image: text-edit-2.0.png, TextEdit at 2x scaling]


Here, Mac OS X doesn't quite understand how to scale the alignment segmented control, and does so unevenly.

Even though both examples use a 2.0 scale factor, these glitches would be avoided if only 2.0 scaling were available, because the layout guessing/arithmetic goes away.
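A toy sketch of the kind of rounding arithmetic that arbitrary scale factors force on a layout system. This is not AppKit's actual layout code; the control width and segment count are made up, and only the rounding behaviour is the point:

```python
def segment_widths(control_pt: int, segments: int, scale: float) -> list[int]:
    """Scale a control, then snap each segment edge to a whole pixel."""
    edges = [round(control_pt * scale * i / segments) for i in range(segments + 1)]
    return [edges[i + 1] - edges[i] for i in range(segments)]

# An 84 pt control split into 4 segments, at various scale factors:
for scale in (1.0, 1.25, 1.5, 2.0):
    print(f"{scale}x -> segment widths {segment_widths(84, 4, scale)}")

# Only the fractional factors (1.25x, 1.5x) produce unequal segment widths;
# exact 1x/2x doubling never needs this kind of per-segment rounding.
```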
 
Thanks for taking the time to write that summary. So, essentially, MacOS works great with all applications, but only at integer scale factors (though you still need a Retina monitor for text to look good). Windows, with well-written applications, is better than MacOS: it works great at all scale factors. But with poorly-written applications, it's worse.

Precisely. In theory, Windows' approach is nicer, which is presumably why Apple tried it for a while and gave up; in practice, I still (as of recent Windows 11 builds) find even some first-party apps don't quite handle it right.


I still don't understand this, but I thought they started with an internal bitmap of 127 points/inch (logical pixels), rendered at 254 ppi (rendered pixels), and then resampled to the display's actual resolution (hardware pixels); see https://forums.macrumors.com/thread...ies-mbps.2345322/?post=31114755#post-31114755

The original Mac had a DPI of roughly 72. This was a key part of the WYSIWYG approach: if you could assume all Mac displays are 72 dpi, then you could literally hold a printed page against the monitor and it would be the same size, as the OS would know how large a pixel is, physically.

Modern Macs have actual PPIs that are higher, but as a developer, you still pretend to develop against 72. (And 96 on Windows.) Just think of 72 (or 96) as "100%" or "1.0".
 
The original Mac had a DPI of roughly 72. This was a key part of the WYSIWYG approach: if you could assume all Mac displays are 72 dpi, then you could literally hold a printed page against the monitor and it would be the same size, as the OS would know how large a pixel is, physically.

Modern Macs have actual PPIs that are higher, but as a developer, you still pretend to develop against 72. (And 96 on Windows.) Just think of 72 (or 96) as "100%" or "1.0".
Sorry, I'm confused. How does this fit with the internal bitmap having previously been 110 pts/in, changing to 127 pts/in when they switched to Retina displays?
 