Hasn’t TSMC been working on their own interposer “fabric”? Who’s to say Apple hasn’t adopted theirs because it makes the fab process easier?
UltraFusion is just Apple’s marketing name for TSMC’s Integrated Fan-Out (InFO) technology. Apple didn’t create their own off-chip interconnect; they are using a TSMC packaging technology.

IIRC, the M1 and M2 Ultras both use the same “InFO-LSI” (LSI = Local Silicon Interconnect) packaging, but that is obsolete now. TSMC has moved on to what they are calling “InFO-oS” (oS = on Substrate), which is similar but improved. Indeed, they have announced 10x improvements in substrate performance, but it’s unclear whether that applies to InFO; it may only apply to CoWoS and/or SoIC.

So the most likely reason the M3 Max looks different is that the new and improved “UltraFusion” packaging is different, not that it is absent.
 
They only tested this against 3rd-party security libraries and not Apple’s 1st-party security library; that should be a sign 🤪
That's not correct; it isn't a sign of anything.

Apple had first-party functionality on the M3 to disable the DMP during specific operations that was undocumented until last week, and it requires developers to make code changes; it is not an OS-wide fix. Those code changes do not work on the M1 or M2 because the hardware doesn't support disabling the DMP. There may be some workarounds that help keep strong encryption keys secret, given the long time it takes to break them, but as I said in my edited post, the issue goes far beyond stealing secret keys; that is just the most damaging possibility of the attack.

If and when Apple mitigates it system-wide in macOS, they should be explicit about how they are accomplishing it, because until that happens I don't think they are doing so; it is in fact why they already had the M3 disabling the DMP with their own code during encryption operations.

Personal tolerance for this type of flaw, currently unmitigated in the OS outside of a specific chip performing only specific types of operations, is a decision consumers will have to make for themselves, and unfortunately this area gets very complicated very quickly. I've said all I'm willing to say about this and will stop posting in this thread now, because it's getting too close to areas I'm restricted from discussing and it isn't worth going to jail to make a concrete point with evidence.

I'll sum it up like this: Apple should directly address the issues with the DMP, and any OS software or microcode updates should apply not only to encryption operations but to the entire side-channel attack surface the DMP exposes. Some encryption operations are being handled via APIs like CryptoKit that enable DIT (data-independent timing), but doing so still does not disable the DMP on the M1 or M2 CPUs. How and whether they respond will inform my purchasing decisions this year, because I absolutely think future generations of silicon will have better hardware mitigations and a more narrowly scoped DMP as a result of this and other ongoing research. That is a good thing for consumers long-term, and it's important that this research happens, is disclosed, and that the vulnerability is mitigated for the benefit of everyone.

Anyone interested can read the updated ABI documentation here: https://developer.apple.com/documen...IT-for-constant-time-cryptographic-operations

And here's a good link that goes into some further details about constant-time cryptography: https://www.chosenplaintext.ca/articles/beginners-guide-constant-time-cryptography.html
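
To make "constant-time" concrete, here's a minimal sketch of the core idea (my own illustrative code, not Apple's or any library's implementation; real code should rely on a vetted library like CryptoKit):

```swift
// Illustrative constant-time equality check: every byte is visited and there
// is no early exit, so the run time doesn't depend on where a mismatch occurs.
// Sketch of the concept only, not a vetted implementation.
func constantTimeEquals(_ a: [UInt8], _ b: [UInt8]) -> Bool {
    guard a.count == b.count else { return false }
    var diff: UInt8 = 0
    for i in 0..<a.count {
        diff |= a[i] ^ b[i]   // accumulate differences without branching on secret data
    }
    return diff == 0
}
```

Note that the DMP research shows the prefetcher can still leak data-dependent behavior even around code written this way, which is exactly why the DIT/DMP-disable controls matter.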
 
UltraFusion is just Apple’s marketing name for TSMC’s Integrated Fan-Out (InFO) technology. Apple didn’t create their own off-chip interconnect; they are using a TSMC packaging technology.

IIRC, the M1 and M2 Ultras both use the same “InFO-LSI” (LSI = Local Silicon Interconnect) packaging, but that is obsolete now. TSMC has moved on to what they are calling “InFO-oS” (oS = on Substrate), which is similar but improved. Indeed, they have announced 10x improvements in substrate performance, but it’s unclear whether that applies to InFO; it may only apply to CoWoS and/or SoIC.

So the most likely reason the M3 Max looks different is that the new and improved “UltraFusion” packaging is different, not that it is absent.
Thanks, I couldn’t tell if they were using TSMC’s solution wholesale or codeveloped it with them.
 
Oh please. You would be the first to say "Apple has strayed from its core technology and has no business making robots. Massive fail! And it should be ⅓ the price anyhow!"
Don’t know - but when I watched the Nvidia show, it reminded me of the Apple presentations when Jobs was still at the helm. No “mother nature,” but one more thing. Apple once was super innovative, but that was a long time ago - 2011 was the last time I saw him on stage.

It makes a difference whether you can feel the emotion and the excitement, or whether Tim just enters the stage and tells you a monotonous tale built around the words incredible, amazing, and awesome - it is drop-dead boring. Hope mother nature stays in bed this year …
 
I love the concept of bemoaning the loss of innovation at Apple, in a thread about Apple Silicon…
 
Let’s extrapolate that forward a few generations. Unless the GPU industry puts real effort into breakthroughs in performance per watt, people are going to have to upgrade their home circuit breakers to handle the load.

Meanwhile, Apple has a massive amount of headroom to dial up the power, but they’re a long-term thinking company and have spent years focusing on making things as efficient as possible.

NVIDIA/AMD/etc. are going to hit a ceiling for professional users in the not-too-distant future unless people are willing to install server-grade power systems…

We'll see.
I hope you're right.
In the meantime, 3D, virtual cinema production, and game engine users have moved en masse to high-core-count Threadripper Pros, Epycs, and RTX 6000 Adas.
They most certainly have server-grade power systems.
 
Then said folk are in for a rude awakening as societies are forced to deal with energy costs.
So much this. We can’t talk politics here, but the world is going to look *very* different as the age of abundance comes crashing down in the not-too-distant future…

Energy is cheap enough today that these home-based renderers don’t care that their computer is using 1000+ W. If their energy prices doubled, tripled, etc., they’d be singing a different tune about efficiency.
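
Rough numbers, with usage hours and electricity rates that are assumptions purely for illustration:

```swift
// Back-of-the-envelope annual electricity cost for a ~1000 W render box.
// Hours/day and $/kWh below are assumed for illustration, not real prices.
func annualCost(watts: Double, hoursPerDay: Double, dollarsPerKWh: Double) -> Double {
    let kWhPerYear = watts / 1000 * hoursPerDay * 365
    return kWhPerYear * dollarsPerKWh
}

let today   = annualCost(watts: 1000, hoursPerDay: 8, dollarsPerKWh: 0.15)  // ≈ $438/yr
let tripled = annualCost(watts: 1000, hoursPerDay: 8, dollarsPerKWh: 0.45)  // ≈ $1,314/yr
print(today, tripled)
```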
 
Bitcoin miners have entered the chat.

Look, you know all the fabulous movies and TV shows Apple and the rest of Hollywood produce? How much energy do you think they consume? Prolly the output of several nuclear power plants. I got more bad news for you. None of it is rendered with Macs. Zip, zilch, nada, a big fat zero. So much for Apple signaling that they are saving the planet. Hilarious.
 
I love the concept of bemoaning the loss of innovation at Apple, in a thread about Apple Silicon…
Yeah, people who don’t know that Jobs started Apple Silicon tend to do so. Apple Silicon was Jobs’s last gambit. It started in 2008 when Apple bought P.A. Semi - Tim/Apple today had nothing to do with it, and it cannot be called innovative that Apple didn’t stop this long-planned transition.
 
Bitcoin miners have entered the chat.

Look, you know all the fabulous movies and TV shows Apple and the rest of Hollywood produce? How much energy do you think they consume? Prolly the output of several nuclear power plants. I got more bad news for you. None of it is rendered with Macs. Zip, zilch, nada, a big fat zero. So much for Apple signaling that they are saving the planet. Hilarious.
Do you have a specific point you’re looking to discuss, or is this just another airing-of-perceived-grievances style post?

I think it’s the right call long term, because of the downstream effects, to get as much performance out of as little power as possible. That’s my opinion 🤷‍♂️

Sure were a lot of Macs at ESPN doing production and editing when I was there…
 
It's always just video editing, isn't it.
I agree with you about using as little energy as possible.
But for some tasks more powah is necessary.

I also agree that Apple silicon is brilliant.
For phones, tablets, laptops, Minis and Mac Studios.
For high end professional applications that the 7.1 was capable of...not now.
Maybe someday.
 
Oh, hello!! So if the Ultra is standalone, and - as the article suggests - has the POSSIBILITY of a new UltraFusion connection.... Then that would be the mythical (but reported on) M3 Extreme.

That would not only allow the Mac Pro to be a valid machine (if the Extreme were limited to it exclusively), but presumably that redesign would allow for, say, cards and stuff? Which would justify keeping the Mac Pro chassis in production in the first place. #MacPro

I see no connection between "the next Mac Pro could feature an M3 Extreme" and "therefore, it would also allow for expandability". No, it would allow for a more powerful SoC. The limited expandability remains.
 
Or maybe they could try some actual courage, and admit that they're not good at building GPUs. It would be nice to have Nvidia options on the Mac again.

Apple's GPUs really aren't that far behind Nvidia's, and the gap is narrowing. If you need their performance now, sure, go with them. But if you're Apple… why bother?
 
The 4090 is like 3.5x faster but consumes 7.5x the power (not to mention that the M2 power figure includes the CPU), and that’s the M2, with no RT hardware.
Performance matters more than power consumption. Mac can't even do that.
 
Folks who render for a living don't care about power consumption.
Talk to the people who scale out data-center-sized render farms. Over the lifetime of rows of racks, they spend more total $$$ on power and cooling than on the CPUs and GPUs. The CFOs for the customers who pay for these render farms certainly care.
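
Putting the two posts above into numbers (the 3.5x/7.5x figures come from earlier in the thread; everything else below is an assumption for illustration):

```swift
// If a GPU is ~3.5x faster but draws ~7.5x the power, its perf-per-watt is lower:
let relativePerfPerWatt = 3.5 / 7.5        // ≈ 0.47x

// Lifetime energy cost of one data-center GPU slot, with assumed numbers:
let avgDrawKW = 0.45                       // card + share of cooling overhead (assumed)
let hours = 5.0 * 365 * 24 * 0.8           // 5 years at 80% utilization (assumed)
let rate = 0.12                            // $/kWh (assumed)
let lifetimeEnergyCost = avgDrawKW * hours * rate   // ≈ $1,890, comparable to many cards' purchase price
print(relativePerfPerWatt, lifetimeEnergyCost)
```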
 
TLDR: It appears that, if Apple were to produce a monolithic M3 Ultra, it would need to be smaller than two M3 Max's, even if they ditched the efficiency cores (which, being so small, wouldn't help much):

According to this article by Anton Shilov ( https://www.tomshardware.com/softwa...billion-to-tape-out-new-m3-processors-analyst ), the M3's die size is ≈146 mm^2. [For context, the M1 was reported as being 120 mm^2: https://www.counterpointresearch.com/insights/m1-pro-m1-max-give-definitive-push-apples-m1-journey/ ]

Based on their relative transistor counts, we can estimate that the M3 Max is 146 mm^2 x (92 B/25 B) = 537 mm^2. Thus two of them would have an area of ≈ 1075 mm^2.

By comparison, the reticle limit is the maximum chip size that can be etched. According to Anton Shilov, "The theoretical EUV reticle limit is 858 mm^2 (26 mm by 33 mm)".[ https://www.anandtech.com/show/1887...ze-super-carrier-interposer-for-extreme-sips# ]

And dropping the efficiency cores, as Yuriev suggested, will only have a marginal effect on size. At the bottom I've posted a screenshot of an annotated die shot of the M3 Max created by High Yield. Using a ruler, I found the total area of all four efficiency cores (in green, next to the AMX coprocessor) was only 0.7% of the size of the die. Let's call it 1%. Thus removing them would reduce the needed die area from ≈1075 mm^2 to ≈1065 mm^2.

Given this small change, and given the value of the efficiency cores in reducing power consumption of background tasks, I don't see why Apple would remove them.
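
The same estimate as a quick calculation, using only the figures already cited above (plus the simplifying assumption that transistor density is uniform across dies):

```swift
// Scale the M3's die area by the M3 Max's transistor-count ratio.
let m3Area = 146.0                          // mm^2 (Tom's Hardware figure above)
let m3MaxArea = m3Area * (92.0 / 25.0)      // ≈ 537 mm^2
let twoMaxArea = 2 * m3MaxArea              // ≈ 1075 mm^2
let withoutECores = twoMaxArea * 0.99       // ≈ 1064 mm^2 after dropping ~1% for E-cores
let reticleLimit = 858.0                    // mm^2, EUV reticle limit cited above
print(m3MaxArea, twoMaxArea, withoutECores, withoutECores <= reticleLimit)  // false: still well over the limit
```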

We know dies >800 mm^2 can be etched, since NVIDIA's (very expensive) GH100 GPU has a die size of 814 mm^2. [ https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/ ]

However, if my extrapolation above is roughly correct, it would not be possible to produce a monolithic M3 Ultra that retained all the elements of two M3 Max's.

Also, pushing the reticle limit is probably expensive, because the larger area gives a higher chance of fatal defects.

Thus it doesn't seem like it would make sense for Apple to offer a monolithic Ultra in place of a 2x Max, since it would be less capable and would also require substantially higher development costs.

The only thing that would make sense (with respect to product capability, though perhaps not financially) would be for them to offer the Studio with a monolithic Ultra and a monolithic 2x Ultra (instead of a Max and 2x Max), where the monolithic Ultra would be, say, 50% bigger than the M3 Max (=> ≈805 mm^2). Then it could have 18 P-cores and 60 GPU cores, giving the 2x Ultra 36 P-cores and 120 GPU cores (thus making it effectively a 3x Max).

Or, even more intriguingly, they could use the extra real estate mostly for more GPU cores. That should allow them to double the number of GPU cores, giving 120 and 240 GPU cores on the hypothetical M3 Ultra and 2x M3 Ultra, respectively. For rough context, based on Geekbench's OpenCL Benchmarks, you'd need ≈150 M3 GPU cores to equal an RTX 4090 desktop, and ≈160 to equal an L40S datacenter GPU.

And they could make it a bit smaller by not proportionately increasing the number of external display engines, keeping them to 4 and 8, respectively, which may be enough for most.
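
For the first of those hypothetical configurations, the scaling works out like this (pure speculation, starting from the M3 Max's 12 P-cores and 40 GPU cores and the ≈537 mm^2 estimate above):

```swift
// Hypothetical monolithic "Ultra" at 1.5x the estimated M3 Max die.
let scale = 1.5
let ultraArea = 537.0 * scale               // ≈ 805 mm^2, just under the 858 mm^2 reticle limit
let ultraPCores = Int(12.0 * scale)         // 18
let ultraGPUCores = Int(40.0 * scale)       // 60
print(ultraArea, ultraPCores, ultraGPUCores, 2 * ultraPCores, 2 * ultraGPUCores)  // 2x Ultra: 36 P / 120 GPU
```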

Do I think Apple will do this? No idea, but I would be surprised if they did, given the substantial development costs and potentially small market. Then again, that would be a surprise that would be fun to see.


[Attached image: annotated M3 Max die shot]
 
Probably more reliable than Gunman, pardon me, Gurman.
Gurman said there would be no M2 Studio after the M2 Mini and Mini Pro came out. I believed it, even though I needed the Studio, so I bought the Mini Pro; 4 months later the M2 Studio came out. Until Apple says something, I don't believe anyone whose primary goal is to make money by making videos.
 