Reminds me of the PPC days. The 601 hit the ground running (a bit like the M1) and showed so much promise; sadly, the chips that followed didn't live up to the hype.

Is history repeating itself??
PPC’s BIGGEST failing was not being able to provide a performant/efficient solution for the growing mobile market. The first Intel Mac shipped around the same time (2006) that laptops in the broader market outsold desktops for the first time (2005). History repeated itself when Intel was unable to provide a performant/efficient solution for the growing mobile market. :)
 
Given that the Ultra was attached to rumours that included the Extreme/Quadra version, and the Pro/Max/Ultra aspects of the rumour were perfectly correct, there is every reason to imagine that the M* Quadra was in the works. For whatever reason, Apple hasn't shipped it. Maybe with the M3, M4, or M5 Apple will ship it.
I think the Extreme/Quadra part of the rumor was a guess tied to “How do other chip companies do it? Of course, Apple will do the same, as they are also a chip company! Otherwise they can’t compete.” They didn’t take into account the reality of the situation. AMD and Intel got to where they are now because one is always trying to one-up the other on the high end. With Apple Silicon, there’s no more one-upping… a user who needs the fastest Mac has to get something from Apple, so there’s no need to go to the bleeding edge for a system that will sell, at most, 5,000 units in a year, IF that.

As we’ve seen already, the low end of an Apple Silicon generation will have a single-core score similar to that of the highest-end processor of the same generation (which is insane; look at the competition’s low end vs. high end!), and the biggest difference at the high end will be how many of those cores are duplicated on the die.
 
PPC’s BIGGEST failing was not being able to provide a performant/efficient solution for the growing mobile market.

Which, as best as I can tell, was not a technical question at all but purely one of market dynamics. By then, both IBM and Motorola had given up on the desktop/laptop market, so they no longer had a self-interest in designing CPUs for it. Instead, IBM made CPUs for beefy servers (still does to this day) and offshoots for game consoles (which it eventually gave up on as well), and Motorola focused on the embedded space (which it spun off as Freescale, later bought by NXP, itself spun off from Philips). Apple needed something in between, but nobody wanted to design it for Apple’s volumes.

With today’s Apple, PowerPC’s trajectory could be quite different. (Which makes me wonder if they ever had “what if we choose PowerPC for the iPhone” as a thought experiment.)
 
Apple is pruning machines quickly right now because it wants to stop shipping macOS for Intel and get its customers onto ARM. The M1 will have a perfectly reasonable life span because it is still part of "the future".
Prices on the used market are quite good; I was able to get a used late-2020 8GB/512GB model (minor marring on about ⅓ of the screen) for $650, and it works fine, at least for travel.
 
I guess I also tell myself the story, and believe the rumours, that the "extreme" chip failed at the prototype stage (cost, or lack of linear scaling, no idea?), which is why the ASi Mac Pro feels so phoned in. The Mac Pro team had a hard delivery date and had to cobble together something by WWDC, which is why you have the MP as an M2 Ultra Studio and $3,000 of fresh air.
Folks are still wrapping their heads around what Apple Silicon is. Even though Apple’s never shipped their own silicon with external GPUs, it’s hard for people (trained by AMD, NVidia, and Intel over years and years) to understand that they can continue NOT to have external GPUs just because it’s moved from a phone form factor to a traditional PC form factor. When a rumor someone wants to believe says “extreme” and the eventual reality says “no extreme”, “I guess that wasn’t a thing” is hard to accept. And the rumormonger that wants to remain relevant is oh so happy to help them NOT accept it. “Oh, I was right, but they decided not to do it now, but will do it in the future, for SURE!” they’ll say :)

I also think Apple will need to find a new paradigm/strategy to keep scaling the GPU/compute in its chips. I don't think just two Max chips with their incremental improvements is gonna keep up with dedicated GPU tech. Maybe I'm wrong and the shrink to M3 will be massive for the GPU... but will it be a bigger jump than Nvidia 4000 -> 5000? You can't die-shrink every year, forever. It needs to be a big jump just to keep pace and stay only a few years behind. I think the daughter-board idea, or an eGPU, or 4 Max chips is the only way to close that gap. Assuming Apple wants to, and doesn't think that in three years all users will be happy with 3090 performance and never more.
Another thing it’s hard for folks to understand is that NVidia’s proprietary code will never perform well on anything but Nvidia hardware. CUDA makes ML performant if one is locked into NVidia hardware, but by no means is CUDA an ML “requirement”. Apple has performant solutions, but one can’t simply grab a library honed over years and years of open source usage off-the-shelf for Apple Silicon… in many cases, one has to create the methods themselves. After doing so, developers have found that Apple’s solutions can provide performance in ways impossible on any non-Apple Silicon system.
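As one concrete illustration of "performant without CUDA" (not the only route, just the most familiar one): frameworks like PyTorch can target Apple's GPU through the Metal Performance Shaders (MPS) backend instead of CUDA. A minimal sketch, assuming a recent PyTorch build with MPS support:

```python
import torch

# Prefer Apple's Metal (MPS) backend when present, otherwise fall back to CUDA or CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(1024, 1024).to(device)   # move the model to the selected device
x = torch.randn(64, 1024, device=device)         # allocate inputs on the same device
y = model(x)                                     # runs on the Apple GPU via Metal; no CUDA anywhere
print(y.device)
```

The point isn't that this one snippet is fast; it's that the code path exists without CUDA, and the heavily-tuned CUDA kernels the open source world leans on simply have no equivalent head start here yet.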
 
Maybe, but that rumor nailed M1 Pro, Max, and Ultra. Just not Quad.
Of the two things: “Apple designed and delivered on 3 products, but the same team with the same competency couldn’t deliver on the 4th” OR “Apple planted information to find a leaker,” the latter to me is far more believable. Just as I’m sure that those who want the Quad to be a thing find the former more believable.
 
Exactly.

I buy what I want; others' approval is not needed. I have spent silly money on silly things... but guess what, it's my money and I make it for that reason. Buy what YOU want.
NO, see, if you like what I don’t like and buy what I don’t buy, then my plan of “voting with my dollars” won’t work because they’ll be getting SOMEONE ELSE’S DOLLARS! You being happy with what Apple makes today is spoiling my plan of forcing them to release a $999 PC-compatible minitower with support for Nvidia and AMD GPUs. A plan I’m SURE will work.

So… stop. /s
 
So we are comparing an M2 Ultra with an Intel 28-core unit from 2016, which seems unfair.
These days Intel and AMD have CPUs with 56 and 64 cores and access to 4 TB of RAM.

The most significant benefit of the M processors is power consumption, but a workstation has fewer restrictions and needs to deliver raw power.

I would love to see a comparison with the newer CPUs from AMD and Intel.

Apple is not even in the top 100 when it comes to raw power.
Personally,

I'd love to see a comparison of your AMD CPU of choice vs. the M2: power consumption over a year, both for a month sitting idle and at 6 hr/day of use every day, priced at the electricity rates of ANY state or province's capital city, based on the power each system actually draws.

Then what the total cost per year comes to, considering non-fluctuating seasonal rates (late spring to mid-summer vs. late fall to mid-winter).

I'm VERY curious about THOSE results!! Wonder whether the Ultra would pay for itself outright OR pay for your AMD system.
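The back-of-the-envelope version of that comparison is easy to sketch. The wattages and the $0.15/kWh rate below are placeholder assumptions for illustration, not measurements of any real system:

```python
def yearly_energy_cost(idle_w, load_w, load_hours_per_day, price_per_kwh):
    """Rough yearly electricity cost for a machine that stays powered on 24/7."""
    idle_hours = 24 - load_hours_per_day
    daily_kwh = (idle_w * idle_hours + load_w * load_hours_per_day) / 1000
    return daily_kwh * 365 * price_per_kwh

# Placeholder figures: a high-end AMD/Nvidia tower vs. an M2 Ultra Studio, at $0.15/kWh.
tower  = yearly_energy_cost(idle_w=120, load_w=600, load_hours_per_day=6, price_per_kwh=0.15)
studio = yearly_energy_cost(idle_w=10,  load_w=200, load_hours_per_day=6, price_per_kwh=0.15)
print(f"tower ~${tower:.0f}/yr, studio ~${studio:.0f}/yr, difference ~${tower - studio:.0f}/yr")
```

Plug in measured wattages and your local (seasonal) rates and you get the yearly number; whether that difference ever pays for an Ultra outright is exactly the open question.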
 
Another thing it’s hard for folks to understand is that NVidia’s proprietary code will never perform well on anything but Nvidia hardware. CUDA makes ML performant if one is locked into NVidia hardware, but by no means is CUDA an ML “requirement”.

To be fair, that's really no different for the Neural Engine or targeting Metal.


Of the two things: “Apple designed and delivered on 3 products, but the same team with the same competency couldn’t deliver on the 4th” OR “Apple planted information to find a leaker,” the latter to me is far more believable. Just as I’m sure that those who want the Quad to be a thing find the former more believable.

Plausible, yes.
 
There isn't an extreme TODAY.
You seriously, after being surprised by the Pro/Max, then again by the Ultra, want to go out on a limb and say that Apple will NEVER create a design based on 4 chiplets?
Well, good luck, it's your reputation...
That’s what I’m saying, there isn’t an extreme today and there wasn’t one in the past. And as of now, Apple has shown that the Apple Silicon lineup will consist of three packages:

One baseline package
One higher spec’d version of that baseline package
One even higher spec’d package that can be delivered as a single package solution (Max) or a double package solution (Ultra).

And, that makes sense for a company that has their focus primarily on performant mobile solutions.
 
Another thing it’s hard for folks to understand is that NVidia’s proprietary code will never perform well on anything but Nvidia hardware. CUDA makes ML performant if one is locked into NVidia hardware, but by no means is CUDA an ML “requirement”. Apple has performant solutions, but one can’t simply grab a library honed over years and years of open source usage off-the-shelf for Apple Silicon… in many cases, one has to create the methods themselves. After doing so, developers have found that Apple’s solutions can provide performance in ways impossible on any non-Apple Silicon system.

I have known for 7+ yrs that CUDA is dead on OS X, and I don't expect to see it come back, so there is no confusion there. It has been pretty sad and confusing, though, to watch Apple not even have a horse in the ML/AI race these last 7 years. The challenge, which is much harder than the hardware for AMD, Intel, and Apple, is to build the ecosystem and get the community support and following. Apple tried to promote OpenCL and that failed; AMD abandoned it too, for HIP, their fledgling version of CUDA.

Do I think an iGPU can eventually be really powerful? Sure. It really depends on your customer. A smartphone has had more than enough processing power for word processing, web browsing, and spreadsheets for a decade. I get that an iGPU serves 80%+ of Apple's customers on the low and mid end of the spectrum.

Apple is unwilling to throw a life preserver (eGPU, PCIe, etc. support) to the Pro community to weather the storm for the next 3 or more years until ASi is actually competitive with current dedicated GPUs.

Will an iGPU offer more processing power than my grandma ever needs? Yes, it probably already does. But in Pro land, when time is money, there isn't a finish line... it's just an endless drag race of time vs. money, and pushing what can be done at the bleeding edge, not what was cool and achievable 3 yrs ago. There is no "now it's as fast as a 3090, so we are done".
 
I have known for 7+ yrs that CUDA is dead on OS X, and I don't expect to see it come back, so there is no confusion there. It has been pretty sad and confusing, though, to watch Apple not even have a horse in the ML/AI race these last 7 years. The challenge, which is much harder than the hardware for AMD, Intel, and Apple, is to build the ecosystem and get the community support and following. Apple tried to promote OpenCL and that failed; AMD abandoned it too, for HIP, their fledgling version of CUDA.

Do I think an iGPU can eventually be really powerful? Sure. It really depends on your customer. A smartphone has had more than enough processing power for word processing, web browsing, and spreadsheets for a decade. I get that an iGPU serves 80%+ of Apple's customers on the low and mid end of the spectrum.

Apple is unwilling to throw a life preserver (eGPU, PCIe, etc. support) to the Pro community to weather the storm for the next 3 or more years until ASi is actually competitive with current dedicated GPUs.

Will an iGPU offer more processing power than my grandma ever needs? Yes, it probably already does. But in Pro land, when time is money, there isn't a finish line... it's just an endless drag race of time vs. money, and pushing what can be done at the bleeding edge, not what was cool and achievable 3 yrs ago. There is no "now it's as fast as a 3090, so we are done".
And even being as fast as a 3090 is SLOW in pro land now. The 4090 is fast now, and for video editing the Arc 770 combined with the 13900K is crazy fast. The Hyper Encode feature Intel has created, using both the iGPU and the Arc graphics card together as one cohesive unit, is the bee's knees for content creation.
 
That’s what I’m saying, there isn’t an extreme today and there wasn’t one in the past. And as of now, Apple has shown that the Apple Silicon lineup will consist of three packages

Thing is, if the plan was all along for the Mac Pro to be a Mac Studio with PCIe slots, they could’ve done that 15 months ago. You really don’t need all that time to take the 2019 Mac Pro, remove the DIMM slots, and put an M1 Ultra in.

So I don’t fully buy it. Either there were discussions about whether to make one at all, or there were prototypes to make it more powerful.


 
NO, see, if you like what I don’t like and buy what I don’t buy, then my plan of “voting with my dollars” won’t work because they’ll be getting SOMEONE ELSE’S DOLLARS! You being happy with what Apple makes today is spoiling my plan of forcing them to release a $999 PC-compatible minitower with support for Nvidia and AMD GPUs. A plan I’m SURE will work.

So… stop. /s
Ahhhhh No.
 
Thing is, if the plan was all along for the Mac Pro to be a Mac Studio with PCIe slots, they could’ve done that 15 months ago. You really don’t need all that time to take the 2019 Mac Pro, remove the DIMM slots, and put an M1 Ultra in.

So I don’t fully buy it. Either there were discussions about whether to make one at all, or there were prototypes to make it more powerful.
There is Apple Vision now and the ASi Mac Pro is the result of that distraction.

The other important technical reason is that the M1 and M2 Max die is not suitable for a quad version, and there is no known packaging technology in HVM (high-volume manufacturing) that would enable Apple to manufacture a quad version in any reasonably cost-effective manner. To get to quad, Apple would need a new die layout. The hope was that the M3 Max would have a better layout to make it quad-compatible, but we did not get the M3 and likely will not see it for a while.
 
Right, I wasn't even aware of many of those until I read your research on github. Thanks for that, BTW - very enlightening!


My take was a little different. If you look at GPU perf M1->M2 it seems to be barely different isoclock.
I don't think that's true? The rough GPU numbers I've seen, eyeballing the few results for GB Metal and gfxBench, are that the M2 Ultra is 1.65x "as fast" (by some sort of weighted metric) in GPU as the M2 Max, vs about 1.5x for the M1 Ultra vs M1 Max.

That additional .15x scaling isn't great, but it also ain't nothing, and is about what I'd expect for the limited tweaks I suggested.

Do you have different numbers that suggest different Max->Ultra scaling?
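For concreteness, here's the arithmetic behind those scaling figures, using the eyeballed 1.5x/1.65x numbers above and the fact that an Ultra is two Max dies (so the ideal speedup would be 2x):

```python
IDEAL_SPEEDUP = 2.0  # an Ultra is two Max dies, so perfect scaling would be 2x a single Max

def scaling_efficiency(observed_speedup):
    """Fraction of the ideal 2x that the second die actually delivers."""
    return observed_speedup / IDEAL_SPEEDUP

print(scaling_efficiency(1.50))  # M1 Ultra vs M1 Max -> 0.75
print(scaling_efficiency(1.65))  # M2 Ultra vs M2 Max -> 0.825
```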
 
I don't think that's true? The rough GPU numbers I've seen, eyeballing the few results for GB Metal and gfxBench, are that the M2 Ultra is 1.65x "as fast" (by some sort of weighted metric) in GPU as the M2 Max, vs about 1.5x for the M1 Ultra vs M1 Max.

That additional .15x scaling isn't great, but it also ain't nothing, and is about what I'd expect for the limited tweaks I suggested.

Do you have different numbers that suggest different Max->Ultra scaling?

Oh, one other nice piece of scaling that isn't yet widely reported is that for the M1 Ultra you could only use one of the media encoders at a time; with the M2 Ultra you can use both. (Unfortunately I can't find the Tweet that made this claim.)

I seem to remember the same held for the ANE, that you could only use one of them on an M1 Ultra. Presumably that has also changed, which again is presumably of value to some people!
 
If you look at GPU perf M1->M2 it seems to be barely different isoclock.

Do we have the clock the GPU is running at?

Let's assume it's proportional to the CPU.

Going by Geekbench 6 Metal scores:

  • A14 to A15 (iPhone 12 Pro Max to 13 Pro Max) is up 24.5%. Adjusting for the clock difference, it's still up 16.7%.
  • M1 to M2 (iPad Pro) is up 41.3%, or 29.2% adjusted for clock.
  • M1 Pro to M2 Pro is up 19.3%. Adjusted for clock, 9.1%.
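For anyone who wants to reproduce the adjustment, here's a minimal sketch. The clock figures are approximate reported CPU clocks (and, per the assumption above, the GPU clock is taken as proportional), so the outputs only roughly match the percentages listed:

```python
def clock_adjusted_gain(raw_gain, old_clock_ghz, new_clock_ghz):
    # Divide out the clock-speed ratio to estimate the isoclock (architectural) gain.
    return (1 + raw_gain) / (new_clock_ghz / old_clock_ghz) - 1

# Approximate CPU clocks in GHz, assumed to track the GPU clock proportionally.
print(clock_adjusted_gain(0.245, 3.00, 3.23))  # A14 -> A15, roughly +0.16
print(clock_adjusted_gain(0.413, 3.20, 3.49))  # M1 -> M2, roughly +0.30
print(clock_adjusted_gain(0.193, 3.23, 3.50))  # M1 Pro -> M2 Pro, roughly +0.10
```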
 
My take was a little different. If you look at GPU perf M1->M2 it seems to be barely different isoclock.
I don't think that's true? The rough GPU numbers I've seen, eyeballing the few results for GB Metal and gfxBench, are that the M2 Ultra is 1.65x "as fast" (by some sort of weighted metric) in GPU as the M2 Max, vs about 1.5x for the M1 Ultra vs M1 Max.

That additional .15x scaling isn't great, but it also ain't nothing, and is about what I'd expect for the limited tweaks I suggested.

Do you have different numbers that suggest different Max->Ultra scaling?
On the contrary, I think that's exactly correct, and if you look back we both agreed about this a couple dozen comments ago.

When I said "GPU perf M1->M2" I meant the M2 line, up until the introduction of the Ultra. Thus the generational performance difference between the Ultras is notable, exactly as we both said earlier. My point is that I think it's more likely the improvements were low-hanging fruit in the interconnect, rather than in the GPUs, since there's a notable boost to the M2 Ultra but not to the Max.

That makes even more sense since the GPUs are the same between the base M2 and the bigger M2s, and thus Apple had much less time to work on them (once they realized they couldn't implement the N3 design) than they did on the M2 UltraFusion, which only appeared much later, on the M2 Max.
 
Oh, one other nice piece of scaling that isn't yet widely reported is that for the M1 Ultra you could only use one of the media encoders at a time; with the M2 Ultra you can use both. (Unfortunately I can't find the Tweet that made this claim.)

I seem to remember the same held for the ANE, that you could only use one of them on an M1 Ultra. Presumably that has also changed, which again is presumably of value to some people!
I had not heard that about either feature. If true, that's indeed a big improvement.
 