16 cores is going to be quite the monster. I'm also surprised that they're increasing CPU cores rather than adding more GPU cores, given that the demand for faster GPUs is higher than for faster CPUs. Perhaps the 40-core GPU is the lower end and the Max could have more on top-binned chips.
M3: 6P/4E, 192 bit bus, 12 GB RAM, 12 GPU cores
M3 Pro: 8P/8E, 384 bit bus, 24 GB RAM, 24 GPU cores
M3 Max: 12P/4E, 768 bit bus, 48 GB RAM, 48 GPU cores.
 
I'm curious what they'll be doing with the Pro configuration if this is their Max configuration. I don't assume many people buying the Max would really care about maximizing battery life, so they could probably afford to throw as many cores as they want at them.

(Not that battery life and energy efficiency don't matter, we all remember the thermal dumpster disaster of the i9 MacBooks. But I don't think Apple will be repeating that anytime soon on Apple Silicon. 😂)

Given the uncertainty regarding whether this 16/40 core M3 Max variant is the base SKU or the top end SKU, it's hard to say for sure where the Pro would slot in as far as core counts go. Assuming this is a 3nm part, 40 cores could easily be the base config given the ability to fit more transistors into the same die area.
 
|        | LPDDR5X BW | % Extra | Extra GB/s | CPU Cores  | GPU Cores | Total              |
| M3     | 100 GB/s   | 0%      | 0 GB/s     | Same       | Same      | 4P + 4E, 10 Cores  |
| M3 Pro | 272 GB/s   | 36%     | 72 GB/s    | +2 E Cores | +1 Core   | 8P + 6E, 20 Cores  |
| M3 Max | 544 GB/s   | 36%     | 144 GB/s   | +4 P Cores | +2 Cores  | 12P + 4E, 40 Cores |

The leaked specs of the M3 Max caught me by surprise too, until I realized that with the same bus width, the M3 Max is going to get an extra 144 GB/s of memory bandwidth. All three new SoCs are getting 36% extra memory bandwidth, but the M3 Max gains the most thanks to its 512-bit memory bus. And no, I don't think Apple needs to widen the memory bus at all :rolleyes:

I put all three SoCs in the table above to show the memory bandwidth improvement. So far we know the M3 Max is getting an extra 4 P cores and 2 GPU cores, which doesn't sound like a lot, but you have to factor in the CPU and GPU architecture/clock improvements across the M3 family. That's why I am not sure about the CPU core count of the M3 Pro. If Apple opts for GPU improvements in the M3 and M3 Pro, will the CPU core counts remain the same? We shall see..
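The bandwidth rows above fall straight out of bus width times per-pin data rate. A quick sketch of that arithmetic; the LPDDR5X-8500 transfer rate and the comparison against Apple's quoted M2-family figures are my assumptions, chosen because they reproduce the table's numbers:

```python
def peak_bw_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_bits / 8 * mt_per_s / 1000

# Apple's quoted bandwidth for the corresponding M2-family chips (GB/s)
m2_quoted = {"M3": 100, "M3 Pro": 200, "M3 Max": 400}

for name, bus_bits in [("M3", 128), ("M3 Pro", 256), ("M3 Max", 512)]:
    bw = peak_bw_gbs(bus_bits, 8500)   # assumed LPDDR5X-8500
    extra = bw - m2_quoted[name]
    print(f"{name}: {bw:.0f} GB/s, +{extra:.0f} GB/s (+{extra / m2_quoted[name]:.0%})")
```

Each chip lands at exactly +36% over its quoted predecessor, matching the "% Extra" column.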
 
|        | LPDDR5X BW | % Extra | Extra GB/s | CPU Cores  | GPU Cores | Total              |
| M3     | 136 GB/s   | 36%     | 36 GB/s    | ?          | +2 Cores  | ??                 |
| M3 Pro | 272 GB/s   | 36%     | 72 GB/s    | ?          | +5 Cores  | ??                 |
| M3 Max | 544 GB/s   | 36%     | 144 GB/s   | +4 P Cores | +2 Cores  | 12P + 4E, 40 Cores |

The leaked specs of the M3 Max caught me by surprise too, until I realized that with the same bus width, the M3 Max is going to get an extra 144 GB/s of memory bandwidth. All three new SoCs are getting 36% extra memory bandwidth, but the M3 Max gains the most thanks to its 512-bit memory bus. And no, I don't think Apple needs to widen the memory bus at all :rolleyes:

I put all three SoCs in the table above to show the memory bandwidth improvement. So far we know the M3 Max is getting an extra 4 P cores and 2 GPU cores, which doesn't sound like a lot, but you have to factor in the CPU and GPU architecture/clock improvements across the M3 family. That's why I am not sure about the CPU core count of the M3 Pro. If Apple opts for GPU improvements in the M3 and M3 Pro, will the CPU core counts remain the same? We shall see..
The M3 Max has 48 GB of RAM, which suggests a wider bus.
The 36 GB of RAM in the leaked M3 Pro specs also suggests the bus is 50% wider.
 
The M3 Max has 48 GB of RAM, which suggests a wider bus.
The 36 GB of RAM in the leaked M3 Pro specs also suggests the bus is 50% wider.

They don't suggest that at all. If I recall the previous discussion correctly, you are deeply laboring under the presumption that Apple is buying the cheapest commodity LPDDR RAM packages off the shelf. Apple isn't buying commodity RAM packages. Their stuff is semi-custom; it isn't off the shelf. It is more a matter of stacking the right number of dies of the correct capacity than of expanding the bus width.

There is really no room around a plain Mn package Apple has been using to slap another RAM package in there to make the bus wider. The "oh, just use a bigger package" excuse isn't going to fly so well inside an iPad Air/Pro. There is no copious extra room. Similarly with the iMac 24" chin.

Apple's $/GB prices for RAM are high enough that they can get semi-custom packages and still make a tidy profit. They don't desperately need to scrape the bottom of the barrel with the cheapest commodity RAM packages possible.
 
I think there is a case for making the memory bus wider, as it would allow them to improve GPU performance and RAM capacity while still using the more available and cheaper LPDDR5 instead of 5X. A wider bus costs more power, but this is where Apple's recently published patents on powering down memory controllers and memory-folding schemes might come into play. Of course, a wider bus itself costs more, but it might be cheaper/more feasible than sourcing enough LPDDR5X. And surely, this is just speculation.
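To make that tradeoff concrete, here is a rough check showing that a wider, slower LPDDR5 bus can land almost exactly where a narrower LPDDR5X bus does. The bus widths and rates here are illustrative assumptions, not confirmed specs:

```python
def peak_bw_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_bits / 8 * mt_per_s / 1000

wide_lpddr5 = peak_bw_gbs(512, 6400)     # 512-bit LPDDR5-6400  -> 409.6 GB/s
narrow_lpddr5x = peak_bw_gbs(384, 8533)  # 384-bit LPDDR5X-8533 -> ~409.6 GB/s
print(wide_lpddr5, narrow_lpddr5x)       # nearly identical peak bandwidth
```

So, at least on paper, going 33% wider at LPDDR5 speeds buys about the same peak bandwidth as staying narrower on top-speed LPDDR5X.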
 
They don't suggest that at all. If I recall the previous discussion correctly, you are deeply laboring under the presumption that Apple is buying the cheapest commodity LPDDR RAM packages off the shelf. Apple isn't buying commodity RAM packages. Their stuff is semi-custom; it isn't off the shelf. It is more a matter of stacking the right number of dies of the correct capacity than of expanding the bus width.

There is really no room around a plain Mn package Apple has been using to slap another RAM package in there to make the bus wider. The "oh, just use a bigger package" excuse isn't going to fly so well inside an iPad Air/Pro. There is no copious extra room. Similarly with the iMac 24" chin.

Apple's $/GB prices for RAM are high enough that they can get semi-custom packages and still make a tidy profit. They don't desperately need to scrape the bottom of the barrel with the cheapest commodity RAM packages possible.
Apple surely isn't using bottom-of-the-shelf components, but $200 for an additional 8 GB... there's quite a hefty profit margin in that. It is much, much more expensive than market price even compared against higher-end retail, and wholesale prices are always substantially cheaper.
 
They don't suggest that at all. If I recall the previous discussion correctly, you are deeply laboring under the presumption that Apple is buying the cheapest commodity LPDDR RAM packages off the shelf. Apple isn't buying commodity RAM packages. Their stuff is semi-custom; it isn't off the shelf. It is more a matter of stacking the right number of dies of the correct capacity than of expanding the bus width.

There is really no room around a plain Mn package Apple has been using to slap another RAM package in there to make the bus wider. The "oh, just use a bigger package" excuse isn't going to fly so well inside an iPad Air/Pro. There is no copious extra room. Similarly with the iMac 24" chin.

Apple's $/GB prices for RAM are high enough that they can get semi-custom packages and still make a tidy profit. They don't desperately need to scrape the bottom of the barrel with the cheapest commodity RAM packages possible.
LMAO.
Everything Apple offers uses bog-standard RAM chips in a different package. They package them themselves.

And yes, Apple orders bog-standard DRAM chips from everybody who makes them. Ordering billions of semi-custom memory chips would be extremely expensive even for a company like Apple.

Apple has not shown, even once, that they offer any custom DRAM solution beyond what is available on the market. Guess why? Because they simply order what is available on the market.

36 GB of RAM is possible ONLY in 192- and 384-bit configurations. 48 GB is possible in 192-, 384-, 256- and 128-bit configs, but the narrower ones would mean a downgrade from the previous generation in sheer bandwidth while the CPU and GPU sizes are increasing.

So which options does that leave us?

A 192-, 384- or 768-bit bus, with bog-standard 2 GB and 4 GB RAM chips.
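The capacity-versus-bus-width argument can be checked with a quick enumeration. The assumptions here are mine for illustration: each RAM package is 64 bits wide and comes only in a handful of standard capacities; real package options may differ.

```python
# Assumed standard per-package capacities in GB
PACKAGE_GB = {4, 6, 8, 12, 16, 24}

def feasible_bus_widths(total_gb, widths=(128, 192, 256, 384, 512, 768)):
    """Bus widths that can reach `total_gb` using identical 64-bit packages."""
    feasible = []
    for width in widths:
        packages = width // 64  # one 64-bit package per slice of the bus
        if total_gb % packages == 0 and total_gb // packages in PACKAGE_GB:
            feasible.append(width)
    return feasible

print(feasible_bus_widths(36))  # [192, 384] -- matches the 36 GB claim above
print(feasible_bus_widths(48))  # 48 GB divides evenly at every width here
```

Under these assumptions, 36 GB really does force a 192- or 384-bit bus, while 48 GB is ambiguous about width, which is the crux of the disagreement.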
 
I think there is a case for making the memory bus wider, as it would allow them to improve GPU performance and RAM capacity while still using the more available and cheaper LPDDR5 instead of 5X. A wider bus costs more power, but this is where Apple's recently published patents on powering down memory controllers and memory-folding schemes might come into play. Of course, a wider bus itself costs more, but it might be cheaper/more feasible than sourcing enough LPDDR5X. And surely, this is just speculation.
If you are bringing ray tracing to the Apple GPUs, you need ALL OF THE MEMORY BANDWIDTH, and then some more on top of it.
 
I think there is a case for making the memory bus wider, as it would allow them to improve GPU performance and RAM capacity while still using the more available and cheaper LPDDR5 instead of 5X.

Where is "cheaper" really a huge driver here when Apple is charging a heavy price premium? Apple's "poor man's HBM" is still likely to be more expensive than generic LPDDR5 or 5X, and they charge more than enough to compensate for that. Apple isn't looking for a cheaper bill of materials so they can pass savings along to customers. It is going to line their own pockets before any customer wins (e.g. the kneecapped SSDs they are selling in the entry-level configs; again, not charging "crazy Eddie" low, low prices for those either).

If they could stuff a wider bus into the same semi-custom stacked packages, this would be a far more viable option. I'm a bit skeptical they can do more than they already do in running four actively concurrent memory buses into the package stack. Yet another 4 channels into the two plain Mn packages, routed even higher up into a vertical stack... err, maybe not. At the very least, it doesn't sound inexpensive.

The problem with throwing another x64-wide package at it is that the package is "soldered"/"glued" vertically onto the package substrate, so it sucks up considerably more horizontal space on the logic board. The iPad boards are constrained. The MBP 14"/16" boards are likewise relatively constrained for horizontal space (even though the bigger boards are also being slapped with substantially larger dies and a higher multiple of RAM packages). It just doesn't scale well in terms of efficient space consumption.

If Apple drops LPDDR5X into the iPhone Pro, they will have scale for the baseline building-block memory dies. And the horizontal fan-out limitations are even tighter there... so it is even less clear why they would avoid 5X there for as long as possible. Do they want to be the first consumer part out of the gate? No. Do they want to kick the can for 3-4 years? Also no.

(Samsung announced LPDDR5X back in 2021. For a product that runs from late 2023 forward, that isn't exactly bleeding edge.)

The whole "cheap path to wider bus width" works fine for add-in GPU cards with copious board space, which also happens to be pushed into a different dimension than the main logic board (vertical inside of horizontal, in canonical orientation). On the main horizontal logic board... not so much. (And that is why DDR5 is far more into banks that share relatively limited bus-width paths.)


A wider bus costs more power,

And when is the last time Apple got up on stage and preached a sermon about Perf/Watt?

but this is where Apple's recently published patents on powering down memory controllers and memory-folding schemes might come into play. Of course, a wider bus itself costs more, but it might be cheaper/more feasible than sourcing enough LPDDR5X. And surely, this is just speculation.

Apple has already GOT super-wide buses (compared to most of the "CPU coupled to xDDRy" package space). There is a decent chance those solutions go more toward solving the problems they already have (even with LPDDR5X, but coupled to the Perf/Watt dogma objectives).

If ultimate ultra-wide buses mattered and costs didn't, they could have picked HBM already. HBM is getting incrementally better, and Apple is going to have to keep up if they want to maintain their relative position as the "poor man's HBM". Otherwise they are going to get "smoked".
 
Where is "cheaper" really a huge driver here when Apple is charging a heavy price premium? Apple's "poor man's HBM" is still likely to be more expensive than generic LPDDR5 or 5X, and they charge more than enough to compensate for that. Apple isn't looking for a cheaper bill of materials so they can pass savings along to customers.

What I am primarily wondering about is whether Apple's partners are able to manufacture enough LPDDR5X to satisfy their humongous demand. The M1, for example, was still using LPDDR4 even though the next standard was already widely available, most likely because there was not enough supply in time to satisfy the millions of units Apple needs.
 
What I am primarily wondering about is whether Apple's partners are able to manufacture enough LPDDR5X to satisfy their humongous demand.

Which universe is this happening in?

Meanwhile in 2023 ...

January 2023

"...
The low-tech tactic comes at a time when many chipmakers have announced workforce cuts in response to a supply/demand mismatch—most recently and notable is Micron Technology. Meanwhile, Intel is implementing a series of job cuts due in part to a PC slump.

To further muddy the waters, the cuts come at a time when both Micron and Intel are investing in new manufacturing facilities in the U.S., spurred by the recent CHIPS and Science Act. Micron plans to invest up to $100 billion over the next 20 years to build a chip facility near Clay, N.Y.
... "
https://www.eetimes.com/memory-industry-to-hit-muddy-waters-in-2023/



March 2023
"... For its second quarter of fiscal 2023 Micron posted a year-over-year revenue drop of nearly 53% and said its earnings will decline further in the ongoing quarter as demand for 3D NAND and DRAM remains soft. ..."

June 2023

" ... The top 10 contract makers of chips saw their Q1 2023 revenue decline by 14.6% year-over-year and 18.6% quarter-over-quarter, according to the most recent report by TrendForce.
... "

Samsung does both contract fabrication and memory, but if folks are buying fewer SoCs, then the amount of RAM being bought isn't likely going up. If you didn't buy a SoC, you don't need to buy RAM to go with it.


Mac sales are down. iPhone sales are down. This universe is mostly in the "bust" end of the boom/bust cycle.

All of this was relatively predictable back in the 2021 time frame. A bust cycle was bound to kick in after tons of folks bought huge amounts of computers/phones/etc. on borrowed money like drunken sailors on shore leave after a two-month cruise. For anyone who has been through 2-3 of these memory/storage boom/bust cycles, none of this was "new".



M1 for example was still using LPDDR4 even though the next standard was already widely available - most likely because there was not enough supply in time to satisfy the millions of units Apple needs.

Apple used that LPDDR4 like it was LPDDR5. The M1 Pro and Max used LPDDR5, and I doubt Apple made a substantive change to the memory controller between the two. The initial M1 production run happened in the first 9 months of a "biggest in a century" worldwide pandemic; Apple might have wanted to buy whatever they could get.

Second, who said it would be anything like "millions of units" in the short term? If Apple leads off with an iMac 24" M3... then staggers a MBP 13" M3 launch... then staggers a MBA 13" launch... then it won't be millions in 2023.

Yeah, some stuff ramps to volume slowly after the hyped-up early announcements.

"...
However, Samsung did not say when the new LPDDR5X memory modules will be available for commercial application. Notably, the Korean electronics giant announced its first LPDDR5 module in 2018, but the first smartphone featuring this technology did not appear until two years later in 2020 ..."

That was November 2021, and two years later is November 2023.
The OnePlus Nord 3 has LPDDR5X (it uses a Dimensity 9000) and it is shipping now.

There are other Dimensity 9000-series phones out there also.

The S23 uses LPDDR5X:
"... Tipster Ice Universe has confirmed that the Galaxy S23, Galaxy S23+, and Galaxy S23 Ultra use LPDDR5X RAM and UFS 4.0 storage. ..."

Is the iMac 24" going to outsell the S23 in unit volume? Probably not. Apple avoiding LPDDR5X for the iPhone 15? Maybe. But Apple can use a long-term contract "carrot" that loops in the iPhone 16 and other longer-term memory package buys to get discounts earlier.


In late '22 or early '23 it could have been sketchy. In late '23 ... not so much.


Also from that Digital Trends story:

" ... When sampled for the Dimensity 9000, Micron’s LPDDR5X solution achieved data transfer speeds of up to 7500Mbps. Although it’s higher than the 6400Mbps transfer speed of LPDDR5, it still lags behind the highest transfer rates (8533Mbps) supported by the new standard. ..."

LPDDR5X is "up to 8533". If the slower-than-maximum parts reach higher volume sooner, that is also an option. What Apple primarily needs is slightly lower power while being faster than the LPDDR5 maximum.


In short, I am a bit skeptical that the short-term plain M3 SoC unit volume is going to swamp both Samsung and Micron, given the substantively late arrival of the M3.
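For reference, the per-pin data rates being thrown around in this exchange work out to the following uplifts over LPDDR5; the rates are as quoted in the article, the percentage arithmetic is mine:

```python
LPDDR5_MAX = 6400    # LPDDR5 top per-pin rate, Mbps
MICRON_5X = 7500     # Micron LPDDR5X as sampled for the Dimensity 9000
LPDDR5X_MAX = 8533   # LPDDR5X standard ceiling

def uplift_pct(new_rate: int, base_rate: int) -> float:
    """Percent speed increase of new_rate over base_rate."""
    return round((new_rate / base_rate - 1) * 100, 1)

print(uplift_pct(MICRON_5X, LPDDR5_MAX))    # 17.2 -- the shipping part
print(uplift_pct(LPDDR5X_MAX, LPDDR5_MAX))  # 33.3 -- the spec ceiling
```

So even the early, slower LPDDR5X parts beat the LPDDR5 ceiling by a healthy margin, which supports the "Apple mainly needs faster-than-LPDDR5 at lower power" point.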
 
When I saw 40 GPU cores in the leaked specs, I assumed it meant 40 cores that are "better" than the M2 Max's, with newer core designs, potentially denser clusters, and additional features like ray tracing.

In other words, I don't think it will be an apples-to-apples (cores-to-cores) comparison.
 

If this is true, then it's confirmed that the M3 Max would have up to 48 GPU cores.

16 CPU cores and 48 GPU cores. That's quite an upgrade from the M1 Max: 4 more P cores, 2 more E cores, 16 more GPU cores, and probably more NE cores.
 

If this is true, then it's confirmed that the M3 Max would have up to 48 GPU cores.

16 CPU cores and 48 GPU cores. That's quite an upgrade from the M1 Max: 4 more P cores, 2 more E cores, 16 more GPU cores, and probably more NE cores.

The 40-core version could indeed be a binned version. However, it doesn't preclude Apple from having odd GPU core configurations either.

Below are the NON-binned cascades of chip structures....

A15/16 = 6c (2p + 4e) CPU / 5c GPU
consider this a base config of the Apple Silicon

M2 = 8c (4p + 4e) / 10c GPU
Doubling of the performance cores over A16, doubling of the GPU cores

M2 Pro = 12c (8p +4e) / 20c (19c) GPU
Doubling of the performance cores again over M2, doubling of the GPU cores (1 disabled)

M2 Max = 12c (8p +4e) / 40c (38c) GPU
Doubling of the GPU cores over M2 Pro (2 disabled)

So if we were to assume Apple will follow the same cascade of hardware scaling from base design up to the Max die, then....

A17 = 6c (2p + 4e) CPU / 6c GPU
Consider this a base config of the next generation Apple Silicon, +1 extra GPU core

M3 = 8c (4p + 4e) / 12c GPU
Doubling of the performance cores over A17, doubling of the GPU cores

M3 Pro = 12c (8p +4e) / 24c (22-23c?) GPU
Doubling of the performance cores again over M3, doubling of the GPU cores (1-2 disabled)

M3 Max = 16c (12p +4e) / 48c (46-47c?) GPU
Triple the M3 base performance cores, doubling of the GPU cores over M3 Pro (2 disabled)
This is the odd one out: insofar as we know, Apple is adding an extra 4 P cores here, and POSSIBLY to the M3 Pro as well.

M2 generation had binned GPU counts of 8c (-2c) on the base M2, 16c (-4c) on the Pro and 30c (-10c) on the Max, based on 10/20/40 core GPUs.

M3 generation could then see binned GPU counts of 10c (-2c) on the base M3, 20c (-4c) on the Pro and 38c (-10c) on the Max.

Of course, yield rates and GPU designs could dictate that the pattern just doesn't carry over. A "binned" Max MIGHT be 40 cores, which is -8c from the full version. It's all guesswork, really, isn't it?
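The doubling cascade described above is mechanical enough to write down. A sketch of it; the function and the ladder framing are mine, and note that the rumored M3 Max breaks the pattern by adding 4 extra P cores:

```python
def cascade(p_cores: int, e_cores: int, gpu_cores: int):
    """Scale a phone-class base config up the base -> M -> Pro -> Max ladder."""
    base = (p_cores, e_cores, gpu_cores)
    m = (2 * p_cores, e_cores, 2 * gpu_cores)  # M: double P cores and GPU
    pro = (2 * m[0], e_cores, 2 * m[2])        # Pro: double P again, double GPU
    mx = (pro[0], e_cores, 2 * pro[2])         # Max: same CPU, double GPU again
    return {"base": base, "M": m, "Pro": pro, "Max": mx}

print(cascade(2, 4, 5))  # A15/16 base reproduces the M2 Max at (8, 4, 40)
print(cascade(2, 4, 6))  # A17 base gives a Max of (8, 4, 48); rumor says 12P
```

Feeding in the A15/16 config reproduces the whole M2 family, while the A17 config lands on 48 GPU cores for the Max, with only the CPU side deviating from the rumor.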
 
Like I said earlier... ;^P

Throw LPDDR5X into the mix for increased capacity and UMA bandwidth to feed more GPU cores (and ray-tracing hardware); chips up to 64 GB would give up to 256 GB for an M3 Max SoC package, slightly less if Apple decides to go with an in-line method for ECC...?
 
The other way they could go is to make the base M3 CPU 6P + 4E, giving us a 10-core CPU, which would make doubling the P cores to 12 for the Pro/Max more feasible.
The M3 should be 6P/4E with a 12-core GPU. The 40-core GPU is just the base version of the M3 Max chip, just like the 30-core is the base version of the 38-core M2 Max.

6 > 12 > 24 > 48 GPU cores. Perfect scaling from iPhone to M3 Max.
 
The M3 should be 6P/4E with a 12-core GPU. The 40-core GPU is just the base version of the M3 Max chip, just like the 30-core is the base version of the 38-core M2 Max.

6 > 12 > 24 > 48 GPU cores. Perfect scaling from iPhone to M3 Max.
If this rumor is correct, M3 is staying at 8 CPU cores:

 
I have bad feelings about the M3's memory support after learning that the A17 only supports LPDDR5. I am afraid Apple will reserve LPDDR5X support for the M3 Pro and above...

Isn't the only difference between LPDDR5/X/T the operating frequency? I'd expect a controller that supports faster memory to be backwards compatible with slower standards.
 

I have bad feelings about the M3's memory support after learning that the A17 only supports LPDDR5. I am afraid Apple will reserve LPDDR5X support for the M3 Pro and above...

The M3 has enough features for midrange system support, and Apple could use a bigger memory size like 12 GB to justify a higher price point...
There won't be LPDDR5X support on any of the first-generation 3 nm chips. They will all be LPDDR5-6400.
 