Question is whether it is "practical" to have 1.5 TiB at all...

Sure, but the point is the amount of memory you can practically put in the package is limited, and we so far haven't seen how Apple plans to tackle that for higher-end Macs.

b) Flash-based storage is getting faster and faster

So is RAM.

and you can run them in "parallel" (if your chip has enough I/O) to the point that it might be just as fast as having multiple RAM chips/modules hanging on a bus several inches long.

There's an order of magnitude difference, even ignoring latency issues on top of that (the SSD is not on-package).

Apple's SSDs tend to top out at ~3 GiB/s (this will probably improve when Apple moves to PCIe 4 or 5). Their LPDDR4X RAM? ~30 GiB/s.
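To put those figures in perspective, here is a quick back-of-the-envelope sketch in C using the numbers quoted above; the 16 GiB working-set size is an arbitrary example, not a measured workload.

```c
#include <stdio.h>

/* Rough arithmetic only: ~3 GiB/s for the SSD and ~30 GiB/s for the RAM are
 * the figures quoted above; the 16 GiB working set is a made-up example. */
int main(void) {
    const double working_set_gib = 16.0;
    const double ssd_gibps = 3.0;
    const double ram_gibps = 30.0;

    printf("SSD: %.1f s to move %.0f GiB\n", working_set_gib / ssd_gibps, working_set_gib);
    printf("RAM: %.1f s to move %.0f GiB\n", working_set_gib / ram_gibps, working_set_gib);
    return 0;
}
```

At those rates the SSD needs roughly five seconds to move what the RAM moves in about half a second, before any latency difference is even counted.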


The rest is just SW, as in better paging, better APIs, and better compilers making sure software gets back to being smart about RAM use.
Software can't make up this massive difference.
 
Apple's SSDs tend to top out at ~3 GiB/s (this will probably improve when Apple moves to PCIe 4 or 5). Their LPDDR4X RAM? ~30 GiB/s.

So? How much I/O does that SSD get compared to the RAM?

Even at a factor of 10, what compromise is good? Moving the RAM off-package and losing, say, 10% of performance just so you can have more? In the end, RAM and the SSD are just the last levels of a hierarchy that starts with the L1 cache.

Software can't make up this massive difference.

In most cases it could. Not sure if you've seen that video comparing two M1s, one with 8 GB and the other with 16 GB (I think it was MaxTech), where they ran mostly video-based tests and in all but the last one it made almost no difference. AFAIR the last one was some insane 8K file that only showed one thing: how poorly written the software was.
Those streaming-style tasks (any form of media processing) really only need space for the actual code and a relatively small buffer (100 MB per core is more than enough), assuming the I/O is capable of reading the original and saving the result fast enough.
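For anyone unfamiliar with the pattern, here is a minimal sketch of what that looks like in C; the file names and the 64 MB buffer size are placeholders, not anything a real encoder uses.

```c
#include <stdio.h>
#include <stdlib.h>

/* Streaming sketch: one fixed buffer (64 MB here, an arbitrary choice) is
 * reused for every chunk, so peak RAM use stays constant no matter how big
 * the input file is. File names are placeholders; error handling trimmed. */
#define BUF_SIZE (64u * 1024u * 1024u)

int main(void) {
    FILE *in  = fopen("input.mov", "rb");
    FILE *out = fopen("output.mov", "wb");
    unsigned char *buf = malloc(BUF_SIZE);
    if (!in || !out || !buf) return 1;

    size_t n;
    while ((n = fread(buf, 1, BUF_SIZE, in)) > 0) {
        /* ...transform/encode the chunk in place here... */
        fwrite(buf, 1, n, out);
    }

    free(buf);
    fclose(in);
    fclose(out);
    return 0;
}
```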

But for the past 20 years developers have learned that "alloc() never fails", and here we are "needing" 16 GB just to browse the web...
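That habit in miniature (a generic C illustration, not any particular app's code):

```c
#include <stdlib.h>
#include <string.h>

/* On a machine with plenty of RAM and memory overcommit, the NULL check
 * "never" fires, so it tends to get dropped and allocations quietly grow. */
void careless(size_t n) {
    char *p = malloc(n);
    memset(p, 0, n);        /* undefined behaviour if malloc ever does fail */
    free(p);
}

void careful(size_t n) {
    char *p = malloc(n);
    if (p == NULL) return;  /* the check that "alloc() never fails" skips */
    memset(p, 0, n);
    free(p);
}
```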
 
I think that's a good guess. As to binning by CPU cores, the question is how many chips fail to have the full number?
And with the M1, losing a "big core" would mean quite a performance hit, one that would be felt much more than losing one GPU core.

In the end it is also a question of logistics and keeping the store clean, because otherwise they end up with thousands of possible configs confusing the customer (which should be cheaper, a broken CPU or a broken GPU?).

So most likely there are enough broken GPUs to make that a separate product.
With modern die processes I’m betting far more often they are purposely disabling a core to make the lower-end chip than it just happens to have a failure in the die in the exact spot and way needed to disable one (and only one) core.
 
With modern die processes I’m betting far more often they are purposely disabling a core to make the lower-end chip than it just happens to have a failure in the die in the exact spot and way needed to disable one (and only one) core.

Well, there have been cases (some prior iPad Pro, the PS3) where a product was marked down because too few chips had the full spec (and those that had it got one core disabled).

What also happens is that yields improve over time, leading to chips being binned down/crippled anyway just to keep a separate lower-cost product available.

As for the 7-GPU M1, that only makes sense if they can disable any one of the 8 cores if it turns out to be broken, and given that it is a relatively big chip on a still somewhat new process, it's not so hard to believe that those yields exist "naturally".
 
No idea. I am surprised there hasn't already been binning on clock speed (though perhaps there has been! Who says iPad Pro M1s can hit the same max clock as iMacs?)
Isn’t having two 3 GHz Firestorm cores on the A14 but four 3.2 GHz Firestorm cores on the M1 basically just that?
 
Isn’t having two 3 GHz Firestorm cores on the A14 but four 3.2 GHz Firestorm cores on the M1 basically just that?

Only if A14s were born as M1s (as in having those cores).

In reality the A14 is a smaller chip lacking much more than just those two cores.

Now, if your question was "could an A14 boost to 3.2 GHz with sufficient cooling and power delivery?", maybe, but we'll never find out.
 
Only if A14s were born as M1s (as in having those cores).
They have the same cores.
In reality the A14 is a smaller chip lacking much more than just those two cores.
That’s true, but the binning here isn’t of the package, but rather of some cores on it.
Now, if your question was "could an A14 boost to 3.2 GHz with sufficient cooling and power delivery?", maybe, but we'll never find out.
It’s literally the same core.
 
but the binning here isn’t of the package, but rather of some cores on it.

Seems you're confusing package vs. chip.

All cores (slow, fast, GPU, Neural Engine, etc.) are on one chip, which then gets combined with RAM into a package.
So whether something is an A14 or an M1 gets determined when they "print" the wafer.
 
Seems you're confusing package vs. chip.

All cores (slow, fast, GPU, Neural Engine, etc.) are on one chip, which then gets combined with RAM into a package.
That is exactly what I said. Thank you.

Again: both A14 and M1 have Firestorm and Icestorm cores.
 
That is exactly what I said. Thank you.
Isn’t having two 3 GHz Firestorm cores on the A14 but four 3.2 GHz Firestorm cores on the M1 basically just that?

"Binning" is selecting what chip goes in what product, since there is no product using the A14 that goes beyond 3GHz the only binning here is discarding everything that fails that test.

Same with the M1, if a chip fails to meet the specs it is either discarded or used in a "lower" M1 product (7GPU MBA/iMac and maybe the IPP if that has a lower boost) but never in an A14 product.
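A toy sketch of that decision in C, purely to illustrate the logic described (the thresholds and function are hypothetical, nothing to do with Apple's actual test flow):

```c
/* Toy model of the binning described above: a tested M1 die either ships
 * with all 8 GPU cores, ships with one fused off, or is scrapped. It is
 * never rebadged as an A14. Thresholds are hypothetical. */
typedef enum { BIN_M1_8GPU, BIN_M1_7GPU, BIN_SCRAP } bin_t;

bin_t bin_m1_die(int cpu_cores_ok, int gpu_cores_ok) {
    if (cpu_cores_ok < 8)  return BIN_SCRAP;   /* no reduced-CPU M1 SKU exists */
    if (gpu_cores_ok >= 8) return BIN_M1_8GPU;
    if (gpu_cores_ok == 7) return BIN_M1_7GPU;
    return BIN_SCRAP;                          /* two or more bad GPU cores    */
}
```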
 
"Binning" is selecting what chip goes in what product, since there is no product using the A14 that goes beyond 3GHz the only binning here is discarding everything that fails that test.

Same with the M1, if a chip fails to meet the specs it is either discarded or used in a "lower" M1 product (7GPU MBA/iMac and maybe the IPP if that has a lower boost) but never in an A14 product.
That definition of binning (in bold) is somewhat ambiguous. "Binning" is the division of a production run of the same unit into different "quality" groups based on the outcome of testing. The baseline is the same model of chip: those that fail to meet some standard are put into a separate bin (a literal bin in some cases) and may be destined for lower-spec machines if not discarded.

A14 and M1 are (almost certainly) different chips and would not be binned across Mac and iPhone/iPad product lines. M1 chips are binned into those with 8 functioning GPUs and those with only 7 (of the 8 on the die).
 
A14 and M1 are (almost certainly) absolutely positively different chips and would not be binned across Mac and iPhone/iPad product lines. M1 chips are binned into those with 8 functioning GPUs and those with only 7 (of the 8 on the die).
Fixed. They have much in common (like the CPU core design) but they are different.

A14: 11.8 billion transistors

M1: 16 billion transistors

(images courtesy tip3x)

Now, if you told me that iPad Pros used M1s binned for lower clock speeds, then that I might believe. And I have to assume that the 7/8 core GPU business is a result of binning, because I can't think of any other conceivable reason for it.
 
And I have to assume that the 7/8 core GPU business is a result of binning, because I can't think of any other conceivable reason for it.
This got me thinking, though. If the current M1 already has problems with 8 GPU cores, wouldn't it be more difficult to get more CPU and GPU cores?
 
This got me thinking, though. If the current M1 already has problems with 8 GPU cores, wouldn't it be more difficult to get more CPU and GPU cores?
No. It’s highly unlikely that every 7-core GPU chip is an 8-core GPU chip with a failed GPU. Some likely are, but many are likely just 8-core GPU chips where a core is intentionally disabled even though it would work. We did that a lot at a prior employer of mine.
 
Let's remember that there are two possible upgrades here: a new core design, and scaling up the existing core design. And yes, a new core design can be combined with more cores as well.

For THIS new SoC that went into production, I assume it is still Firestorm/Icestorm but scaled up with more cores and cache and additional controllers. Probably an 8x4x16 (performance/efficiency/GPU) configuration with proportionally upscaled cache as well as additional controllers, and no doubt it supports more than 16 GB of RAM.
 
Let's remember that there are two possible upgrades here: a new core design, and scaling up the existing core design. And yes, a new core design can be combined with more cores as well.

For THIS new SoC that went into production, I assume it is still Firestorm/Icestorm but scaled up with more cores and cache and additional controllers. Probably an 8x4x16 (performance/efficiency/GPU) configuration with proportionally upscaled cache as well as additional controllers, and no doubt it supports more than 16 GB of RAM.
This has already been hashed out ad nauseum in this thread.

A scaled-up M1 with something like 8/4/16 cores is what we would call an M1X (who knows if that's what Apple would call it). Something with new core designs and probably on a new or updated process would be what we would call an M2 (or maybe even an M2X, if they're going to lead off with a scaled-up version for the 16" MBP and larger iMac.)

Because we're getting very close to the time when A15 production would be starting up (I confess I don't know their typical schedule with any precision), it's not out of the question that it's really an M2(X) that is getting fabbed right now, which is what this rumor is saying.

For our part, we simply do not yet know how Apple plans to handle the Apple Silicon product line, and so everything so far must be considered pure speculation. Hopefully we'll get some more solid rumors/leaks/news in the coming weeks and months, maybe even a product reveal at WWDC (that would be a great time to introduce the 16" MBP).
 
I disagree. The A series will be the main platform Apple develops for. iPhones dwarf laptops and desktops now. An M1 is a bad fit for iPhones.
The M1 has drastically different I/O requirements than the A series does; the minor increase in power requirements alone would kill an iPhone. The A series will be on the fastest cycle and get the newest hardware features, because iOS is Apple's premier platform and the requirements of the A chips are much simpler. Once new designs shake out in the A series, they'll be "scaled up/scaled out" a year later for the M-series chips.
Agreed, the M1 is basically a scaled-up A14 with extra bits added.

 
Agreed, the M1 is basically a scaled-up A14 with extra bits added.

That video reminds us that there's a lot more to the M1 than just the CPU, GPU and RAM. It also has specialized blocks devoted to image processing, video processing, and audio processing, as well as a 16-core neural engine, all of which can contribute to its speed, depending on the task. I wonder whether, and to what extent, these will be upgraded as well.
 
No. It’s highly unlikely that every 7-core GPU chip is an 8-core GPU chip with a failed GPU.

None of us knows how high the failure rate is for the GPU cores. We also don't know how many 7-GPU base models are sold.

What we do know is that 5 nm is still young, and that Apple did disable one GPU core for the A12X but managed to have all 8 enabled for the A12Z.

So yes, it is likely that there are plenty of broken M1s, and so far anything with a defect other than a single bad GPU core (or with two bad GPU cores) is discarded. The fact that they added it as a low-end option for the iMac IMO suggests there are still plenty of broken chips.

This got me thinking, though. If the current M1 already has problems with 8 GPU cores, wouldn't it be more difficult to get more CPU and GPU cores?

Nope. Let's say full chips have 8/4/16 (fast/slow/GPU); then they will simply have a base option like 6/4/12, where everything with a broken core gets "broken" some more to fit.
 
None of us knows how high the failure rate is for the GPU cores. We also don't know how many 7-GPU base models are sold.

What we do know is that 5 nm is still young, and that Apple did disable one GPU core for the A12X but managed to have all 8 enabled for the A12Z.

So yes, it is likely that there are plenty of broken M1s, and so far anything with a defect other than a single bad GPU core (or with two bad GPU cores) is discarded. The fact that they added it as a low-end option for the iMac IMO suggests there are still plenty of broken chips.



Nope. Let's say full chips have 8/4/16 (fast/slow/GPU); then they will simply have a base option like 6/4/12, where everything with a broken core gets "broken" some more to fit.

Any yield issue that affected GPUs to the extent you are thinking would also affect CPU cores. I just don’t think the failure rate is that high, based on my experience binning CPUs.
 
Any yield issue that affected GPUs to the extent you are thinking would also affect CPU cores.
Maybe, maybe not, but adding a third option (with 3 big CPU cores) just wasn't an option, so those are discarded.

In the end, with a process that is right on the edge of what is possible and billions of potential failure points, what is acceptable? 1%? 10%? Which brings us back again to: how many M1 products ship with 7 GPU cores? 10%? More? Less?
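One rough way to frame it is a simple binomial model, with an entirely hypothetical per-core defect rate (the 0.03 below is made up for illustration, not real yield data):

```c
#include <stdio.h>
#include <math.h>

/* Rough binomial model, not real yield data: assume each of the 8 GPU cores
 * independently has defect probability p. Then P(all 8 good) = (1-p)^8 and
 * P(exactly 1 bad) = 8*p*(1-p)^7, i.e. the shares that could ship as
 * 8-GPU and 7-GPU parts respectively. */
int main(void) {
    const double p = 0.03; /* hypothetical per-core defect rate */
    double all_good = pow(1.0 - p, 8);
    double one_bad  = 8.0 * p * pow(1.0 - p, 7);

    printf("8-GPU candidates: %.1f%%\n", 100.0 * all_good);
    printf("7-GPU candidates: %.1f%%\n", 100.0 * one_bad);
    printf("2+ bad GPU cores: %.1f%%\n", 100.0 * (1.0 - all_good - one_bad));
    return 0;
}
```

Even with that made-up 3% figure, roughly a fifth of dies would have exactly one bad GPU core, which would at least be consistent with the 7-GPU bin being worth a separate SKU.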
 
I think pretty much everyone knows how Apple's main SoC families work.

The underlying architectures are:

Performance CPU = Firestorm

Efficiency CPU = Icestorm

Unknown name GPU core

Unknown name ML core


The difference between the A and M families is more about how many of each, how much cache, what type of controllers, and how much and what type of RAM.

Right now we are at A14 / M1. When the next core architectures land, we go to A15 and whatever the corresponding Mac SoC will be named.
 