No.
The NAND chips are spread over two daughter logic boards, but the metadata describes just one drive. The usage of the drive volume is spread out, but if either one of those boards fails, the whole thing fails. Conceptually it is RAID-0. But all SSDs whose write and read speeds are anywhere near equal use that general concept.

There is no SSD controller on those daughter cards. They are not 'drives'. Each is a 'brainless' subcomponent.
I think this is kind of a semantic bit. Higher-capacity Macs have both slots filled, but the 512GB model just has one. I don't think it is RAID 0 but more like JBOD. While read speeds go up the more space you have (3,400 MB/s at 512GB up to 8,100 MB/s at 8TB), write speeds seem to hover around 5,500 MB/s. As there are only two cards in there, it isn't like there is some way for there to be 5 different read speeds. They may not be normal SSDs, but calling them SSDs is reasonable enough.

But it isn't.

back in 2023

Generally, PCI-e v5 x4 is a 128Gb/s conduit, not 80Gb/s.


2025
"... The P510's performance does not surpass that of the T705, which provides sequential read speeds of 14,500 MB/s and write speeds of 12,700 MB/s. The drive falls slightly short of the T700, Crucial's first PCIe 5.0 drive, which offered read speeds of 11,700 MB/s and write speeds of 9,500 MB/s. ..."

14.5GB/s ==> 116Gb/s, not 80Gb/s


2025-26

( 27GB/s * 8 = 216Gb/s, not 80Gb/s )
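Putting the unit conversions in one place (a minimal sketch; the drive figures are the ones quoted above, and the PCIe rates are raw, pre-encoding numbers):

```python
# Sanity check of the unit math used in this thread.
# Gb/s = GB/s * 8; PCIe raw rate is GT/s per lane (~Gb/s, pre-encoding).

PCIE_GT_PER_LANE = {4: 16, 5: 32, 6: 64}  # GT/s per lane, per generation

def pcie_raw_gbps(gen: int, lanes: int) -> int:
    """Raw link bandwidth in Gb/s, ignoring encoding/protocol overhead."""
    return PCIE_GT_PER_LANE[gen] * lanes

print(pcie_raw_gbps(5, 4))  # 128 -- a x4 PCIe 5.0 link, well above 80 Gb/s
print(14.5 * 8)             # 116.0 -- the T705's 14,500 MB/s read, in Gb/s
print(27 * 8)               # 216 -- a ~27 GB/s PCIe 6.0 demo drive, in Gb/s
```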
PCIe 6 isn't on the market yet. Test cases don't count. Yes, they demoed it, but that doesn't mean it will be on the market soon. Maybe the next Mac Pro (which doesn't exist yet) will show up with TB6 (which also doesn't exist yet).

Is TBv5 "fast enough" for most general workloads and general users. Probably. However, that doesn't mean that TBv5 is outpacing modern, leading edge NVME SSD drives. It is not.

This is the crux of it. Sure, if you happen to have $20 million to throw at the problem, you could buy that test drive from Micron. But, for those of us not playing golf with Musk and Bezos, TB5 is about as fast as anything we're likely to buy on the market, today.
Tomorrow? Well, that is another story.
 
I don't think it is RAID 0 but more like JBOD.

But it’s not “a bunch of disks”. It’s just one disk. The same way an HDD with four platters isn’t four disks but just one, because there’s only one controller (in Apple’s case, in the SoC).
 
We need some more data to determine NAND speed on the new Studios, since a 16TB option is now present, which clearly uses the 2TB NAND chips that were first used on the M4 Pro mini; Apple has probably changed the NAND distribution across the 1/2/4/8TB SKUs for logistics reasons.

In general, one daughter card is the lower speed tier; two cards is the higher.
 
I think this is kind of a semantic bit. Higher-capacity Macs have both slots filled, but the 512GB model just has one. I don't think it is RAID 0 but more like JBOD. While read speeds go up the more space you have (3,400 MB/s at 512GB up to 8,100 MB/s at 8TB), write speeds seem to hover around 5,500 MB/s.

That is actually more evidence for the presence of a "RAID 0"-like effect, as opposed to the opposite (none, i.e. "just a bunch of independent disks").

Write speed lower than read speed is just a primary property of the low-level NAND chips. Writes to NAND logic chips are dramatically slower than reads. In some contexts whole blocks have to be rewritten even for a small, sub-block change, which means you have to read out the unchanged data to include in the block rewrite along with the new changes. There is no way that is going to be as fast as a read, which (very hopefully) does not change the surrounding data at all.

Even worse when you start storing 2, 3, or 4 bits in each cell.
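A toy illustration of that read-modify-write penalty, with made-up block and page sizes (not any specific NAND part):

```python
# Toy model: updating one page inside a NAND block forces a
# read-modify-write of the entire block. Sizes are illustrative only;
# real blocks hold far more pages.

PAGES_PER_BLOCK = 16
PAGE_SIZE = 4096  # bytes

def rewrite_block(block: list[bytes], page_index: int, new_page: bytes) -> list[bytes]:
    """Sub-block update: read the unchanged pages, merge, erase, reprogram all."""
    merged = list(block)           # read out the untouched data
    merged[page_index] = new_page  # apply the small change
    return merged                  # the whole block gets erased and reprogrammed

block = [bytes(PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]
updated = rewrite_block(block, 3, b"\xff" * PAGE_SIZE)

# One page of new data cost a full block's worth of program work:
print(f"write amplification ~ {PAGES_PER_BLOCK}x for a single-page update")
```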


As there are only two cards in there, it isn't like there is some way for there to be 5 different read speeds.

Of course there is a way. The NAND packages for different capacities are not 100% uniform in construction. Some packages have more raw NAND dies in them than others (so the number of dies being 'talked to' varies). Between generations, the number of 'layers' in the NAND construction differs (again, the number of channels of flow changes).

The primary reason Apple is using two logical daughterboards is to add more capacity using the same density of packages: use more of the same NAND package, as opposed to using a higher-capacity-density package (a higher-density package is typically more expensive, as it is typically newer, bleeding-edge tech). I.e., Apple can squeeze more economies of scale out of buying more of the same part in higher numbers, as opposed to more parts each in smaller numbers.

The helpful side effect is that spreading the capacity over more packages basically guarantees there are more memory channels that can be leveraged in parallel (e.g., a wider RAID-like stripe; more concurrent reads are faster than just two, or just one).
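A rough sketch of that RAID-0-like effect, assuming per-package and host-link numbers that are made up for illustration:

```python
# Illustrative only: striping reads across more NAND packages raises
# sequential throughput until the host-side link becomes the ceiling.
# Both figures below are assumptions, not Apple's actual numbers.

CHANNEL_MBPS = 1100  # per-package read throughput (assumed)
HOST_MBPS = 8100     # host-side ceiling (assumed)

def striped_read_mbps(packages: int) -> int:
    return min(packages * CHANNEL_MBPS, HOST_MBPS)

for n in (1, 2, 4, 8):
    print(f"{n} package(s): ~{striped_read_mbps(n)} MB/s")
```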


They may not be normal SSDs, but calling them SSDs is reasonable enough.

This is akin to calling a zebra a horse primarily because you have seen more horses than zebras in your lifetime.
They are only close to being the same if you don't pay attention to the details.

Apple has decomposed an SSD into entirely dependent modules. The SSD controller is in the SoC. The NAND chips are placed either on the main logic board or on daughter cards. The daughter cards are mainly just a way to efficiently add more NAND packages than would fit on the main logic board (with some vertical stacking in systems that have more vertical clearance). It is also cheaper for Apple. I suspect there are some customers for whom the much-easier-to-destroy daughter cards help decommission those Macs later with fewer problems (where physical destruction of the storage drive is the only acceptable secure disposal of the data).

Putting the NAND chips on the main board or a daughter board doesn't make them independently more useful. In both contexts they are basically useless as a pragmatic 'storage drive' all by themselves. Like someone else pointed out, it is like calling a platter from a hard drive an HDD. No read/write head, no controller, no metadata management... it is not a 'drive'.

PCIe 6 isn't on the market yet. Test cases don't count. Yes, they demoed it, but that doesn't mean it will be on the market soon. Maybe the next Mac Pro (which doesn't exist yet) will show up with TB6 (which also doesn't exist yet).

x4 PCI-e v6 bandwidth == x8 PCI-e v5 bandwidth == x16 PCI-e v4 bandwidth. A Mac Pro with a single x16 PCI-e v4 slot can handle a x4 PCI-e v6 drive with an appropriate card. All the card needs is a x16 PCI-e v4 to x4 PCI-e v6 switch and a slot to put the drive into.
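That equivalence falls out of each generation doubling the per-lane rate; a quick check with raw (pre-encoding) rates:

```python
# Each PCIe generation doubles the per-lane rate, so halving the lane
# count while stepping up one generation is bandwidth-neutral (raw rates,
# ignoring encoding overhead).

RATE_GT = {4: 16, 5: 32, 6: 64}  # GT/s per lane

for gen, lanes in [(4, 16), (5, 8), (6, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: {RATE_GT[gen] * lanes} Gb/s raw")
# all three combinations come out to 256 Gb/s raw
```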

If you're spending > $10K on a workstation, it might be nice that it can work with a single card in the next 2-3 years without having to replace the whole device. Also nice if it can work with the 2-3 PCI-e v5 devices which exist now (no need to wait... a point carefully snipped from your quote). It isn't about the 'future'; it is about now. The only thing the future brings is the ability to do it with just one drive instead of 2-4 drives under some software/hardware RAID setup.

Pragmatically, Thunderbolt is limited to x4 widths in any one generation [primarily so that it can fit onto relatively affordable copper wires in a classic USB-style cable diameter]. It is always going to be at least one generation behind.

And TBv6 is likely going to be hamstrung by USB committee foot-dragging. TBv3 -> TBv4 wasn't a bandwidth increase; it was far more about closing USB's optional-feature loopholes and making the interface more consistent. We probably won't get a Thunderbolt revision with a net bandwidth increase more often than every two generations, and again it will fall behind. [PCI-e v6 cable runs are likely also pragmatically shorter, which is also in conflict with the constraints USB wants to deal with.]

This is the crux of it. Sure, if you happen to have $20 million to throw at the problem, you could buy that test drive from Micron. But, for those of us not playing golf with Musk and Bezos, TB5 is about as fast as anything we're likely to buy on the market, today.
Tomorrow? Well, that is another story.

It is today (x16 PCI-e v4 cards carrying four M.2 PCI-e v4 drives exist today). PCI-e v5 drives are on the market now (we are now getting 2nd, and soon 3rd, generation v5 drives).
You don't have to stick to the M.2 form factor either.


2.5" enclosure has more space for more NAND packages so can 'fan out" and stripe more concurrent read/writes to fill the x4 PCI-e v5 connection.


And it isn't any more in terms of spend than a Mac Pro currently is; you don't have to be anywhere near the $1M or even $100K range. (Go to the general PC market and it is not hard at all to get M.2 PCI-e v5 right there on a workstation motherboard. A not particularly new example:
" ... For connectivity, the TRX-50 SAGE WiFi comes with five PCIe x16 slots, three of which are Gen 5 enabled, as well as two PCIe Gen 5 capable M.2 slots, a single Gen 4 M.2 slot, and an additional Slim SAS connector for enterprise drives. ..."
)
 
...ripping off customers with highway robbery prices for consumables like memory & storage.
Memory and storage don't count as "consumables". Paper, yes. Ink, yes. Paint, yes. Coffee, tea, and sugar in your kitchen, yes! Motor oil in your car, YES YES YES.

But calling memory and storage "consumables" is like calling your car's camshaft a consumable. Or calling your computer screen a "consumable". Yes, you use them and you can wear things out, but they don't get replaced on the frequency of paper, ink, paint, oil, coffee, tea, or sugar.
 
It doesn't matter whether it's an SoC or not. It's just how you design the chip. But SoC is meant for mobile-size chips, not desktop-grade chips.
Now you have me confused. The comment of yours I was responding to seems to be making a case for the opposite of what you say here: that whether it IS or IS NOT an SoC matters.

This whole post and thread is, at its core, about the utility and speed of a given spec in a workstation context, which just happens to be an SoC. SoC was originally designed for compact devices, yes. But architecturally, it is now the standard for ALL compact and multi-function consumer and professional computing devices, including the desktop, because it is better for 95% of the use cases there.
 
architecturally, it is now the standard for ALL compact and multi-function consumer and professional computing devices, including the desktop, because it is better for 95% of the use cases there.

Well, most Intel or AMD CPUs don’t come as an SoC, although the distinction is a bit blurry.
 
3. SoC is the biggest problem since the die size is just too big. A big die means it is too expensive and difficult to manufacture. This is why chips aim to be made smaller. Perhaps Apple realizes now that SoC is not a good solution and maybe should go for chiplets or MCM.
Not sure how you arrived at that conclusion. I’m pretty sure that SoC is the whole reason why Apple Silicon performs so well. Do you mean, just for the extreme high end?
It doesn't matter whether it's an SoC or not. It's just how you design the chip. But SoC is meant for mobile-size chips, not desktop-grade chips.
Now you have me confused. The comment of yours I was responding to seems to be making a case for the opposite of what you say here: that whether it IS or IS NOT an SoC matters.

This whole post and thread is, at its core, about the utility and speed of a given spec in a workstation context, which just happens to be an SoC. SoC was originally designed for compact devices, yes. But architecturally, it is now the standard for ALL compact and multi-function consumer and professional computing devices, including the desktop, because it is better for 95% of the use cases there.
@BNBMS calls for Apple to use chiplet or MCM (multi-chip module) packaging instead of an SoC, but doesn’t recognize that all M-series silicon already uses MCM to package the SoC and its memory together, and UltraFusion already uses one of TSMC’s chiplet technologies (InFO-LSI) to package two Max dies together for the Ultra.

In turn, they fail to recognize that Apple’s desktop competition, for example Intel’s Core Ultra, AMD’s Ryzen G-series, and Nvidia’s DGX Spark and DGX Station, also feature integrated systems that combine advanced packaging technologies, with comparable results. Like @chucker23n1 said, the distinction tends to be “a bit blurry” (these are trade secrets, after all) — but the idea that Apple is somehow behind the industry curve is way off the mark.
 
In turn, they fail to recognize that Apple’s desktop competition, for example Intel’s Core Ultra, AMD’s Ryzen G-series, and Nvidia’s DGX Spark and DGX Station, also feature integrated systems that combine advanced packaging technologies, with comparable results.

Yep.

There's an evolution to it. Ca. 2000, Intel CPUs came with a "chipset", consisting of a "northbridge" (which connected the CPU to the RAM, PCIe, etc.; "north" meant "close to the CPU") and a "southbridge" (which connected the CPU to slower I/O, such as your disk). So you had at least two more chips just to get the CPU running. This stood in clear contrast to an SoC, which put all of those on one package.

The northbridge has long since been eliminated (ca. 2010), and the southbridge is now referred to as the Platform Controller Hub. That's down to just two chips. But on laptops, starting with Haswell (ca. 2013), the PCH was integrated into the same package as well (which means, for today's chips, Thunderbolt, Wi-Fi, etc. are all part of what is ostensibly the "CPU"). Plus, laptop CPUs tend to offer integrated graphics, too. So in that sense, most laptop CPUs have been close to an SoC for a while. Lunar Lake even puts the memory on the package, just like Apple has done.

but the idea that Apple is somehow behind the industry curve is way off the mark.

Indeed. If anything, they've been bolder than the competition.
 
@BNBMS calls for Apple to use chiplet or MCM (multi-chip module) packaging instead of an SoC, but doesn’t recognize that all M-series silicon already uses MCM to package the SoC and its memory together, and UltraFusion already uses one of TSMC’s chiplet technologies (InFO-LSI) to package two Max dies together for the Ultra.

In turn, they fail to recognize that Apple’s desktop competition, for example Intel’s Core Ultra, AMD’s Ryzen G-series, and Nvidia’s DGX Spark and DGX Station, also feature integrated systems that combine advanced packaging technologies, with comparable results. Like @chucker23n1 said, the distinction tends to be “a bit blurry” (these are trade secrets, after all) — but the idea that Apple is somehow behind the industry curve is way off the mark.
The problem with SoC is the die itself is huge, which makes it expensive and low-yield. They can't upgrade and make specific parts bigger within the SoC. What if you want to add more GPU? The Ultra chips are a great example of why Apple CAN'T do that. They had to combine two identical chips, which is already a problem.
 
The problem with SoC is the die itself is huge, which makes it expensive and low-yield. They can't upgrade and make specific parts bigger within the SoC. What if you want to add more GPU? The Ultra chips are a great example of why Apple CAN'T do that. They had to combine two identical chips, which is already a problem.

Do they need to have two identical chips though? Might they not combine an Mn Max with a GPU-specific die...?
 
Do they need to have two identical chips though? Might they not combine an Mn Max with a GPU-specific die...?

They can:

  • design an SoC that's otherwise the same but has more GPU cores. That's pretty much what the original Max was: a Pro with more GPU cores, and higher memory bandwidth to feed them. (Though, chronologically, it's the opposite: the M1/M2 Pro were a Max with some cores and memory controllers chopped off.) The M3 Max and M4 Max are more distinct from the Pro. They could do an M4 Doublemax with even more GPU cores, sure.
  • design a separate GPU chip. However, this immediately loses them all the advantages of their current design: this GPU would require its own RAM and its own memory controllers, and copying data between CPU and GPU cores would suddenly be an expensive operation. It would also raise questions of "what about the GPU cores we already have on the regular SoC?" If you turn them off, you lose cores. If you leave them on, you now have a complicated process by which some cores share the CPU RAM and some don't.
  • the same holds true for GPU PCIe cards, though bandwidth/latency are even worse, then. OTOH, it allows for higher heat dissipation.
 
They can:

  • design an SoC that's otherwise the same but has more GPU cores. That's pretty much what the original Max was: a Pro with more GPU cores, and higher memory bandwidth to feed them. (Though, chronologically, it's the opposite: the M1/M2 Pro were a Max with some cores and memory controllers chopped off.) The M3 Max and M4 Max are more distinct from the Pro. They could do an M4 Doublemax with even more GPU cores, sure.
I think Apple’s problem with that is the reticle limit. The Max is already close to it, so doubling its GPU cores may not be possible. That’s why I think the SoIC rumor for the M5 Pro/Max sounds plausible; it fits perfectly into what Apple has been doing. I think it’s TSMC’s solution for this problem, and it’s coming sooner rather than later.

  • design a separate GPU chip. However, this immediately loses them all the advantages of their current design: this GPU would require its own RAM and its own memory controllers, and copying data between CPU and GPU cores would suddenly be an expensive operation. It would also raise questions of "what about the GPU cores we already have on the regular SoC?" If you turn them off, you lose cores. If you leave them on, you now have a complicated process by which some cores share the CPU RAM and some don't.
  • the same holds true for GPU PCIe cards, though bandwidth/latency are even worse, then. OTOH, it allows for higher heat dissipation.
Like you suggest, both of these seem unlikely.
 
The problem with SoC is the die itself is huge, which makes it expensive and low-yield. They can't upgrade and make specific parts bigger within the SoC. What if you want to add more GPU? The Ultra chips are a great example of why Apple CAN'T do that. They had to combine two identical chips, which is already a problem.
With all due respect, I still think what is driving your comments isn’t clear.

Maybe if you’re a wafer plant, a chip fab, or a silicon mask designer, you might be interested in yield. But why would we care as consumers? Can you get your hands on the highest-performing chips available? Unless we’re talking discrete GPU units, or you are located somewhere impacted by the CHIPS Act, you probably can. So, yields are fine.

SoC can’t be expanded, correct. But it’s still a better overall design, and it’s performing fabulously in the majority of use cases. Essentially, the only difference between it and the technologies of yesteryear is scaling. The manufacturing and packaging process has evolved so that everything can be produced in a more tightly controlled, consistent, and compact form. The “chipset” and “bridges,” as they were, are still all there. It’s just much smaller now.

Just because you don’t like not being able to upgrade or add performance to an aging workstation does not mean SoC is problematic.
 
With all due respect, I still think what is driving your comments isn’t clear.

Maybe if you’re a wafer plant, a chip fab, or a silicon mask designer, you might be interested in yield. But why would we care as consumers? Can you get your hands on the highest-performing chips available? Unless we’re talking discrete GPU units, or you are located somewhere impacted by the CHIPS Act, you probably can. So, yields are fine.

SoC can’t be expanded, correct. But it’s still a better overall design, and it’s performing fabulously in the majority of use cases. Essentially, the only difference between it and the technologies of yesteryear is scaling. The manufacturing and packaging process has evolved so that everything can be produced in a more tightly controlled, consistent, and compact form. The “chipset” and “bridges,” as they were, are still all there. It’s just much smaller now.

Just because you don’t like not being able to upgrade or add performance to an aging workstation does not mean SoC is problematic.
Why?

1. Higher cost
2. Higher chip price
3. Limited product line (So far, no Mac Pro grade chips)
4. Limited GPU performance (They can't just increase GPU cores instead of combining two chips)
5. Inefficient chip design.

If you really think the yield is fine, then you have a serious problem after all. Don't forget that Apple is the only one making and using the latest TSMC chips while others are still using 5nm, which provides several advantages.
 
Why?

1. Higher cost
2. Higher chip price
3. Limited product line (So far, no Mac Pro grade chips)
4. Limited GPU performance (They can't just increase GPU cores instead of combining two chips)
5. Inefficient chip design.

If you really think the yield is fine, then you have a serious problem after all. Don't forget that Apple is the only one making and using the latest TSMC chips while others are still using 5nm, which provides several advantages.

Everything you listed is just an opinion.

Have you compared M4 Max performance to every Mac Pro ever produced? Go do that and then come tell us what you mean to say when you say “Mac Pro performance.”
 
1. Higher cost
2. Higher chip price

This is just… the same point twice, but sure.

3. Limited product line (So far, no Mac Pro grade chips)

This is mostly an effect of Apple not being interested in big product ranges, not of the SoC choice. They could absolutely make a Mac Pro chip; it just isn’t economically exciting for them.

4. Limited GPU performance (They can't just increase GPU cores instead of combining two chips)

They can; that’s pretty much what the M1/2 Max were.

5. Inefficient chip design.

No?

Don't forget that Apple is the only one making and using the latest TSMC chips while others are still using 5nm, which provides several advantages.

This isn’t relevant to the question of SoC vs. dedicated chips. If the GPU were separate, Apple would still contract TSMC and pay for the latest and greatest.
 
Everything you listed is just an opinion.

Have you compared M4 Max performance to every Mac Pro ever produced? Go do that and then come tell us what you mean to say when you say “Mac Pro performance.”
And yet, it is proven by Apple themselves: no chips for the Mac Pro, limited GPU performance, expensive chip prices, and more. This is why they need a different chip design instead of making everything at once.

Also, comparing an OLD Mac Pro to the M4 Max is just stupid when RTX 50 series and RTX 40 based workstation GPUs are available and expandable; that proves my point after all, since Apple Silicon doesn't have any workstation-grade CPU or GPU.
 
And yet, it is proven by Apple themselves: no chips for the Mac Pro, limited GPU performance, expensive chip prices, and more. This is why they need a different chip design instead of making everything at once.

Comparing an OLD Mac Pro to the M4 Max is just stupid when RTX 50 series and RTX 40 based workstation GPUs are available and expandable.
I am really confused by your line of thought.

No new Mac Pro and interleaved releases of new Apple Silicon across the product portfolio are exactly what you’re asking for; nothing comes at once. Or maybe you really mean that you want everything to come at once?

In any case, you’re kind of talking past everyone, not really following the point of the original discussion thread, and even misfiring on the technical underpinnings of the argument you’re wanting to make.

Taking a step back, you seem fixated on this idea of having something bigger and better in the mold of a “hugely more powerful Mac Pro,” given the current absence of a Mac Pro comparable to the latest Mac Studio, or maybe just the lack of upgradability of Apple Silicon. And you’ve landed on chip design / architecture as the lens through which to make that argument.

When really you should just be looking at what thing YOU want to get done with your computer. If tinkering / upgrading for the sake of tinkering / upgrading IS THAT THING, Apple Silicon is not the right product for you, however cool it is.

Otherwise ask yourself what is the performance you need to achieve your goals, and then whether what is available on the market is enough.

SoC is the better design, has incredible performance, is here to stay, is only going to get better, and there is nothing you can do about it.

Good luck.
 
And yet, it is proven by Apple themselves: no chips for the Mac Pro,

That's kind of true, sure.

limited GPU performance, expensive chip prices, and more. This is why they need a different chip design instead of making everything at once.

They really don't. And extremely few people think of ARM Mac performance as "limited". If anything, the Mac has been getting rave reviews on how good a balance Apple has struck.

Also, comparing an OLD Mac Pro to the M4 Max is just stupid when RTX 50 series and RTX 40 based workstation GPUs are available and expandable; that proves my point after all, since Apple Silicon doesn't have any workstation-grade CPU or GPU.

If you want that kind of GPU, get that kind of GPU. Apple isn't going to cater to every niche.
 
They really don't. And extremely few people think of ARM Mac performance as "limited". If anything, the Mac has been getting rave reviews on how good a balance Apple has struck.
And do they even have a 4x GPU? No. Stop ignoring facts.

If you want that kind of GPU, get that kind of GPU. Apple isn't going to cater to every niche.
You're only agreeing with my point after all.

I am really confused by your line of thought.

No new Mac Pro and interleaved releases of new Apple Silicon across the product portfolio are exactly what you’re asking for; nothing comes at once. Or maybe you really mean that you want everything to come at once?

In any case, you’re kind of talking past everyone, not really following the point of the original discussion thread, and even misfiring on the technical underpinnings of the argument you’re wanting to make.

Taking a step back, you seem fixated on this idea of having something bigger and better in the mold of a “hugely more powerful Mac Pro,” given the current absence of a Mac Pro comparable to the latest Mac Studio, or maybe just the lack of upgradability of Apple Silicon. And you’ve landed on chip design / architecture as the lens through which to make that argument.

When really you should just be looking at what thing YOU want to get done with your computer. If tinkering / upgrading for the sake of tinkering / upgrading IS THAT THING, Apple Silicon is not the right product for you, however cool it is.

Otherwise ask yourself what is the performance you need to achieve your goals, and then whether what is available on the market is enough.

SoC is the better design, has incredible performance, is here to stay, is only going to get better, and there is nothing you can do about it.

Good luck.
Like I said, you are only proving my point.
 
Yeah, OK, I'm not interested in "mac sux lol" levels of conversation.

There's certainly a point to be made that Apple's decision to go all-in on SoCs has drawbacks. But it serves them extremely well for almost every product: Apple TV, Apple Watch, iPhone, iPad, Vision Pro, the entire Mac line-up (and I might be forgetting something) can all run off the same chip design, just running at different clock speeds and with different additional features (such as Thunderbolt on the M series). The approach scales all the way from a Watch to the Mac Studio.

The only product where it serves them poorly is a hypothetical Mac that has even higher-end CPU and/or GPU cores. It makes perfect sense to me that this isn't a high priority for Apple. This is a shrinking sliver of computer usage.
 