
drecc

macrumors regular
Original poster
Nov 6, 2014
The current MacBook Pro 14 and 16 are advertised as having "jaw-dropping 7.4GB/s read speeds" (Apple web site product page).

In comparison, the existing MacBook Air M1 was benchmarked by MacRumors as "2190 MB/s writes and 2675 MB/s reads".

Does anyone have any reason to expect SSD improvements in the 2022 M2 MacBook Air? Could there be limiting factors to increasing the speed, such as power requirements?

Thanks!
 
It would be interesting to know, but until reviews arrive I can't believe anyone on here would have more than a wild guess.
 
The current MacBook Pro 14 and 16 are advertised as having "jaw-dropping 7.4GB/s read speeds" (Apple web site product page).

In comparison, the existing MacBook Air M1 was benchmarked by MacRumors as "2190 MB/s writes and 2675 MB/s reads".

Does anyone have any reason to expect SSD improvements in the 2022 M2 MacBook Air? Could there be limiting factors to increasing the speed, such as power requirements?

Thanks!
Let's say there were improvements. The speeds quoted are sequential, so they're speeds you will never see in real life, and they aren't what actually matters. The number you need to watch is QD1 @ 4K.
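The gap between sequential and random-4K throughput is easy to demonstrate with a rough sketch (illustrative only: the scratch file and sizes are made up for this example, the OS page cache will inflate both numbers since the file was just written, and a real tool like AmorphousDiskMark measures much more carefully):

```python
import os
import random
import tempfile
import time

CHUNK = 4 * 1024              # 4 KiB, the block size used for random reads
FILE_SIZE = 64 * 1024 * 1024  # 64 MiB scratch file, small just for illustration

def make_scratch_file():
    """Create a temp file filled with random bytes and return its path."""
    f = tempfile.NamedTemporaryFile(delete=False)
    f.write(os.urandom(FILE_SIZE))
    f.close()
    return f.name

def read_sequential(path):
    """Read the whole file front to back in 1 MiB chunks; return bytes/sec."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return FILE_SIZE / (time.perf_counter() - start)

def read_random_4k(path, n_reads=2048):
    """Read n_reads 4 KiB blocks at random offsets; return bytes/sec."""
    offsets = [random.randrange(0, FILE_SIZE - CHUNK) for _ in range(n_reads)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return (n_reads * CHUNK) / (time.perf_counter() - start)

if __name__ == "__main__":
    path = make_scratch_file()
    try:
        seq = read_sequential(path)
        rnd = read_random_4k(path)
        print(f"sequential: {seq / 1e6:.0f} MB/s, random 4K: {rnd / 1e6:.0f} MB/s")
    finally:
        os.unlink(path)
```

Even with caching flattening the difference, the point stands: the per-read seek and request overhead dominates small random reads, which is why the marketing sequential number says little about everyday responsiveness.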
 
Does anyone have any reason to expect SSD improvements in the 2022 M2 MacBook Air? Could there be limiting factors to increasing the speed, such as power requirements?

Thanks!

Limiting factor is cost. If it were substantially faster, Apple would have announced it. Apple clearly said 2X for M1 MBA.
 
Let's say there were improvements. The speeds quoted are sequential, so they're speeds you will never see in real life, and they aren't what actually matters. The number you need to watch is QD1 @ 4K.
The reason I'm interested in faster sequential SSD reads is that sequential speed is what will affect performance when I have lots of apps open and RAM is being swapped back in from the SSD.
 
The reason I'm interested in faster sequential SSD reads is that sequential speed is what will affect performance when I have lots of apps open and RAM is being swapped back in from the SSD.
All the more reason. Swap usually moves in page-sized (4K) chunks, which means you have to look at Read QD1 @ 4K speeds. Sequential read speeds are useless here.
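The page-granularity claim can be sanity-checked from Python. The VM system pages memory in and out in whole pages (4 KiB on Intel Macs, 16 KiB on Apple Silicon), so a page is the minimum unit a swap-in read can have; this sketch just prints whatever the local machine uses:

```python
import mmap

# The kernel moves memory to and from disk in whole pages, so this is
# the smallest granularity a swap-in read can have on this machine.
page = mmap.PAGESIZE
print(f"page size: {page} bytes ({page // 1024} KiB)")
```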
 
All the more reason. Swap usually moves in page-sized (4K) chunks, which means you have to look at Read QD1 @ 4K speeds. Sequential read speeds are useless here.
Thanks for your reply. Let's say I have Adobe After Effects open, using 4GB of RAM, but I've not used it for a few hours. Why would that RAM be swapped out to SSD as a fragmented mess, such that reading it back into RAM would look more like 1 million 4K random reads rather than as one 4GB sequential read, or as a list of 200 MB sequential reads?

I'm only guessing at how the RAM<>SSD swap mechanism might work, so if you know different, please could you link me to any evidence you have that the swapping process would result in highly fragmented data?
 
Thanks for your reply. Let's say I have Adobe After Effects open, using 4GB of RAM, but I've not used it for a few hours. Why would that RAM be swapped out to SSD as a fragmented mess, such that reading it back into RAM would look more like 1 million 4K random reads rather than as one 4GB sequential read, or as a list of 200 MB sequential reads?

I'm only guessing at how the RAM<>SSD swap mechanism might work, so if you know different, please could you link me to any evidence you have that the swapping process would result in highly fragmented data?
Look, just because an application is using 4GB of memory doesn't mean it's one big contiguous block. It can be segmented into many separate data sets. After all, memory isn't a single file; it's data deposited piecemeal as the app runs.

That mechanism is handled by the CPU and the OS.
 
I didn't mean to irritate you, I'm genuinely interested.

I'm a software engineer, and if I was the one writing the code to swap untouched pages of memory out to disk, I'd be well aware that random 4K reads would be massively slower than sequential reads.

My 2018 MBP does 3467MB/s SEQ1M QD8 (1744MB/s SEQ1M at QD1) reads, but only 24.25 MB/s RND4KQD1 reads (according to AmorphousDiskMark).

Therefore I'd reason that it'd be completely crazy to write code that swaps chunks of significantly less than 1MB out to disk.
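Running the arithmetic on those AmorphousDiskMark numbers makes the gap concrete. This back-of-envelope sketch uses the hypothetical 4GB After Effects working set from earlier in the thread:

```python
SWAP_BYTES = 4 * 1024**3  # hypothetical 4 GB swapped-out working set
SEQ_MBPS = 1744.0         # SEQ1M QD1 read speed quoted in the post
RND4K_MBPS = 24.25        # RND4K QD1 read speed quoted in the post

def seconds_to_read(total_bytes, mb_per_s):
    """Time to read total_bytes at a given MB/s (MB = 1024**2 bytes here)."""
    return total_bytes / (mb_per_s * 1024**2)

seq_s = seconds_to_read(SWAP_BYTES, SEQ_MBPS)
rnd_s = seconds_to_read(SWAP_BYTES, RND4K_MBPS)
print(f"sequential: {seq_s:.1f} s, random 4K: {rnd_s:.0f} s")
# roughly 2.3 s sequential vs. nearly 3 minutes at random-4K speed
```

About a 70x difference, which is the engineer's point: a pager that issued purely 4K-sized random reads would leave enormous throughput on the table.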

It's therefore a pretty reasonable question for me to ask what reason you have to believe that the Mac swap implementation would highly fragment the swap data.
 
You didn't irritate me, I'm just explaining it as best I can.

The swap implementation works based on how each developer sets up their application. While you might try to stick to chunks larger than 1MB, other apps won't be able to. It all depends on the app and what it requires.
 