
Give this video a watch. It's interesting and I haven't seen it covered anywhere. TL;DR: when transferring large files (~100 GB) onto the 256 GB SSD, the SLC cache (not tested, but believed to be around 60-80 GB) runs out and your write speed drops like a rock, I mean very low. Maybe you think it won't affect you, but it's best to know about this beforehand rather than finding out the hard way.
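
If you want to see where the cliff sits on your own machine, a rough sketch like this should show it: it writes ~100 GB in 256 MB chunks and prints per-chunk throughput, so the drop should be obvious once the cache fills. The path, chunk size, and total are placeholders I picked, not numbers from the video, and make sure you actually have the free space before running it.

```python
import os
import time

TARGET = "/tmp/slc_probe.bin"            # placeholder test file location
CHUNK = os.urandom(256 * 1024 * 1024)    # 256 MB of incompressible data
TOTAL_CHUNKS = 400                       # ~100 GB total, roughly the video's test size

with open(TARGET, "wb") as f:
    for i in range(TOTAL_CHUNKS):
        start = time.monotonic()
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # force the chunk out to the drive
        elapsed = time.monotonic() - start
        print(f"chunk {i + 1}: {256 / elapsed:.0f} MB/s")

os.remove(TARGET)                        # clean up the ~100 GB test file
```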

Also, my knowledge of SSDs is limited, but I don't think this would be an issue when the data you're writing is smaller than the SLC cache, because at that point the whole transfer just lands in the cache.
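
Back-of-envelope, assuming a ~70 GB cache and made-up speeds (these are placeholders, not measured MacBook numbers), you can estimate how much a big transfer suffers:

```python
# Rough estimate of a 100 GB transfer split into in-cache and post-cache parts.
cache_gb = 70        # assumed SLC cache size
fast_mb_s = 3000     # assumed in-cache write speed (placeholder)
slow_mb_s = 400      # assumed post-cache write speed (placeholder)
transfer_gb = 100

fast_part = min(transfer_gb, cache_gb)        # portion absorbed by the cache
slow_part = max(0, transfer_gb - cache_gb)    # portion written at the slow rate
seconds = (fast_part * 1024) / fast_mb_s + (slow_part * 1024) / slow_mb_s
print(f"estimated transfer time: {seconds:.0f} s")
# Anything smaller than cache_gb never touches the slow path at all.
```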

I think you can avoid this by splitting up your data, if that's possible. For editing large videos (not ideal, but entirely possible) you could split your 100 GB into 10 GB folders and just start editing while you wait for the rest of the 100 GB to transfer; there's a rough sketch of that below. That won't be possible when compiling code or anything like that, but then what kind of code are you working on (locally) that amounts to 100 GB of compilable source? Unless you're coding the next COD game on your machine, you should be fine in that regard. Still an issue that should be talked about given Apple's insane SSD pricing, though.
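
Here's what I mean by splitting, as a rough Python sketch: copy files over in ~10 GB batches so you can start working after the first batch lands. The pause between batches is my own guess at giving the controller idle time to flush the SLC cache to TLC; the paths, batch size, and pause length are all placeholders.

```python
import os
import shutil
import time

SRC = "/Volumes/External/project"       # placeholder source folder
DST = os.path.expanduser("~/project")   # placeholder destination
BATCH_BYTES = 10 * 1024**3              # ~10 GB per batch
PAUSE_S = 120                           # guessed idle time for cache flushing

copied = 0
for root, _, files in os.walk(SRC):
    for name in files:
        src_path = os.path.join(root, name)
        dst_path = os.path.join(DST, os.path.relpath(src_path, SRC))
        os.makedirs(os.path.dirname(dst_path), exist_ok=True)
        shutil.copy2(src_path, dst_path)
        copied += os.path.getsize(src_path)
        if copied >= BATCH_BYTES:       # batch done: pause, then keep going
            time.sleep(PAUSE_S)
            copied = 0
```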

What I'm curious about is working directly from the SSD, i.e. editing files on it, compiling code, and running apps from it. I wonder whether this issue shows up in that scenario as well.
 
Isn't that normal for dynamic SLC on NAND? Some technical details would be good though.
 
I think that is normal. The fix would be for Apple to use all-SLC storage, but that's unlikely. I think it's somewhat of an edge case; it's unlikely someone will transfer that large a file in one go, and it does seem you can just work off an external SSD. But it's good to know about if your workflow involves something like this.
 