Hi all

Looking to pick up a 256GB MacBook Air, but I'm not sure how much actual storage that will mean, i.e. getting a 32GB iPhone nets you ~27 GB of actual storage space.

Can anyone tell me how much actual storage space is on the 256GB SSD MacBook Air?

Roughly 256,000,000,000 bytes, give or take a few hundred million. You get that many bytes; what you do with them is up to you. By default, a chunk of that space is taken up by OS X and the stock applications, and the precise amount varies a bit. Alternatively, you can wipe everything and install Linux for much less space usage, or put nothing on the drive at all and have every byte available for use.

The only accurate gauge of space usage is to see what your current Mac is using, what your monthly growth in pictures, music, etc. looks like, and then project the size you'll need in 24 or 36 months, depending on how long you intend to use the machine. My rule of thumb is that all my current stuff should be around 40% of the SSD's size. That lets me double my usage over the next 3 years and still have some leftover space.
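To put rough numbers on that rule of thumb, here's a quick Python sketch (the growth figures and the helper names are made up for illustration; plug in your own):

```python
def projected_usage(current_gb, monthly_growth_gb, months):
    """Project storage needs from today's usage plus steady monthly growth."""
    return current_gb + monthly_growth_gb * months

def drive_size_needed(current_gb, monthly_growth_gb, months, target_fill=0.7):
    """Smallest common drive size that keeps projected usage under target_fill."""
    needed = projected_usage(current_gb, monthly_growth_gb, months) / target_fill
    for size in (128, 256, 512, 1024):  # common SSD sizes, in GB
        if size >= needed:
            return size
    return None  # projected usage outgrows every listed size

# Example: 100 GB today, growing 3 GB/month, planned for 36 months.
print(drive_size_needed(100, 3, 36))  # 208 GB projected / 0.7 ≈ 297 GB -> 512
```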

I also have a 128GB SD card where all non-critical files, like the work Dropbox or backups, go.
 
i have a sandwich
i ate it
but wait, that is too much sandwich they said
you should've eaten only 70% they said

Don't tell me :eek: You're Joey Chestnut and you just ate 61 of 'em.

Seriously - A Sandwich and an SSD?

Lou
 
I get the logic of the 70% thing, but I'll probably get down to like 15-17GB before I freak out and delete stuff. Even if there is a major performance dropoff, it's annoying to see the number dip that low.
 
Ok, Munchkin, this isn't a peer-review document, this isn't a college-level paper; stop telling people to cite their sources. You have just as much access to the same information as anyone else!

flowrider claimed, "You'll take a hit performance wise, and the drive may fail prematurely. I think 60 to 70% of total capacity is a good rule of thumb." Essentially, he is advising people to buy almost twice ('60%') the amount of SSD storage they need.

When asked, flowrider cited, twice, references to degraded performance as an SSD fills up, but didn't cite a source for his premature-failure opinion.

By your 'logic', no one can ask why he, or you, recommends this, i.e., '...stop telling people to cite their sources...'. Everyone has to accept it or prove them incorrect, '...you have just as much access to the same information...'.

I'll pass on following your 'logic', as well as on buying almost twice the amount of SSD storage I need.
 
flowrider claimed, "You'll take a hit performance wise, and the drive may fail prematurely. I think 60 to 70% of total capacity is a good rule of thumb." Essentially, he is advising people to buy almost twice ('60%') the amount of SSD storage they need.

When asked, flowrider cited, twice, references to degraded performance as an SSD fills up, but didn't cite a source for his premature-failure opinion.

By your 'logic', no one can ask why he, or you, recommends this, i.e., '...stop telling people to cite their sources...'. Everyone has to accept it or prove them incorrect, '...you have just as much access to the same information...'.

I'll pass on following your 'logic', as well as on buying almost twice the amount of SSD storage I need.



I never said don't buy anything. I merely pointed out the FACT that excess write operations to an SSD cause excess wear and will eventually cause it to fail before one might expect it to. The document I provided even pointed out that this was bad enough that the industry had to create special wear-leveling algorithms to spread data evenly across the drive so that it wears evenly. It even points out that SSDs tend to fail unexpectedly, without warning, whereas a traditional drive using S.M.A.R.T. can give the user some warning that something is wrong and a chance to save data from the drive before it fails!

That said, I've run drives up to 90% full before. I currently don't have drives that full because I installed a second hard drive in my laptop so I could install a third OS; I now run Mavericks, Windows 7, and Ubuntu 14.04 on my MacBook Pro. The 60% to 70% figure is Flowrider's opinion; I do like to keep some free space, but I prefer 10% to 20% free myself!
 
flowrider claimed, "You'll take a hit performance wise, and the drive may fail prematurely. I think 60 to 70% of total capacity is a good rule of thumb." Essentially, he is advising people to buy almost twice ('60%') the amount of SSD storage they need.

When asked, flowrider cited, twice, references to degraded performance as an SSD fills up, but didn't cite a source for his premature-failure opinion.

By your 'logic', no one can ask why he, or you, recommends this, i.e., '...stop telling people to cite their sources...'. Everyone has to accept it or prove them incorrect, '...you have just as much access to the same information...'.

I'll pass on following your 'logic', as well as on buying almost twice the amount of SSD storage I need.

And the fact of the matter is, DancyMunchkin, you are nothing more than a partially educated drama queen. You have provided nothing constructive to this conversation; you have only attacked people's posts by demanding cited documentation. And when I provided documentation from a technology company pointing out the fact of premature drive failure, you just turned your nose up and said, "oh, by your logic we can't ask why, no thanks." Why don't you try to provide something useful, or go back to where you came from and stop trying to create drama where none should exist? You probably didn't even read that PDF published by Western Digital.

I will not respond to any more of your "drama"-inducing posts from here on out; to me you are nothing more than a troublemaker hiding behind your computer! :D
 
I will not respond to any more of your "drama"-inducing posts from here on out; to me you are nothing more than a troublemaker hiding behind your computer! :D

Which is why I have not responded further. To me he is a darn Spoiled Brat trying to incite folks into an argument. I ain't bitin'.

Lou
 
I have a hard time understanding why some people can't grasp the fact that if an SSD is almost full, its performance will suffer. It has been like that forever, and the same principle has applied to regular HDDs since their existence.

Most of the things we interact with and use on a daily basis degrade in performance when pushed to their limits: cars/highways, all kinds of connections (WiFi, cellular bandwidth, etc.), CPU/memory utilization; these are just a few examples.
 
I have a hard time understanding why some people can't grasp the fact that if an SSD is almost full, its performance will suffer. It has been like that forever, and the same principle has applied to regular HDDs since their existence.

Especially before we got into multi-gig RAM systems; you had to leave at least 20 to 30 or even 40 percent of the drive free to make room for the page file!!
 
I have a hard time understanding why some people can't grasp the fact that if an SSD is almost full, its performance will suffer. It has been like that forever, and the same principle has applied to regular HDDs since their existence.

Most of the things we interact with and use on a daily basis degrade in performance when pushed to their limits: cars/highways, all kinds of connections (WiFi, cellular bandwidth, etc.), CPU/memory utilization; these are just a few examples.

It's a little more interesting with SSDs because they don't have linear physical access the way a spinning platter does. The problem is that the amount of free space will be different on each model of SSD, due to the spare sectors the manufacturers already provide. Manufacturers anticipate some cells dying and actually put more space on the drive than reported; I remember some early drives from years ago actually had something like 30-40% spare cells above the reported storage limit. That extra capacity basically allows you to use 100% of the reported capacity and still have a ton of physically empty space for the SSD firmware to do its thing.

The problem is that as SSD technology matures, manufacturers are going to try to minimize that excess capacity to save money and lower prices. Depending on make and model, we don't usually know what that excess capacity is; it could be 1%, or it could be 25%. And those differences do impact whether you get performance degradation at 100% full or not...
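To make the over-provisioning point concrete, here's a back-of-the-envelope sketch in Python (the spare-area percentages and the function name are hypothetical examples, not figures for any particular drive):

```python
def physically_free_gb(advertised_gb, spare_fraction, user_fill_fraction):
    """NAND that is still physically empty, counting the hidden spare area.

    spare_fraction: manufacturer over-provisioning as a fraction of the
                    advertised size (varies by make/model, rarely published).
    user_fill_fraction: how full the drive looks to the user (0.0-1.0).
    """
    raw_capacity = advertised_gb * (1 + spare_fraction)
    user_data = advertised_gb * user_fill_fraction
    return raw_capacity - user_data

# A "100% full" 256 GB drive with 25% spare area still leaves the firmware
# a big scratch area; with 1% spare area there is almost nothing to work with.
print(physically_free_gb(256, 0.25, 1.0))            # 64.0 GB
print(round(physically_free_gb(256, 0.01, 1.0), 2))  # 2.56 GB
```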
 
I have a hard time understanding why some people can't grasp the fact that if an SSD is almost full, its performance will suffer. It has been like that forever, and the same principle has applied to regular HDDs since their existence.

Poorer performance as an SSD approaches capacity is not the issue.

Two 'Internet experts' claimed using more than '60 - 70%' caused not just poor performance but also premature failure. They were asked for proof. Their response was anyone who doubted them had to prove them wrong. In other words, everyone should just believe them. Now that they've been challenged again, they are hiding behind name calling. What a shock. 'Internet experts' morph into 'Internet bullies'. :rolleyes:
 
Poorer performance as an SSD approaches capacity is not the issue.

Two 'Internet experts' claimed using more than '60 - 70%' caused not just poor performance but also premature failure. They were asked for proof. Their response was anyone who doubted them had to prove them wrong. In other words, everyone should just believe them. Now that they've been challenged again, they are hiding behind name calling. What a shock. 'Internet experts' morph into 'Internet bullies'. :rolleyes:

Way off topic but let's put the tangent to rest so we can get to the root of the question....

SSDs work very differently than traditional magnetic media. An SSD is comprised of blocks, which contain multiple pages. For simplicity's sake, let's say a block contains 4 pages (it varies by NAND, but a typical block could be 128 pages or more).

Over time, the process of emptying and refilling a block with electrons breaks it down. This is what causes SSDs to die over time.

What is unique about NAND is that a page must be empty to be written, but pages can only be emptied by erasing the entire block. So let's say I have a block (A) that contains P1, P2, P3, and P4.

If I want to overwrite P1, the drive must copy P1, P2, P3, and P4 into cache (the performance hit mentioned), erase the block, and then rewrite them all (an erase cycle, which causes the slow degradation of the drive over time).

The more data the drive contains, the greater the likelihood that a multi-page write will impact multiple blocks. For example, if you need to write two pages and all you have free is Block(A)/P1 and Block(B)/P4, you are shortening the life of two entire blocks to write two pages.

This is obviously a simplistic view of the process, and SSD vendors put a lot of work into software to manage free space more efficiently and write data intelligently.
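The read-modify-write penalty above can be sketched as a toy model (the `Block` class and `write_page` method are my own invention; real firmware is far smarter about relocating pages):

```python
class Block:
    """Toy NAND block: pages are written individually, but can only be
    emptied by erasing the whole block (one p/e cycle each time)."""

    def __init__(self, num_pages=4):
        self.pages = [None] * num_pages
        self.erase_count = 0  # p/e cycles this block has consumed

    def write_page(self, index, data):
        if self.pages[index] is None:     # empty page: program it directly
            self.pages[index] = data
            return
        cached = list(self.pages)         # occupied: copy the block to cache,
        self.erase_count += 1             # erase the whole block (wear!),
        cached[index] = data
        self.pages = cached               # then rewrite every page

block = Block()
for i, data in enumerate(["P1", "P2", "P3", "P4"]):
    block.write_page(i, data)
print(block.erase_count)   # 0: filling empty pages costs no erases
block.write_page(0, "P1-new")
print(block.erase_count)   # 1: one overwrite forced a whole-block erase
```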

Your acceptance of this as true is not a prerequisite for this to be true.

Kurso
 
Way off topic but let's put the tangent to rest so we can get to the root of the question....

SSDs work very differently than traditional magnetic media. An SSD is comprised of blocks, which contain multiple pages. For simplicity's sake, let's say a block contains 4 pages (it varies by NAND, but a typical block could be 128 pages or more).

Over time, the process of emptying and refilling a block with electrons breaks it down. This is what causes SSDs to die over time.

What is unique about NAND is that a page must be empty to be written, but pages can only be emptied by erasing the entire block. So let's say I have a block (A) that contains P1, P2, P3, and P4.

If I want to overwrite P1, the drive must copy P1, P2, P3, and P4 into cache (the performance hit mentioned), erase the block, and then rewrite them all (an erase cycle, which causes the slow degradation of the drive over time).

The more data the drive contains, the greater the likelihood that a multi-page write will impact multiple blocks. For example, if you need to write two pages and all you have free is Block(A)/P1 and Block(B)/P4, you are shortening the life of two entire blocks to write two pages.

This is obviously a simplistic view of the process, and SSD vendors put a lot of work into software to manage free space more efficiently and write data intelligently.

Your acceptance of this as true is not a prerequisite for this to be true.

Kurso


Interesting, Kurso. So it sounds like you're saying the more you write to a drive, the more likely it is to fail; hmmm, that sounds familiar! It seems I've heard that somewhere before. Perhaps Western Digital said something very close to that same thing, and then someone pretending to be smart totally ignored it!


Anyway, actual storage amounts have always been less than stated, but never by much; it has to do with the difference between how they are figured on paper and how they are formatted, although I've never sat down with the math to figure it out!
 
Interesting, Kurso. So it sounds like you're saying the more you write to a drive, the more likely it is to fail; hmmm, that sounds familiar! It seems I've heard that somewhere before. Perhaps Western Digital said something very close to that same thing, and then someone pretending to be smart totally ignored it!


Anyway, actual storage amounts have always been less than stated, but never by much; it has to do with the difference between how they are figured on paper and how they are formatted, although I've never sat down with the math to figure it out!

That's because hardware manufacturers used base 10 and software engineers used base 2, so we counted 1024 bytes as 1K instead of 1000. Compound that a few times and you had a counting discrepancy. That is no longer the case, as Apple went to a base-10 system for reporting drive space a while back.
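The arithmetic is easy to check; a quick Python snippet, using 256 GB as the example size:

```python
advertised_bytes = 256 * 10**9   # "256 GB" as the manufacturer counts it

decimal_gb = advertised_bytes / 10**9   # base-10, as OS X now reports
binary_gib = advertised_bytes / 2**30   # base-2, the old "formatted" figure

print(f"{decimal_gb:.0f} GB (decimal)")   # 256 GB (decimal)
print(f"{binary_gib:.1f} GiB (binary)")   # 238.4 GiB (binary)
```

So the same drive that once showed up as roughly 238 "GB" now reports the full 256; nothing was ever missing, it was just counted differently.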
 
That's because hardware manufacturers used base 10 and software engineers used base 2, so we counted 1024 bytes as 1K instead of 1000. Compound that a few times and you had a counting discrepancy. That is no longer the case, as Apple went to a base-10 system for reporting drive space a while back.

I didn't realize Apple had, but I'll still play around with some numbers just for fun. Thanks for the info.
 
I never said don't buy anything. I merely pointed out the FACT that excess write operations to an SSD cause excess wear and will eventually cause it to fail before one might expect it to.
Ah, in that case you completely fail to understand what we are talking about here, and what the document you linked to is talking about.

The document simply states that a NAND cell can endure a fixed number of p/e cycles before going bust. p/e means program/erase: filling and then clearing a cell (i.e., putting data in it, then throwing that data out). This has nothing to do with filling up a drive; even without filling a drive you can go through a lot of p/e cycles. The way to burn through p/e cycles is by writing AND deleting a lot of stuff, i.e., moving data around like there is no tomorrow. You can accomplish this even while filling a 128GB drive to only 10%. It's like a bucket with a hole: it doesn't matter whether you fill it completely or only to 10%; either way, the water still comes out of that hole.

The only thing filling up a drive completely will do is decrease performance. If you remove data, either TRIM or GC will be used to clear out the NAND cells. This is needed because the performance decrease happens due to cells holding data; no data, no performance decrease (simply put).

There are two principles at play here:
  1. Filling up all NAND cells with data, which decreases performance.
  2. Going through a lot of p/e cycles, which kills NAND cells.
Only the first is affected when you fill up your drive. The second is only affected when you do a lot of writing AND deleting; how full the drive is doesn't matter.
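A toy simulation makes the second point visible (this assumes an idealized wear-leveler that only rotates through the cells not pinned by resident data; real firmware also relocates static data, which softens the effect, and the `churn` helper is invented for illustration):

```python
from itertools import cycle

def churn(total_cells, resident_cells, rewrites):
    """Spread `rewrites` write+delete operations round-robin over the cells
    not occupied by long-lived data. Returns (total p/e cycles consumed,
    worst-case cycles on any single cell)."""
    free = total_cells - resident_cells
    wear = [0] * free
    targets = cycle(range(free))
    for _ in range(rewrites):
        wear[next(targets)] += 1   # one program/erase cycle per rewrite
    return sum(wear), max(wear)

# Same amount of churn on a 100-cell drive, nearly empty vs nearly full:
print(churn(100, 10, 9000))   # (9000, 100): wear spread across 90 free cells
print(churn(100, 90, 9000))   # (9000, 900): same total, but concentrated
```

Under this naive model, the total wear depends only on how much you write and delete; how full the drive is only changes how that wear is distributed.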
 
Ah, in that case you completely fail to understand what we are talking about here, and what the document you linked to is talking about.

The document simply states that a NAND cell can endure a fixed number of p/e cycles before going bust. p/e means program/erase: filling and then clearing a cell (i.e., putting data in it, then throwing that data out). This has nothing to do with filling up a drive; even without filling a drive you can go through a lot of p/e cycles. The way to burn through p/e cycles is by writing AND deleting a lot of stuff, i.e., moving data around like there is no tomorrow. You can accomplish this even while filling a 128GB drive to only 10%. It's like a bucket with a hole: it doesn't matter whether you fill it completely or only to 10%; either way, the water still comes out of that hole.

The only thing filling up a drive completely will do is decrease performance. If you remove data, either TRIM or GC will be used to clear out the NAND cells. This is needed because the performance decrease happens due to cells holding data; no data, no performance decrease (simply put).

There are two principles at play here:
  1. Filling up all NAND cells with data, which decreases performance.
  2. Going through a lot of p/e cycles, which kills NAND cells.
Only the first is affected when you fill up your drive. The second is only affected when you do a lot of writing AND deleting; how full the drive is doesn't matter.

Maybe I'm missing something, but it looked like you told me I was wrong, then described how a cell goes bad, which is by using the cell (i.e., writing data to it), and then disagreed with yourself. Normally, the only time cells are ever physically erased is during a full format of the drive; a quick format doesn't even physically erase the drive! Zero-filling the drive would delete the cell data too.

Just deleting a file doesn't physically remove the data. It only removes the reference to that file from the index file.
 
You are missing something, but it is quite subtle. You are overlooking the additional step. If you write something to a cell, that only fills it; do that enough and you fill up the entire drive. Filling up a drive has no impact on the lifespan of the NAND cells; it only impacts performance.

A NAND cell's lifespan only decreases if you write something to it and then clear it. That's a full program/erase cycle (or p/e). It's the additional step that makes all the difference.

The other thing you are wrong about is the erasing part. It's the internal GC functionality, as well as the TRIM command, that automatically clears out cells (i.e., a physical delete). TRIM is the more precise of the two because it is a command sent to the SSD telling it exactly what to clear out. Clearing out cells is a physical operation. The only purpose of these systems is to maintain performance as much as they possibly can (obviously they can't when you simply fill the drive; both TRIM and GC only come into play when deleting things).

When you use GC, it works somewhat like an ordinary HDD: areas are marked for reuse and picked up by the GC algorithm, which then clears them out. An ordinary HDD won't do this because it doesn't need to: it doesn't suffer a large performance hit, which is why it doesn't delete data physically but only removes it from the index. Those areas simply get overwritten.

Do not mix up HDD technology with SSD technology. There are similarities, but they differ an awful lot, and deletion is exactly where they differ.
 
Ah, in that case you completely fail to understand what we are talking about here, and what the document you linked to is talking about.

The document simply states that a NAND cell can endure a fixed number of p/e cycles before going bust. p/e means program/erase: filling and then clearing a cell (i.e., putting data in it, then throwing that data out). This has nothing to do with filling up a drive; even without filling a drive you can go through a lot of p/e cycles. The way to burn through p/e cycles is by writing AND deleting a lot of stuff, i.e., moving data around like there is no tomorrow. You can accomplish this even while filling a 128GB drive to only 10%. It's like a bucket with a hole: it doesn't matter whether you fill it completely or only to 10%; either way, the water still comes out of that hole.

The only thing filling up a drive completely will do is decrease performance. If you remove data, either TRIM or GC will be used to clear out the NAND cells. This is needed because the performance decrease happens due to cells holding data; no data, no performance decrease (simply put).

There are two principles at play here:
  1. Filling up all NAND cells with data, which decreases performance.
  2. Going through a lot of p/e cycles, which kills NAND cells.
Only the first is affected when you fill up your drive. The second is only affected when you do a lot of writing AND deleting; how full the drive is doesn't matter.

Sorry, maybe my previous example was not clear. Here is how filling a drive can lead to a decreased lifespan, in addition to the decreased performance.

Let's say we have 4 blocks (b1, b2, b3, and b4), and each block has 4 pages: b1.p1, b1.p2, etc. In every block, pages 1-3 are filled with data, which means the drive is at 75% of capacity. Now I want to write a piece of information that is 4 pages long... Where does it go? It gets spread across all 4 blocks.

This means not only do you take the performance hit, because you have to cache and rewrite all four blocks, but you have also degraded the lifecycle of all 4 blocks.

Now, obviously the number of pages per block and the number of blocks per drive is much larger than in the example, but the same mathematical principle holds true: the closer you get to the capacity of the drive, the greater the chance that a write will need to be spread across a larger number of blocks, hence the performance and lifecycle hit.
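That spreading effect is easy to demonstrate with a small sketch (`blocks_touched` and the page layout are invented for illustration; real allocators are cleverer):

```python
def blocks_touched(free_slots, pages_needed):
    """free_slots: (block, page) pairs still empty, in allocation order.
    Returns the sorted list of blocks that must be cached, erased, and
    rewritten to place a write of `pages_needed` pages."""
    chosen = free_slots[:pages_needed]
    return sorted({block for block, _ in chosen})

# Drive at 75% full: in each of 4 blocks only page 4 is free.
nearly_full = [(b, 4) for b in range(1, 5)]
print(blocks_touched(nearly_full, 4))   # [1, 2, 3, 4]: one write wears 4 blocks

# Same drive empty: the first 4 free pages all sit in block 1.
empty = [(b, p) for b in range(1, 5) for p in range(1, 5)]
print(blocks_touched(empty, 4))         # [1]: the write fits in one block
```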
 
From what I know, it will only spread them out into new parts of the SSD, the same way an HDD does it. As a result you get fragmentation (which you hardly notice, due to the speed of the SSD). It does not reorder things; that's only done when the GC kicks in (simply put, GC does something very similar to a defragmentation). The only time it would move data around is when it can't find any areas it can actually write to, and that doesn't happen a lot.

The biggest problem is that this is all theory. In reality you hardly notice any difference, even with TLC, as real-world tests have already shown. Especially in devices like tablets and the MacBook Air/Pro Retina, where the batteries are glued in, the batteries will die sooner than the SSDs. If an SSD dies, it is usually due to a controller failure. Only server SSDs tend to fail due to dead NAND cells (and we are talking lots and lots of p/e cycles on those machines!).
 
From what I know, it will only spread them out into new parts of the SSD, the same way an HDD does it. As a result you get fragmentation (which you hardly notice, due to the speed of the SSD). It does not reorder things; that's only done when the GC kicks in (simply put, GC does something very similar to a defragmentation). The only time it would move data around is when it can't find any areas it can actually write to, and that doesn't happen a lot.

The biggest problem is that this is all theory. In reality you hardly notice any difference, even with TLC, as real-world tests have already shown. Especially in devices like tablets and the MacBook Air/Pro Retina, where the batteries are glued in, the batteries will die sooner than the SSDs. If an SSD dies, it is usually due to a controller failure. Only server SSDs tend to fail due to dead NAND cells (and we are talking lots and lots of p/e cycles on those machines!).

Correct. This is exactly what I have described. The mathematical likelihood that data will get fragmented increases the further over 50% capacity you are (it can occur under 50%, but that's more detail than needed here). The result is decreased performance and decreased lifespan. TRIM/GC and LBA virtualization help the SSD better manage the placement of data, but ultimately there is no way to eliminate either the performance or the lifecycle impact.

So with 100% certainty drive performance and lifespan are negatively impacted, to varying degrees by use case (and vendor specific software), as the drive pushes closer to full capacity.
 
To me it sounds like the same basic thing that's been said, only a lot more technical: the less empty space an HDD or SSD has, the more it has to work at arranging and rearranging data, and for the SSD all this rearranging shortens the life of the drive.

But for the most part, most people won't notice the life span reduction?
 
An ordinary HDD does the same thing: the more you use it, the shorter its lifespan gets and the slower it becomes. It's part of the normal wear and tear that every component in a computer experiences. In the case of an SSD, a controller failure is far more likely to happen. The flash drives in the MacBook Air/Pro Retina (and similar devices) will be replaced sooner simply because people buy new computers sooner. All in all... don't worry about it ;)
 
I think there is too much nit-picking here; most SSDs are going to last years past the point when the owner even wants to keep using them.

I do agree that a "stuffed" drive might slow things down, but not to the point that most would perceive it. Failure due to overfilling a drive is also so far away that most will never see it.
 