(Not sure what the relationship with the original subject is, but let's go anyway.)
You're the one who made a claim about Hynix, Anobit, and Apple. And it sounded highly unlikely.

Gotta love it when someone starts like this and then proceeds to get it wrong.
Indeed, so how much do you know about NAND flash? Doesn't sound like much.

Not exactly. They themselves, and the reviews and papers about them around the web, describe what they do as something beyond error correction. The emphasis seems to be on the signal processing part. (Surprising, given that their tech is named Memory Signal Processing, huh?)
Their "Memory Signal Processing" technology has less to do with signal processing and more to do with marketing. It's error correction which attempts to utilize knowledge about the nature of physical flash miswrites and misreads. (there are other manufacturers who do this as well)

In general, the problem with MLC lifetime comes down to "where to put the data next" and "how do I try to get the correct data from this degraded page". The most they can be working on is the allocation and error correction layers of the FTL, which is why I summarized it as error correction.

Saying it is "beyond error correction" is little more than hyperbole. While it is more than the BCH/parity codes typical of cheap controllers, MSP is still categorically error correction, just slightly different in style.
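
To make concrete what I mean by garden-variety error correction: here's a toy single-error-correcting Hamming(7,4) code in Python. Purely illustrative; real controllers ship far stronger BCH variants, and this is obviously not Anobit's algorithm.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7 coded bits, laid out p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7 received bits -> the 4 data bits, with any single bit flip corrected."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1               # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                           # simulate one misread bit
assert hamming74_decode(codeword) == [1, 0, 1, 1]

Note that it treats every bit as equally suspect and knows nothing about the medium. Folding in knowledge of how flash physically fails is the refinement; it's still error correction.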

Yep, but that's encapsulated in the products, which are memory controllers. You use memory controllers so they control the memory, so you don't have to control it. That's why they are called "controllers", mostly.
1) Huh? What are you trying to say?
2) I was referring to your reference to Hynix. No, it's not likely that Hynix would buy or use Anobit's IP since Hynix doesn't appear to have a NAND controller product... unless they wanted to enter that market.
 
You're the one who made a claim about Hynix, Anobit, and Apple. And it sounded highly unlikely.

Care to elaborate what exactly sounded so highly unlikely?

Their "Memory Signal Processing" technology has less to do with signal processing and more to do with marketing. It's error correction which attempts to utilize knowledge about the nature of physical flash miswrites and misreads. (there are other manufacturers who do this as well)

Every manufacturer tries to do something, of course. And yet, it looks like only Anobit has a way to extend the useful life of MLC flash memory to the level of SLC memory, and TLC memory to MLC level.

In general, the problem with MLC lifetime comes down to "where to put the data next" and "how do I try to get the correct data from this degraded page". The most they can be working on is the allocation and error correction layers of the FTL, which is why I summarized it as error correction.

Looks like you are confusing pages and cells. Anobit works at a lower level than you are thinking.
And so, you are likely also confusing what they do with "the most they can be working on". (That was a brave statement, man...)

Saying it is "beyond error correction" is little more than hyperbole. While it is more than the BCH/parity codes typical of cheap controllers, MSP is still categorically error correction, just slightly different in style.

We'll address that when we are sure that you know the difference between pages and cells, ok?

2) I was referring to your reference to Hynix. No, it's not likely that Hynix would buy or use Anobit's IP since Hynix doesn't appear to have a NAND controller product... unless they wanted to enter that market.

Didn't you hear about the "Flash device by Hynix" in the iPhone 4S? Maybe they already entered the market! :p
 
Hum, what you described is just "a server somewhere". :rolleyes: That's what the cloud is, basically the good old Client/Server model.

Only in layman's terms. If you ever have the opportunity to architect one, you'll find out that the two are worlds apart. They are about as similar as an MMO and the multiplayer of Battlefield 3.
 
Care to elaborate what exactly sounded so highly unlikely?

Sure.

You said "Looks like Apple uses memory from Hynix, which reportedly uses Anobit's tech. I Am Not A Lawyer or anything, but I'd guess it'd be Hynix who would have an IP relation with Anobit, not Apple."

You are suggesting that Hynix memory chips are using Anobit's IP. This is unlikely because Anobit's MSP technology is not implemented in memory chips. Anobit's MSP is implemented in the NAND controller, which is a separate chip that talks to memory chips. In other words, what you said doesn't make sense because you're talking about a technology being inside the wrong part.

Every manufacturer tries to do something, of course. And yet, it looks like only Anobit has a way to extend the useful life of MLC flash memory to the level of SLC memory, and TLC memory to MLC level.

Looks like you are confusing pages and cells. Anobit works at a lower level than you are thinking.
And so, you are likely also confusing what they do with "the most they can be working on". (That was a brave statement, man...)

Lower level? Dude, the memory chips all speak ONFI. All the low level stuff is abstracted out by the time it hits the pins on the chip. The NAND controller talking to the chip doesn't get the raw analog read values for each cell, so there's no way for the NAND controller to actually do signal analysis on each cell. All the NAND controller gets is chunks of data in pages.
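
To sketch what I mean, here's roughly the shape of the interface a controller sees. Hypothetical: the class and method names are invented, not from any real driver or the ONFI spec.

PAGE_SIZE = 4096  # bytes of user data per page; typical for this era of MLC NAND

class OnfiNand:
    """Hypothetical stand-in for an ONFI-style NAND chip."""

    def __init__(self):
        self._store = {}  # (block, page) -> bytes

    def read_page(self, block, page):
        # You get a fixed-size chunk of *digital* data: the chip's internal
        # sense amplifiers already thresholded each cell. No analog voltage
        # ever crosses the pins.
        return self._store.get((block, page), b"\xff" * PAGE_SIZE)

    def program_page(self, block, page, data):
        self._store[(block, page)] = data

    def erase_block(self, block):
        self._store = {k: v for k, v in self._store.items() if k[0] != block}

nand = OnfiNand()
nand.program_page(0, 0, b"\x00" * PAGE_SIZE)
assert len(nand.read_page(0, 0)) == PAGE_SIZE  # pages in, pages out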

Just stare for a while at page 3 of Anobit's presentation at the Flash Memory Summit 2011 on their web page.
 
You said "Looks like Apple uses memory from Hynix, which reportedly uses Anobit's tech. I Am Not A Lawyer or anything, but I'd guess it'd be Hynix who would have an IP relation with Anobit, not Apple."

You are suggesting that Hynix memory chips are using Anobit's IP. This is unlikely because Anobit's MSP technology is not implemented in memory chips. Anobit's MSP is implemented in the NAND controller, which is a separate chip that talks to memory chips. In other words, what you said doesn't make sense because you're talking about a technology being inside the wrong part.

Then maybe you should complain to the writers of the original articles.

And everyone says "Hynix Flash device" or some variation; no one I read said "Hynix memory chips", so I don't know why you'd say that. Heck, I don't even know whether the memory controller might be embedded with the memory itself in a device like a smartphone.

Anyway, you could go straight to the horse's mouth. Go to Hynix and do a search on Anobit: you'll see they have paid them royalties for 3 years now. Then go and search for MSP: you'll see they have some products with it.

Lower level? Dude, the memory chips all speak ONFI. All the low level stuff is abstracted out by the time it hits the pins on the chip.

Obviated by the possibility of the controller being embedded with the memory.

The NAND controller talking to the chip doesn't get the raw analog read values for each cell, so there's no way for the NAND controller to actually do signal analysis on each cell. All the NAND controller gets is chunks of data in pages.

Pal, words keep falling from your mouth as if you knew something, but then you neatly proceed to negate it.
I don't know if they work in the analog or digital domain. I guess neither do you. But if you actually read *any* of the articles in the Technology section of anobit.com, they talk as if they do read the analog values. That's why what they do is Signal Processing first. ECC comes later, once already in the digital domain. And of course, pages come even later.

Since ONFI allows for ECC, surely it comes after ECC, I'd guess.

In fact, Hynix already had their first product using both ONFI and MSP in 2008. Just stare for a while at the Google translation of their press release:
http://translate.google.com/transla...ata-view.jsp?search.seq=973&search.gubun=0004

Just stare for a while at page 3 of Anobit's presentation at the Flash Memory Summit 2011 on their web page.

And where exactly should I stare? At the part which says that Signal Processing comes before ECC, as I said?

Or maybe at page 4, where they say that their architectural change is that the "Flash controller" is now closer to (embedded in??) the NAND memory? Matches what I said, methinks.

You could also stare at the article for Embedded Computing Design, which is also on Anobit's Technology page. There, you can find pearls like "For example, the floating gate coupling distortion in a cell can be measured and compensated for via signal processing algorithms."
Is that analog enough for you? Straight from the horse's mouth enough for you?
 
Pal, words keep falling from your mouth as if you knew something, but then you neatly proceed to negate it.
I don't know if they work in the analog or digital domain. I guess neither do you. But if you actually read *any* of the articles in the Technology section of anobit.com, they talk as if they do read the analog values. That's why what they do is Signal Processing first. ECC comes later, once already in the digital domain. And of course, pages come even later.

And where exactly should I stare? At the part which says that Signal Processing comes before ECC, as I said?

Or maybe at page 4, where they say that their architectural change is that the "Flash controller" is now closer to (embedded in??) the NAND memory? Matches what I said, methinks.

You could also stare at the article for Embedded Computing Design, which is also on Anobit's Technology page. There, you can find pearls like "For example, the floating gate coupling distortion in a cell can be measured and compensated for via signal processing algorithms."
Is that analog enough for you? Straight from the horse's mouth enough for you?

I ran the search as you said. And it turns out, Hynix does offer block-abstracted NAND that uses MSP. Good to know, I wasn't aware of that.
But it still doesn't resolve the fact that we simply disagree on how MSP is implemented.

You keep suggesting that they read the analog values off the cell and do signal processing on that. I disagree and believe it's just a proprietary ECC engine in the NAND controller, outside of the NAND flash chip.
And yeah, I read the Embedded Computing Design article. And the sentence you quoted. And I agree that what that article says sounds like it's cell-level.

But then I look at this: "http://anobit.com/uploaded/MSP2025%20Embedded%20Flash%20Controller%20-%20Product%20Brief%201%20page.pdf"
Okay, so here we have an MSP-powered NAND controller. It's the MSP/ECC block on the page 4 diagram mentioned earlier. It's designed to hook up to ONFI NAND.
Explain to me how this chip reads analog values from NAND flash. Because as far as I know, if you're outside the NAND, you're getting pages.
 
I ran the search as you said. And it turns out, Hynix does offer block-abstracted NAND that uses MSP. Good to know, I wasn't aware of that.
But it still doesn't resolve the fact that we simply disagree on how MSP is implemented.

I don't see any problem with respectfully disagreeing.

It's only a problem when one tries to dig for some understanding and someone else arrogantly dismisses the thing with nothing more than some handwaving.

You keep suggesting that they read the analog values off the cell and do signal processing on that. I disagree and believe it's just a proprietary ECC engine in the NAND controller, outside of the NAND flash chip.
And yeah, I read the Embedded Computing Design article. And the sentence you quoted. And I agree that what that article says sounds like it's cell-level.

Note that I suggest that because that's what they themselves suggest, as you have seen. Who knows if that's really the case. I certainly didn't chase the patents. (...yet)

But then I look at this: "http://anobit.com/uploaded/MSP2025%20Embedded%20Flash%20Controller%20-%20Product%20Brief%201%20page.pdf"
Okay, so here we have an MSP-powered NAND controller. It's the MSP/ECC block on the page 4 diagram mentioned earlier. It's designed to hook up to ONFI NAND.
Explain to me how this chip reads analog values from NAND flash. Because as far as I know, if you're outside the NAND, you're getting pages.

First of all: no idea.
But, my barely educated guesses? All the mentions say "embedded Flash Controller". As I speculated, maybe this is really embedded together with the memory itself?
Interesting, for example, that both the input and output of the controller diagram say NAND I/F. Are we sure where ONFI appears?

Even more: in fact, with multi-bit cells, what you get when reading a cell is of course multiple bits, which after all is a discretization of the analog value. So if, instead of taking the bits at face value, you use them as input for some processing... you can in fact comply with everything (digital values via ONFI, which still represent the underlying analog cell value, which can be signal-processed or whatever). Although that'd still require some knowledge of the actual cells being accessed, of course.
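
As a rough sketch of what I mean, in Python; the voltage centers and the noise spread are numbers I just made up, and who knows if it resembles what they really do:

import math

# Nominal threshold-voltage centers for the four levels of a 2-bit MLC cell
# (Gray coded). Invented numbers for illustration.
LEVELS = {(1, 1): 0.0, (0, 1): 1.0, (0, 0): 2.0, (1, 0): 3.0}
SIGMA = 0.45  # assumed spread of each level's voltage distribution (grows with wear)

def bit_reliability(read_bits):
    """Turn the discretized 2-bit read into per-bit probabilities of being 1."""
    v = LEVELS[read_bits]  # the voltage the reported level *implies*
    weights = {bits: math.exp(-((v - c) ** 2) / (2 * SIGMA ** 2))
               for bits, c in LEVELS.items()}
    total = sum(weights.values())
    p_b0 = sum(w for bits, w in weights.items() if bits[0] == 1) / total
    p_b1 = sum(w for bits, w in weights.items() if bits[1] == 1) / total
    return p_b0, p_b1

# An edge level is more trustworthy than a middle one -- soft information
# that taking the bits at face value throws away.
print(bit_reliability((1, 1)))  # outermost level: high confidence
print(bit_reliability((0, 0)))  # middle level: both neighbor levels are plausible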
 
I don't see any problem with respectfully disagreeing.

It's only a problem when one tries to dig for some understanding and someone else arrogantly dismisses the thing with nothing more than some handwaving.

Oh, I'm totally fine with respectfully disagreeing and analytical discussion. But it certainly sounds like you believe I'm doing the handwaving, and vice versa.

Note that I suggest that because that's what they themselves suggest, as you have seen. Who knows if that's really the case. I certainly didn't chase the patents. (...yet)

On the off chance that you do chase the patents, I'd like to see them too.

First of all: no idea.
But, my barely educated guesses? All the mentions say "embedded Flash Controller". As I speculated, maybe this is really embedded together with the memory itself?
Interesting, for example, that both the input and output of the controller diagram say NAND I/F. Are we sure where ONFI appears?

Typically, "embedded flash controller" is the controller embedded in a managed storage device that sits between the storage device's outside interface and the NAND flash chips. It should be standard terminology. An example would be an SD card or SSD drive. The external interface would be SD or SATA. The controller-to-flash interface is ONFI.

The Hynix device you brought up is an eMMC device. Several manufacturers make these, and they're literally an MMC card on a chip or a package. The idea being that many embedded device CPUs have an MMC/SD interface, so it's easy to hook into your design. Inside, they're still typically divided into the flash controller and the NAND chip with vias in between, usually to maximize yields.

Knowing what ONFI (http://onfi.org/specifications/) looks like, and knowing what typical flash storage device designs look like, I dismissed the idea of doing analog signal processing with the per-cell values because the flash controller simply doesn't have access to this data. In order for them to have access to this data and have reasonable throughput, Anobit would have to design and build their own NAND. Having them ship an MSP implementation as a flash controller proved that MSP doesn't use the raw cell values as input data.

Sound less handwavy now?

Even more: in fact, with multi-bit cells, what you get when reading a cell is of course multiple bits, which after all is a discretization of the analog value. So if, instead of taking the bits at face value, you use them as input for some processing... you can in fact comply with everything (digital values via ONFI, which still represent the underlying analog cell value, which can be signal-processed or whatever). Although that'd still require some knowledge of the actual cells being accessed, of course.

Right, but when you're at the digital level, wouldn't you consider that the same realm as ECCs? I speculate that what they're doing is actually inputting some model data about the cell layout into a hardware ECC engine to apply a level of correction regarding probabilistic changes due to array impairments. But that's speculation.
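
As a sketch of that speculation (the coupling coefficients and the neighbor layout here are invented for illustration, not anybody's real model):

# Invented model: one coupling coefficient per neighboring cell.
COUPLING = {"left": 0.08, "right": 0.08, "above": 0.05, "below": 0.05}

def compensate(sensed_level, neighbor_levels):
    """Subtract the distortion the model says the neighbors added when they
    were programmed after this cell."""
    predicted_shift = sum(COUPLING[pos] * lvl
                          for pos, lvl in neighbor_levels.items())
    return sensed_level - predicted_shift

# The cell was written as level 1, but heavily programmed neighbors pushed
# the sensed value toward the next threshold.
sensed = 1.31
print(compensate(sensed, {"left": 3, "right": 0, "above": 2, "below": 1}))
# -> 0.92: after compensation, the cell decodes back to level 1.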
 
Typically, "embedded flash controller" is the controller embedded in a managed storage device that sits between the storage device's outside interface and the NAND flash chips. It should be standard terminology. An example would be an SD card or SSD drive. The external interface would be SD or SATA. The controller-to-flash interface is ONFI.

The Hynix device you brought up is an eMMC device. Several manufacturers make these, and they're literally an MMC card on a chip or a package. The idea being that many embedded device CPUs have an MMC/SD interface, so it's easy to hook into your design. Inside, they're still typically divided into the flash controller and the NAND chip with vias in between, usually to maximize yields.

Knowing what ONFI (http://onfi.org/specifications/) looks like, and knowing what typical flash storage device designs look like, I dismissed the idea of doing analog signal processing with the per-cell values because the flash controller simply doesn't have access to this data. In order for them to have access to this data and have reasonable throughput, Anobit would have to design and build their own NAND. Having them ship an MSP implementation as a flash controller proved that MSP doesn't use the raw cell values as input data.

Sound less handwavy now?

Certainly, and it's an interesting background and thought process.
Although I wouldn't go as far as to say that it "proves" it... I guess there are still too many unknowns (for example, are we sure MSP works after ONFI? Are there no other options?).

And about the "in order to have access to this data and have reasonable throughput" part, note that Anobit also specifically mentions that as something they have had to take care of. (again, mentioned in the Embedded Computing Design article)

Right, but when you're at the digital level, wouldn't you consider that the same realm as ECCs? I speculate that what they're doing is actually inputting some model data about the cell layout into a hardware ECC engine to apply a level of correction regarding probabilistic changes due to array impairments. But that's speculation.

Mhm. I wouldn't interpret the boxes in the diagrams like that... but why not, it also sounds like a possibility. However... would that still be an Error Correcting Code?

I mean, it's been some time since I studied these things, but the last place I remember where you'd take into account some model of the medium would be, for example, digital de/modulation. Which I guess would be closer to SP than to ECC. I could be wrong, or things could have evolved, of course.
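
From what I remember, a toy example of that: a BPSK demodulator over a noisy (AWGN) channel converts each received sample into a log-likelihood ratio using a model of the medium, before any ECC runs. Take it with a grain of salt:

def bpsk_llr(received_sample, sigma):
    """Log-likelihood ratio for a BPSK symbol over AWGN with noise std dev
    sigma. Positive means 'probably a 0' (sent as +1.0); the magnitude says
    how much the channel model lets you trust this sample."""
    return 2.0 * received_sample / (sigma ** 2)

print(bpsk_llr(+0.9, 0.5))  # clean sample: large LLR, confident
print(bpsk_llr(-0.1, 0.5))  # marginal sample: small LLR, barely leaning toward 1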

Anyway, this starts to look like what we are saying are not such different things after all, and the difference is more about where to draw the line between signal processing, ECC... and whatever else.
 
Certainly, and it's an interesting background and thought process.
Although I wouldn't go as far as to say that it "proves" it... I guess there are still too many unknowns (for example, are we sure MSP works after ONFI? Are there no other options?).

Of course, without seeing the exact plans of Anobit, we can't say it's 100% proven. But given that MSP is always talked about in terms of the NAND controller, which sits after ONFI, and that I've never seen or heard of a mass-produced NAND array that doesn't speak ONFI since I started reading about NAND controller implementation in 2009, I'd say it's highly likely to be true.

Also, when I mentioned that there are similar technologies from other companies, they're also implemented outside the NAND and inside the NAND controller. (Sandforce implements "Advanced Read/Program Disturb Management" in their NAND controller.)

The only way to prove otherwise is for Anobit to ship or cooperate in building a NAND array where the NAND controller is directly hooked into the array with ADCs instead of the output drivers. (see page 9 http://www.rockbox.org/wiki/pub/Main/OndaVX747/K9HBG08U1M.pdf)
That'd be quite expensive to make, lower the yield, take more time to design, and hurt performance due to the amount of time necessary to run any sort of analysis algorithm on every read.

In addition to that, there'd have to be two different systems branded as MSP, since one implemented as part of the NAND array would be extremely different from the one they ship in their existing NAND controllers, simply because the input would be so different.

They're welcome to try for such a beast, just like Intel tried with FBDIMMs, but such a design doesn't sound like a winner to me.

And about the "in order to have access to this data and have reasonable throughput" part, note that Anobit also specifically mentions that as something they have had to take care of. (again, mentioned in the Embedded Computing Design article)

I went back to the article and reread that part a few times. They say "For example, the floating gate coupling distortion in a cell can be measured and compensated for via signal processing algorithms."

To me, I read that as: the effects of the distortion can be measured (and then compensated for). Which is true. You can write to the NAND, then read and write to nearby blocks and determine the effects through iteration, and then extrapolate for the rest of the chip since the arrays are pretty uniform. This could be a one-time thing after the controller and NAND are first paired and formatted, and the calibration data reused for the lifetime of the device.
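
Here's that measure-then-extrapolate idea as a self-contained toy (the fake NAND behavior and the coupling constant are invented for the demo):

import random
from statistics import mean

TRUE_COUPLING = 0.13   # the hidden physics the calibration must recover
victim_levels = []     # sensed levels of the victim page

def program_victim():
    global victim_levels
    victim_levels = [float(random.randint(0, 3)) for _ in range(64)]
    return victim_levels[:]

def program_aggressor():
    # each neighboring ("aggressor") write nudges every victim cell upward
    global victim_levels
    victim_levels = [v + TRUE_COUPLING * random.random() for v in victim_levels]

def calibrate(trials=100):
    drifts = []
    for _ in range(trials):
        before = program_victim()
        program_aggressor()
        drifts.append(mean(a - b for a, b in zip(victim_levels, before)))
    return mean(drifts)

model = calibrate()
print(round(model, 3))  # ~0.065: the average shift per aggressor write,
                        # which the controller can now subtract on every read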

Given this model, and the need for performance, the "signal processing block" would essentially be a logic block that applies the model to predict what the data could have been with some distortion removed.

Of course this is speculation, but it does fit the description.

Mhm. I wouldn't interpret the boxes in the diagrams like that... but why not, it also sounds like a possibility. However... would that still be an Error Correcting Code?

I mean, it's been some time since I studied these things, but the last place I remember where you'd take into account some model of the medium would be, for example, digital de/modulation. Which I guess would be closer to SP than to ECC. I could be wrong, or things could have evolved, of course.

Anyway, this starts to look like what we are saying are not such different things after all, and the difference is more about where to draw the line between signal processing, ECC... and whatever else.

I see what you mean. While it's error correction, it's not what you'd typically call a code, and I agree that it does sound more like traditional signal processing. Given what I've heard of more advanced ECC algorithms (although I haven't researched them in depth), it does seem harder to distinguish between the fields.

But yeah, when described using comparisons like that, I agree that signal processing does sound like a more appropriate way to describe the process.
 
Of course, without seeing the exact plans of Anobit, we can't say it's 100% proven. But given that MSP is always talked about in terms of the NAND controller, which sits after ONFI, and that I've never seen or heard of a mass-produced NAND array that doesn't speak ONFI since I started reading about NAND controller implementation in 2009, I'd say it's highly likely to be true.

Also, when I mentioned that there are similar technologies from other companies, they're also implemented outside the NAND and inside the NAND controller. (Sandforce implements "Advanced Read/Program Disturb Management" in their NAND controller.)

The only way to prove otherwise is for Anobit to ship or cooperate in building a NAND array where the NAND controller is directly hooked into the array with ADCs instead of the output drivers. (see page 9 http://www.rockbox.org/wiki/pub/Main/OndaVX747/K9HBG08U1M.pdf)
That'd be quite expensive to make, lower the yield, take more time to design, and hurt performance due to the amount of time necessary to run any sort of analysis algorithm on every read.

In addition to that, there'd have to be two different systems branded as MSP, since one implemented as part of the NAND array would be extremely different from the one they ship in their existing NAND controllers, simply because the input would be so different.

Sounds convincing. In any case this is not my field of work, and you seem to be much better informed.

I went back to the article and reread that part a few times. They say "For example, the floating gate coupling distortion in a cell can be measured and compensated for via signal processing algorithms."

To me, I read that as: the effects of the distortion can be measured (and then compensated for). Which is true. You can write to the NAND, then read and write to nearby blocks and determine the effects through iteration, and then extrapolate for the rest of the chip since the arrays are pretty uniform. This could be a one-time thing after the controller and NAND are first paired and formatted, and the calibration data reused for the lifetime of the device.

Given this model, and the need for performance, the "signal processing block" would essentially be a logic block that applies the model to predict what the data could have been with some distortion removed.

Of course this is speculation, but it does fit the description.

In fact, I was wondering if performance would degrade over time. Reviews and papers only seem to focus on longevity, and only refer to good performance in passing. Maybe when MSP started "working hard" (I was imagining some real-time testing and recalibrating) there would be a performance impact vs. the initial state?

But the system you outline would avoid all of that, if in fact that calibration data can be gathered, reused and extrapolated for the life of the device. Interesting.
 
Sounds convincing. In any case this is not my field of work, and you seem to be much better informed.

Hardware isn't my line of work right now either (by day, I'm currently a consumer software developer), but originally it was supposed to be. (Yup, dot-com crash.) So now all the embedded systems stuff is just a hobby to keep the idle bits of my brain moving.

On the off chance you're interested in learning more about NAND controller implementation, look up the OpenSSD project. Basically, Indilinx published everything you need to write your own firmware for their Barefoot controller so that anybody who wanted to could do it for research. I've considered looking for a broken Barefoot-based SSD on eBay (since they're typically broken because the NAND wore out past the consumer firmware's limits) and then playing with it, but not many have appeared and the prices are kinda high. At any rate, the example FTLs they have source code for are great examples of the theory behind early flash storage systems.

In fact, I was wondering if performance would degrade over time. Reviews and papers only seem to focus on longevity, and only refer to good performance in passing. Maybe when MSP started "working hard" (I was imagining some real-time testing and recalibrating) there would be a performance impact vs. the initial state?

But the system you outline would avoid all of that, if in fact that calibration data can be gathered, reused and extrapolated for the life of the device. Interesting.

The design we discussed could be expanded on to provide recalibration during the product lifetime. But the calibration would be harder to apply as the array ages, since the array will never age uniformly. Without actually trying it out, I'm not sure how well it'd work. There'd definitely be a risk of applying a less-correct model due to unevenness of wear, and there'd definitely be an additional performance penalty. But it's a good idea to check out, because the extra work could be worth it over simply extrapolating from the starting model.

In the long run, there is no avoiding general performance degradation due to age. Eventually cells will go bad. Making them last longer will slow the inevitable. But it can't stop it.
 
Hardware isn't my line of work right now either (by day, I'm currently a consumer software developer), but originally it was supposed to be. (Yup, dot-com crash.) So now all the embedded systems stuff is just a hobby to keep the idle bits of my brain moving.

On the off chance you're interested in learning more about NAND controller implementation, look up the OpenSSD project. Basically, Indilinx published everything you need to write your own firmware for their Barefoot controller so that anybody who wanted to could do it for research. I've considered looking for a broken Barefoot-based SSD on eBay (since they're typically broken because the NAND wore out past the consumer firmware's limits) and then playing with it, but not many have appeared and the prices are kinda high. At any rate, the example FTLs they have source code for are great examples of the theory behind early flash storage systems.

Thanks for the pointers, I had no idea about that. Not that I am going to jump right into it, but lately I'm thinking about going lower level, so... we'll see. :)

The design we discussed could be expanded on to provide recalibration during the product lifetime. But the calibration would be harder to apply as the array ages, since the array will never age uniformly. Without actually trying it out, I'm not sure how well it'd work. There'd definitely be a risk of applying a less-correct model due to unevenness of wear, and there'd definitely be an additional performance penalty. But it's a good idea to check out, because the extra work could be worth it over simply extrapolating from the starting model.

In the long run, there is no avoiding general performance degradation due to age. Eventually cells will go bad. Making them last longer will slow the inevitable. But it can't stop it.

Of course. However, I was referring mostly to my (surely unwarranted) assumption that longevity would simply be longer and "that's it"; but after this discussion I'd love to see some "longevity vs. performance" graph. Specifically, P/E cycles vs. burst rate, for both "standard" NAND controllers and MSP. I guess standard ones simply start failing and maybe reducing the reported memory size as failing blocks get detected and put aside? If you can point me to any information of that kind, I'd be grateful.
 
Of course. However, I was referring mostly to my (surely unwarranted) assumption that longevity would simply be longer and "that's it"; but after this discussion I'd love to see some "longevity vs. performance" graph. Specifically, P/E cycles vs. burst rate, for both "standard" NAND controllers and MSP. I guess standard ones simply start failing and maybe reducing the reported memory size as failing blocks get detected and put aside? If you can point me to any information of that kind, I'd be grateful.

I tried digging up papers talking about longevity vs. performance and couldn't find any that measure it on the level you're thinking of. I'm guessing this is probably because it's hard to age an SSD for testing when modern controllers are pretty good. And then it'd be hard to separate out the effects of the aging from the unique characteristics of various controllers, as well as the terabytes of input data. So I can write out a basic explanation and example as to why longevity and performance are closely tied, but I don't have any way to provide concrete numbers.

As blocks fail, they'll be set aside on the bad block list and removed from usage. The reported memory size will still be the same as that can't be changed without messing up the filesystem on top of it. (unless the reported memory size change happens when you're reformatting the file system, but the SSD doesn't have any easy way of knowing that during the typical reformatting process on the average consumer's laptop)

So when blocks fail, it's reducing the amount of reserved space. The reserved space exists to assist the controller in improving longevity (static/dynamic wear leveling) and performance (reducing write amplification by whatever tricks the controller designers thought up, such as packing fragmented blocks or compression, or whatever). The reserved space is not made up of physical blocks specifically set aside, but just a list of blocks that are not completely full of current or stale data. So given a bunch of user writes and overwrites, a block (A) that is currently storing data would eventually be moved to the reserved space list when the user "overwrites" the data. This is because overwriting it in place is slower and causes less even wear, so the new data is written to a reserved block (B), and the pointers updated to use the new block (B). When (A) is garbage collected, the remaining valid data is moved elsewhere, and (A) is put on the reserved block list.

As the reserved block list gets shorter, more garbage collection would have to happen to ensure the existence of blank or partially blank blocks, which are important to keeping write performance up. More garbage collection means more erasures. More erasures accelerate aging. Plus, erasures are slow, impacting performance. Eventually, the reserved block list is empty and it's game over.
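
Here's that lifecycle compressed into a toy FTL sketch, deliberately simplified (no wear leveling, page packing, or bad block handling):

class ToyFtl:
    def __init__(self, physical_blocks):
        self.map = {}                                # logical block -> physical block
        self.reserve = list(range(physical_blocks))  # blocks free for new writes
        self.erase_count = 0

    def write(self, logical):
        new = self.reserve.pop(0)       # new data goes to a reserved block (B)
        old = self.map.get(logical)
        self.map[logical] = new         # pointers updated to use (B)
        if old is not None:
            self._garbage_collect(old)  # the stale block (A) gets recycled

    def _garbage_collect(self, block):
        self.erase_count += 1           # erasures age the flash...
        self.reserve.append(block)      # ...but refill the reserve list

ftl = ToyFtl(physical_blocks=8)
for i in range(100):
    ftl.write(logical=i % 4)            # keep overwriting 4 logical blocks
print(ftl.erase_count)                  # 96 erases for 100 writes

Real FTLs garbage collect at a coarser granularity and pack partially valid blocks, which is where a shrinking reserve really starts to hurt; the toy just shows the write-redirect-recycle loop.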

The closest paper I found talking about long term effects was this:
http://www.usenix.org/event/fast10/tech/full_papers/boboila.pdf

It mentions something interesting that I hadn't thought of looking into before. As cells age, their program and erase times change as well. So, disregarding the controller entirely, there are already some performance changes over time.

The paper also talks about the FTLs on USB flash drives, which are a good example of early SSD controllers like the JMF601.
 
Wow, hchung, thank you very much for the explanation and link; I didn't expect so much. I'll be reading that PDF right now.

Regards! :)
 
Apple still has no manufacturing capabilities, only design. It's a fabless company.

I keep on wondering when they are going to break ground and have their own proprietary fabrication process. I'm sure someone is running those numbers.
 
I keep on wondering when they are going to break ground and have their own proprietary fabrication process. I'm sure someone is running those numbers.

Chip fab or product manufacturing? I doubt it'd be worth it for Apple to go into either.
 