Certainly, and it's an interesting background and thought process.
Although I wouldn't go so far as to say that it "proves" it... I guess there are still too many unknowns (for example, are we sure MSP operates after ONFI? Are there no other options?).
Of course, without seeing Anobit's exact designs, we can't say it's 100% proven. But given that MSP is always discussed in terms of the NAND controller, which sits after ONFI, and that I've never seen or heard of a mass-produced NAND array that doesn't speak ONFI since I started reading about NAND controller implementations in 2009, I'd say it's highly likely to be true.
Also, when I mentioned that there are similar technologies from other companies, those are also implemented outside the NAND and inside the NAND controller. (SandForce, for example, implements "Advanced Read/Program Disturb Management" in their NAND controller.)
The only way to prove otherwise would be for Anobit to ship, or cooperate in building, a NAND array where the NAND controller is hooked directly into the array with ADCs instead of the output drivers. (See page 9 of
http://www.rockbox.org/wiki/pub/Main/OndaVX747/K9HBG08U1M.pdf)
Such a part would be quite expensive to make, lower yielding, more time-consuming to design, and lower performing due to the amount of time needed to run any sort of analysis algorithm on every read.
In addition, there'd have to be two different systems branded as MSP, since one implemented as part of the NAND array would be extremely different from the one they ship in their existing NAND controllers, simply because the input would be so different.
They're welcome to try for such a beast, just like Intel tried with FBDIMMs, but such a design doesn't sound like a winner to me.
And about the "in order to have access to this data and have reasonable throughput" part, note that Anobit also specifically mentions that as something they have had to take care of. (again, mentioned in the Embedded Computing Design article)
I went back to the article and reread that part a few times. They say "For example, the floating gate coupling distortion in a cell can be measured and compensated for via signal processing algorithms."
To me, that reads as: the effects of the distortion can be measured (and then compensated for). Which is true. You can write to the NAND, then read and write to nearby blocks and determine the effects through iteration, and then extrapolate for the rest of the chip, since the arrays are pretty uniform. This could be a one-time step after the controller and NAND are first paired and formatted, with the calibration data reused for the lifetime of the device; a rough sketch of such a calibration pass is below.
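To make that concrete, here's a minimal sketch of what such a one-time calibration pass might look like. This is pure speculation on my part, written in Python: read_soft and program are hypothetical helpers standing in for whatever access the controller actually has (per-cell soft readback could, for instance, be assembled from multiple reads at shifted reference voltages over a standard interface).

    import numpy as np

    def estimate_coupling(read_soft, program, victim_page, neighbor_page, pattern):
        # Hypothetical one-time calibration pass.
        # read_soft(page) -> per-cell soft readback (numpy array of voltage estimates)
        # program(page, data) -> programs a page with the given data

        # 1. Program the victim page with a known pattern and record a baseline.
        program(victim_page, pattern)
        baseline = read_soft(victim_page)

        # 2. Program the neighboring page; floating-gate coupling shifts the
        #    victim cells' apparent threshold voltages.
        program(neighbor_page, pattern)
        disturbed = read_soft(victim_page)

        # 3. The shift on the victim is roughly proportional to the charge added
        #    next door; fit the proportionality constant by least squares.
        neighbor_levels = read_soft(neighbor_page)
        shift = disturbed - baseline
        alpha = np.dot(neighbor_levels, shift) / np.dot(neighbor_levels, neighbor_levels)
        return alpha  # cache and reuse for the life of the controller/NAND pairing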
Given this model, and the need for performance, the "signal processing block" would essentially be a logic block that applies the model to predict what the data could have been with some distortion removed.
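In that spirit, the runtime logic block could be almost trivial. Again a speculative Python sketch with the same hypothetical inputs (a real controller would implement this as fixed-point hardware, and levels here is an assumed table of nominal threshold voltages per programmed state):

    import numpy as np

    def compensate_read(raw, neighbor_raw, alpha, levels):
        # raw, neighbor_raw: per-cell soft readback for the page and its neighbor
        # alpha: coupling coefficient from the one-time calibration
        # levels: nominal threshold-voltage targets for each programmed state

        # Subtract the predicted coupling contribution from the neighbors...
        corrected = raw - alpha * neighbor_raw
        # ...then slice each cell to the nearest nominal level.
        return np.argmin(np.abs(corrected[:, None] - levels[None, :]), axis=1)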
Of course this is speculation, but it does fit the description.
Mhm. I wouldn't have interpreted the boxes in the diagrams like that... but why not, it also sounds like a possibility. However... would that still be an Error Correcting Code?
I mean, it's been some time since I studied these things, but the last place I remember taking a model of the medium into account would be, for example, digital de/modulation, which I guess is closer to signal processing than to ECCs. I could be wrong, or things could have evolved, of course.
Anyway, this is starting to look like we're talking about not-so-different things after all, and the difference is more about where to draw the line between signal processing, ECC... and whatever else.
I see what you mean. While it's error correction, it's not what you'd typically call a code, and I agree that it sounds more like traditional signal processing. Given what I've heard about more advanced ECC algorithms (although I haven't done in-depth research on them), it does seem harder to distinguish between the two fields.
But yeah, when described using comparisons like that, I agree that signal processing does sound like a more appropriate way to describe the process.