You're right, but I'm all for faster technology becoming standard! :)

I feel that by 2020, all PCs should come with an SSD standard, even if they're the slower kind. The prices have come down so much over the past decade that we just might get there.

https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/

I just hope this new technology is better on the data integrity front.
Also, I hope that this might ultimately give Apple a reason to up their filesystem game.

Still using HFS+ in 2015, with no data integrity assurance, is awful.

 
If I had to guess, we'll first see this as very small, on-CPU caching.
For CPU caching SRAM is used, which is still multiple times faster than DRAM, which is faster than this technology. So no, this won't be used for CPU caching. The question really is, is this technology performant enough to replace traditional RAM? That would be a breakthrough of great significance (it would literally change computing forever).
 
This could be the next "BIG THING" if what I'm reading is correct. The key word there is NON-VOLATILE. Regular system RAM has never been non-volatile, and while it's been faster than flash storage or SSDs, you could never swap one for the other without dire consequences. In other words, SSD storage, fast as it is, can't compare to regular memory speeds, and using regular memory for storage would mean you'd have to keep it refreshed (powered), making it largely useless for storage: it would be erased the moment it lost battery power, and it would be a relative power hog.

Here, however, you have the potential to replace flash, SSDs and RAM all with this new type of memory. Imagine a computer that comes with 5TB of this stuff that is both RAM and STORAGE, with no differentiation between the two and no speed drops transferring things. Games would never "load" in the traditional sense again, as "loading" is moving data from storage to main system memory so the CPU can manipulate it. Here, there wouldn't have to be a difference! Even external storage of this type would be as fast as the data bus lines could possibly move it (with current technology), making all current SSDs obsolete, etc.

The only issue, of course, is PRICE. I'd imagine, however, that this stuff is going to be so popular that it will change the face of the entire computing industry within a few years' time, unless it has a major manufacturing issue. The article mentions it being designed to be affordable, yet throws "too expensive at first" back in your face in the same article, so I imagine that will be a limiting factor at first. But long term, storage may change entirely.

I share your enthusiasm on this, as it could indeed change the way things work in computing. The one thing you did not mention is that the OS will also need to change. If everything is in the same memory, then, as you said, the OS does not need to load an app from long-term storage into run-time storage. This also means that the entire memory management component of the OS would need to change (what happens when you have an app with a memory leak in this new world?). Finally, security will need to be looked at: do you partition or sandbox parts of the memory?

Bottom line is that the introduction of this hardware is great, but it's not like you can simply slap it in and everything works perfectly. Given that they are releasing it later this year, I cannot see this being available in the iPhone or Mac, since there is no indication of it in either El Capitan or iOS 9. Maybe next year, but that would be the earliest IMO.
 
So does this replace the SSD and the system memory and make it one solution? Is this a shift in hardware architecture, as in making the SSD and system memory the same component on a system board?
Unless they solved the issue with the finite number of write/erase cycles, it won't be useful as system memory.
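For a sense of why endurance matters here, a back-of-the-envelope sketch. Every number below is a made-up illustrative assumption, not a published spec for this technology:

```python
# Back-of-the-envelope endurance estimate (all figures assumed for
# illustration): a 16 GiB module rated for 1e7 writes per cell, hit
# with 10 GiB/s of sustained writes behind perfect wear leveling.
capacity_bytes = 16 * 2**30
write_rate = 10 * 2**30            # bytes per second
endurance_cycles = 10**7           # rated write cycles per cell (assumed)

total_writable = capacity_bytes * endurance_cycles
lifetime_years = total_writable / write_rate / (3600 * 24 * 365)
print(round(lifetime_years, 1))    # ~0.5: months, not decades, at full tilt
```

DRAM shrugs off that kind of workload indefinitely, which is why endurance, not just speed, decides whether something can serve as main memory.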
 
Pointless until a bus, ports and other I/O can feed data at that rate.

Why does there have to be a bus/port/etc? Intel was involved in the design. If this could unify RAM and storage at once, they could simply use the memory controllers built into their CPUs to access these chips at unheard-of rates.
 
I share your enthusiasm on this, as it could indeed change the way things work in computing. The one thing you did not mention is that the OS will also need to change. If everything is in the same memory, then as you said the OS does not need to load an app from long term storage to run-time storage. ......

This idea is not new. Multics, an OS from the 1960s that predates UNIX, had unified RAM and storage. As I remember, a virtual memory layer made it appear that everything was in RAM all the time, so the file system sat on top of that and only had to track the RAM address of the files, and the VM layer translated that to a location on disk.

People have been working on this idea for 40 or 50 years now.

Also remember "core" storage? That was a non-volatile kind of RAM used for the CPU's active memory the way we use DRAM today. Some high-end computers kept files in core. This was the 1960s, when a megabyte of core was VERY expensive, but I did work as a systems programmer (an OS developer) on these old machines. It's fun to see ideas that were implemented 40 years ago finally become affordable. The old CDC and IBM computers I used cost about $12M back when $12M was a lot of money. This can be in a phone today.

There are only a few more 1960s technologies I'm still waiting to see in my phone: the "I/O Channels" from the IBM 370 and, even better, the "Peripheral Processor Units" from the old CDC 6600 series of mainframes. Most everything else from the '60s is already present in our low-cost systems.

I remember a computer that used non-volatile core for RAM. It was shipped to us on a pallet, and when we plugged it in it started running the software it had been running before it was shipped. They had just pulled the plug on a running computer without shutting it down. We applied power and it did not miss a beat.
 
As I remember, a virtual memory layer made it appear that everything was in RAM all the time, so the file system sat on top of that and only had to track the RAM address of the files, and the VM layer translated that to a location on disk.

You would still need to "load" it and have a boundary between what's used as memory and what is used for storage. Storage uses a file system exposed to the user, and files use file formats. Take a JPEG as an example: it first needs to be parsed and decompressed, and the result placed in a separate area. Even if this were not a requirement, editing a file would be the same as editing memory, so any change to memory would be permanent.
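That last point already exists today in memory-mapped files; a minimal Python sketch (file name and contents are arbitrary):

```python
# Minimal sketch: memory-mapping a file so its bytes appear directly in
# the process address space. Editing the "memory" edits the file.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello storage")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        print(m[:5].decode())    # "hello": file bytes read as if they were RAM
        m[0:5] = b"HELLO"        # writing the "memory" rewrites the file

with open(path, "rb") as f:
    print(f.read().decode())     # "HELLO storage": the change was permanent
```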
 
And what happened to that Anobit deal, with the Apple-owned flash memory maker in Israel?

Anobit was not a flash memory maker, they were a flash CONTROLLER DESIGNER.
Presumably the team designs the flash controllers in iPhones so that Apple can embed them directly on the SoC and avoid one more external chip. The flash controller keeps track of which flash blocks are in use and which are free, schedules operations to run as fast as possible, ensures wear leveling, encodes each block (first via encryption, then adding error correction), that sort of thing.
They seem to do the job; iPhones have never had flash performance problems. (There was an issue, leading to a recall, with the flash in the 128GB iPhone 6s, but that was a problem with the flash chips, not the flash controller.)
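As a toy illustration of one of those controller duties, here is a wear-leveling sketch in Python. This is a deliberately simplified model of the idea, not how Anobit's (or anyone's) controller actually works; real controllers also handle garbage collection, ECC, bad blocks, and much more:

```python
# Toy wear leveling via a logical-to-physical block map: every write is
# redirected to the least-worn physical block, and the map is updated so
# repeated writes to one logical block spread across the whole device.

class WearLeveler:
    def __init__(self, n_blocks):
        self.mapping = list(range(n_blocks))   # logical block -> physical block
        self.erase_counts = [0] * n_blocks     # wear per physical block

    def write(self, logical):
        # Pick the least-worn physical block as the new home.
        target = min(range(len(self.erase_counts)),
                     key=self.erase_counts.__getitem__)
        other = self.mapping.index(target)     # logical block currently there
        self.mapping[logical], self.mapping[other] = target, self.mapping[logical]
        self.erase_counts[target] += 1

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0)                # hammer a single logical block
print(wl.erase_counts)         # wear is spread evenly: [2, 2, 2, 2]
```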
 
For CPU caching SRAM is used, which is still multiple times faster than DRAM, which is faster than this technology. So no, this won't be used for CPU caching.

So what's your comparison? How fast are you claiming L1 cache in a current Intel CPU is? (GB/s)

How fast are you saying this technology will be? (GB/s)
 
This idea is not new. Multics, an OS from the 1960s that predates UNIX, had unified RAM and storage. As I remember, a virtual memory layer made it appear that everything was in RAM all the time, so the file system sat on top of that and only had to track the RAM address of the files, and the VM layer translated that to a location on disk.

People have been working on this idea for 40 or 50 years now.

Also remember "core" storage? That was a non-volatile kind of RAM used for the CPU's active memory the way we use DRAM today. Some high-end computers kept files in core. This was the 1960s, when a megabyte of core was VERY expensive, but I did work as a systems programmer (an OS developer) on these old machines. It's fun to see ideas that were implemented 40 years ago finally become affordable. The old CDC and IBM computers I used cost about $12M back when $12M was a lot of money. This can be in a phone today.

There are only a few more 1960s technologies I'm still waiting to see in my phone: the "I/O Channels" from the IBM 370 and, even better, the "Peripheral Processor Units" from the old CDC 6600 series of mainframes. Most everything else from the '60s is already present in our low-cost systems.

I remember a computer that used non-volatile core for RAM. It was shipped to us on a pallet, and when we plugged it in it started running the software it had been running before it was shipped. They had just pulled the plug on a running computer without shutting it down. We applied power and it did not miss a beat.

Yes, but iOS and OS X do not work this way today. I am not saying they can't, just that they don't, and therefore work is required to make them work in this fashion.
 
Yes, but iOS and OS X do not work this way today. I am not saying they can't, just that they don't, and therefore work is required to make them work in this fashion.

Multics was never used as a commercial product; it was a research project which collapsed under its own weight. Unix was created from the lessons learned. And as I pointed out above, there are several reasons why you would not want it to work like this. That being said, the VM in OS X already maps both memory and disk.
 
You would still need to "load" it and have a boundary between what's used as memory and what is used for storage. Storage uses a file system exposed to the user, and files use file formats. Take a JPEG as an example: it first needs to be parsed and decompressed, and the result placed in a separate area. Even if this were not a requirement, editing a file would be the same as editing memory, so any change to memory would be permanent.

The boundary might be fuzzier than we imagine. Right now we "swap" out RAM to hard disk when we run out of RAM and need to temporarily store its contents on disk, and we "memory map" files, so that the file contents are used just as memory regions, and we have "temp files" which are temporary files that are not intended to remain after a program quits running.

For compressed data, when a CPU is fast enough and the file is small enough, it's actually faster to uncompress as you go and not store an uncompressed copy of the data. With images, they'd probably just be uncompressed into video RAM instead of system RAM, if part of the user interface.
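The decompress-as-you-go point can be sketched with a streaming decompressor; Python's zlib stands in here for whatever codec the real file format uses:

```python
# Decompress-as-you-go: feed compressed chunks through a streaming
# decompressor instead of materializing the whole uncompressed file.
import zlib

original = b"forum post " * 10000
compressed = zlib.compress(original)

d = zlib.decompressobj()
out_len = 0
for i in range(0, len(compressed), 4096):      # consume in small chunks
    out_len += len(d.decompress(compressed[i:i + 4096]))
out_len += len(d.flush())
print(out_len == len(original))                # True: same data, produced piecewise
```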
 
The boundary might be fuzzier than we imagine. Right now we "swap" out RAM to hard disk when we run out of RAM and need to temporarily store its contents on disk, and we "memory map" files, so that the file contents are used just as memory regions, and we have "temp files" which are temporary files that are not intended to remain after a program quits running.

I agree, and I already brought up both memory-mapped files and the VM. But that doesn't mean that a memory object and any regular file in the filesystem are the same object with many references. Opening, for example, an image file like that and viewing the content, then editing the file, would change the in-memory content. Probably not what you want.

For compressed data, when a CPU is fast enough and the file is small enough, it's actually faster to uncompress as you go and not store an uncompressed copy of the data. With images, they'd probably just be uncompressed into video RAM instead of system RAM, if part of the user interface.

No matter how small the uncompressed section is, it still needs to be placed in a buffer in memory. And if memory is disk, you probably do not want to alter the original file; and what if many programs access the same file at the same time?
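The "don't alter the original file" concern maps onto the copy-on-write (private) mappings OSes already provide: each program edits its own view while the stored copy stays intact. A small Python sketch, with arbitrary file name and contents:

```python
# Copy-on-write mapping: edit an in-memory view of a file without
# touching the file itself, so many readers can share one stored copy.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.bin")
with open(path, "wb") as f:
    f.write(b"original")

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY) as m:
        m[0:8] = b"modified"       # changes only this process's copy
        print(m[:8].decode())      # "modified"

with open(path, "rb") as f:
    print(f.read().decode())       # "original": the file is untouched
```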
 
It doesn't say NEW, it says "breakthrough advances" in materials. Could be a new flavor of Gallium Arsenide.
"The companies invented unique material compounds..."
Definitions of "new" and "flavor" may vary, but this sounds like more than just changing the doping profile.
 
I think I speak for many here when I say "thank you" for putting this whole thing in semi-layman's terms and writing it cogently so that it's easy to grasp the potential significance of this technology!

The only real problem I've noticed since yesterday, after reading more articles about it, is that despite being "1000x faster" than NAND (aka the "flash" memory used in things like SSDs), it's apparently still slower than DRAM; how much slower is unclear (they apparently didn't release exact numbers, but say it's "slightly slower than the fastest DRAM" out there). As CPUs increase in speed, that could present problems if this can't be made faster to compete with newer versions of DRAM or some other memory. In other words, it's only a game changer for main system memory if it's comparable in speed, at least to the point where it's only a small difference. The advantage of having your hard drive be your memory itself is immense: it's instantaneous access to long-term storage. Games, for example, would load instantly (like the cartridges of old).

Certainly, it'll likely replace all current SSD technology if it really delivers, but ultimately your bus speed is going to limit it. Put succinctly, SATA III isn't going to cut it...not by a long shot. Even Thunderbolt III isn't going to be enough. NOTHING IS, although clearly Thunderbolt III is preferable. This stuff will be inherently limited as external (or even external to main memory) storage by the bus connections to it.

It would seem there are OS concerns as well for using this as non-volatile main memory instead of DRAM (most operating systems weren't designed to have long-term storage in main memory). Given Apple's close relationship with Intel, one wonders if perhaps they got wind of this ahead of time, have been getting the OS ready for such a change, and are already looking at new motherboards for an initial offering. Imagine the advantage Apple would have over other PCs running Windows (or Linux) if they had a working model out next year that at least had main memory using this like a small SSD (maybe 128GB), supplemented with a traditional SSD or even one of those hybrid drives for lesser-used stuff. Not only would it justify Apple's high-end prices for once, but Apple would be the only game in town. Everyone else would just have SSDs made with it that would max out the bus bandwidth; Apple could have something that runs much faster and has far more advantages. With enough lead time, it could be a game changer for Macs.

I can't help but wonder if the sudden appearance of the Metal graphics API wasn't started with a nudge from Intel, as you would want as much efficiency as possible to make the best use of such memory. I don't know how well this might work with a combined GPU setup, but given that Intel has its own GPU lineup, one wonders what they might be able to do with it. It might not be fast enough, so you could end up with a CPU with 3D XPoint memory and a GPU with conventional memory. Apps that need extra speed in the memory department might be better off using the GPU for calculations (à la OpenCL or CUDA) while leaving the advantages of massive non-volatile memory to the CPU.

In any case, it'll be interesting to see what happens over the next few years. I was thinking of buying a new computer next year (was waiting for Skylake and Thunderbolt III), but this throws a monkey wrench into things. How long until we see actual consumer products? How much better will they be over the next few years than conventional tech? How much will they cost by comparison, and how long will it take for the price to come down? These are variables without answers. I'm guessing I'd be safe buying a Skylake next year; I doubt the market is going to change overnight. It'll probably take 4-6 years to see any significant design changes and for them to work out the bus limitations, OS implementations, etc. of any "one memory to fit them all" type thing. This stuff is 20nm, and Intel is moving to 10nm on chips. I have little doubt that as they perfect THAT process, this memory will eventually go to 10nm as well, which will improve the speed, density and power requirements even further.
 