We are already measuring boot times in seconds ;)

(But I get what you are trying to say)
There could actually be no such thing as "boot".
Booting, in general, is the process of copying the OS files off of the disk into memory and executing them to create the correct memory environment (i.e. loading all of the libraries into memory, executing the drivers, etc.).

This could happen just once, and that "booted" memory config would never go away (it's on non-volatile memory).

We could "boot" a computer in 1 second or less....
 
There could actually be no such thing as "boot".
Booting, in general, is the process of copying the OS files off of the disk into memory and executing them to create the correct memory environment (i.e. loading all of the libraries into memory, executing the drivers, etc.).

This could happen just once, and that "booted" memory config would never go away (it's on non-volatile memory).

We could "boot" a computer in 1 second or less....

The purpose is also to initialize hardware, but beyond that, even if we assume a hypothetical computer that didn't need to be booted more than once, the idea that you can reboot to start fresh would be out the window. I don't know about you, but I wouldn't feel comfortable with a machine that has, say, a three-year uptime.
 
There could actually be no such thing as "boot".
Booting, in general, is the process of copying the OS files off of the disk into memory and executing them to create the correct memory environment (i.e. loading all of the libraries into memory, executing the drivers, etc.).

This could happen just once, and that "booted" memory config would never go away (it's on non-volatile memory).

We could "boot" a computer in 1 second or less....

Ironically, we already had that back in 1982 (well, technically quite a ways before, but most of those computers sucked). It was called a Commodore 64. It booted in about a second from ROM (a read-only, non-volatile variant of the very thing they apparently are rediscovering today: that putting the OS in a silicon chip beats the hell out of loading it off a flipping moving magnetic disk)! :D

Well... that is, if you can call Commodore BASIC (ironically made by Microsoft) an "OS". LOAD"*",8,1 (or a filename in place of the *) is all you really needed to know, along with the directory listing LOAD"$",8, plus LIST and RUN. Oh yeah, the best gaming ever until the Amiga came out. :)
 
Ironically, we already had that back in 1982 (well, technically quite a ways before, but most of those computers sucked). It was called a Commodore 64. It booted in about a second from ROM (a read-only, non-volatile variant of the very thing they apparently are rediscovering today: that putting the OS in a silicon chip beats the hell out of loading it off a flipping moving magnetic disk)! :D

You can have that today as well, if you are willing to live with a computer with the capabilities of a C64. :)

Well... that is, if you can call Commodore BASIC (ironically made by Microsoft) an "OS". LOAD"*",8,1 (or a filename in place of the *) is all you really needed to know, along with the directory listing LOAD"$",8, plus LIST and RUN. Oh yeah, the best gaming ever until the Amiga came out. :)

But the BASIC interpreter would only be one part of the "OS", and not really necessary either. The LOAD, LIST and RUN commands, on the other hand, are another matter. And the closest thing to a system call would be part of the memory-mapped I/O in ROM, known as the kernal (yes, that's the correct spelling).
 
You can have that today as well, if you are willing to live with a computer with the capabilities of a C64. :)

But the BASIC interpreter would only be one part of the "OS", and not really necessary either. The LOAD, LIST and RUN commands, on the other hand, are another matter. And the closest thing to a system call would be part of the memory-mapped I/O in ROM, known as the kernal (yes, that's the correct spelling).

Is that so?

The KERNAL was known as kernel[6] inside of Commodore since the PET days, but in 1980 Robert Russell misspelled the word in his notebooks forming the "word" kernal. When Commodore technical writers Neil Harris and Andy Finkel collected Russell's notes and used them as the basis for the VIC-20 programmer's manual, the misspelling followed them along and stuck.
 
...article mentioned it being designed to be affordable, yet throws "too expensive at first" back in your face in the same article, so I imagine that will be a limiting factor at first. But long term, storage may change entirely.

Supposedly OLED TVs, when the manufacturing processes are perfected, will be cheaper to manufacture than traditional LED lit LCD TVs, so I guess we'll just have to wait and see.
 
Hm. With nothing useful in between?

I thought this might be a high bar for such a common adjective, so I did a quick tally. Assuming we aren't talking about economics, literature, physiology and peace (leaving chemistry and physics), there have been somewhere around 218 new things in the world since 1901.

I think the tone you were going for was cynical, but you pushed it across the line over to silly.

"things" <> "materials"

The "tone" I was going for was 'word mean things' but if American English isn't your first language I apologize.

Let's use the Manhattan Project as an example, it was an advance of technology from experimental to practical but not new or revolutionary. Einstein had already made the science breakthrough; and fission had already been done in Germany.
 
"things" <> "materials"

The "tone" I was going for was 'word mean things' but if American English isn't your first language I apologize.

Let's use the Manhattan Project as an example, it was an advance of technology from experimental to practical but not new or revolutionary. Einstein had already made the science breakthrough; and fission had already been done in Germany.
Ok... I'm starting to lose the point here... What was your point regarding the Intel/Micron announcement?

Mine was that if this isn't a standard silicon process, it's going to take a while to get costs down. Yours was that this could just be a "flavor of GaAs". While GaAs hasn't benefitted from quite as much R&D as silicon, it's a fairly mature process if still more expensive. I think "unique material compounds" suggests it's more than just a simple process change even to a GaAs process, and still think it will take time to make this truly cost effective versus other technologies.

So are we arguing about that point, or are we just arguing about what the word "new" means? If it's the latter, I think we can stop. You're welcome to substitute any word you'd like if you think I'm undermining Einstein's legacy by devaluing the meaning of "new". My point was that it sounds like this material is sufficiently different from existing materials that it may take time to get costs down.
 
Why does this remind me of the Sony PlayStation Cell technology pitch? Hopefully it's as good as they say and there aren't too many hidden issues to work through.
 
I agree, and I already brought up both memory-mapped files and the VM. But that doesn't mean that a memory object and any regular file in the filesystem are the same object with many references. Opening, for example, an image file like that and viewing the content, then editing the file, would change the in-memory content. Probably not what you want.

Both memory pages and file-systems use a concept called copy-on-write. The idea being that two things share an object or file until the moment that one needs to change it, and then at that point they get their own copy. Presumably when accessing an object/file, one would specify if they want to work on their own copy or continue sharing and modify it in-place.

Right now if a program memory mapped the image file and edited the in-memory data, then the data on disk would be affected too, depending on how that access was specified.

Obviously software would need to be updated to handle such a change in paradigm, it's just that these concepts aren't actually new to software.
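
To make that concrete, here's a minimal sketch in POSIX C (error handling omitted, "photo.jpg" is just a placeholder name) of how that access gets specified today: MAP_SHARED maps the file so stores propagate back to disk, while MAP_PRIVATE gives the process a copy-on-write view whose edits never leave its own memory.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("photo.jpg", O_RDWR);       /* placeholder file name */
    struct stat st;
    fstat(fd, &st);

    /* MAP_SHARED: stores through this pointer are written back to the file. */
    unsigned char *shared = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);

    /* MAP_PRIVATE: copy-on-write view; modified pages stay private to this
     * process and the file on disk is never touched. */
    unsigned char *priv = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE, fd, 0);

    shared[0] ^= 0xFF;   /* this edit will eventually land on disk */
    priv[0]   ^= 0xFF;   /* this edit will not */

    munmap(shared, st.st_size);
    munmap(priv, st.st_size);
    close(fd);
    return 0;
}
```

Same file, two very different outcomes, all down to a single flag.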

No matter how small the uncompressed section is, it still needs to be placed in a buffer in memory. Now, if memory is disk, you probably do not want to alter the original file, and what if many programs access the same file at the same time?

Actually, I just read that Skylake has hardware encoding (and decoding?) of JPEG built-in. As well as several video codecs. So no, I don't think it's necessary that an image or video be decompressed into system RAM (now also disk). Rather, you would want it uncompressed into video memory, which will necessarily be different than system memory, since it needs to update 30+ times per second and be wired millimeters from the video chip.
 
Both memory pages and file-systems use a concept called copy-on-write. The idea being that two things share an object or file until the moment that one needs to change it, and then at that point they get their own copy. Presumably when accessing an object/file, one would specify if they want to work on their own copy or continue sharing and modify it in-place.

Not necessarily, no. Copy-on-write is usually used to gain redundancy and safety, as is the case with ZFS, which does not use a journal.

Right now if a program memory mapped the image file and edited the in-memory data, then the data on disk would be affected too, depending on how that access was specified.

Obviously software would need to be updated to handle such a change in paradigm, it's just that these concepts aren't actually new to software.

Yes, of course that would be the case if you used mmap on a file. The point is, you wouldn't want this to be the default behaviour.

Actually, I just read that Skylake has hardware encoding (and decoding?) of JPEG built-in. As well as several video codecs. So no, I don't think it's necessary that an image or video be decompressed into system RAM (now also disk). Rather, you would want it uncompressed into video memory, which will necessarily be different than system memory, since it needs to update 30+ times per second and be wired millimeters from the video chip.

So what? The image was just an example; you can take any file format with metadata: PDFs, Word documents, etc. The point is that the data on disk is different from the actual data in memory. The relevant parts that aren't for human consumption are just a recipe for how to parse and display the data; the on-disk object is not the same as the in-memory object.

But the strongest case against the idea is that you would still want permanent storage even if the underlying memory is the same. And in any case, this stuff will not replace flash anyway, according to Intel. Faster non-volatile memory is already available, btw, so that may be what will someday replace both. But again, that doesn't negate the need or desire for permanent storage.


Right now if a program memory mapped the image file and edited the in-memory data, then the data on disk would be affected too, depending on how that access was specified.

Again, of course. There is both a desire and a need for permanent storage, presented as a hierarchical file system to the user, regardless of what the underlying medium is capable of. Accessing objects through a filesystem is also slower (though more feature-rich) than the memory allocator used for volatile memory.
 
I'm not sure what we're even getting at anymore. Basically, I'm agreeing that software will need to be adapted to handle and take advantage of the new hardware, but the paradigm itself isn't all that new, since both blocks of memory in RAM and file-systems on disk use many of the same techniques that would be used going forward, and already bleed between the two.

No idea what you're disagreeing with about copy-on-write. It's used in programming languages like Swift, in virtual memory by the paging hardware, and in most Unix file-systems, not just some niche little fs like ZFS.
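
For what it's worth, the paging-hardware case is easy to demonstrate on any Unix: after fork() the parent and child share the same physical pages, and only the pages one of them actually writes to get copied. A rough C sketch (the buffer size is arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One big buffer: after fork() both processes "have" all of it, but the
     * kernel maps the same physical pages into both address spaces. */
    size_t size = 64 * 1024 * 1024;
    char *buf = malloc(size);
    memset(buf, 'A', size);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: this write faults, and the kernel copies just that page.
         * The other ~16,000 pages stay shared with the parent. */
        buf[0] = 'B';
        printf("child sees %c\n", buf[0]);
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    printf("parent still sees %c\n", buf[0]);   /* prints 'A' */
    free(buf);
    return 0;
}
```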

I already mentioned that mmap takes a parameter for whether the mapping is unidirectional or bidirectional. It's up to the app whether that's applicable. I wouldn't make a blanket statement about which is generally applicable, just that if you aren't intending to write back, it would be an error to mark it bidirectional, and also less performant. Any time one can indicate read-only instead of read-write, the software can optimise.

OK, so you take any file, and then the question is: what are you doing with it? What is your working set? If some portion of the file needs to be uncompressed, does it need to be used just once? In that case it likely doesn't need to be sitting in a complete state in memory. Or does it need to be continually re-referenced? In that case, if disk is like RAM, why not have a persistent cache so that future accesses are accelerated as well? If disk space gets low, or the underlying file is modified, then flush that file's decompression cache. Just like how Android went from re-JIT-ing apps on every execution to using ART to compile them once on install and then persist the result to disk.
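
Here's a hedged sketch of that persistent-cache idea, in plain C with POSIX stat(); decompress_into() and the ".cache" suffix are made up purely for illustration. The expanded copy lives next to the original and is rebuilt only when the original's modification time says it went stale:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Stand-in for whatever format-specific expansion step is actually needed. */
static int decompress_into(const char *src, const char *dst)
{
    (void)src; (void)dst;
    return 0;   /* pretend it succeeded */
}

/* Return a path to an up-to-date expanded copy of `src`, rebuilding it only
 * when the source has changed since the cached copy was written. */
static const char *cached_expansion(const char *src, char *cache, size_t len)
{
    struct stat s, c;

    snprintf(cache, len, "%s.cache", src);   /* illustrative naming only */
    if (stat(src, &s) != 0)
        return NULL;                         /* no source, nothing to serve */

    /* Cache missing, or older than the source? Rebuild it once; every
     * later access just reuses the copy sitting on the (fast) "disk". */
    if (stat(cache, &c) != 0 || c.st_mtime < s.st_mtime)
        if (decompress_into(src, cache) != 0)
            return NULL;

    return cache;
}
```

Flushing on low disk space would then just be deleting the cache files, exactly as described above.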

My guess is that file formats for small, simple files will evolve to store data in a form that can be directly used by the CPU, and the CPU will gain additional register-load instructions, for example loading an FPU register from a base-10 string representation. Something will emerge like how JSON appears to JavaScript: a way of representing data that is immediately usable by the environment, with parsing and loading done at a lower level.

I don't think systems would bother with differentiating in hardware the temporary storage and the permanent storage. It would be one chip, and the software in the operating system would create the virtualisation of separation. "RAM" would be temp files, etc.

Of course, the "RAM" file-system wouldn't be exactly the same as the persistence file-system. Blocks might even continue to be different sizes, and security would be different, with processes having access versus users having access, etc.
 
I'm not arguing that software needs to be adapted. My point is that current software can work as is, with the changes made in the OS behind the current abstraction. There may be some new system calls, but I don't see why current software would need to change.

With regards to filesystems, ZFS is hardly niche. But neither HFS+ nor FFS uses copy-on-write. The point of it in ZFS is that current data is never overwritten, so if you lose power in the middle of a transaction, you can roll back to a previous state, much in the same way you could replay your journal on a journaled FS.
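
A toy illustration of that "never overwrite live data" idea (this is not ZFS code, just the shape of it; the single `current` pointer stands in for the uberblock): the new version is written somewhere else entirely, and the commit is one pointer swap, so a crash at any earlier point still leaves the old, consistent state reachable, with no journal to replay.

```c
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct block { char data[BLOCK_SIZE]; };

/* "Live" data is whatever this points at; stands in for ZFS's uberblock. */
static struct block *current;

static void cow_update(const char *new_contents, size_t len)
{
    /* 1. Write the new version to a fresh block; the live block is never
     *    modified in place. */
    struct block *fresh = malloc(sizeof *fresh);
    memcpy(fresh, current, sizeof *fresh);
    memcpy(fresh->data, new_contents, len < BLOCK_SIZE ? len : BLOCK_SIZE);

    /* 2. Commit = flip one pointer. Lose power before this line and the old
     *    state is still the live one; after it, the new one is. Nothing is
     *    ever half-written, which is why no journal is needed. */
    struct block *old = current;
    current = fresh;

    /* 3. Only now can the old block be reclaimed (or kept as a snapshot). */
    free(old);
}
```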

I don't get your point about mmap. If you use it on a file, then changing the file by adding data to the "array" is your intended purpose. I don't see why you argue for this feature as something you would want as the default operation on files. Clearly it's better to use a file abstraction. If you have a vacation photo on your HD, then open it up in Photoshop and add some blur, for example, you would permanently modify your only copy. Why do you think anyone would want this? Yes, you can use CoW, but why is that a better solution than simply hiding the new technology behind the current abstraction of files and folders?
 
Wow. Reading this brought back memories of when I saw my first commercial for DSL. 100x faster than dial-up? Yeah, right.

But you have to remember that in the mid-1990s, the best dial-up speed was about 5 kilobytes per second download--using a high-end US Robotics Courier external modem running V.90 connections. DSL at the time offered around 155 kilobytes per second download speeds--a HUGE leap forward for its day. Today, Comcast DOCSIS 3.0 modems offer 50 to 100 megabits per second download speeds--and that's even slow by international standards.

But getting back on topic, once 3D XPoint becomes commercially available, we may be talking 1 to 2 terabytes of local mass storage at essentially RAM speeds on future versions of the MacBook Air, MacBook and MacBook Pro. Imagine cold boot from start to finish in under 8 seconds! :D Or effectively "instant off/instant on" to and from power-saving modes.
 