I'm not sure what we're even getting at anymore. To be clear, I'm agreeing that software will need to be adapted to handle and take advantage of the new hardware. But the paradigm itself isn't all that new: blocks of memory in RAM and file-systems on disk already use many of the same techniques that would be used going forward, and the two already bleed into each other.
No idea what you're disagreeing with about copy-on-write. It's used in programming languages like Swift, in memory by the paging hardware, and in most Unix file-systems, not just some niche little fs like ZFS.
Already mentioned that mmap takes a parameter for whether a mapping is unidirectional or bidirectional (MAP_PRIVATE vs MAP_SHARED). It's up to the app which is applicable, and I wouldn't make a blanket statement about which is generally right. Just that if you aren't intending to write back, marking it bidirectional would be both an error and a performance cost. Any time software can indicate read-only instead of read-write, something underneath can optimise.
Ok, so you take any file, and the question is: what are you doing with it? What is your working set? If some portion of the file needs to be decompressed, is it used just once? In that case it likely doesn't need to sit in memory in its complete state. Or is it continually re-referenced? In that case, if disk is like RAM, why not keep a persistent cache so that future accesses are accelerated as well. If disk space gets low, or the underlying file is modified, flush that file's decompression cache. It's just like how Android went from re-JIT-ing apps on every execution to ART, which compiles once at install time and persists the result to disk.
My guess is that file formats for small, simple files will evolve to store data in forms the CPU can use directly, and CPUs will gain additional register load instructions, for example loading an FPU register from a base-10 string representation. Something will emerge like what JSON is to Javascript: a way of representing data that is immediately usable by the environment, with parsing and loading done at a lower level.
I don't think systems would bother differentiating temporary and permanent storage in hardware. It would be one chip, and the operating system would create the virtual separation in software. "RAM" would be temp files, etc.
Of course, the "RAM" file-system wouldn't be exactly the same as the persistence file-system. Blocks might even continue to be different sizes, and security would differ, with access granted per-process rather than per-user, etc.