No, it is by design. Regardless of whether you're only doing "light" work on it or not, since Lion, OS X loves to use as much RAM as possible as a cache. You'll quickly see RAM usage hit its max even if you have only a few apps open. Basically, the philosophy is to keep as many files in memory as possible to reduce how often the OS needs to access the hard drive. However, Lion and ML, in my view, were horrible at managing RAM, often leaving inactive RAM intact even when the OS was paging like crazy. Mavericks, though, seems to be brilliant at it.
You'd be surprised how much further you can go before it goes past that physical RAM threshold. Even with my RAM maxed out as we speak (8 GB), I've yet to have a single page out in Mavericks, and my Mini has been running for days with many different app launches in the meantime. It's quickly purging inactive RAM to make room for other apps I launch. You have to understand that of the 8 GB being used, only a fraction is actively being used by the OS; the rest is inactive (kept in memory just in case it's needed later) and can be freed up by the OS whenever another app needs it.
I worked in a performance management group at a company that produces an enterprise-grade OS. I know exactly how memory hierarchies work. I already defined this process and you just re-described it... thanks... Now, when you purge something from RAM because it's not being "actively" used, where do you get it when you need it? You access the SSD, and how much of a hit is that? I showed the access times in a graphic some threads back; please go read it.
In Activity Monitor, under Memory, pay attention to the headings "App Memory" and "File Cache" (ignore "Wired" since that is the bare minimum the OS takes for itself to function and can't be used by the user). A lot of RAM is used by "File Cache", but it's RAM that can be quickly discarded by the OS if it needs more RAM for apps. Watch as "File Cache" instantly drops when the amount of App Memory needed goes up. Right now, my File Cache is almost at 4 GB (of the 8 GB, all of which is in use). That same 4 GB cache will drop instantaneously if RAM is needed elsewhere. The drop in performance is almost insignificant when a purged item needs to be re-accessed: it's simply read from the hard drive again, and these are usually small files we're talking about, so the impact is minimal. Keep in mind, my Mini has a 5400 rpm drive. I'm willing to bet that on an SSD, you won't even notice a difference at all.
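If you'd rather watch that from the Terminal than from Activity Monitor, here's a rough Python sketch that polls vm_stat and prints the file-cache vs. app-memory split. The field names ("File-backed pages", "Anonymous pages") are assumed from what vm_stat prints on 10.9; older releases may not report them, so treat it as illustrative only:

#!/usr/bin/env python
# Rough sketch: poll `vm_stat` and print the file-cache vs. app-memory split.
import re
import subprocess
import time

PAGE_RE = re.compile(r'^"?([^":]+)"?:\s+(\d+)', re.MULTILINE)

def vm_stat():
    out = subprocess.check_output(["vm_stat"]).decode()
    page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))
    stats = {name.strip(): int(val) for name, val in PAGE_RE.findall(out)}
    return page_size, stats

while True:
    page_size, s = vm_stat()
    gb = lambda pages: pages * page_size / 2.0 ** 30
    print("file cache ~%.2f GB   anonymous (app) ~%.2f GB   free %.2f GB" % (
        gb(s.get("File-backed pages", 0)),
        gb(s.get("Anonymous pages", 0)),
        gb(s.get("Pages free", 0))))
    time.sleep(5)

Leave it running while you launch a big app and you should see the file-cache number fall as the anonymous (app) number rises, which is exactly the behaviour Activity Monitor is showing you.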
I know exactly what the latency is. If there were no latency issue, you would use a 256 GB SSD as RAM. There obviously is latency, and that's why you don't. I looked at nothing but stats for years at my job. You can KISS... there's no reason to describe what is going on. I know exactly what Mavericks does and exactly how the memory hierarchy works, and I know exactly how much RAM I need, and it's greater than 8 GB.
Memory compression is another new feature in Mavericks, and it can easily let you go even beyond the 8 GB of RAM that's available without page outs.
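If you're curious how much the compressor is actually buying you, vm_stat on 10.9 reports counters for it. A quick hedged sketch (the field names are assumed from 10.9's output; if your release doesn't print them, the script just reports the compressor as idle):

# Estimate how much the Mavericks compressor is buying you, using the
# compressor counters vm_stat reports on 10.9+.
import re, subprocess

out = subprocess.check_output(["vm_stat"]).decode()
stats = dict(re.findall(r'^"?([^":]+)"?:\s+(\d+)', out, re.MULTILINE))
stored = int(stats.get("Pages stored in compressor", 0))
occupied = int(stats.get("Pages occupied by compressor", 0))
if occupied:
    # e.g. 3 GB of app data squeezed into 1 GB of physical RAM -> ratio ~3x
    print("compression ratio ~%.1fx (%d pages stored in %d)" % (
        stored / float(occupied), stored, occupied))
else:
    print("compressor idle (or counters not available on this OS release)")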
Remember, page outs ("Swap Used" under Mavericks) are the only thing to really worry about. If you don't get page outs, even when your RAM is maxed out, you'll be fine and won't notice any performance hit at all. Even if all your RAM is used, in most usage scenarios a good chunk of it is available in an instant since it's mostly cache. App memory getting maxed out is when paging occurs, and I can only imagine that happening if you're a professional running a bunch of memory-hungry apps, not the average person with normal usage.
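You don't even need Activity Monitor for that check. Something like this (a rough sketch; the sysctl vm.swapusage output format is assumed from 10.9) prints the two numbers that actually matter:

# Sketch of the "only number that matters" check: if swap used stays at 0
# (and Pageouts isn't climbing), maxed-out RAM is just cache.
import re, subprocess

swap = subprocess.check_output(["sysctl", "vm.swapusage"]).decode()
# typical line: vm.swapusage: total = 1024.00M  used = 0.00M  free = 1024.00M
used = re.search(r"used = ([\d.]+)M", swap).group(1)

vm = subprocess.check_output(["vm_stat"]).decode()
pageouts = re.search(r"Pageouts:\s+(\d+)", vm).group(1)

print("swap used: %s MB, lifetime pageouts: %s" % (used, pageouts))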
"
Solid-state drives may be fast, but your Mac’s RAM is much faster.
That’s why applications try to load all their necessary data to the Mac’s memory for quick and easy access to the bits and bytes they need most."
OS kernels have been played with since the beginning of time... Sometimes fancy tricks like this are made and then reversed later in a kernel revision. Beyond the tricks that are played at the kernel level is real-world physical hardware and electrons flowing across wires. There is no trick to that.
> If you compress bits you have to decompress bits
> If you kick things out of ram, you will have to bring them back when you need them
Do you understand how interrupts and kernel-level routines work? What exactly do you think compresses/decompresses memory? Do you think this is a new concept Apple just happened to think of? It's been around forever and has pros and cons.
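If you want the trade-off in concrete terms, here's a back-of-the-envelope sketch. zlib is only a stand-in (the Mavericks kernel uses its own WKdm-style compressor, not zlib), so only the shape of the numbers matters: compressing and decompressing pages burns real CPU time; it's just cheaper than a page out to disk.

# Measure the CPU cost of compressing/decompressing one 4 KB page with zlib,
# as a rough stand-in for what a kernel-level memory compressor has to do.
import os, time, zlib

PAGE = 4096
data = os.urandom(PAGE // 2) * 2          # half-random page, so it compresses some

t0 = time.time()
for _ in range(10000):
    blob = zlib.compress(data, 1)         # fastest level, as a kernel would favour
compress_us = (time.time() - t0) / 10000 * 1e6

t0 = time.time()
for _ in range(10000):
    zlib.decompress(blob)
decompress_us = (time.time() - t0) / 10000 * 1e6

print("~%.1f us to compress a page, ~%.1f us to decompress" % (compress_us, decompress_us))
print("compressed %d -> %d bytes" % (len(data), len(blob)))

Compare whatever your machine prints against the disk access times I posted earlier in the thread: those cycles are cheap next to a page out, but they're not free, and they're cycles your cores aren't spending on your actual workload.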
There is no con to 16 GB of RAM for me when I know I will need it. No, I don't want interrupts and kernel routines playing "optimize for the cheapo who didn't buy enough RAM for his needs." I want my cores crunching data sets and performing tasks. I don't want my SSD hit because I decided not to get sufficient RAM. That's not what it's for, and technology and price points have advanced to ensure I don't have to.
I'd love to get into a kernel design and memory management discussion, but this is hardly the platform to do so. I do this for a living every day, have a Master's degree in computer engineering with a focus in embedded systems (where the real OS optimizations occur), and don't need a lecture on OS design and memory hierarchies. That's a comp. arch. (sophomore year) lecture.
In the industry, we like to keep things simple, as the real implementation is complicated enough...
When RAM is a bottleneck in your system (a $2,000 one) and it only costs $200 to address, you address it. It's great that Apple discovered kernel-level memory compression. Welcome to decades ago, when RAM cost $1,000 for a 256 MB stick and it really mattered to play such tricks. It doesn't today. It costs $200 for an 8 GB upgrade, an upgrade that is well worth it to me...
Running VMs kills everything you said, by the way. But thanks for the comp. arch. flashback.