
Nermal

Moderator
Staff member
I suppose that explains it. But it is a free upgrade and unless you run an old notebook which works better with 10.6.8, I see no reason not to upgrade.

I plan to eventually; I'm just waiting for a replacement video card first (I'm getting random crashes). I want to sort out the hardware before messing with the software :)
 

bradleyjx

macrumors member
Another way to think about RAM:

When you get down to it, what a processor is designed to do is read and process the data that comes through it. One of the better ways to make a processor work as fast as possible is to make sure it's never waiting for data to reach it, and there are a bunch of places on a computer where data is stored that serve this purpose.

Closest to the processor are registers: small storage locations for data, inside the processor itself, which are manipulated (functionally) directly by the processor's instructions. There aren't a lot of them, mostly because you don't actually need many for a computer to work well.
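As a rough illustration (a sketch, not something you normally write yourself), an optimising compiler will typically keep the hottest values of a loop in registers on its own:

Code:
/* Sketch: in a hot loop like this, the compiler will normally keep the
 * running total and the loop counter in registers, so only the array
 * reads have to go out to cache or RAM. */
long sum_array(const long *data, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += data[i];
    return sum;
}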

If the data the computer needs isn't in a register, it begins a fallback process, which you can think of like a pyramid: the further down you go, the more data you can store, but the further it is from the top, the longer it takes for that data to reach the processor. When a piece of data is needed, today's processors are smart enough to also grab some of the data around it and bring it into the higher levels, anticipating that it will be needed too.
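Here's a minimal C sketch of that neighbour-grabbing in action; the array size and stride are arbitrary choices, and the exact timings will vary from machine to machine. Both loops do the same number of additions, but the strided one only uses a small part of each chunk the processor pulls in, so it has to go back down the pyramid far more often:

Code:
/* Sketch: sequential vs. strided walks over an array bigger than the caches. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16 * 1024 * 1024)   /* 16M ints, ~64 MB: bigger than any cache */

static double walk(const int *a, size_t stride) {
    clock_t start = clock();
    long sum = 0;
    for (size_t s = 0; s < stride; s++)
        for (size_t i = s; i < (size_t)N; i += stride)
            sum += a[i];
    if (sum == 42) printf("!");   /* keep the compiler from removing the loop */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof(int));
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N; i++) a[i] = (int)i;   /* touch every page up front */
    printf("sequential: %.3f s\n", walk(a, 1));    /* the prefetched neighbours get used */
    printf("stride 16:  %.3f s\n", walk(a, 16));   /* most of each fetch is wasted */
    free(a);
    return 0;
}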

At the top levels are the caches, usually three levels in today's processors. These caches are on the processor chip itself, so you're still talking about a very fast turnaround, but it's still a bit slower than manipulating registers directly. The processor in today's MBP can hold 64KB of data in the first level of cache, going up to 2-8MB at the third level.
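If you're curious what those numbers are on your own Mac, the kernel will tell you. This is just a quick sketch using the Intel-era sysctl names (hw.l1dcachesize and friends); running sysctl -a | grep cachesize in Terminal gives you the same information without any code:

Code:
/* Sketch: ask the kernel for the cache sizes (in bytes) on an Intel Mac. */
#include <stdio.h>
#include <sys/sysctl.h>

static void show(const char *name) {
    long long bytes = 0;
    size_t len = sizeof(bytes);
    if (sysctlbyname(name, &bytes, &len, NULL, 0) == 0)
        printf("%-16s %lld KB\n", name, bytes / 1024);
}

int main(void) {
    show("hw.l1icachesize");   /* L1 instruction cache, per core */
    show("hw.l1dcachesize");   /* L1 data cache, per core */
    show("hw.l2cachesize");
    show("hw.l3cachesize");
    return 0;
}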

Below that last level of cache is your RAM, which today is measured in GB. The processor takes a bigger hit asking for data in RAM than it does asking the cache, but we're still only talking a delay in the dozens here, when a decent consumer processor can optimally churn through ten billion instructions per second.

The main benefit of RAM is that it reduces how often you need to access your drive; while this is slightly less relevant in the age of SSDs, back when spinning hard drives were king, the hit in performance from accessing a hard drive was massive. On a 5400RPM drive, it can take up to 1/90th of a second (or a hundred million instructions' worth of time wasted!) for a piece of data to be read from a platter. Do that too often, and your computer grinds to a halt.
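To put rough numbers on that (the ten-billion-instructions figure is the same ballpark estimate as above, not a measurement):

Code:
/* Back-of-the-envelope: what one full platter rotation costs. */
#include <stdio.h>

int main(void) {
    double rev_per_sec   = 5400.0 / 60.0;      /* 5400 RPM = 90 revolutions per second */
    double worst_wait    = 1.0 / rev_per_sec;  /* up to 1/90 s, roughly 11 ms */
    double instr_per_sec = 10e9;               /* assumed ~10 billion instructions/sec */
    printf("Worst-case rotational wait: %.4f s\n", worst_wait);
    printf("Instructions that could have run instead: %.0f\n", worst_wait * instr_per_sec);
    return 0;
}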

---

When we're talking about memory use, a lot of it depends on how the operating system manages its resources. One of the interesting tricks of operating systems, though, is that they give applications the appearance that they have as much memory as they want, then manage those resources themselves. This simplifies software development, and it also means that how an operating system displays the memory in use can vary. This is where "memory pressure" comes into play in OS X Mavericks, as has been mentioned previously. Even before Mavericks, OS X was pretty liberal about giving programs the "virtual memory" they wanted, then dealing with managing that memory on the back end.
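Here's a small C sketch of that trick (the sizes are arbitrary, and exact behaviour varies between OS versions): malloc hands back a huge range of address space right away, but the OS only commits real memory for the pages you actually touch.

Code:
/* Sketch: reserve a lot of virtual memory, touch only a little of it. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t reserved = (size_t)8 * 1024 * 1024 * 1024;  /* ask for 8 GB of address space */
    size_t touched  = (size_t)64 * 1024 * 1024;        /* actually use only 64 MB of it */

    char *block = malloc(reserved);
    if (block == NULL) {
        perror("malloc");
        return 1;
    }
    printf("Reserved %zu GB of address space.\n", reserved >> 30);

    /* Writing to a page is what forces the OS to back it with real memory. */
    for (size_t i = 0; i < touched; i += 4096)          /* assume 4 KB pages */
        block[i] = 1;
    printf("Touched only %zu MB of it.\n", touched >> 20);

    getchar();   /* pause here and compare virtual vs. real memory in Activity Monitor */
    free(block);
    return 0;
}

While it's paused, the process shows a huge virtual size but only a small real-memory footprint, which is why "memory used" numbers need interpreting.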
 