
Mountain Lion Memory - is it improved in management?




iOrbit
Jul 17, 2012, 07:43 PM
When I use Lion (8 GB of RAM), I regularly have to free memory myself. If I don't, inactive memory builds up, page outs build up, and virtual memory grows, all while free memory dwindles to less than 100 MB. The system never thinks to free memory on its own; it only frees memory when I quit the whole application (closing tabs, windows, etc. does not seem to free it).

This is a terrible experience on normal HDDs. Has anyone noticed a difference on Mountain Lion, particularly people not using SSDs?

Please help, this is very important to me, as I've been considering going back to Windows :(



SpyderBite
Jul 17, 2012, 08:24 PM
I haven't had any of these problems in Lion on my 2012 13" MBP with 16 GB. I wonder if it's your hardware rather than the OS?

east85
Jul 17, 2012, 08:32 PM
When I use Lion (8 GB of RAM), I regularly have to free memory myself. If I don't, inactive memory builds up, page outs build up, and virtual memory grows, all while free memory dwindles to less than 100 MB. The system never thinks to free memory on its own; it only frees memory when I quit the whole application (closing tabs, windows, etc. does not seem to free it).

This is a terrible experience on normal HDDs. Has anyone noticed a difference on Mountain Lion, particularly people not using SSDs?

Please help, this is very important to me, as I've been considering going back to Windows :(

I haven't really looked at memory in particular (as I didn't monitor it under Lion), but everything running much quicker would suggest it's managed better. I didn't even do a fresh install either, coming from Lion.

tkermit
Jul 17, 2012, 08:58 PM
Apple certainly advertised improved "Virtual Memory performance" on one of the WWDC 2012 keynote slides.

SlCKB0Y
Jul 17, 2012, 09:05 PM
When I use Lion (8 GB of RAM), I regularly have to free memory myself. If I don't, inactive memory builds up, page outs build up, and virtual memory grows, all while free memory dwindles to less than 100 MB. The system never thinks to free memory on its own; it only frees memory when I quit the whole application (closing tabs, windows, etc. does not seem to free it).


OS X considers unused RAM to be wasted RAM, much as Linux does, so it will try to cache/buffer as much as possible in order to boost system performance. Once that RAM is required, these buffers are made available to the system. This is generally a good thing.

You need to provide more information than this.

The most important thing we need to know, which you haven't mentioned, is: are you actually experiencing performance issues, or are you just watching your memory stats?

If you actually do have performance issues, let us know where all your resource usage stats are at:

* What are your page outs AND page ins? (OS X will occasionally page out even when free RAM is available; this is intentional, so the ratio is more important than the raw number.)

* Inactive memory is technically available memory, but what level is it at?

* What software are you running, and how many instances? What activities are you generally doing on your computer?

Virtual memory is not even a metric you should be looking at, as it is completely meaningless in this context. Swap used is more relevant.
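If you'd rather pull those numbers programmatically than keep screenshotting Activity Monitor, here's a rough sketch using the Mach host-statistics API (file name is mine, error handling is minimal; treat it as illustrative rather than a polished tool):

/* vmstats.c - rough sketch: print free/inactive memory and page ins/outs
 * via the Mach host statistics API. Build with: cc vmstats.c -o vmstats
 */
#include <stdio.h>
#include <mach/mach.h>

int main(void) {
    vm_statistics64_data_t vm;
    mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
    vm_size_t page_size;

    if (host_statistics64(mach_host_self(), HOST_VM_INFO64,
                          (host_info64_t)&vm, &count) != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics64 failed\n");
        return 1;
    }
    host_page_size(mach_host_self(), &page_size);

    printf("free:      %llu MB\n", (unsigned long long)vm.free_count * page_size >> 20);
    printf("inactive:  %llu MB\n", (unsigned long long)vm.inactive_count * page_size >> 20);
    printf("page ins:  %llu\n", (unsigned long long)vm.pageins);
    printf("page outs: %llu\n", (unsigned long long)vm.pageouts);
    return 0;
}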

nuckinfutz
Jul 17, 2012, 09:09 PM
Very informative, SlCKB0Y.

Thank you.

I never really thought about the concept of "free RAM is wasted RAM", but it makes total sense. You've got to power the memory anyway, so you may as well use it.

SlCKB0Y
Jul 17, 2012, 09:23 PM
You've got to power the memory anyway, so you may as well use it.

Well, the idea is that you should never need to free memory yourself - the OS will do it for you. Once you are out of actual free RAM, the OS will release the "inactive memory" as required.

I have seen the original poster's complaint made by other people before, but I've personally not experienced it (even when running Lion on 2 GB). I also know how the memory management *should* work.

iOrbit
Jul 17, 2012, 09:29 PM
I'm tired of having to monitor my system and grab screenshots. I've seen this answer several times before, but it simply doesn't ring true in practice.

OS X doesn't do what it's supposed to do.

Once I have no free memory left, it does not free the inactive RAM; instead it goes to page outs, and everything becomes absolutely awful.

If it were managing properly, it would 'use the inactive memory', but it doesn't; it lets things become terrible, like a system with no memory left.

I always run App Store, Mail, Safari, Address Book, iCal, iTunes, iPhoto and sometimes iMovie.

In addition I will run Steam (which is a memory leaker itself).

Other times I will run Photoshop CS5.

I've found so many discussions on Google with people who seem to know more about what they're talking about, backing up my experience. I don't know why others don't experience it. Is it the way they use their machines? SSDs? Faults in ours? I don't know.

Photoshop working on a file with quite a few layers at 6000-pixel image sizes will eat up to 2.5 GB of RAM.

Steam takes up to a gig or even a little more.

Generally though, my system can run my apps with 4 GB or even nearly 5 GB of RAM free when they are first opened.

It's after using them for a while that all the free memory is used up and then shows as inactive memory, which is never freed unless apps are quit.

If I don't purge, I can't get my memory back without quitting everything or restarting.

SlCKB0Y
Jul 17, 2012, 09:37 PM
it does not free the inactive RAM; instead it goes to page outs, and everything becomes absolutely awful.
...
If it were managing properly, it would 'use the inactive memory', but it doesn't; it lets things become terrible, like a system with no memory left.



Using only terms like "absolutely awful" and "terrible" is akin to talking to a doctor and telling him you "feel sick" and expecting him to know what is going on.

When this happens, what do you actually experience?

Are you seeing system lag? beach balls? etc.

Runt888
Jul 17, 2012, 10:15 PM
If your apps are using or leaking large amounts of memory, that memory isn't inactive or free, and the OS can't do anything about it other than start swapping out to disk.
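To illustrate the point (a toy sketch, nothing from this thread; file name and numbers are made up), a process that keeps allocating and touching memory owns those pages until it frees them or exits, so once free pages run out the kernel's only remaining move is to page other things out to disk:

/* leak.c - toy illustration of why leaked memory can't be reclaimed.
 * Every block is written to, so its pages stay resident and belong to this
 * process until free() is called or the process exits. Don't leave this
 * running on a machine you care about.
 * Build: cc leak.c -o leak
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        char *block = malloc(1 << 20);      /* 1 MB that is never freed */
        if (block == NULL)
            break;
        memset(block, 0xAB, 1 << 20);       /* touch it so the pages stay resident */
        usleep(10000);                      /* roughly 100 MB per second of "leak" */
    }
    return 0;
}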

PurrBall
Jul 17, 2012, 10:15 PM
I'm running the 10.8 GM and just had my system screech to a halt with 20 MB of free RAM and nearly 4 GB inactive. It was just constantly swapping and was useless. Apple still has a bit of work to do.

VinegarTasters
Jul 18, 2012, 03:44 AM
The reason that Lion and Mountain Lion are slower and more of a memory hog than Snow Leopard is probably that they are moving more and more pieces into a virtual machine. Everyone knows that Java is very slow and a memory hog. Compared to C++, Java takes 10 times more memory to run; it needs that memory because of the virtual machine. On average it is 2 to 10 times slower (comparing Java to C/C++). In C it can be even faster.

If you have a game that runs at 30 frames per second in C/C++ (most games are: look at console games), it will usually be 15 frames per second down to 3 frames per second in Java or C# or any interpreted language (including Python). This makes the games unplayable. That is why most action games don't exist on Android, while there are a lot of hardcore 3D games on iOS. It is because of the virtual machine and interpreted code.

If Apple is going the same route, you should expect average things to take 2 to 10 times longer compared to earlier operating system versions. This usually shows up in code that spends a lot of time in loops, like permuting through items or a main game engine loop.

Of course, Apple is not using Java, but their support of LLVM probably led them to Clang and its LLVM technology, which actually does everything Java does: compiling into bytecode first (IR in LLVM terms). Java runs the bytecode by interpreting it in a virtual machine. You can execute IR in a virtual machine, or try to recompile it into native code, but that usually requires external libraries that support the IR, which means supporting the same bloat as in Java. So you get garbage collectors that grab whole chunks of memory and index and free them on their own terms (usually when they run out of memory instead of immediately). Sometimes, to save themselves trouble, they will just leave it as IR and run a JIT (just-in-time compiler) when needed. Since Leopard, OpenGL has had this slow layer! Why do you think games on the Mac are so slow compared to Windows on the same hardware? I think people should investigate this more deeply and let the truth out about why all of a sudden it takes all the memory and runs so slowly; it points towards virtual machines.

Nozuka
Jul 18, 2012, 04:45 AM
@iOrbit:
You should probably try doing a clean install of ML when it is released instead of an update, just to make sure it's not related to your installation.
You can always go back to your old installation from a backup if it still happens.

SlCKB0Y
Jul 18, 2012, 07:42 AM
The reason that Lion and Mountain Lion...

I do not know where to start with the misinformation in this post.


That is why most action games don't exist on Android, while there are a lot of hardcore 3D games on iOS. It is because of the virtual machine and interpreted code.


Are you trolling? I honestly can't tell.

Firstly, there are plenty of 3D-intensive games on Android.

Eg:
http://www.youtube.com/watch?v=EKlKaJnbFek

Secondly, most of these games are almost entirely coded in C or C++, with a very small Java wrapper.
http://arstechnica.com/gadgets/2011/11/arms-new-tools-make-it-easier-for-android-devs-to-use-native-code/

http://developer.android.com/tools/sdk/ndk/index.html


Why do you think games on the Mac are so slow compared to Windows on the same hardware?

*face palm*
One word. Drivers.

Verloc
Jul 18, 2012, 08:15 AM
http://forums.macrumors.com/showpost.php?p=15264756&postcount=4

VinegarTasters
Jul 18, 2012, 12:10 PM
I do not know where to start with the misinformation in this post.



Are you trolling? I honestly can't tell.

Firstly, there are plenty of 3D-intensive games on Android.

Eg:
http://www.youtube.com/watch?v=EKlKaJnbFek

Secondly, most of these games are almost entirely coded in C or C++, with a very small Java wrapper.
http://arstechnica.com/gadgets/2011/11/arms-new-tools-make-it-easier-for-android-devs-to-use-native-code/

http://developer.android.com/tools/sdk/ndk/index.html



*face palm*
One word. Drivers.

If it is the drivers, note that NVIDIA provides drivers for both Windows and OS X. On the same machine with the same game, the Windows version runs at a faster framerate.

The OpenGL layer requiring a virtual-machine JIT compilation is common knowledge:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2006-August/006492.html


For Android, if Java were not slow, why did they have to code the whole thing in C/C++ for every 3D game? It is BECAUSE Java is so slow and a memory hog. It is simply a C/C++ program, forced into a slow Java wrapper.

It would not run in plain Java, because it would be 3 to 15 frames per second.

The main problem with C/C++ on Android is that you would have to port your program for EVERY device instead of porting it once, to account for the differences in each Android device. Instead of the OS and its driver APIs handling that, you are doing the operating system's job, which is just plain stupid. No developer has time to do operating-system work. Look how much time and money Microsoft and others spend to make sure programs can run on any hardware. Java and Android are pushing this job onto the developers.

Runt888
Jul 18, 2012, 10:44 PM
The OpenGL layer requiring a virtual-machine JIT compilation is common knowledge:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2006-August/006492.html

JIT compilation and virtual machines do not imply each other - specific parts of the OpenGL pipeline are optimized at run time, but they are not using a virtual machine in the way you seem to think they are.

Also, LLVM is used to optimize and increase performance. Just because the "VM" part used to stand for "virtual machine", you seem to think that means "slow."

From Wikipedia:
The name "LLVM" was originally an initialism for "Low Level Virtual Machine", but the initialism caused widespread confusion because virtual machines are just one of the many things that LLVM can be used to build.

You're spouting so much nonsense it's hard to even read it, let alone take you seriously.

sammich
Jul 18, 2012, 11:11 PM
Read the post wrong. Updating

SlCKB0Y
Jul 19, 2012, 07:15 AM
For Android, if Java were not slow, why did they have to code the whole thing in C/C++ for every 3D game? It is BECAUSE Java is so slow and a memory hog. It is simply a C/C++ program, forced into a slow Java wrapper.

I never claimed Java was fast. I know it is relatively slow compared to a lower-level language like C/C++. I was taking issue with your claim that there are no graphics-intensive games on Android because all Android apps are pure Java. There are MANY games of this type, and they DO NOT use Java to do the heavy lifting.


The main problem with C/C++ on Android is that you would have to port your program for EVERY device instead of porting it once, to account for the differences in each Android device. Instead of the OS and its driver APIs handling that, you are doing the operating system's job

Huh? What are you even talking about? Firstly, the main reason Google implemented the NDK and support for C/C++, other than speed, was to allow quicker ports, as game devs are likely to already have their game on other platforms in these languages.

Source: http://developer.android.com/tools/sdk/ndk/overview.html


The NDK can, however, be an effective way to reuse a large corpus of existing C/C++ code.

Also, the NDK/APIs can handle a lot of the stuff you are talking about.

Did you even read the NDK documentation?

----------


You're spouting so much nonsense it's hard to even read it, let alone take you seriously.

This ^^

----------

If it is the drivers, note that NVIDIA provides drivers for both Windows and OS X. On the same machine with the same game, the Windows version runs at a faster framerate.


It doesn't matter who makes the drivers; they are different drivers running on completely different operating systems.

I'm not disputing that graphics performance on Macs is generally lower than on Windows given the same hardware; it's common knowledge that it is. I'm arguing that the biggest factor in that performance difference is the drivers that are included in OS X by the chipset manufacturer and Apple.

01mggt
Jul 19, 2012, 08:08 AM
Hmm, seems this thread is going off the beaten path. To the OP: I have been using ML with 8 GB of RAM and have not had any issues with memory management at all, even on my 2008 unibody MacBook with a traditional HDD. Also, I agree you need to do a clean install. Not to mention you should probably scale back on keeping some programs open when not in use. It seems as if you are saying you have Safari, Contacts, App Store, Steam and the other stuff open while also trying to process large images in Photoshop.

VinegarTasters
Jul 19, 2012, 10:43 AM
Runt888 is telling me not to criticize Apple.
SlCKB0Y is telling me not to criticize Android.

Well, why not just criticize the technology decisions? Who cares about which company or which person made the mistake.

Java/C#/Python is slower than Objective-C.
Objective-C is slower than C++.
C++ is slower than C.
C is slower than assembly.

Virtual machines are slower than native code.

JIT compiling is JIT compiling. In Java, it is trying to turn the virtual machine's interpretation into native code. In OpenGL, it is trying to turn runtime interpretation into native code.

Clang via LLVM is slower than GCC (a minimum of 10% slower, and up to 100% or more slower depending on how badly the "bytecode" is recompiled into CPU instructions). Remember, Clang via LLVM compiles into LLVM IR first, THEN that is recompiled into CPU instructions. LLVM IR is similar to Java bytecode. It was purposely built this way so they can easily port the code and support different languages. They did this because C# and managed code use similar technology, so they are essentially copying each other. The goal was to compile different languages to LLVM IR and then recompile the LLVM IR into CPU instructions.

Guess what? LLVM IR can support virtual machines, and you can have a front end that compiles interpreted languages like Java. In essence, trying to cram every front-end compiler's result into LLVM IR means you need to carry all the baggage of interpreted languages: automatic reference counting, code that suddenly kicks in by itself to clean up, EATING UP A TON OF MEMORY BEFOREHAND to support garbage collection. Even if the purpose of LLVM is to optimize and speed up, carrying all that baggage means it is actually SLOWER.

Again, THEY CAN'T MATCH GCC using Clang via LLVM. The best they can do is 10% slower, so supporting this technology is not optimizing and speeding up, it is actually slowing the code.

Is this the cause of Lion being so slow? We know the kernel of OS X now takes up a HUGE amount of memory in Lion and later versions. We know Lion is multiple times slower than Snow Leopard. We know Apple leaves LLVM IR code everywhere (at least in OpenGL), requiring JIT compiling. What other pieces are left in this "bytecode", requiring interpretation or JIT at runtime? It is no different from a virtual machine.

Now I know a few people are gonna get hurt over these revelations. But no matter how much zeal you have, it is not gonna change the fact that it is gonna affect them in the short term and the long term. Corel basically tanked trying to build an office version using interpreted languages. Everyone knows Windows runs games faster than OS X. .NET (interpreted) and Java games can never get to the AAA level of Modern Warfare and Battlefield.

So in conclusion: interpreted and slow stuff DOESN'T belong in operating systems (or the vast majority of games). The OS is normally the lowest layer, along with drivers. It should be coded in assembly at best, C at worst, because it is the lowest common denominator for performance; all other apps run on top of it. Either this lesson can be learned now, or they can follow the path of Corel and find out why people buy games on Windows and not on OS X.

luigi.lauro
Jul 19, 2012, 10:50 AM
Again, THEY CAN'T MATCH GCC using Clang via LLVM. The best they can do is 10% slower, so supporting this technology is not optimizing and speeding up, it is actually slowing the code.

False.

As of 2012, LLVM-Clang is as fast as GCC, actually *FASTER* for certain code, slower for others.

Overall they are equal: one is faster on certain tests, the other on others, and the average difference is within 1%.

In the future, it's quite probable, given LLVM-Clang's more modern and flexible approach, that we will see LLVM-Clang improve at a faster pace than GCC, as it has in the last 4-5 years, catching up with GCC while GCC struggled to improve as fast as LLVM has.

But even *TODAY* GCC is not faster than LLVM/Clang, they are the same.

References:
http://www.phoronix.com/scan.php?page=news_item&px=MTA5Nzc
http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23

Please stop spreading nonsense about LLVM-Clang as a compiler.

It's a compiler; it has nothing to do with VMs, and it compiles to native code. The only things that matter are how fast it compiles code and how fast the code it produces runs.

And LLVM-Clang already produces code that is as fast as GCC's, while compiling a lot faster and using less disk/memory to do it.

Even major performance-oriented projects such as FreeBSD are switching from GCC to LLVM-Clang. It's simply the future :-)

dcorban
Jul 19, 2012, 03:27 PM
Using only terms like "absolutely awful" and "terrible" is akin to talking to a doctor and telling him you "feel sick" and expecting him to know what is going on.

When this happens, what do you actually experience?

Are you seeing system lag? beach balls? etc.

It's likely his only symptom is dry eyes from staring at Activity Monitor for too long.

If he is truly experiencing massive paging, then it must be from something he has installed. I expect that anyone who goes to the trouble of installing an app to "free memory" is also the type who will have installed other questionable software. This behaviour goes all the way back to Windows 98, when it was usually the software intended to "speed things up" that caused the most problems.

Runt888
Jul 19, 2012, 06:03 PM
Runt888 is telling me not to criticize Apple.
I never said you couldn't criticize Apple. Just stop spouting nonsense.

Death-T
Jul 19, 2012, 06:37 PM
Not sure that this means anything, but I just opened a butt ton of applications, several games, and so on running Mountain Lion GM with 12 GB of RAM and I still have 6 GB free.

iOrbit
Jul 19, 2012, 06:44 PM
It's likely his only symptom is dry eyes from staring at Activity Monitor for too long.

If he is truly experiencing massive paging, then it must be from something he has installed. I expect that anyone who goes to the trouble of installing an app to "free memory" is also the type who will have installed other questionable software. This behaviour goes all the way back to Windows 98, when it was usually the software intended to "speed things up" that caused the most problems.

And you're another person who probably wears Apple-coloured glasses.

VinegarTasters
Jul 19, 2012, 07:10 PM
False.

As of 2012, LLVM-Clang is as fast as GCC, actually *FASTER* for certain code, slower for others.

Overall they are equal: one is faster on certain tests, the other on others, and the average difference is within 1%.

In the future, it's quite probable, given LLVM-Clang's more modern and flexible approach, that we will see LLVM-Clang improve at a faster pace than GCC, as it has in the last 4-5 years, catching up with GCC while GCC struggled to improve as fast as LLVM has.

But even *TODAY* GCC is not faster than LLVM/Clang, they are the same.

References:
http://www.phoronix.com/scan.php?page=news_item&px=MTA5Nzc
http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23

Please stop spreading nonsense about LLVM-Clang as a compiler.

It's a compiler; it has nothing to do with VMs, and it compiles to native code. The only things that matter are how fast it compiles code and how fast the code it produces runs.

And LLVM-Clang already produces code that is as fast as GCC's, while compiling a lot faster and using less disk/memory to do it.

Even major performance-oriented projects such as FreeBSD are switching from GCC to LLVM-Clang. It's simply the future :-)

Finally someone posts something worthy of mention.
Look at the link:

http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23


Now before I continue, I want to say that I am not trying to criticize LLVM; if that is your thought, you are missing the point. The main topic is the slowness of operating systems and games and how the lowest common denominator affects them. LLVM can be useful in certain areas, as can Java and virtual machines. You have to look at the scenario and use the appropriate tools. Now with that out of the way...

You will find the link shows Clang generally 10% slower than GCC, rising to 100% or more in certain cases. The ONLY case where Clang beats GCC by over 10% is compile time, which is irrelevant to runtime native code. I would rather spend a whole week compiling and optimizing final code so it runs 400% faster in a shipped product than gloat about being able to compile (prepare) the code 10% faster. The users SEE the runtime; THAT is what is important. An engineer can spend 1 year making an F1 car or 1 week. The speed of the car is what matters, not how long it takes him to build it.

Here are the relevant cases where Clang's baggage affects performance:

Timed HMMer Search v2.3.2 (database lookup of objects): 20% slower.

Smallpt v1.0 (3D rendering): 400% slower.

John The Ripper v1.7.9 (Blowfish algorithm): 400% slower.

Why is Clang close to 400% slower in both cases? This is Clang, mind you, compiling C! The only way it can be slower is if it carries interpretation baggage like Java, C#, and Python. This is like comparing Java to C! Blowfish is encryption where you need very fast loops modifying tables over and over again. 3D rendering requires fast lookup and display algorithms, also requiring fast loops.

How can Clang increase its speed? Allow a direct path to native code without the intermediate representation that carries baggage from supporting other languages and virtual machines. If Clang can do that, then you won't see results like the above. Having said this, I think it should be their priority to fix this now, or it will carry over into OS X, since Apple uses it! In fact, swap out the slow parts and code things in assembly or C. Is Lion 400% slower than Snow Leopard because Clang was used in certain parts where it is very weak? Was something left in a form that needs to run in a virtual machine?

I'd say go the Sony route with a console-style OS: make the operating system more efficient with each release, with a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly and getting rid of interpretation, virtualization, and anything else that hurts performance. Most importantly, choose performance over other criteria, because the OS is the lowest common denominator (the bottleneck) for all programs that run on top of it.


Unfortunately, two good benchmarks are missing for Clang:
BYTE Unix Benchmark v3.6 (basic operating system performance)
TTSIOD 3D Renderer v2.2w (basic 3D performance)

The first would show basic operating system performance; the second would again show 3D game performance.

I am hoping the second case is not also 400% slower.

ElectricSheep
Jul 19, 2012, 08:11 PM
Why is Clang close to 400% slower in both cases? This is Clang, mind you, compiling C! The only way it can be slower is if it carries interpretation baggage like Java, C#, and Python. This is like comparing Java to C! Blowfish is encryption where you need very fast loops modifying tables over and over again. 3D rendering requires fast lookup and display algorithms, also requiring fast loops.

I don't know where you got the notion that anything is running in some "Virtual Machine" or is being "Virtualized". The 'Virtual' in LLVM is in name only. This has been repeated several times in this thread, but you continue to ignore it.

How can Clang increase its speed? Allow a direct path to native code without the intermediate representation that carries baggage from supporting other languages and virtual machines. If Clang can do that, then you won't see results like the above.

I think your understanding of how compilers work is lacking. Pretty much every compiler produces an intermediate representation of the high-level language being compiled. This representation is then optimized and passed to a machine-code emitter for the specified target architecture (where it may be further optimized). LLVM simply splits the intermediate representation and the back-end machine-code emitter out into a separate, open, and well-documented entity. Anyone is now free to write their own front end for whatever language they want, even one of their own creation, and produce a full-fledged compiler without having to be an expert in operating systems, CPU architectures, and object graphs.

Once again, there is no virtual machine or virtual environment involved.
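To make that concrete, here's a tiny sketch of the pipeline (file names are just an example; the flags are a stock clang invocation): the IR is an offline compiler artifact, and the final binary is ordinary machine code with no runtime VM involved.

/* square.c - toy example used to illustrate the LLVM pipeline.
 *
 * Front end, C to human-readable IR:  clang -O2 -S -emit-llvm square.c -o square.ll
 * Back end, IR to a native object:    clang -O2 -c square.ll -o square.o
 * Link into an ordinary executable:   clang square.o main.o -o demo
 *
 * The .ll file is the intermediate representation. Once the back end has
 * emitted square.o, what runs is plain machine code, not something a VM
 * interprets.
 */
int square(int x) {
    return x * x;
}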

SlCKB0Y
Jul 19, 2012, 08:29 PM
Now before I continue...

This guy has to be trolling, nobody can be this retarded.

Either way, I can no longer stand to read your posts as the stupidity is actually causing my brain to hurt. Welcome to my block list.

I have no problem with people not knowing stuff or people getting stuff wrong. What I have a problem with is people who refuse to recognise when they might be wrong, even in the face of overwhelming evidence that contradicts their viewpoint. No intelligent person would display that kind of absolute inflexibility in thinking.

It demonstrates an inability to process novel information, critically analyse and integrate it, and adapt one's existing understanding.

ender land
Jul 19, 2012, 08:54 PM
OP, I agree regarding memory management - I've often had problems (especially with virtual machines) with inactive memory not being freed up correctly, though I'm using Snow Leopard.

I can totally empathize with feeling frustrated that NO one else seems to acknowledge these sorts of problems :)

VinegarTasters
Jul 19, 2012, 09:12 PM

I don't know where you got the notion that anything is running in some "Virtual Machine" or is being "Virtualized". The 'Virtual' in LLVM is in name only. This has been repeated several times in this thread, but you continue to ignore it.



I think your understanding of how compilers work is lacking. Pretty much every compiler produces an intermediate representation of the high-level language being compiled. This representation is then optimized and passed to a machine-code emitter for the specified target architecture (where it may be further optimized). LLVM simply splits the intermediate representation and the back-end machine-code emitter out into a separate, open, and well-documented entity. Anyone is now free to write their own front end for whatever language they want, even one of their own creation, and produce a full-fledged compiler without having to be an expert in operating systems, CPU architectures, and object graphs.

Once again, there is no virtual machine or virtual environment involved.

I will ignore the flame posts... ad hominem is a standard technique for some posters here. If you can't refute the facts, calling someone names doesn't make you right.

But I'll answer this post because it makes a legitimate argument. "The only way it can be slower is if it carries interpretation baggage like Java, C#, and Python." The key word is baggage. A program built with Clang may not be using a virtual machine in its final running state, but as part of the compilation procedure it is turned into LLVM IR, which SUPPORTS interpreted languages. That intermediate form is more distantly removed than a standard GCC compiler's intermediate state, which you can compile and link into the target machine code. It is more distantly removed and more abstract, carrying more baggage because of its support for the interpreted-language features mentioned earlier (garbage collection, reference counting, etc.).

Please explain to me how a C program can run 400% slower, if the intermediate step is basically parsing a token tree and substituting symbols with CPU instructions from a lookup table? 400% is not a small percentage. A program running at 30 fps is only going to run at 8 fps. CPU manufacturers compete over 3 to 4 fps in hardware. If you have software that is slowing things down by 22 fps, you fix the software first.

Also, stop taking my words out of context. When I mentioned virtualization I was talking about operating systems, not LLVM. Virtualization means not allowing direct access to hardware: you put another layer between the program and the hardware, and it slows programs down because of that middle layer. Here is where I mentioned virtualization:

"I'd say go the Sony route with a console-style OS: make the operating system more efficient with each release, with a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly and getting rid of interpretation, virtualization, and anything else that hurts performance. Most importantly, choose performance over other criteria, because the OS is the lowest common denominator (the bottleneck) for all programs that run on top of it."

ElectricSheep
Jul 19, 2012, 11:11 PM
I will ignore the flame posts... ad hominem is a standard technique for some posters here. If you can't refute the facts, calling someone names doesn't make you right.

But I'll answer this post because it makes a legitimate argument. "The only way it can be slower is if it carries interpretation baggage like Java, C#, and Python." The key word is baggage. A program built with Clang may not be using a virtual machine in its final running state, but as part of the compilation procedure it is turned into LLVM IR, which SUPPORTS interpreted languages. That intermediate form is more distantly removed than a standard GCC compiler's intermediate state, which you can compile and link into the target machine code. It is more distantly removed and more abstract, carrying more baggage because of its support for the interpreted-language features mentioned earlier (garbage collection, reference counting, etc.).

Really, what you are talking about is abstraction. Abstraction is not a bad thing. It's the reason we can write programs in higher-level languages like Objective-C instead of handwriting machine code. It's the reason applications can crash without taking out the entire machine. It is the fundamental reason we can take for granted many of the great features of modern software that, without abstraction, would have been insanely difficult if not impossible to bring to reality.

As far as 'baggage' is concerned, that is debatable. LLVM is certainly capable of emitting executable binaries smaller than those produced by other compilers, so there isn't any extra baggage going into the final program.

Please explain to me how a C program can run 400% slower, if the intermediate step is basically parsing a token tree and substituting symbols with CPU instructions from a lookup table? 400% is not a small percentage. A program running at 30 fps is only going to run at 8 fps. CPU manufacturers compete over 3 to 4 fps in hardware. If you have software that is slowing things down by 22 fps, you fix the software first.

That is a gross oversimplification of what compilers do. The most basic, trivial compiler "basically parses a token tree and substitutes symbols with CPU instructions from a lookup table". The resulting executable code will be extremely inefficient.

The reality is that turning a high-level language like C into efficient, optimized machine code is an NP-hard problem. Mature compilers use a lot of tricks and apply a number of different heuristics to optimize code as best they can. The difference between one heuristic and another can account for a 4x performance factor in a given case.

To take so few cases, however, and use them to generalize across the vast landscape of code is extremely shortsighted.
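As a toy illustration of how much a single compiler decision can matter (a sketch, not a benchmark; the actual speedup depends entirely on the compiler and CPU), the same C source below compiles to very different machine code depending on whether the optimizer decides to unroll and vectorize the reduction loop:

/* sum.c - the same source, very different machine code.
 *   cc -O0 sum.c -o sum_naive     (straightforward load/add loop)
 *   cc -O3 sum.c -o sum_optimized (loop may be unrolled and vectorized)
 * Whether and how the loop gets vectorized is a per-compiler heuristic,
 * which is one reason individual benchmarks can swing widely between
 * compilers even though both emit native code.
 */
#include <stdio.h>

#define N (1 << 24)
static int data[N];

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = i & 0xFF;

    long long total = 0;
    for (int i = 0; i < N; i++)      /* candidate for unrolling/vectorization */
        total += data[i];

    printf("total = %lld\n", total);
    return 0;
}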

Also, stop taking my words out of context. When I mentioned virtualization I was talking about operating systems, not LLVM. Virtualization means not allowing direct access to hardware: you put another layer between the program and the hardware, and it slows programs down because of that middle layer. Here is where I mentioned virtualization:

"I'd say go the Sony route with a console-style OS: make the operating system more efficient with each release, with a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly and getting rid of interpretation, virtualization, and anything else that hurts performance. Most importantly, choose performance over other criteria, because the OS is the lowest common denominator (the bottleneck) for all programs that run on top of it."

That is good for consoles, but not good for PCs. Consoles are built around a single, uniform hardware definition that does not change. Consoles are designed to focus on a single task at a time: play a game, watch a movie. Yes, underneath there are other tasks running, but they all exist to support the lead task. In order to perform this task as efficiently as possible, you allow direct access to the metal. Given that every console is identical, this isn't really much of a problem; developers are free to cut corners and make assumptions. But if the game crashes, the whole box goes down and you have to reset. I had enough of doing that to my PC back in the nineties.

bitsoda
Jul 20, 2012, 01:23 AM
I know exactly what the OP is talking about because my computer suffers from the same affliction. There are times when I want to clutch my MacBook Pro, spin around three times, and release it at high velocity just to see it smash against a nice concrete wall. I own an early 2011 MacBook Pro running 10.7.4 and this thing performs like a walrus on a tar floor. If I don't restart the machine at least twice a day, it is rendered unusable. Something as simple as opening a new tab in Chrome will bring up the ******** beachball for a good ~10 seconds before I can do anything else.

Originally, I thought my problem was related to the fact that I upgraded from SL to Lion, but even after a clean install the problem persists. Right now I have about 900 MB of inactive memory and my swap is at 800 MB. I only open Activity Monitor once I notice sluggish performance, to confirm my suspicion that the OS is simply failing to manage memory. Snow Leopard -- or any OS I've used in the past decade -- never behaved like this.

I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and Sublime Text 2 is apparently too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

luigi.lauro
Jul 20, 2012, 04:58 AM
You will find the link shows Clang generally 10% slower than GCC, rising to 100% or more in certain cases. The ONLY case where Clang beats GCC by over 10% is compile time, which is irrelevant to runtime native code. I would rather spend a whole week compiling and optimizing final code so it runs 400% faster in a shipped product than gloat about being able to compile (prepare) the code 10% faster. The users SEE the runtime; THAT is what is important. An engineer can spend 1 year making an F1 car or 1 week. The speed of the car is what matters, not how long it takes him to build it.

Again, false.

This was probably true some months/years ago, but now LLVM/Clang is as good and as fast as GCC. Actually *FASTER*, in several scenarios.

And I'm not talking about speed in compilation, but speed of the COMPILED application, which is what matters.

I showed you a RECENT, unbiased, open-source benchmark of the latest GCC vs the latest Clang, which shows that neither is faster than the other; they are on par, with negligible speed differences in all cases.

Show me a RECENT unbiased benchmark (Clang 3.1+ vs GCC 4.7+) that shows that 100%/400% difference, and then I'll second what you say, but until you have provided one (like I did), you are just a troll with very little knowledge about the CURRENT state of the compilers.

But the truth is that you WILL NOT FIND ANY, because it's a simple fact that Clang 3.1 is AS FAST as GCC 4.7, in compiled application speed.

And I'm not talking about corner cases such as a single badly written application that behaves correctly only with GCC idiosyncrasies (such as smallpt); I want to see this 400% difference in at least 5-10% of cases.

You will ALWAYS find corner cases that do not behave well with a new-generation compiler. Heck, if they released a GCC 5.0 with a new architecture, you can be 100000% sure you would find applications running 100 times slower before the compiler settles down and the applications fix the issues they have with it.

But this has nothing to do with the performance of the compiler: it's just a 'compatibility' issue, which will be solved in the compiler or the application code sooner or later.

You would never say a given NVIDIA GPU is 400% slower in certain games because you found 2 games out of 300 where, due to a compatibility issue, the GPU runs at much reduced performance. You would flag that as a bug/compatibility problem and work around it.

Full stop.

ElectricSheep
Jul 20, 2012, 08:43 AM
I've seen a few links to the general Apple support article on Activity Monitor and memory usage; https://developer.apple.com/library/mac/#documentation/performance/conceptual/managingmemory/articles/aboutmemory.html (Apple's own developer documentation) contains deeper insight into how the virtual memory subsystem works and what these page lists actually represent.

(Quoted from the above)

Page Lists in the Kernel

The kernel maintains and queries three system-wide lists of physical memory pages:

The active list contains pages that are currently mapped into memory and have been recently accessed.
The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be removed from memory at any time.
The free list contains pages of physical memory that are not associated with any address space or VM object. These pages are available for immediate use by any process that needs them.
When the number of pages on the free list falls below a threshold (determined by the size of physical memory), the pager attempts to balance the queues. It does this by pulling pages from the inactive list. If a page has been accessed recently, it is reactivated and placed on the end of the active list. In Mac OS X, if an inactive page contains data that has not been written to the backing store recently, its contents must be paged out to disk before it can be placed on the free list. (In iOS, modified but inactive pages must remain in memory and be cleaned up by the application that owns them.) If an inactive page has not been modified and is not permanently resident (wired), it is stolen (any current virtual mappings to it are destroyed) and added to the free list. Once the free list size exceeds the target threshold, the pager rests.

The kernel moves pages from the active list to the inactive list if they are not accessed; it moves pages from the inactive list to the active list on a soft fault (see “Paging In Process”). When virtual pages are swapped out, the associated physical pages are placed in the free list. Also, when processes explicitly free memory, the kernel moves the affected pages to the free list.



Paging Out Process

In Mac OS X, when the number of pages in the free list dips below a computed threshold, the kernel reclaims physical pages for the free list by swapping inactive pages out of memory. To do this, the kernel iterates all resident pages in the active and inactive lists, performing the following steps:

If a page in the active list is not recently touched, it is moved to the inactive list.
If a page in the inactive list is not recently touched, the kernel finds the page’s VM object.
If the VM object has never been paged before, the kernel calls an initialization routine that creates and assigns a default pager object.
The VM object’s default pager attempts to write the page out to the backing store.
If the pager succeeds, the kernel frees the physical memory occupied by the page and moves the page from the inactive to the free list.

Note: In iOS, the kernel does not write pages out to a backing store. When the amount of free memory dips below the computed threshold, the kernel flushes pages that are inactive and unmodified and may also ask the running application to free up memory directly. For more information on responding to these notifications, see “Responding to Low-Memory Warnings in iOS.”
Paging In Process



The final phase of virtual memory management moves pages into physical memory, either from the backing store or from the file containing the page data. A memory access fault initiates the page-in process. A memory access fault occurs when code tries to access data at a virtual address that is not mapped to physical memory. There are two kinds of faults:

A soft fault occurs when the page of the referenced address is resident in physical memory but is currently not mapped into the address space of this process.
A hard fault occurs when the page of the referenced address is not in physical memory but is swapped out to backing store (or is available from a mapped file). This is what is typically known as a page fault.
When any type of fault occurs, the kernel locates the map entry and VM object for the accessed region. The kernel then goes through the VM object’s list of resident pages. If the desired page is in the list of resident pages, the kernel generates a soft fault. If the page is not in the list of resident pages, it generates a hard fault.

For soft faults, the kernel maps the physical memory containing the pages to the virtual address space of the process. The kernel then marks the specific page as active. If the fault involved a write operation, the page is also marked as modified so that it will be written to backing store if it needs to be freed later.

For hard faults, the VM object’s pager finds the page in the backing store or from the file on disk, depending on the type of pager. After making the appropriate adjustments to the map information, the pager moves the page into physical memory and places the page on the active list. As with a soft fault, if the fault involved a write operation, the page is marked as modified.


It is important to understand that pages on the inactive list are still mapped to valid VM objects. The kernel cannot simply move them to the free list at a whim; applications must explicitly release their memory. If an inactive page has been modified (is dirty) and has not been written to the backing store since, it must be swapped out before it can be freed. Failing to do so would destroy valid memory objects in userland, which could cause application crashes as well as data loss or corruption.

Additionally, the kernel will not actively traverse the page lists looking to move inactive pages to the free list until the free-page count has dropped below a certain threshold, which you can view with sysctl. The default free-list target is 2000 pages; at 4 KB per page, that works out to about 8 megabytes.

While that number seems stupidly low, it echoes the understanding that 1) unused memory is wasted memory, and 2) applications, not the kernel, know best which of their memory objects should be in memory and which should not. So the kernel takes the approach that unless more memory than is available on the free list is being requested, it leaves the pages alone. The fact that a page is on the inactive list because it has not been accessed in some time does not guarantee it will not be accessed in the near future; in that case, it is better to take a soft fault (reactivate the page) than a hard fault (re-read the page from disk).
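If you want to check that threshold on your own machine, here's a rough sketch. I'm assuming the sysctl key is named vm.page_free_target and holds an integer; if it differs on your OS version, browse the available keys with "sysctl vm" in Terminal.

/* free_target.c - print the kernel's free-page target via sysctl.
 * Build: cc free_target.c -o free_target
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void) {
    unsigned int target = 0;
    size_t len = sizeof(target);

    if (sysctlbyname("vm.page_free_target", &target, &len, NULL, 0) != 0) {
        perror("sysctlbyname");
        return 1;
    }

    /* Assuming 4 KB pages, as in the documentation quoted above. */
    printf("free-page target: %u pages (~%u MB)\n", target, target * 4 / 1024);
    return 0;
}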

mabaker
Jul 20, 2012, 11:56 AM
Not sure that this means anything, but I just opened a butt ton of applications, several games, and so on running Mountain Lion GM with 12 GB of RAM and I still have 6 GB free.

That is nice. Thanks.

cili0
Jul 20, 2012, 01:55 PM
IMHO, quite an informative read:

http://workstuff.tumblr.com/post/20464780085/something-is-deeply-broken-in-os-x-memory-management

I hope the situation will improve with Mountain Lion.

ciao,
cili0.

djrod
Jul 20, 2012, 05:23 PM
I'm tired of having to monitor my system and grab screenshots. I've seen this answer several times before, but it simply doesn't ring true in practice.

OS X doesn't do what it's supposed to do.

Once I have no free memory left, it does not free the inactive RAM; instead it goes to page outs, and everything becomes absolutely awful.

If it were managing properly, it would 'use the inactive memory', but it doesn't; it lets things become terrible, like a system with no memory left.

I always run App Store, Mail, Safari, Address Book, iCal, iTunes, iPhoto and sometimes iMovie.

In addition I will run Steam (which is a memory leaker itself).

Other times I will run Photoshop CS5.

I've found so many discussions on Google with people who seem to know more about what they're talking about, backing up my experience. I don't know why others don't experience it. Is it the way they use their machines? SSDs? Faults in ours? I don't know.

Photoshop working on a file with quite a few layers at 6000-pixel image sizes will eat up to 2.5 GB of RAM.

Steam takes up to a gig or even a little more.

Generally though, my system can run my apps with 4 GB or even nearly 5 GB of RAM free when they are first opened.

It's after using them for a while that all the free memory is used up and then shows as inactive memory, which is never freed unless apps are quit.

If I don't purge, I can't get my memory back without quitting everything or restarting.


Are you quitting the apps (Cmd-Q) or just closing their windows (Cmd-W or the red button)? Photoshop, for example, eats all the RAM it can, and that memory remains in use until you completely quit the app; it does not go to inactive memory.

VinegarTasters
Jul 21, 2012, 06:50 AM
Again, false.

This was probably true some months/years ago, but now LLVM/Clang is as good and as fast as GCC. Actually *FASTER*, in several scenarios.

And I'm not talking about speed in compilation, but speed of the COMPILED application, which is what matters.

I showed you a RECENT, unbiased, open-source benchmark of the latest GCC vs the latest Clang, which shows that neither is faster than the other; they are on par, with negligible speed differences in all cases.

Show me a RECENT unbiased benchmark (Clang 3.1+ vs GCC 4.7+) that shows that 100%/400% difference, and then I'll second what you say, but until you have provided one (like I did), you are just a troll with very little knowledge about the CURRENT state of the compilers.

But the truth is that you WILL NOT FIND ANY, because it's a simple fact that Clang 3.1 is AS FAST as GCC 4.7, in compiled application speed.

And I'm not talking about corner cases such as a single badly written application that behaves correctly only with GCC idiosyncrasies (such as smallpt); I want to see this 400% difference in at least 5-10% of cases.

You will ALWAYS find corner cases that do not behave well with a new-generation compiler. Heck, if they released a GCC 5.0 with a new architecture, you can be 100000% sure you would find applications running 100 times slower before the compiler settles down and the applications fix the issues they have with it.

But this has nothing to do with the performance of the compiler: it's just a 'compatibility' issue, which will be solved in the compiler or the application code sooner or later.

You would never say a given NVIDIA GPU is 400% slower in certain games because you found 2 games out of 300 where, due to a compatibility issue, the GPU runs at much reduced performance. You would flag that as a bug/compatibility problem and work around it.

Full stop.

I used YOUR link. None of the Clang results were more than 10% faster than GCC except the one "compiling" benchmark.

In YOUR link, GCC was faster than Clang by 10% on average.

In YOUR link, it ALSO showed GCC faster than Clang by 20%.

In YOUR link, it also showed GCC faster than Clang by 400%, not on just one but on two benchmarks.

Again, NONE of the benchmarks in YOUR link showed Clang 10% or more faster than GCC EXCEPT the "compiling" one.

I will accept 10% as possibly timing differences or errors on either side, but 20% and 400% are no laughing matter. Obviously I'm not here to argue with you. Perhaps you are part of LLVM, and if you feel you must have the last say on this, go ahead; I am sure others can look at the benchmarks themselves. I am just one of the OS X users with no ulterior motive other than wanting a faster operating system. What you say won't change the fact that Lion is damn slow, and if you wish to push the blame elsewhere, at least read my earlier responses and acknowledge the problem exists. 400% is not a minor problem. It is game-breaking, the kind of problem that makes people look elsewhere for another platform.

And before you start blaming the slowness on other things (like the memory manager), note this fact:

On Snow Leopard, the default compiler was GCC 4.2 (WITH NO LLVM).
On Lion, the default compiler was LLVM-GCC, and later Clang/LLVM (because LLVM-GCC was actually half broken).

So the major change from Snow Leopard to Lion is the mandatory use of LLVM. When the LLVM back end produces code that runs in a virtual machine, it has support for grabbing chunks of memory for its own allocation (it needs to in order to do automatic reference counting and garbage collection). Kernel bloat? Could it be that LLVM, in its support for interpreted-language features, carried this baggage along, resulting in bloated and slow code even when you are compiling static Clang or GCC code? Remember, LLVM IR (like bytecode in Java) is VERY FAR REMOVED from standard GCC intermediate code. It is more abstract, to the point where you can actually run LLVM IR inside a virtual machine (no different from C# or Java). So the process from LLVM IR to a regular .o or a regular binary executable is not as clean-cut as with GCC.

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nabble.com/Memory-and-time-consumption-for-larger-files-td683717.html

Remember how in OpenGL some code is left in an intermediate state? When that kicks in, the LLVM compiler starts up. We don't know what other parts of OS X were left in this state, REQUIRING compilation at runtime, i.e. JIT compilation (like Java's JIT). Perhaps more and more pieces in Snow Leopard, culminating in the full-blown LLVM requirement in Lion; only in Lion was LLVM required everywhere. This could lead to kernel bloat, because OpenGL (a driver) runs near the kernel level when this uncompiled code needs to be JIT compiled. It could be other low-level pieces too. Remember, this is 5 TIMES the required memory, so something that normally requires 1 GB would now require 5 GB at runtime. A Mac mini only has 4 GB, and so do lots of earlier Macs. A lot of disk thrashing will occur as things are moved back and forth to the hard drive to accommodate the startup of the LLVM back end just to compile, and if a virtual machine is used, that memory never goes away.

mabaker
Jul 21, 2012, 06:59 AM
IMHO, quite an informative read:

http://workstuff.tumblr.com/post/20464780085/something-is-deeply-broken-in-os-x-memory-management

I hope the situation will improve with Mountain Lion.

ciao,
cili0.

Very good read. This is exactly what I was saying about Snow Leopard and Lion. The VM manager is shot. It seems they have addressed it in ML, though.

theosib
Jul 23, 2012, 09:41 AM
I have an early 2011 MacBook Pro with 8GB of RAM, and I too have been plagued by Lion's memory management bugs.

I'll typically have a handful of apps open, including Safari, Mail, Smultron (a lightweight code editor), Terminal, and MS Word. Sometimes I'll also have open a news reader, and maybe an IRC client. It takes a few days, but eventually, my computer would just grind to a crawl. It would be completely unusable. Just using any application required a lot of patience because it would start beach balling while I was typing. Switching apps could take 30 seconds to a minute.

And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.

Sometimes, I'd like to run Windows in Parallels. I assign 2GB to the VM. If I want to do that, I have to have NO other Mac apps running. For instance, if I want to look things up in a web browser, I have to run IE in Windows instead of Safari on the Mac host, otherwise, everything will slow down. If I run anything on the Mac host, everything slows down.

The only explanation I've been able to find for this is that the kernel is swapping out anonymous pages, favoring disk caching. And it does this even if there is only one or two apps running.

I've noticed some strange things. The OS X kernel will typically reach a gigabyte of memory use and hover around there. Safari will often go well over a GB, even if there aren't that many pages open. So those are eating up memory like water.

Just to emphasize this: I'm not saying that the system gets slightly slow. I'm saying that it will stop responding to user input for minutes at a time. If I'm lucky enough to get the dock to respond, I can alt-tab all I want, and the only app that will quickly take focus is MAYBE Terminal.

And you're not going to convince me that I'm "holding it wrong" by running too many apps, because when I was running Snow Leopard, I could have a LOT more apps open at once with no performance problems. Although I've seen people complain about this as far back as Leopard, the problems for me started with Lion. Others complaining about this with Lion have tried doing clean installs to no avail, BTW.

About a week ago, I broke down and bought a 16GB memory upgrade. The effects have been dramatic. I can run Parallels and all my apps at once. The system slows down noticeably while Time Machine is running, but it's usable. So far.

I've reported this to Apple, and I've been asked to provide various information and run various tools. Hopefully they're taking it seriously. For me, this problem was so easily reproducible that I think they found my computer to be a good source of information. One tool they had me run captured I/O activity. The performance problem is caused in part by a massive amount of swapping activity, and as a result, this tracing tool ended up with huge gaps in its trace while logging to the internal drive. I had to connect a USB drive just to get a workable trace. The trace was massive, and I had to get Apple to give me a temporary FTP account just to upload it.


BTW, the guy comparing the Java VM to LLVM has no clue what he's talking about. LLVM plays a role similar to GIMPLE, in that it is an intermediate representation of code being compiled, between the source code and the target machine language. Among the major advantages of LLVM is that LLVM code has well-defined textual and binary representations, allowing the front end and back end of the compiler to be run separately. You can compile to LLVM and then compile later from LLVM to the target machine. Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented. Because LLVM is a well-defined intermediate language, it has facilitated research in optimizing compilers, leading to better results, in many cases, than GCC. The reason that Java is memory-hungry has to do with the garbage-collected memory management. And while it's certainly true that interpreted languages will be slower than compiled languages, comparing C, C++, Assembly, and even Java isn't nearly so straightforward.

Puevlo
Jul 23, 2012, 10:20 AM
Apple have already admitted that Lion lacked proper memory management. It should be fixed for Mountain Lion.

VinegarTasters
Jul 23, 2012, 09:14 PM
I have an early 2011 MacBook Pro with 8GB of RAM, and I too have been plagued by Lion's memory management bugs.

I'll typically have a handful of apps open, including Safari, Mail, Smultron (a lightweight code editor), Terminal, and MS Word. Sometimes I'll also have open a news reader, and maybe an IRC client. It takes a few days, but eventually, my computer would just grind to a crawl. It would be completely unusable. Just using any application required a lot of patience because it would start beach balling while I was typing. Switching apps could take 30 seconds to a minute.

And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.

Sometimes, I'd like to run Windows in Parallels. I assign 2GB to the VM. If I want to do that, I have to have NO other Mac apps running. For instance, if I want to look things up in a web browser, I have to run IE in Windows instead of Safari on the Mac host, otherwise, everything will slow down. If I run anything on the Mac host, everything slows down.

The only explanation I've been able to find for this is that the kernel is swapping out anonymous pages, favoring disk caching. And it does this even if there is only one or two apps running.

I've noticed some strange things. The OS X kernel will still typically reach a gigabyte and hover around there. Safari will often go well over a GB, even if there aren't that many pages open. So those are eating up memory like water.

Just to emphasize this: I'm not saying that the system gets slightly slow. I'm saying that it will stop responding to user input for minutes at a time. If I'm lucky enough to get the dock to respond, I can alt-tab all I want, and the only app that will quickly take focus is MAYBE Terminal.

And you're not going to convince me that I'm "holding it wrong" by running too many apps, because when I was running Snow Leopard, I could have a LOT more apps open at once with no performance problems. Although I've seen people complain about this as far back as Leopard, the problems for me started with Lion. Others complaining about this with Lion have tried doing clean installs to no avail, BTW.

About a week ago, I broke down and bought a 16GB memory upgrade. The effects have been dramatic. I can run Parallels and all my apps at once. The system slows down noticeably while Time Machine is running, but it's usable. So far.

I've reported this to Apple, and I've been asked to provide various information and run various tools. Hopefully they're taking it seriously. For me, this problem was so easily reproducible that I think they found my computer to be a good source of information. One tool they had me run captured I/O activity. The performance problem is caused in part by a massive amount of swapping activity, and as a result, this tracing tool ended up with huge gaps in its trace while logging to the internal drive. I had to connect a USB drive just to get a workable trace. The trace was massive, and I had to get Apple to give me a temporary FTP account just to upload it.


BTW, the guy comparing the Java VM to LLVM has no clue what he's talking about. LLVM plays a role similar to GIMPLE, in that it is an intermediate representation of code being compiled, between the source code and the target machine language. Among the major advantages of LLVM is that LLVM code has well-defined textual and binary representations, allowing the front end and back end of the compiler to be run separately. You can compile to LLVM and then compile later from LLVM to the target machine. Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented. Because LLVM is a well-defined intermediate language, it has facilitated research in optimizing compilers, leading to better results, in many cases, than GCC. The reason that Java is memory-hungry has to do with the garbage-collected memory management. And while it's certainly true that interpreted languages will be slower than compiled languages, comparing C, C++, Assembly, and even Java isn't nearly so straightforward.


If you feel you have something to contribute, feel free to state what you think is not correct. Otherwise, your statements are basically a rehash of what I said, without contradicting anything. The ONLY thing that may seem different is this line:

"Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented."

But you are not even sure yourself. It is pretty funny the way you write it...

"IF I recall correctly...". "can (BUT NEEDN'T) use...". "I BELIEVE llvm...". "no... although it COULD..."

So you are not contributing any facts, just your opinions. I'll answer them
for you. Running the back end DOES have something to do with a virtual
machine. You obviously didn't look at the whole thread. In case you
missed it:

http://lists.cs.uiuc.edu/pipermail/l...st/006492.html

Now, in case you are not technically inclined, I'll pull the documentation for you:

"Code that is available in LLVM IR can have a wide variety of tools applied to it. For example, you can run optimizations on it (as we did above), you can dump it out in textual or binary forms, you can compile the code to an assembly file (.s) for some target, or you can JIT compile it."

See that? Binary... OR JIT compile it. Either you create a binary, OR you JIT compile it. Let's continue...


"In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:

...
let main () =
...
(* Create the JIT. *)
let the_execution_engine = ExecutionEngine.create Codegen.the_module in
...
This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter."

See that? The ExecutionEngine is either the JIT or the interpreter (the exact same
thing as in the Java and C# world). We are now inside a virtual machine, either
just-in-time compiled or interpreted on the fly.

Virtual Machines are memory hogs due to supporting garbage collection and automatic reference counting, in addition to implementing a whole CPU virtually. In addition, the LLVM backend IS a virtual machine. It needs to be
in order to do JIT and interpretation of the LLVM IR. So any time that LLVM backend runs, IT IS IN VIRTUAL MACHINE mode.
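
To make the JIT-or-interpreter point concrete, here is a minimal sketch of my own (NOT Apple code, and the exact headers and entry points vary between LLVM releases, so treat it purely as an illustration under those assumptions). It uses the plain LLVM-C API to build a trivial add() function as IR at runtime and then asks for an execution engine, which LLVM satisfies with a JIT if one exists for the host, falling back to the IR interpreter otherwise:

/* jit_demo.c - build a tiny function in LLVM IR and run it through the
   execution engine (JIT if available, interpreter otherwise). */
#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/Target.h>

int main(void) {
    /* Build IR for: int add(int a, int b) { return a + b; } */
    LLVMModuleRef mod = LLVMModuleCreateWithName("demo");
    LLVMTypeRef   i32 = LLVMInt32Type();
    LLVMTypeRef   params[] = { i32, i32 };
    LLVMValueRef  add = LLVMAddFunction(mod, "add",
                                        LLVMFunctionType(i32, params, 2, 0));
    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(add, "entry"));
    LLVMBuildRet(b, LLVMBuildAdd(b, LLVMGetParam(add, 0),
                                    LLVMGetParam(add, 1), "sum"));

    /* Ask for an execution engine: a JIT for the native target if one is
       available, otherwise the LLVM IR interpreter. */
    LLVMLinkInJIT();
    LLVMInitializeNativeTarget();
    LLVMExecutionEngineRef ee;
    char *err = NULL;
    if (LLVMCreateExecutionEngineForModule(&ee, mod, &err)) {
        fprintf(stderr, "failed to create execution engine: %s\n", err);
        return 1;
    }

    /* Run the freshly generated code. */
    LLVMGenericValueRef args[] = {
        LLVMCreateGenericValueOfInt(i32, 2, 0),
        LLVMCreateGenericValueOfInt(i32, 3, 0),
    };
    LLVMGenericValueRef res = LLVMRunFunction(ee, add, 2, args);
    printf("2 + 3 = %llu\n", (unsigned long long)LLVMGenericValueToInt(res, 0));

    LLVMDisposeBuilder(b);
    LLVMDisposeExecutionEngine(ee); /* the engine owns and frees the module */
    return 0;
}

Whether OS X actually drives OpenGL code through a path like this at runtime, and how much memory that costs, is exactly what is being argued about here -- the sketch only shows the mechanism.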

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html

See that? 5 TIMES the required memory. The kernel pulls drivers into
itself, and if a driver needs to run inside a virtual machine, it is going
to eat up memory fast. If something takes 1GB to compile in GCC but now
takes 7GB to compile when going with LLVM, how is a Mac that only
has 4GB of memory going to come up with that memory?

No amount of memory management will work if there is no memory to manage. Why? Because it is all EATEN UP by the compiler! The system is going to go to the
hard drive to offload some stuff so it has some real main memory to work with.

Now, the main point in this post is about the baggage Clang left in the LLVM IR. It is more abstract than an efficient C compiler's intermediate state. Trying to support all that garbage collection, reference counting, etc. removes you so far from the CPU instructions that by the time you get to CPU machine code generation, it ends up NOT faster. Which is what the benchmark shows. 400% IS NOT a small problem. It shows up in games, which is a VERY IMPORTANT criterion when people buy computers (especially ones running Windows or OS X).

Lastly, when did Apple say they goofed up on the memory management? Provide references. It looks like the only one providing facts and references is me. The rest is just "guesses" and trolling. So what should
Apple do? Dump LLVM? It can simply start moving more and more pieces away from the JIT or interpreter.
Start with all-static binaries. Get away from Objective-C and use C, and later start moving pieces into assembly for those things that are not going to
be changed. Objective-C is too slow for performance-critical areas like operating systems (message passing is just plain slower than procedural calls). Start treating performance as a higher criterion in selecting languages, compilers, etc. in
the kernel and operating system, to start moving away from SLOW stuff. That includes dumping LLVM if LLVM's
goal is starting to be C#'s "multiple languages, multiple targets" rather than performance. There are tradeoffs when you try to be everything to everyone. Third-party Xbox games written in C# have been a failure;
Battlefield, Call of Duty and all the AAA games run in low-level C/C++ or assembly on PS3, PC, and Xbox.
Imagine Apple putting a slow layer between games and the hardware. If that happens, no amount of
coding on top of OS X is going to reach AAA games, because the OS is slowing them down! This is why
Windows games run faster than OS X games. It's the operating system's fault.

Remember NeXT? It had a period where they were all excited over writable optical disks instead of
hard drives. Guess what happened? Yep, they dumped it. It was just plain too slow. NeXT also
failed as a company: overpriced and slow. Moving to hard drives in later models didn't save
them, and they ended up being merged into Apple. Similar with the CPU (Motorola not being able to
keep up in performance with Intel). So instead of making the same mistakes again and again, just plain
put performance in as a criterion at the beginning. Corel failed trying to move to Java (too slow). Android lacks AAA games because of the Java requirement, which is so sad for the game developers using
C/C++ on it. They not only need to deal with two languages (the slow Java OS/wrappers and C/C++), but
also do the operating system's job of maintaining compatibility between different devices.

Here is the post again; please state what you think is wrong and provide references:

I used YOUR link. None of the Clang was faster than GCC by 10% except one "compiling" one.

In YOUR link, it showed GCC faster than Clang by 10% on average.

In YOUR link, it ALSO showed GCC faster than Clang by 20%.

In YOUR link, it also showed GCC faster than Clang by 400%, on not just one
but two benchmarks.

Again, NONE of the benchmarks in YOUR link showed Clang 10% or more faster than GCC EXCEPT the "compiling" one.

I will accept 10% for maybe timing differences or errors on either side. But 20% and 400% are no laughing matter. Obviously I'm not here to argue with you. Perhaps you are part of LLVM, and if you feel you must have the last say on this, go ahead. I am sure others can look at the benchmarks themselves. I am just one of the OS X users with no ulterior motive other than to have a faster operating system. What you say won't change the fact that Lion is damn slow, and if you wish to push the blame elsewhere, at least read my earlier responses and acknowledge that the problem exists. 400% is not a minor problem. It is a game-breaking, people-will-go-looking-for-another-platform kind of problem.

And before you start blaming the slowness on other things (like the memory manager), note this fact:

On Snow Leopard, the default compiler is GCC 4.2 (WITH NO LLVM).
On Lion, the default compiler is GCC-LLVM, then later Clang-LLVM (because GCC-LLVM was actually
half broken).

So the major change from Snow Leopard to Lion is the mandatory use of LLVM.
When the LLVM backend produces code running in a virtual machine, it has support for
grabbing chunks of memory to do memory allocation (it needs to in order to do automatic
reference counting and garbage collection). Kernel bloat? Could it be that LLVM, in its support of interpreted-language features, carried this baggage, which resulted in bloated and slow code even
if you are compiling static Clang or GCC code? Remember, the LLVM IR
(like bytecode in Java) is VERY FAR REMOVED from standard GCC intermediate code. It is more abstract,
to the point where you can actually run the LLVM IR inside a virtual machine (no different than C# or Java).
So the process from LLVM IR to a regular .o or a regular binary executable is not as clean-cut as with GCC.

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html

Remember in OpenGL, there was some code left in an intermediate state? When that kicks in, the LLVM
compiler starts up. We don't know what other parts of OS X were left in this state that REQUIRE
compilation at runtime, i.e. JIT compilation (like Java's JIT). Perhaps more and more pieces in
Snow Leopard, culminating in a full-blown LLVM requirement in Lion. Only in Lion was full
LLVM required everywhere. This could lead to kernel
bloat, because OpenGL (the driver) is near the kernel level when this non-compiled
code needs to be JIT compiled at runtime. It could be other low-level pieces too. Remember, this is 5 TIMES the required memory. So something
that normally requires 1GB would now require 5GB during runtime. A Mac Mini only has 4GB, and
so do lots of earlier Mac machines. A lot of disk thrashing will occur as more things are moved
back and forth to the hard drive to accommodate the startup of the LLVM virtual machine backend just
to compile, and if the virtual machine is used, that memory never dissipates.

a3vr
Jul 23, 2012, 09:40 PM
I've also experienced this memory issue; it tends to happen with Lightroom open while doing large imports and processing. Lion doesn't release the inactive memory, and when that happens it basically all goes to page-outs, gigs' worth in a matter of minutes. A quick purge and everything goes back to normal. With that said, it's only happened on a couple of occasions and is rarely an issue, but it's still a memory problem that needs to be fixed.

Michaelgtrusa
Jul 24, 2012, 08:41 AM
Then why did Apple sell Lion to the public? Well, the same reason they sold the 2009 27" iMac, the old Time Capsules, etc.

ElectricSheep
Jul 24, 2012, 10:26 AM
I've also experienced this memory issue; it tends to happen with Lightroom open while doing large imports and processing. Lion doesn't release the inactive memory, and when that happens it basically all goes to page-outs, gigs' worth in a matter of minutes. A quick purge and everything goes back to normal. With that said, it's only happened on a couple of occasions and is rarely an issue, but it's still a memory problem that needs to be fixed.

This is exactly the behavior you will see if an application is leaking memory. As I have said before, inactive memory is still mapped to valid objects allocated by running applications. They have not been accessed recently, but the kernel cannot simply throw them out without destroying the integrity of the application runtime. They must be paged out to disk before the memory can be moved to the free list.

nontroppo
Jul 24, 2012, 10:27 AM
And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.


I do wonder if Time Machine is behind a lot of these problems. I've never seen swapping in Lion on an 8GB 2010 MBP or a large block of different Mac Pros (from 4 to 12GB RAM), running heavy computational analyses in interpreted Matlab (a Java-based behemoth), Parallels, Creative Suite, Office, etc. -- but we never use Time Machine.

No one has actually discovered what has changed in Mountain Lion; it would be great to understand the technical changes that seem to have alleviated problems for some of you...

Paradoxally
Jul 24, 2012, 11:00 AM
ML is definitely better. Look at my MB Pro 13" mid-2009, I just upgraded to 8 GB last week because 4 was just not enough for anything after SL, and opened a ton of apps just to check how memory was doing.

It's pretty amazing.

http://i.imgur.com/ysefRl.png (http://i.imgur.com/ysefR.png)

----------



I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

There is, it's called Mountain Lion. :) You can get it tomorrow (most likely). Be sure to have 8 GB of RAM (as I said before, 4 GB is not enough for anything above SL because you'll page out a lot).

RoelJuun
Jul 24, 2012, 11:28 AM
Wirelessly posted

ML is definitely better. Look at my MB Pro 13" mid-2009, I just upgraded to 8 GB last week because 4 was just not enough for anything after SL, and opened a ton of apps just to check how memory was doing.

It's pretty amazing.

http://i.imgur.com/ysefRl.png (http://i.imgur.com/ysefR.png)

----------



I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

There is, it's called Mountain Lion. :) You can get it tomorrow (most likely). Be sure to have 8 GB of RAM (as I said before, 4 GB is not enough for anything above SL because you'll page out a lot).

You do realize that it's ridiculous to need at least > 4 gigs of ram for basic functionality?? And Apple still sells computers with 2 gigs of ram and a 5400 rpm disk..

nuckinfutz
Jul 24, 2012, 11:38 AM
Wirelessly posted



You do realize that it's ridiculous to need at least > 4 gigs of ram for basic functionality?? And Apple still sells computers with 2 gigs of ram and a 5400 rpm disk..

It is ridiculous. Luckily, we don't need 4 GB of RAM for Mountain Lion. Of course it is recommended, but not necessary.

Paradoxally
Jul 24, 2012, 01:49 PM
Wirelessly posted



You do realize that it's ridiculous to need at least > 4 gigs of ram for basic functionality?? And Apple still sells computers with 2 gigs of ram and a 5400 rpm disk..

Well, I know it is, but I don't plan on switching to Windows and Snow Leopard is taking its toll, so I had to upgrade to ML...plus, 8 GB is starting to become the norm now (especially in non-SSD computers). SSD computers don't need as much RAM because the flash storage is so fast that swapping is barely noticeable, although you don't really want to waste read/write cycles.

----------

It is ridiculous. Luckily, we don't need 4 GB of RAM for Mountain Lion. Of course it is recommended, but not necessary.

You don't, but like I said it's not gonna be a pleasant experience. Adding RAM is like a breath of fresh air. My Mac is 3 years old and this was the cheapest upgrade I could do, and I figured 'why not?' The processor is fine, I don't need AirPlay, and the only thing that was bothering me was the page outs. That, or get an SSD, although it's more expensive.

50548
Jul 24, 2012, 03:17 PM
when i use lion, (8Gb of ram) i regularly have to free memory myself. if i do not, then inactive memory builds, pageouts build, virtual memory increases, all while 'free memory' dwindles down to less than 100mb, the system will not think to free memory, it only free's memory when i quit the whole application. (closing tabs/windows, etc does not seem to free it)

this is a terrible experience on normal hdd's, has anyone noticed a difference on mountain lion , particularly people not using ssd drives?

pleas, this is very important for me, as i'v been considering reverting to windows :(

Are you using iMessage? Because it has a TERRIBLE bug that eats up all RAM as well as the boot disk's free space with swap files.

Paradoxally
Jul 24, 2012, 03:27 PM
Are you using iMessage? Because it has a TERRIBLE bug that eats up all RAM as well as the boot disk's free space with swap files.

On ML? I have NEVER, never from testing it from the first DP, had that problem. The only problem that existed (not anymore) was the notifications persisting even if you had read the message, which caused the badge to always show up in the Dock.

mousersvk
Jul 29, 2012, 05:54 PM
I own an early 2011 MacBook Pro running 10.7.4 and this thing performs like a walrus on a tar floor. If I don't restart the machine at least twice a day, the machine is rendered unusable. Something as simple as opening a new tab in Chrome will bring about the ******** beachball for a good ~10 seconds before I can do anything else.

Originally, I thought my problem was related to the fact that I upgraded from SL to Lion. But after a clean install, the problem persists. Right now I have about 900 MB of inactive memory and my swap is 800 MB. I only open Activity Monitor once I notice sluggish performance to confirm my suspicions of pure failure to manage memory by the OS. Snow Leopard -- or any OS I've used in the past decade -- never behaved like this.

I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

How much memory do you have? I own an early 2011 MBP as well, I've been running nothing other than Lion (well, now I have ML) and YES, I felt much better once I upgraded the RAM to 16 GB.
Paging was not solved in the best way, obviously, BUT:

Chrome is not the most memory-efficient browser, I think. Try running your apps in 32-bit mode; they eat much less memory.
You don't have to restart the machine to simulate a cold start of the disk cache; run the 'purge' command to basically flush all the memory caches -- if you need to.

I occasionally had to purge the memory to prevent unnecessary page-outs (with the very expensive initial creation of page files) when I had 8GB. But I've been using the JEE version of Eclipse with several plugins, Mail, Safari with quite a lot of tabs, iTerm and a bunch of other apps (Sublime Text 2 among others).

With modern SSD-based systems this really isn't a very big issue anymore (although it still IS an issue) -- my gf has an MBA (late 2010, 2 GB memory) running Lion with 20+ Chrome tabs, and she only has problems when editing some bigger files in MS Office at the same time. For her it is just 2-3s when she feels like the computer is stuck (starting to page out), but nothing critical.

But generally I agree that something like OS memory management should be written in the most efficient way possible.

50548
Jul 29, 2012, 06:07 PM
ML is definitely better. Look at my MB Pro 13" mid-2009, I just upgraded to 8 GB last week because 4 was just not enough for anything after SL, and opened a ton of apps just to check how memory was doing.

It's pretty amazing.

Image (http://i.imgur.com/ysefR.png)

----------



There is, it's called Mountain Lion. :) You can get it tomorrow (most likely). Be sure to have 8 GB of RAM (as I said before, 4 GB is not enough for anything above SL because you'll page out a lot).

Huh? You double your RAM and say that ML is the reason why it's "much better" afterwards? What you show in Act Mon would be practically the same if not better under SL...the improvement there is because of more RAM, not because of ML...

Paradoxally
Jul 29, 2012, 08:52 PM
Huh? You double your RAM and say that ML is the reason why it's "much better" afterwards? What you show in Act Mon would be practically the same if not better under SL...the improvement there is because of more RAM, not because of ML...

No, because I did the same test under Lion and it paged out...I couldn't test with SL but it always worked well with 4 GB RAM so that's not an issue.

It's a known fact that ML has improved memory management overall.

smithrh
Aug 4, 2012, 09:49 AM
I was in the group that was experiencing memory management issues since Snow Leopard.

I am actually pretty happy to report that I've got nearly 7 straight days of uptime on ML and I have 0 page outs for that entire time. In either SL or Lion, that would have been impossible to match. Typically I was rebooting every 2-3 days as beachballing associated with pageouts would be a very normal thing after that amount of time.

Once I decided to see how long I could stand the behavior, and it was only 5 days before my usage pattern (nothing outlandish) caused more pageouts than page-ins, with extremely frequent and long instances of beachballing.

During the SL/Lion timeframe, those of us having these memory issues were lectured time and time again about how we just didn't understand how this worked, inactive memory is free memory, etc etc. There are still a few people I'd like to flog over this, but whatever.

Bottom line, it was a real issue for at least some subset of users, and I'd say it's been addressed at this point.

iOrbit
Aug 4, 2012, 09:56 AM
i thought i would share an update for me using my mac -

im using mountain lion now, ram still seems to be held onto by programs, and it Can page out (i.e. steam, reinstalling backed up games - it sucks up memory, doesn't release it, and keeps on using 'new' free memory, causing page outs)

however, in general, i am not seeing page outs happen the way it would by just using the system generally on lion.

it's definitely improved.

smithrh
Aug 4, 2012, 10:05 AM
Yeah, unfortunately I do still see some complaints about inappropriate pageout activity on ML, so it's not fixed for everyone, but at least for my app usage patterns, I'm in pretty good shape now.

Just so everyone is clear on what the issue was for me: on an 8 gig Mac, I'd have 2G wired, 2G active and 4G inactive - and I'd get massive swapping activity.

bitsoda
Aug 25, 2012, 03:08 PM
How much memory do you have? I own an early 2011 MBP as well, I've been running nothing other than Lion (well, now I have ML) and YES, I felt much better once I upgraded the RAM to 16 GB.
Paging was not solved in the best way, obviously, BUT:

Chrome is not the most memory-efficient browser, I think. Try running your apps in 32-bit mode; they eat much less memory.
You don't have to restart the machine to simulate a cold start of the disk cache; run the 'purge' command to basically flush all the memory caches -- if you need to.

I occasionally had to purge the memory to prevent unnecessary page-outs (with the very expensive initial creation of page files) when I had 8GB. But I've been using the JEE version of Eclipse with several plugins, Mail, Safari with quite a lot of tabs, iTerm and a bunch of other apps (Sublime Text 2 among others).

With modern SSD-based systems this really isn't a very big issue anymore (although it still IS an issue) -- my gf has an MBA (late 2010, 2 GB memory) running Lion with 20+ Chrome tabs, and she only has problems when editing some bigger files in MS Office at the same time. For her it is just 2-3s when she feels like the computer is stuck (starting to page out), but nothing critical.

But generally I agree that something like OS memory management should be written in the most efficient way possible.

I just bit the bullet -- a very nominal bullet I must admit -- and shelled out the $40 for an 8GB memory kit for my MB Pro. It now runs Mountain Lion great with no complaints. Hell, it even runs Counter-Strike: GO remarkably well with its Intel HD 3000. I get about 60 FPS at native resolution with settings at medium.

avatar976
Sep 9, 2012, 04:31 AM
I'm no coder, reporting from a user perspective. On my early 2011 MBP with 16GB running 10.7.4, after 7+ days of uptime I used to see inactive memory grow to the point where heavy swapping was needed and the system grew slower, which required me to use the purge command. As far as I can tell, it was not the system using the available memory efficiently; it was simply wasting it. And no, I don't care at all about memory stats as long as the system runs smoothly without eating GBs from my admittedly small SSD.

BTW my wife's 2011 MB Air with 4GB seemed to handle memory and swapping much more efficiently even with 10.7.4.

Since 10.8 the OS seems to reclaim inactive memory more efficiently, which allows me to launch RAM-heavy apps such as VMWare Fusion after 10+ days of uptime with no swapping whatsoever.

Maybe I'm being a bit simplistic, but a computer with 16GB RAM doing nothing but normal stuff and a bit of virtualization once in a while, shouldn't be swapping at all, especially if you consider that the manufacturer of that system keeps selling computers with 4GB stock (2GB until 2011 for some models).

Unhyper
Sep 9, 2012, 05:59 AM
I thought by adding more RAM, I'd have less beach-balling, but so far that's not my experience. Using a mid-2011 iMac with 12GB and OSX 10.8.1.

Attaching a screenshot of Activity Monitor. It's in Finnish but hopefully you can still see what's what.

Running apps: Finder (a few open windows), Firefox, Mail, VLC, iTunes, Vuze, Twitter, Activity Monitor, Evernote.

Are these figures normal? Especially confused by the 30 GB of page-outs.

mousersvk
Sep 9, 2012, 06:26 AM
I thought by adding more RAM, I'd have less beach-balling, but so far that's not my experience. Using a mid-2011 iMac with 12GB and OSX 10.8.1.

Attaching a screenshot of Activity Monitor. It's in Finnish but hopefully you can still see what's what.

Running apps: Finder (a few open windows), Firefox, Mail, VLC, iTunes, Vuze, Twitter, Activity Monitor, Evernote.

Are these figures normal? Especially confused by the 30 GB of page-outs.

After long usage of programs (launching, exiting), these figures could quite possibly be normal for OS X memory management behaviour. In fact, I currently have 7GB of inactive memory and 2.4GB of free memory.

The really important number is "Swap used", which in your case is <70MB. If you run an application that needs more free RAM than is currently available, this will cause a clean-up of inactive memory (a page-out) and possibly some delays.

The other important factor is: are the page-ins / page-outs frequent? In your case probably not (0 bytes / sec), but these numbers always change and when there's nothing going on, they'll be 0. If you feel "threatened" by the low amount of currently free memory, simply run "sudo purge" and wait a moment.
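
If you want to watch those exact counters yourself rather than eyeballing Activity Monitor, here is a small sketch of mine (not an official Apple sample; it just uses the Mach host_statistics() call, which is, as far as I know, the same place the vm_stat tool gets its numbers). Run it before and after a purge and you can watch inactive memory collapse back into free:

/* memstat.c - print the free/active/inactive/wired amounts and the
   cumulative page-in/page-out counters.  Build with: cc memstat.c -o memstat */
#include <stdio.h>
#include <mach/mach.h>

int main(void) {
    vm_statistics_data_t   vm;
    mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
    vm_size_t              page_size = 0;

    host_page_size(mach_host_self(), &page_size);
    if (host_statistics(mach_host_self(), HOST_VM_INFO,
                        (host_info_t)&vm, &count) != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics failed\n");
        return 1;
    }

    double mb = page_size / (1024.0 * 1024.0);   /* one page, in MB */
    printf("free:     %8.1f MB\n", vm.free_count     * mb);
    printf("active:   %8.1f MB\n", vm.active_count   * mb);
    printf("inactive: %8.1f MB\n", vm.inactive_count * mb);
    printf("wired:    %8.1f MB\n", vm.wire_count     * mb);
    printf("page-ins: %u  page-outs: %u (pages since boot)\n",
           vm.pageins, vm.pageouts);
    return 0;
}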

You might want to check these links for more technical details:
http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day
https://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html

mayuka
Sep 9, 2012, 10:21 AM
however, in general, i am not seeing page outs happen the way it would by just using the system generally on lion.

My problem with Lion was that "kernel task" took up a lot of memory after I did some intensive work (running VMware, stacking software, Adobe stuff, etc.). Problems occurred not a couple of hours after the last boot, but after a week or so of not rebooting the MacBook. After a week, "kernel task" took up almost 1 GB of RAM. No wonder other memory-intensive apps had been swapped out.

You should do your tests again after 1 or 2 weeks of uptime and see what it looks like.

cookiesnfooty
Sep 9, 2012, 03:13 PM
I am sorry to hear you guys are experiencing these problems.

Just to state: I have a 2011 MBP 13" with 16GB RAM connected to a 24" monitor. I regularly use a Windows 8 VM and also a Windows XP VM while coding in Visual Studio 2012 (XP is for testing apps running without extra DLLs installed).

While doing this I often have a movie open, multiple Chrome windows and Mail; the MacBook never seems to slow down or experience issues. Heck, I even left the VM running and started playing Diablo 3 and didn't experience any issues (the movie, browsers and Diablo weren't launched in the VM environment).

Could there be something internally wrong with your setups causing the issues? Or even permissions-related? I am using SSDs rather than a traditional HDD, so this may help with speed, but I have never experienced a performance issue.

I have also played Lord Of The Rings Online through the VM and this ran well and looked good.

mocenigo
Oct 8, 2012, 08:13 AM
Finally someone posts something worthy of mention.
Look at the link:

http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23

Smallpt v1.0 (3D graphical display)
400% slower.

John The Ripper v1.7.9 (Blowfish algorithm)
400% slower.

Why is Clang close to 400% slower in both cases?


Because some of these benchmarks support multi-threading and use OpenMP, which is currently supported natively by gcc but not by Clang. Once OpenMP support is added to Clang, a performance gap close to 4x will disappear on quad-core machines, and so on. Note that you can STILL write multi-threaded applications to compile under Clang, but you must write specific code.
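
Just to illustrate the point, here is a toy example of my own (not taken from those benchmarks). Built with "gcc -fopenmp" the loop below is spread across every core; built with the Clang of this era, the pragma is simply ignored and the same code runs on a single core, which on a quad-core machine is already roughly the 4x gap in question:

/* omp_demo.c - the same source: parallel under gcc -fopenmp,
   serial under a compiler without OpenMP support. */
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void) {
    const long n = 200000000L;
    double sum = 0.0;
    long i;

    /* With OpenMP the iterations are divided among the available cores and
       the partial sums are combined; without it this is an ordinary loop. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++)
        sum += 1.0 / (i + 1.0);

#ifdef _OPENMP
    printf("sum = %f (OpenMP, up to %d threads)\n", sum, omp_get_max_threads());
#else
    printf("sum = %f (no OpenMP, single thread)\n", sum);
#endif
    return 0;
}

Which would also explain why the gap shows up only on the benchmarks written to use OpenMP rather than across the board.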

You seem not to understand that in LLVM, "VM" is not a virtual machine in the sense that it is emulated (even though you CAN compile to bytecode and use a JIT - or a flash recompilation - but this is not currently done in Apple's x86 and ARM software). It is called a "VM" because the compiler has a kind of intermediate "virtual machine" that preserves a lot of semantics about the types and objects, permitting some optimizations that traditional compilers cannot perform. On the other hand, compilers like gcc exploit some architecture-specific aspects earlier during the compilation process, permitting other optimizations. It is a trade-off, and in fact some hybrid compilers (like dragonegg) can combine some of both approaches.

Roberto