
View Full Version : Apple fix the ram handling!




Ritmo
Sep 20, 2012, 12:28 PM
OK, it seems that it doesn't matter how much RAM you have; OS X will eat it just for fun. When I had 4GB I thought, "well, I am using a virtual machine and quite a lot of apps." Buying 8GB helped a bit, but it was still swapping. But today I've seen the worst, and no matter how big an Apple fanboy you are, you have to admit this is pretty bad.



GGJstudios
Sep 20, 2012, 12:30 PM
OK, it seems that it doesn't matter how much RAM you have; OS X will eat it just for fun. When I had 4GB I thought, "well, I am using a virtual machine and quite a lot of apps." Buying 8GB helped a bit, but it was still swapping. But today I've seen the worst, and no matter how big an Apple fanboy you are, you have to admit this is pretty bad.

Mac OS X will use all the RAM you have installed. That's what it's there for. The only thing you need to watch for is page outs. If you have no page outs with your normal workload, you can forget about memory issues. Page outs are cumulative since your last restart, so the best way to check is to restart your computer and track page outs under your normal workload (the apps, browser pages and documents you normally would have open). If your page outs are significant (say 1GB or more) under normal use, you may benefit from more RAM. If your page outs are zero or very low during normal use, you probably won't see any performance improvement from adding RAM.

Mac OS X: Reading system memory usage in Activity Monitor (http://support.apple.com/kb/HT1342)

Running a VM is obviously placing high demands on memory. If that represents normal use for you, you may need more RAM.
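A quick way to read those counters without opening Activity Monitor is the `vm_stat` command line tool, which reports them in units of 4 KB pages. Below is a rough Python sketch of turning that output into byte counts; the sample text mimics the format `vm_stat` printed on 2012-era OS X, and the numbers are made up for illustration.

```python
import re

def parse_vm_stat(text):
    """Parse vm_stat-style output into a dict of page counters (assumed format)."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r'^([A-Za-z" -]+):\s+(\d+)\.?$', line.strip())
        if m:
            stats[m.group(1).strip()] = int(m.group(2))
    return stats

# Hypothetical sample output in the assumed vm_stat format:
sample = """\
Pages free:                          12345.
Pages active:                       400000.
Pages inactive:                     300000.
Pages wired down:                   150000.
Pageins:                            500000.
Pageouts:                            20000.
"""

stats = parse_vm_stat(sample)
page_size = 4096  # vm_stat counts 4 KB pages
print("Paged out so far:", stats["Pageouts"] * page_size // 2**20, "MB")
# -> Paged out so far: 78 MB
```

As the post above notes, the "Pageouts" figure is cumulative since the last restart, so the useful check is whether it grows under your normal workload, not its absolute value.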

Ritmo
Sep 20, 2012, 12:51 PM
Mac OS X will use all the RAM you have installed. That's what it's there for. The only thing you need to watch for is page outs. If you have no page outs with your normal workload, you can forget about memory issues. Page outs are cumulative since your last restart, so the best way to check is to restart your computer and track page outs under your normal workload (the apps, browser pages and documents you normally would have open). If your page outs are significant (say 1GB or more) under normal use, you may benefit from more RAM. If your page outs are zero or very low during normal use, you probably won't see any performance improvement from adding RAM.

Mac OS X: Reading system memory usage in Activity Monitor (http://support.apple.com/kb/HT1342)

Running a VM is obviously placing high demands on memory. If that represents normal use for you, you may need more RAM.

Sir, are you working for Apple? Because that is so helpful!
Obviously I'm not using a Mac for the first time. And as you can see from the picture, I'm complaining about how OS X manages the RAM. It's swapping 0.5GB for no reason because there is still 3GB of inactive RAM that could be released for use.

dyn
Sep 20, 2012, 01:27 PM
Then it is a very wise idea to read up on what swap is exactly and what its uses are. The OS will swap when it runs out of virtual memory addresses. In some cases apps actually make use of swap to save on RAM; VMware Fusion is one of those, which is why it is so memory efficient. Swap is not very fast memory, so whatever isn't needed right away can be put in there. By doing that you free up the much faster RAM, which will make certain operations a lot faster.

It has nothing to do with being a fanboy; it has everything to do with knowing what on earth swap is and how it works, and with the overall memory management in OS X as explained in GGJstudios' link. In other words: read the darn link!

sidewinder
Sep 20, 2012, 01:55 PM
The OS will swap when it runs out of virtual memory addresses.

You might want to read that again and think about how it is wrong.

S-

dcorban
Sep 20, 2012, 02:07 PM
The basic idea is that the size of the swap file is almost meaningless with regard to performance.

Stop staring at the stats and use the computer. When it starts feeling slow due to swapping, then be concerned.

nutmac
Sep 20, 2012, 02:42 PM
Mac OS X will use all the RAM you have installed. That's what it's there for. The only thing you need to watch for is page outs. If you have no page outs with your normal workload, you can forget about memory issues. Page outs are cumulative since your last restart, so the best way to check is to restart your computer and track page outs under your normal workload (the apps, browser pages and documents you normally would have open). If your page outs are significant (say 1GB or more) under normal use, you may benefit from more RAM. If your page outs are zero or very low during normal use, you probably won't see any performance improvement from adding RAM.
You are passing outdated information.

Much of the inactive memory is used for disk cache. In theory, when free memory is running low, OS X's dynamic pager is supposed to flush disk cache from inactive memory.

Both Lion and Mountain Lion (especially Lion) are not very good at reclaiming memory. Running the "purge" command from Terminal (you may need to install Xcode) will flush the disk cache out of inactive memory and return most of it to the free memory pool. But that is obviously not something any of us wants to do.

Adding more RAM will certainly decrease the likelihood of free memory running out, but if you rarely reboot your Mac, you will probably run out of it eventually.

GGJstudios
Sep 20, 2012, 03:56 PM
It's swapping 0.5GB for no reason because there is still 3GB of inactive RAM that could be released for use.
You don't know that. As page outs are cumulative since your last restart, they could have occurred at any time, not at the time of the screen shot, when there is both free and inactive memory available.
You are passing outdated information.
No, it's not outdated. I have yet to see anyone provide proof that page outs are occurring at a time when there is free or inactive memory available, even on Lion or ML. To prove that, you need a video showing the page outs increasing while free and inactive memory is available at the time of the page outs. People don't seem to grasp the fact that the page outs reading is not a real-time indication that page outs are occurring now, but rather a historical accumulation of page outs that have happened in the past, since the last restart.

dyn
Sep 20, 2012, 04:12 PM
You might want to read that again and think about how it is wrong.

No need because it is 100% correct. If you want to take part in this discussion I strongly suggest that you do your homework properly. Apple is talking in its developer documentation about virtual memory addresses. When it runs out of these addresses it will start to shift things in memory and eventually swap.

Now go read up on how it really works and admit you are wrong in every way (https://developer.apple.com/library/mac/#documentation/performance/conceptual/managingmemory/articles/aboutmemory.html) ;)

GGJstudios
Sep 20, 2012, 04:22 PM
The OS will swap when it runs out of virtual memory addresses.
Actually, it swaps when it runs out of physical memory, not virtual.
As far as a program is concerned, addresses in its logical address space are always available. However, if an application accesses an address on a memory page that is not currently in physical RAM, a page fault occurs.

sidewinder
Sep 20, 2012, 04:35 PM
No need because it is 100% correct. If you want to take part in this discussion I strongly suggest that you do your homework properly. Apple is talking in its developer documentation about virtual memory addresses. When it runs out of these addresses it will start to shift things in memory and eventually swap.

Now go read up on how it really works and admit you are wrong in every way (https://developer.apple.com/library/mac/#documentation/performance/conceptual/managingmemory/articles/aboutmemory.html) ;)

Maybe you should go read up on how it really works.

The system never runs out of "virtual memory addresses" as you call them. Each process has a logical (virtual) address space created for it by the virtual memory manager. This space is chopped up into 4KB pages. This logical address space is always available to the process.

What the system can do is run out of physical RAM. That is when these pages can get swapped out of memory onto disk and vice versa.
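The distinction is easy to demonstrate; here is a minimal Python sketch using anonymous `mmap` mappings, which behave this way on Unix-like systems including OS X:

```python
import mmap

GiB = 1 << 30

# Reserve 1 GiB of the process's logical (virtual) address space.
# Almost no physical RAM is consumed yet: the kernel merely records
# the mapping; pages are only backed by RAM once they are touched.
buf = mmap.mmap(-1, GiB)

buf[0] = 1          # faults one 4 KB page into physical RAM
buf[GiB - 1] = 1    # and one more -- still only ~8 KB resident

size_mapped = len(buf)
print(size_mapped)  # 1073741824 -- address space reserved, not RAM in use
buf.close()
```

A 1 GiB mapping like this succeeds even on a machine with far less free RAM, precisely because logical addresses are cheap and only touched pages consume physical memory.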

So this sentence of yours:

The OS will swap when it runs out of virtual memory addresses.

Is completely incorrect.

S-

nutmac
Sep 20, 2012, 05:33 PM
No, it's not outdated. I have yet to see anyone provide proof that page outs are occurring at a time when there is free or inactive memory available, even on Lion or ML. To prove that, you need a video showing the page outs increasing while free and inactive memory is available at the time of the page outs.

I don't really want to create a video to prove you wrong, but if I run memory-intensive tasks like (1) a Final Cut Pro export or clip analysis, (2) viewing an entire photo library in Aperture, or (3) encoding a 1080p movie in HandBrake, inactive memory will gradually increase and deplete free memory. Eventually, free memory reaches 0 MB and the page outs count starts to go up from 0. FYI, my 2011 MacBook Pro has 16GB RAM.

This issue is also documented by Adam Fields (http://workstuff.tumblr.com/post/20464780085/something-is-deeply-broken-in-os-x-memory-management) as well as 56-page long discussion at Apple's support community site (https://discussions.apple.com/thread/3193912?start=825&tstart=0).

GGJstudios
Sep 20, 2012, 05:44 PM
I don't really want to create a video to prove you wrong, but if I run memory-intensive tasks like (1) a Final Cut Pro export or clip analysis, (2) viewing an entire photo library in Aperture, or (3) encoding a 1080p movie in HandBrake, inactive memory will gradually increase and deplete free memory.
Inactive memory doesn't "deplete" free memory. Read the link in my first post to understand what free and inactive memory is, and how memory is marked as inactive.
Eventually, free memory reaches 0 MB and page outs count starts to go up from 0.
At the time page outs start increasing, free memory is zero, but what about inactive memory? That's the part that people claim, but never prove.

nutmac
Sep 20, 2012, 06:36 PM
Inactive memory doesn't "deplete" free memory. Read the link in my first post to understand what free and inactive memory is, and how memory is marked as inactive.

At the time page outs start increasing, free memory is zero, but what about inactive memory? That's the part that people claim, but never prove.

Sure it does. Inactive memory is not directly available to applications. OS X will use free memory for disk cache, which then becomes inactive memory. At its discretion (e.g., when free memory is running low), it will release inactive memory back to the free memory pool, by doing things like flushing the disk cache.

Unfortunately, Lion and (to a lesser extent) Mountain Lion do not release inactive memory very well. The upshot is that as inactive memory rises, free memory declines. You just can't have both.

When you run out of free memory, OS X will start swapping memory (paging out) to disk (virtual memory). And when you start paging out, you will start to get things like beach balls (less severe on flash storage/SSD). The only workaround at this point is to reboot your Mac.

Here's before:
http://f.cl.ly/items/2X151N2m3c3V0O2q0A3I/before.png
Here's after:
http://f.cl.ly/items/1y0P460m0q0K242P2u1b/after.png
In theory, this after state shouldn't happen.

GGJstudios
Sep 20, 2012, 06:56 PM
Sure it does. Inactive memory is not directly available to applications.
Directly from the Apple link I first posted:
Inactive memory is available for use by another application, just like Free memory.
When you run out of free memory, OS X will start swapping memory (paging out) to disk (virtual memory).
Page outs occur when you have no free or inactive memory.
And when you start paging out, you will start to get things like beach balls (less severe on flash storage/SSD). The only workaround at this point is to reboot your Mac.
You don't have to reboot. If you close the apps that are placing high demands on memory, that memory will be freed up by Mac OS X, returning to either free or inactive memory, or both.

Here's before:
Here's after:
In theory, this after state shouldn't happen.
As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

dukebound85
Sep 20, 2012, 06:59 PM
Directly from the Apple link I first posted:


Page outs occur when you have no free or inactive memory.

You don't have to reboot. If you close the apps that are placing high demands on memory, that memory will be freed up by Mac OS X, returning to either free or inactive memory, or both.

As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

From my experience, as I keep posting about on this topic but you never acknowledge, if I run an app that requires a bunch of memory, inactive != free. My experience is running MATLAB scripts manipulating multiple variables well over 5GB (sometimes up to 10GB) apiece.

Running the purge command does help with the task I am trying to conduct in those cases, as it frees up the inactive RAM for MATLAB, which is constantly wanting as much as it can get. When I have a bunch of inactive memory and no free memory, I get loads of pageouts and very poor performance.

Not sure why you keep saying they are the same when they are clearly not in situations I have experienced and told you about, yet you ignore this and keep posting the same info.

SpyderBite
Sep 20, 2012, 07:02 PM
If the problem doesn't exist for everybody (I am not experiencing the same issue as you) then how can you be certain that it is something that Apple needs to address and it isn't something localized on your system or similar setup?

Tech Support 101: if everybody is having the same problem it is a problem with the code. If only you are having a problem it is a problem with the way you are using the code.

nutmac
Sep 20, 2012, 07:11 PM
Directly from the Apple link I first posted:

Page outs occur when you have no free or inactive memory.

You don't have to reboot. If you close the apps that are placing high demands on memory, that memory will be freed up by Mac OS X, returning to either free or inactive memory, or both.

As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

No no no. Look, I write software for living and I deal with garbage collection all the time.

Apple's support document is "dumbed down" for average Joes. Inactive memory can mean many things. But it generally means one of two things.

- When you quit an app, its contents are first moved from the active to the inactive memory pool. If you re-run the app before the inactive memory is reclaimed (and becomes free), it will be able to access them again.
- When an app needs to access data from disk, and there's free memory, the data is first transferred to inactive memory (into the disk cache). Sometimes OS X puts a restriction on the cache size, but on modern OSes like Mountain Lion and newer versions of Linux, there's no limit ("free memory is wasted memory").

So what happens when you run out of free memory and an app needs to create more objects (write to memory)? If the data is in inactive memory (e.g., the disk cache), all is well and it will be converted to active again. Otherwise, OS X will need to either release parts of inactive memory (thereby making it available as free memory) or create virtual memory (which is disk being used to simulate memory). The first part (reclaiming inactive to free) is a lot harder than it looks and doesn't happen as quickly as creating virtual memory.

Writing to virtual memory is called paging out and is much slower than writing to memory. Virtual memory is a series of very large files in the /var/vm directory. Since multiple apps and processes can use the virtual memory files, it is next to impossible to reclaim virtual memory without restarting your Mac. In other words, even when the OS regains free memory, it will not move data from virtual memory back to free memory.
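For reference, those swap files can be listed with `ls -lh /var/vm`, and `sysctl vm.swapusage` prints a one-line summary of how much of them is in use. A rough sketch of parsing that summary follows; the sample string mimics the format 2012-era OS X printed, and the figures are invented.

```python
import re

def parse_swapusage(line):
    """Extract total/used/free (in MB) from a `sysctl vm.swapusage` line."""
    fields = dict(re.findall(r'(\w+) = ([\d.]+)M', line))
    return {k: float(v) for k, v in fields.items()}

# Hypothetical sample in the assumed format:
sample = "vm.swapusage: total = 2048.00M  used = 512.25M  free = 1535.75M  (encrypted)"

usage = parse_swapusage(sample)
print(usage)  # {'total': 2048.0, 'used': 512.25, 'free': 1535.75}
```

Watching the "used" figure grow over a session gives a more direct picture of active swapping than the cumulative page outs counter alone.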

GGJstudios
Sep 20, 2012, 07:44 PM
When I have a bunch of inactive memory and no free memory, I get loads of pageouts and very poor performance.
It's this that I have never seen proven, regardless of user claims. I have never seen anyone prove that page outs occur at a time when inactive memory is available.
Inactive memory can mean many things. But it generally means one of two things.
Thanks for the details, but I'm quite familiar with how memory management works. Nothing you've said proves paging out occurs when inactive memory is available, or that restarting your Mac is required to free up memory. I've heard such claims from several people, but zero proof.

Ledgem
Sep 20, 2012, 09:16 PM
People don't seem to grasp the fact that the page outs reading is not a real-time indication that page outs are occurring now, but rather a historical accumulation of page outs that have happened in the past, since the last restart.
I believe you, but I think there's something weird with 10.8's memory handling. With OS X 10.7 and 6 GB of RAM, my memory usage would usually hit the 60%'s pretty quickly, and over the course of the week it would rise to 70%. This is all with normal usage - mostly email and web-browsing. Using Aperture or other media applications obviously pushes the RAM usage up farther and induces page outs.

When I installed OS X 10.8 on that same system, my memory usage would stick around the 40-50%'s. I thought that OS X 10.8 was just more memory-efficient, but I was getting page outs even with my normal day-to-day usage. They weren't so numerous as to cause a system slowdown, but they were greater in number than with OS X 10.7.

Now I'm using a system with 16 GB of RAM. My memory usage is typically well under 40% for day-to-day usage, yet I still build up a swap file and experience page outs. The swap file is pretty small and the page outs are much, much less in number compared to my old computer with 6 GB of RAM, but it strikes me as odd that I get any paging out at all when I'm not using anything particularly RAM-intensive, nor am I multitasking heavily. Paging out when RAM utilization is less than 50%? I don't remember having page outs like this under OS X 10.7, and definitely not under OS X 10.6 even when I had only 4 GB of RAM.

nutmac
Sep 20, 2012, 09:20 PM
As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

The before and after screenshots were taken less than 15 minutes apart. All I did was regenerate preview images in Aperture. If inactive memory is indeed available to anyone, OS X should never create virtual memory (and thus increase the page out count) in the first place.

SecretNY
Sep 20, 2012, 09:30 PM
I updated yesterday to 10.8.2 and I'm on a 13" MacBook Pro 2011 with 8GB. I never used all the free memory, but after the update my Mac was running so slow I checked, and I had 75MB of free memory and 4GB of inactive? I think there's a problem...

Jenni8
Sep 20, 2012, 11:23 PM
I have a Mid 2011 iMac and I do experience problems with memory. Although I don't have loads of experience, I have personally determined it to be a software issue. Not an Apple software issue, but an issue with the program I am running.

Almost any program runs smoothly and superbly, with the exception of Safari of course, as it is a memory hog, although it seems to have improved with each update since Lion. I love Safari and will only open Chrome when something doesn't quite open right due to Flash. Urg.

Here is where I'm going with this: what programs are you running when this memory issue occurs? Because the only time I have issues is when I run BOTH Lightroom and Photoshop at the same time. Not by themselves, but at the same time. It can get frustrating, as it's easier for my workflow to have both open at the same time sometimes.

I already know I need more RAM to fix my issue, as I have a TON of pics and that is the primary reason for my issues. I will eventually get more RAM; I've been ready to more times than I can count, as I only have 4GB. When I do upgrade I will upgrade to at least 12. It's just that with all that is going on in my personal life, I'd rather spend money on food or life stuff, as times are harder than when I got my Mac. I did help my issue by creating a new library for Lightroom with only my more recent photos, and that has helped a lot. But as I keep adding more it does get slower, taking up all the RAM, and then everything goes to a crawl. (Maybe I need a better setup of how much cache I use, as I haven't found the magic number yet.)

I found that the problem of most programs taking up too much RAM depends solely on how much cache I allow for certain programs. As I've adjusted things, things run smoother.

Now this is just my 2 cents, as I am no expert or seasoned user. But that is at least my observation. Also, when I first got my iMac it had Snow Leopard on it, and Photoshop and Lightroom did run slower on the older version, even though I had WAY fewer apps at the time.

Now from any pro user, does what I just explained make sense? Or do I have it twisted? I'm still learning and I feel I know a great deal in the short time I've owned my Mac. Also when I do get somewhat frustrated for whatever reason, it's usually my fault for an issue, as I love to tinker with my system a lot to make it MINE, or I just remember the horror of a PC I owned before my iMac and then I relax, cause it's still a 1,000 times better.

Ledgem
Sep 21, 2012, 01:06 AM
Now from any pro user, does what I just explained make sense? Or do I have it twisted? I'm still learning and I feel I know a great deal in the short time I've owned my Mac. Also when I do get somewhat frustrated for whatever reason, it's usually my fault for an issue, as I love to tinker with my system a lot to make it MINE, or I just remember the horror of a PC I owned before my iMac and then I relax, cause it's still a 1,000 times better.
Yes, what you've said is correct. Multimedia programs in particular will take up a lot of memory, just because that's the nature of multimedia editing. Most of those programs don't have a set limit: if what you're doing is very demanding, their memory usage will increase up to the system's maximum capacity to accommodate it.

The problem mentioned in this thread doesn't really deal with that, though. The operating system is responsible for managing the memory between programs. If the memory becomes full, the operating system will determine what's in there that isn't currently being used (or that was touched least recently), and then put it into a "page file" on the hard drive. This is what paging out is. When the paged-out information is needed, it will be transferred from the hard drive to the memory. If the memory is still very full, then the operating system basically swaps things around: the next least-used thing in the memory is put into the swap file and is replaced with what the system needs. Alternately, if enough memory has been freed up, then no exchange is necessary: information goes from the hard drive to the memory, and that's it.
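That swap-things-around behavior can be sketched as a toy least-recently-used simulation (pure illustration; real kernels use more elaborate page-replacement policies than strict LRU):

```python
from collections import OrderedDict

def access(page, ram, capacity, swap):
    """Touch `page`, evicting the least recently used page to `swap` if RAM is full."""
    if page in ram:
        ram.move_to_end(page)                # now the most recently used
        return
    if page in swap:
        swap.remove(page)                    # page-in from disk
    if len(ram) >= capacity:
        victim, _ = ram.popitem(last=False)  # least recently used page
        swap.add(victim)                     # page-out to disk
    ram[page] = True

ram, swap = OrderedDict(), set()
for p in ["A", "B", "C", "A", "D"]:          # capacity 3: touching D evicts B
    access(p, ram, 3, swap)
print(list(ram), swap)  # ['C', 'A', 'D'] {'B'}
```

Note that "A" survives the eviction because it was touched again after "B" and "C", which is exactly the least-recently-used intuition described above.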

It's worth noting that involving the hard drive drastically slows things down. Thus, paging out is very undesirable.

The problem in this case is that the operating system is paging out even when there's a lot of free memory. In my case, for example, I may be using 7 GB of RAM, but I have 9 GB free. Despite this, the operating system will occasionally page out. In theory, there's no reason why it should be paging out under such circumstances.

twintin
Sep 21, 2012, 02:19 AM
Not sure why you keep saying they are the same when they are clearly not in situations I have experienced and told you about, yet you ignore this and keep posting the same info.

I think there may be some misconception regarding what Inactive memory is. Reading up on FreeBSD memory management it seems Inactive memory is either:

- memory recently released (clean);
- memory recently paged out (clean);
- or memory that has not been touched for a longer period (dirty).

Clean memory is indeed similar to free memory. The only reason it is kept as inactive is to be able to reclaim it faster in the event the process that most recently used it is restarted or awakened again. However, dirty inactive memory must first be paged out before it can be reused and hence can be more expensive to use.

Basically, even with a lot of inactive memory, depending on how much of it is dirty versus clean, you may or may not encounter a lot of page outs (the man page states that purge flushes disk caches).

Since Unixes like OS X, FreeBSD and Linux love to cache a lot of I/O accesses to speed those accesses up (especially if you use a HDD), I can imagine that disk-intensive applications with frequent reads and writes may eventually hog the system due to the large caches being used.

Since the purge command seems to resolve your issue, what I described above seems to be the most likely cause of your problem.

Personally, even after weeks of uptime I have never experienced any kind of extensive page outs, slowdowns or other memory issues, and that with Safari, Mail and some other apps constantly running. Since none of my apps are disk intensive, I guess my system uses very few caches, and those used are probably flushed from time to time, leaving me with a lot of "free" memory.

Basically, I believe there is nothing wrong with the memory management. This is just how it is designed to work. As with all algorithms there are always trade-offs, and hence no algorithm will work perfectly for everyone all the time. What you have encountered is the trade-off made by the memory management algorithm of OS X.

The memory management in OS X does not differ much from that of other Unixes like FreeBSD and Linux, and I'm sure you can find similar discussions in forums related to those Unixes as well.

Does all this make sense to you?

VinegarTasters
Sep 21, 2012, 03:02 AM
There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OS X gets into these "laggy" situations in the first place. One factor is the move to LLVM, where the compiler is optimized not for performance but for supporting multiple languages. It even has "virtual machine" written inside its name! It is a byproduct of competition with .NET, which was a competitor with Java. These technologies are BAD for performance. They force slow-as-hell garbage collection and automatic reference counting on programmers to save noobs from leaking memory. To do that, you just create objects without worrying about when to release them (free up the memory used by objects). The garbage collector runs when you eventually take up all the main memory. What it does is GO THROUGH EVERY piece of memory and release those that are not held by any active programs. It is a slow process, and it takes a lot of CPU time. No AAA games can survive it, so no AAA games will use Java or .NET languages (like C#). Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING. The compiler will insert code "thinking" it is the right time to create and release memory. To be safe, it will usually only release when the program exits. That is what most Java programs do anyway: grab all the memory and don't run the garbage collector until you have used up all virtual memory. It never runs. When it does, the game crawls; people notice the bad performance of Java, so the garbage collector essentially does nothing until ALL memory is used up.

Now that you have reached here, how does this relate to Mountain Lion? OS X uses Objective-C, which has object-oriented features patched onto regular C. Instead of using C++, which uses method calls, Objective-C uses message passing. SLOW! It needs to parse the message to find out what object to call, whereas C++ just has pointers to the actual object, no parsing. And the biggest bummer? Garbage collection is the default in Objective-C. You don't deallocate objects; you just eat up all the memory and the garbage collector runs (the same Java/.NET technology that is bad for performance). To save themselves, Apple is trying to get away from garbage collectors by using ARC (Automatic Reference Counting) in Mountain Lion, where garbage collection is deprecated (in Lion it is not). The situation gets bad here. Now that garbage collection is not the default, the model has changed: programmers need to explicitly tell the OS when it is OK to release memory under ARC, or Mountain Lion will assume they want to keep using it. If your program doesn't tell OS X that memory is OK to release, it is NOT going to be released. So programs written for Lion and earlier (using ARC only) will keep leaking memory in Mountain Lion, because they don't even have code to tell Mountain Lion it is OK to free memory. BINGO! Why is the disk swapping so much? Why am I out of memory?

Now this is not the main problem. The main problem is that garbage collection and ARC are still supported, and that OS X still uses Objective-C, which is stuck with such slow technology from a bygone era. Message passing is too slow. Garbage collection is slow. ARC is slow. Only C++ and C with manual allocation and release are fast. Do you know why garbage collection is not supported in iOS? Yep, bad for mobile battery life, with the CPU draining all the juice and little main memory. Instead of fixing the problem (bad technology), they are trying to patch the technology. The move from garbage collectors to ARC shifts more of the responsibility for memory management back to programmers, back to the original way programmers did it in the first place (manual management using C/C++). But the problem is that Objective-C is stuck with this ARC that is supposed to be an improvement, yet is still not as fast and as good at memory management as plain programmer-created/released memory. The only way is to go to the lower level and use C/C++, where you can actually touch the memory and malloc/free it yourselves. Since LLVM is supposed to support C and C++, they still have hope if they start moving chunks of the operating system to C/C++ and remove all the Objective-C code that relies on ARC or garbage collection, which keeps around a virtual machine handling the memory management.
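Reference counting can be illustrated in Python, whose CPython runtime also frees an object the instant its count reaches zero. This is an analogy only: Objective-C's ARC inserts retain/release calls at compile time rather than tracking counts in an interpreter, and the `Movie` class here is purely hypothetical.

```python
import sys
import weakref

class Movie:
    """Stand-in for any heap-allocated object."""

a = Movie()
b = a                      # a second strong reference to the same object
print(sys.getrefcount(a))  # count includes the temporary argument reference

observer = weakref.ref(a)  # weak reference: does not keep the object alive
del b
del a                      # last strong reference gone -> freed immediately
print(observer())          # None: deallocated deterministically, no GC pause
```

The point of the analogy: with reference counting, deallocation happens at a predictable moment (the last release), whereas a tracing garbage collector frees memory whenever it decides to run.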

.NET and these interpreted technologies are so bad for business that XNA is being dumped and Windows Phone 8 no longer requires it. You can now use C++ directly on top of Direct3D, instead of that SLOW .NET C# layer that destroyed their third-party gaming business on the Xbox 360. Yes, it is that bad. Apple will try to cover it up, but eventually the technology will show itself in ugly places. All these complaints about performance are a byproduct of band-aid fixes.

GGJstudios
Sep 21, 2012, 10:18 AM
If inactive memory is indeed available to anyone, OS X should never create virtual memory (and thus increase the page out count) in the first place.
That's not true, because there isn't always sufficient free or inactive memory available, and thus page outs occur. If memory demands exceed all available free and inactive memory, paging is to be expected.
I never used all the free memory, but after the update my Mac was running so slow I checked and I had 75MB of free memory and 4GB of inactive? I think there's a problem...
No, that doesn't represent a problem. It simply shows that you used most of your free memory at some time and those apps have been closed, leaving the memory available to other apps. It's marked as inactive to improve performance, in case you re-launch the same apps. If you don't, your inactive memory is just like free memory.

dyn
Sep 21, 2012, 12:27 PM
Actually, it swaps when it runs out of physical memory, not virtual.
Actually, no, because you are quoting only half the story:


Virtual memory allows an operating system to escape the limitations of physical RAM. The virtual memory manager creates a logical address space (or “virtual” address space) for each process and divides it up into uniformly-sized chunks of memory called pages. The processor and its memory management unit (MMU) maintain a page table to map pages in the program’s logical address space to hardware addresses in the computer’s RAM. When a program’s code accesses an address in memory, the MMU uses the page table to translate the specified logical address into the actual hardware memory address. This translation occurs automatically and is transparent to the running application.

This is what precedes your quote.

Maybe you should go read up on how it really works.

The system never runs out of "virtual memory addresses" as you call them.

Ah, in that case you should start reading that link, because with that last sentence you show you have not done this at all... It is not me who calls it that; it is Apple. Big difference!

That documentation also says there is a limitation:

Both OS X and iOS include a fully-integrated virtual memory system that you cannot turn off; it is always on. Both systems also provide up to 4 gigabytes of addressable space per 32-bit process. In addition, OS X provides approximately 18 exabytes of addressable space for 64-bit processes. Even for computers that have 4 or more gigabytes of RAM available, the system rarely dedicates this much RAM to a single process.

To give processes access to their entire 4 gigabyte or 18 exabyte address space, OS X uses the hard disk to hold data that is not currently in use. As memory gets full, sections of memory that are not being used are written to disk to make room for data that is needed now. The portion of the disk that stores the unused data is known as the backing store because it provides the backup storage for main memory.




Each process has a logical (virtual) address space created for it by the virtual memory manager. This space is chopped up into 4KB pages. This logical address space is always available to the process.

What the system can do is run out of physical RAM. That is when these pages can get swapped out of memory onto disk and vice versa.
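The page arithmetic described above can be made concrete with a small portable C sketch. `pages_needed` is a hypothetical helper (not an OS API) that rounds a byte count up to whole pages, the same calculation a virtual memory manager performs when mapping a logical address range:

```c
#include <stddef.h>

/* Hypothetical helper (not an OS API): how many fixed-size pages are
   needed to hold `bytes` of data. A real program would obtain the page
   size from the OS, e.g. sysconf(_SC_PAGESIZE) on POSIX. */
size_t pages_needed(size_t bytes, size_t page_size) {
    if (bytes == 0)
        return 0;
    /* round up: a partially used page still occupies a full page */
    return (bytes + page_size - 1) / page_size;
}
```

With 4KB pages, a 10,000-byte allocation spans three pages; the unused tail of the last page is still reserved in the process's logical address space.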

So this sentence of yours:

The OS will swap when it runs out of virtual memory addresses.

Is completely incorrect.

S-
Nope, that is not how it works, as you can clearly read in the documentation. When the system is out of physical memory it will shift memory around, which will eventually lead to swapping. This is explained a couple of times (!) in the documentation.

sidewinder
Sep 21, 2012, 02:04 PM
dyn,

Admit you are wrong!!! Here is what you said one more time:

"The OS will swap when it runs out of virtual memory addresses."

Please note that you said the OS runs out of virtual memory addresses, not a process. Also note you said nothing about physical memory.

The OS does not "run out" of virtual address space. The OS can assign virtual address space to as many processes as can be run.

Each process is limited to the size of virtual address space available to it. Each 32-bit process has 4 gigabytes of virtual address space. 64-bit processes have ~18 exabytes of virtual address space. It is logical address space but it has a finite size. If a process were to use up all its virtual address space, that's it. There would be no more to assign to that process.

Let's take a 32-bit process. It is assigned 4GB of virtual address space. No matter what, a 32-bit process cannot access more than 4GB of address space. If a 32-bit process uses up its 4GB of virtual address space, whether the pages are in real memory or paged out, it cannot get any more address space.

The OS swaps (pages) when physical memory limits come into play. Not when a process utilizes the entire virtual address space assigned to it.

S-
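The distinction drawn above between exhausting a process's address space and exhausting physical memory can be sketched in C. This is a toy model under invented names (`addr_space`, `reserve`), not how any kernel is implemented: reservations succeed until the fixed address-space limit is hit, then fail regardless of how much physical RAM or swap exists.

```c
#include <stdint.h>

/* Toy model of one process's virtual address space. The limit is a
   property of the architecture (4 GiB for a 32-bit process), not of
   installed RAM. */
typedef struct {
    uint64_t limit;    /* total addressable bytes */
    uint64_t reserved; /* bytes already handed out */
} addr_space;

/* Returns 1 on success, 0 when the address space is exhausted. */
int reserve(addr_space *as, uint64_t bytes) {
    if (as->reserved + bytes > as->limit)
        return 0; /* out of address space: no amount of RAM helps */
    as->reserved += bytes;
    return 1;
}
```

A 32-bit process modeled this way can reserve 3 GiB and then 1 GiB, but a further request fails even on a machine with 16 GB of physical memory.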

Jenni8
Sep 21, 2012, 03:10 PM
There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OSX gets into these "laggy" situations in the first place. One is the move to LLVM, where the compiler is optimized not for performance but for supporting multiple languages. It even has "virtual machine" written inside its name! It is a byproduct of competition with .NET, which was a competitor to Java. These technologies are BAD for performance. They force slow-as-hell garbage collection and automatic reference counting on programmers to save programming noobs from leaking memory. To do that, you just create objects without worrying about when to release them (free up the memory used by objects). The garbage collector runs when you eventually take up all the main memory. What it does is GO THROUGH ALL of memory and release the pieces that are not held by any active program. It is a slow process, and it takes a lot of CPU time. No AAA game can survive it, so no AAA games will use Java or .NET languages (like C#). Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING. The compiler will insert code "thinking" it is the right time to create and release memory. To be safe, it will usually only release when the program exits.
That is what most Java programs do anyway: grab all the memory and don't run the garbage collector until you have used up all virtual memory. It never runs. When it does, the game crawls and people notice the bad performance of Java, so the garbage collector essentially does nothing until ALL memory is used up.

Now that you have reached here, how does this relate to Mountain Lion? OSX uses Objective-C, which has object-oriented stuff patched onto regular C. Instead of using method calls as C++ does, Objective-C uses message passing. SLOW! It needs to parse the message to find out what object to call, whereas C++ just has pointers to the actual object, no parsing. And the biggest bummer? Garbage collection is the default in Objective-C. You don't deallocate objects; you just eat up all the memory and the garbage collector runs (the same Java/.NET technology that is bad for performance). To save themselves, Apple is trying to get away from garbage collectors by using ARC (Automatic Reference Counting) in Mountain Lion, where garbage collection is deprecated. (In Lion, garbage collection is not deprecated.) The situation gets bad here. Now that garbage collection is not the default, the model has changed. Programmers now need to explicitly tell the OS when it is OK to release memory under ARC, or Mountain Lion will assume they want to keep using it. If your program doesn't tell OSX that memory is OK to be released, it is NOT going to be released. So all programs that were written for Lion and earlier (using ARC only) will keep leaking memory in Mountain Lion, because they don't even have code to tell Mountain Lion it is OK to free memory. BINGO! Why is the disk swapping so much? Why am I out of memory?
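The dispatch contrast being argued about here can be illustrated in plain C. This is only a crude stand-in: real Objective-C dispatch (`objc_msgSend`) uses cached selector lookups, not string comparison, so this sketch exaggerates the cost. The point is simply that one path is an indirect lookup at runtime and the other a direct call; all names below are invented for illustration.

```c
#include <stddef.h>
#include <string.h>

/* Two ordinary functions: a direct call to these is just a jump. */
int add_one(int x)   { return x + 1; }
int times_two(int x) { return x * 2; }

/* Name-based dynamic dispatch: a crude stand-in for selector lookup. */
typedef struct {
    const char *name;
    int (*fn)(int);
} method;

static method table[] = {
    { "addOne",   add_one },
    { "timesTwo", times_two },
};

/* Look the "selector" up by name at runtime, then call through it. */
int send_message(const char *name, int arg) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].fn(arg);
    return -1; /* "does not respond to selector" */
}
```

The lookup work in `send_message` happens on every call, whereas `add_one(3)` compiles to a direct jump; whether that difference matters in practice is exactly what the thread is disputing.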

Now this is not the main problem. The main problem is that garbage collection and ARC are still supported, and the fact that OSX still uses Objective-C, which is stuck with such slow technology from a bygone era. Message passing is too slow. Garbage collection is slow. ARC is slow. Only C++ and C with manual allocation and release are fast. Do you know why garbage collection is not supported in iOS? Yep, bad for mobile battery life, with the CPU draining all the juice and low main memory. Instead of fixing the problem (bad technology), they are trying to patch the technology. The move from garbage collection to ARC shifts more of the responsibility for memory management back to programmers, back to the original way programmers did it in the first place (manual management using C/C++). But the problem is that Objective-C is stuck with this ARC that is supposed to be an improvement, but is still not as fast or as good at memory management as plain programmer-created/released memory. The only way is to go to the lower level and use C/C++, where you can actually touch the memory and malloc/release the memory yourself. Since LLVM is supposed to support C and C++, there is still hope if you start moving chunks of the operating system to C/C++ and remove all the Objective-C code that relies on ARC or garbage collection, which keeps around a virtual machine handling the memory management.

.NET and these interpreted technologies are so bad for business that XNA is being dumped and Windows Phone 8 no longer requires it. You can now do C++ directly on top of Direct3D, instead of that SLOW .NET C# layer that destroyed their third-party gaming business on the Xbox 360. Yes, it is that bad. Apple will try to cover it up, but eventually the technology will show itself in ugly places. All these complaints about performance are a byproduct of band-aid fixes.

To get this straight, you are saying that the operating system isn't automatically dumping its inactive memory as it used to, unless the program is closed or is set up to dump its own memory when needed. That's why a lag occurs: the inactive memory doesn't "appear" available because the software isn't compatible.

Now here is a question: how do we get around this without using some app to free all the inactive memory? That isn't really what I like to do, because it will make the rest of the system run slow when it comes to Launchpad and such. I think I've fixed some of my memory issues until I'm ready to get more RAM. But even if I get more RAM, will I still have issues running Photoshop and Lightroom together at 12GB?

VinegarTasters
Sep 23, 2012, 07:48 AM
To get this straight, you are saying that the operating system isn't automatically dumping its inactive memory as it used to, unless the program is closed or is set up to dump its own memory when needed. That's why a lag occurs: the inactive memory doesn't "appear" available because the software isn't compatible.

Now here is a question: how do we get around this without using some app to free all the inactive memory? That isn't really what I like to do, because it will make the rest of the system run slow when it comes to Launchpad and such. I think I've fixed some of my memory issues until I'm ready to get more RAM. But even if I get more RAM, will I still have issues running Photoshop and Lightroom together at 12GB?


They are trying to move iOS methodology into Mountain Lion, I think. In iOS, when you close programs their state is saved to flash. So in Mountain Lion, every program will not actually clear, but stay stuck in memory/virtual memory. So soon everything will fill up to the brim.

Well, if you want, you can always turn off virtual memory (but the system may crash when you run out of memory). That way you interrupt the "iOS" behavior. Or you can get a program to act as some sort of garbage collector, grabbing memory until all the other programs get dumped from virtual memory, and then freeing itself.

Ledgem
Sep 23, 2012, 12:05 PM
They are trying to move iOS methodology into Mountain Lion, I think. In iOS, when you close programs their state is saved to flash. So in Mountain Lion, every program will not actually clear, but stay stuck in memory/virtual memory. So soon everything will fill up to the brim.
If you monitor your memory usage regularly, you'll notice that the behavior is not as you are describing it.

VinegarTasters
Sep 23, 2012, 02:23 PM
If you monitor your memory usage regularly, you'll notice that the behavior is not as you are describing it.

Err... The keyword is "trying" to be like iOS. They get stuck with large caches, though. Here is another way of restating the "problem":

http://www.bechte.de/tag/inactive-memory/

Ledgem
Sep 23, 2012, 03:44 PM
Err... The keyword is "trying" to be like iOS. They get stuck with large caches, though. Here is another way of restating the "problem":

http://www.bechte.de/tag/inactive-memory/
There's a lot of hysteria about Apple porting features between iOS and OS X, and I'm a bit worried that's where you're going with this. There is no reason to compare iOS and OS X in terms of memory management, because the hardware each is designed for differs quite significantly, as do the multitasking expectations. iOS seems content to keep programs loaded in memory until the memory fills up, given the expectation that you're only using one at a time; as of OS X 10.7, the default behavior was that programs would automatically be closed (taken out of memory) if the system detected that they weren't being used. I'm not sure if it required that all documents be closed as well; I never had that behavior enabled. Suffice it to say, when a program is closed, it's closed. Some remnants will remain as inactive memory, but otherwise much is recycled back to "free" memory.

petsounds
Sep 23, 2012, 05:07 PM
Well, I updated from Snow Leopard to Mountain Lion a few weeks ago, and so far ML seems more competent at memory management than SL. Not a lot, but I haven't had to do a purge from the command line like I did often under SL. Though I find it strange that I'm looking at Activity Monitor right now and see 170MB of Swap used but no Page Outs. Usually those went hand-in-hand.

I think for me I suffer from RAM problems because OS X seems to hold on to app memory a lot longer than it should. This is fine if you only run two or three applications, but I use a wide range of memory-intensive programs each day -- Photoshop, Illustrator, Xcode, Logic Pro, et al. As I use more applications, it fills up the Used RAM until it teeters on the edge of using all my physical RAM (10 GB), and often this eventually results in Page Outs and Swap being used.

This is all compounded by applications that either have memory leaks or just never give up RAM. Java apps are terrible at this. PS3 Media Server (which I believe is Java-based) can quickly burn through my physical RAM streaming a couple of 720p movies, and OS X will never reclaim it until I run a manual purge.

z06gal
Sep 23, 2012, 05:37 PM
Well, I have read this entire thread and everything is as clear as mud :eek:

VinegarTasters
Sep 23, 2012, 08:45 PM
Well, I have read this entire thread and everything is as clear as mud :eek:

Don't worry, half of the posters are on a paid agenda to prettify the problems when you reveal negative things about Apple products. Just focus on the people who describe the problems and ignore those who just seem to want to explain the problem away. That way you will know the truth.

AlexJaye
Sep 23, 2012, 11:28 PM
As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

You're being difficult. The other poster is correct. OSX sucks at memory management. I've seen page outs and beach balls on my Mac with 0 free RAM but a gig of inactive RAM available.

Mr. Retrofire
Sep 24, 2012, 12:07 AM
There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OSX gets into these "laggy" situations in the first place. One is the move to LLVM, where the compiler is optimized not for performance but for supporting multiple languages. It even has "virtual machine" written inside its name!
-1

You are misinformed. LLVM does not run on the target computer (i.e. your Mac). The LLVM project is a code translation, code generation, and code optimization project. The goal of this project is one compiler and one optimizer for all programming languages. LLVM generates highly optimized code, which does NOT run inside a VM. Apple used LLVM for all kernel extensions and system frameworks in OS X 10.7 and 10.8, and this code does NOT run inside a VM.

Benchmark GCC v4.8 vs. Clang (part of LLVM):
http://www.phoronix.com/scan.php?page=article&item=gcc48_llvm32_svn1&num=3
(see how fast LLVM is)

How the LLVM Compiler Infrastructure Works
http://www.informit.com/articles/article.aspx?p=1215438

GGJstudios
Sep 24, 2012, 01:14 AM
You're being difficult. The other poster is correct. OSX sucks at memory management. I've seen page outs and beach balls on my Mac with 0 free RAM but a gig of inactive RAM available.
More claims with zero proof.
Don't worry, half of the posters are on a paid agenda
No one in this forum is paid. You are misinformed about several things.

Kashsystems
Sep 24, 2012, 01:17 AM
Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING.

I did both quotes because the poster is very misinformed.

ARC is not garbage collection nor has it ever been garbage collection.

All ARC does is do the reference counting for you when the program is compiled, so you do not have to manually figure it out yourself. It calculates and inserts code for retain, release, and autorelease. It does not do this during runtime, nor is it slower.
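To make this concrete, here is what compiler-inserted retain/release calls boil down to, sketched as manual reference counting in C. The `obj_*` names are hypothetical (real Objective-C objects carry their count differently); the point is that each operation is a cheap counter update at the call site, not a heap-wide scan.

```c
#include <stdlib.h>

/* Minimal reference-counted object: the count tracks how many owners
   currently hold a reference. */
typedef struct {
    int refcount;
    /* ...payload fields would go here... */
} object;

object *obj_create(void) {
    object *o = malloc(sizeof *o);
    if (o)
        o->refcount = 1;      /* the creator holds the first reference */
    return o;
}

void obj_retain(object *o) {
    o->refcount++;            /* a new owner appears */
}

/* Returns 1 if this release freed the object, 0 otherwise. */
int obj_release(object *o) {
    if (--o->refcount == 0) { /* last owner gone: free immediately */
        free(o);
        return 1;
    }
    return 0;
}
```

Under ARC the compiler emits the retain/release calls for you at compile time; the deallocation still happens deterministically at the moment the count reaches zero, with no background collector pass.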


Don't worry, half of the posters are on a paid agenda to prettify the problems when you reveal negative things about Apple products. Just focus on the people who describe the problems and ignore those who just seem to want to explain the problem away. That way you will know the truth.

So far what you have revealed is a lack of understanding of how this really works.

.NET and these interpreted technologies are so bad for business that XNA is being dumped and Windows Phone 8 no longer requires it. You can now do C++ directly on top of Direct3D, instead of that SLOW .NET C# layer that destroyed their third-party gaming business on the Xbox 360.

Once again, misinformed. Let me quote the head of windows graphics dev for windows phone and XNA head developer Shawn Hargreaves.

http://xboxforums.create.msdn.com/forums/p/91616/549344.aspx#549344
It is correct that XNA is not supported for developing the new style Metro applications in Windows 8.

But XNA remains fully supported and recommended for developing on Xbox and Windows Phone, not to mention for creating classic Windows applications (which run on XP, Vista, Win7, and also Win8 in classic mode).


So basically XNA is not dead, and if you use Windows 8 classic mode it is still fully supported.

Also, I do not see how the Xbox 360 killed their third-party game business when 99.99 percent of its games are third party.

Your information just seems really off, and I cannot imagine where you got all this misinformation from.

Ledgem
Sep 24, 2012, 04:35 AM
Your information just seems really off, and I cannot imagine where you got all this misinformation from.
Maybe he's a paid poster with an agenda :D

VinegarTasters
Sep 24, 2012, 04:37 AM
-1

You are misinformed. The LLVM does not run on the target computer (i.e. your Mac). The LLVM-project is a code translation, code generation and code optimization project. The goal of this project is one compiler and one optimizer for all programming languages. LLVM generates highly optimized code, which does NOT run inside a VM. Apple used LLVM for all kernel extensions and system frameworks in OS X 10.7 and 10.8, and this code does NOT run inside a VM.


LLVM does use a VM on Mac.

Here look at this:

http://webcache.googleusercontent.com/search?q=cache:RkqvHjcMfQIJ:lists.cs.uiuc.edu/pipermail/llvmdev/2006-August/006492.html+&cd=2&hl=en&ct=clnk&gl=us


[QUOTE]

[LLVMdev] A cool use of LLVM at Apple: the OpenGL stack

Chris Lattner sabre at nondot.org
Tue Aug 15 15:52:19 CDT 2006
Previous message: [LLVMdev] OOPLSA 2006 Call for Participation
Next message: [LLVMdev] Re: A cool use of LLVM at Apple: the OpenGL stack
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
[I just got official okay to mention this in public. This was previously
announced at Apple's WWDC conference last week.]

For those who are interested, Apple announced that they are using the LLVM
optimizer and JIT within their Mac OS 10.5 'Leopard' OpenGL stack (which
was distributed in beta form to WWDC attendees).

LLVM is used in two different ways, at runtime:

1. Runtime code specialization within the fixed-function vertex-processing
pipeline. Basically, the OpenGL pipeline has many parameters (is fog
enabled? do vertices have texture info? etc) which rarely change:
executing the fully branchy code swamps the branch predictors and
performs poorly. To solve this, the code is precompiled to LLVM .bc
form, from which specializations of the code are made, optimized,
and JIT compiled as they are needed at runtime.

2. OpenGL vertex shaders are small programs written using a family of
programming langauges with highly domain-specific features (e.g. dot
product, texture lookup, etc). At runtime, the OpenGL stack translates
vertex programs into LLVM form, runs LLVM optimizer passes and then JIT
compiles the code.

Both of these approaches make heavy use of manually vectorized code using
SSE/Altivec intrinsics, and they use the LLVM x86-32/x86-64/ppc/ppc64
targets. LLVM replaces existing special purpose JIT compilers built by
the OpenGL team.

LLVM is currently used when hardware support is disabled or when the
current hardware does not support a feature requested by the user app.
This happens most often on low-end graphics chips (e.g. integrated
graphics), but can happen even with the high-end graphics when advanced
capabilities are used.

Like any good compiler, the only impact that LLVM has on the OpenGL stack
is better performance (there are no user-visible knobs). However, if you
sample a program using shark, you will occasionally see LLVM methods in
the stack traces. :)

[ENDQUOTE]


The technical manual states:


"Code that is available in LLVM IR can have a wide variety of tools applied to it. For example, you can run optimizations on it (as we did above), you can dump it out in textual or binary forms, you can compile the code to an assembly file (.s) for some target, or you can JIT compile it."

Therefore, it is either binary OR JIT compiled. Either you create a binary, or you JIT compile it. Let's continue...


"In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:

...
let main () =
...
(* Create the JIT. *)
let the_execution_engine = ExecutionEngine.create Codegen.the_module in
...
This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter."


ExecutionEngine is either the JIT or the interpreter (the exact same thing as in the Java and C# world). We are now inside a virtual machine, either just-in-time compiled or interpreted on the fly.

Virtual machines are memory hogs due to supporting garbage collection and automatic reference counting, in addition to implementing a whole CPU virtually. In addition, the LLVM backend IS a virtual machine. It needs to be in order to do JIT and interpretation of the LLVM IR. So anytime that LLVM backend runs, IT IS IN VIRTUAL MACHINE mode.
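For readers unfamiliar with the terms being argued over: an interpreter decodes and dispatches each IR instruction at runtime, which is exactly the per-instruction overhead a JIT removes by emitting native code once. A minimal stack-machine sketch in C (invented opcodes, not real LLVM IR):

```c
/* Invented bytecode for a tiny stack machine, to illustrate what
   "interpreting an IR" means in general. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Decode and execute one opcode per loop iteration. A JIT would
   instead translate the whole program to native code up front. */
int interpret(const int *code) {
    int stack[64];
    int sp = 0; /* stack pointer */
    for (int pc = 0;; pc++) {
        switch (code[pc]) {
        case OP_PUSH: stack[sp++] = code[++pc]; break; /* operand follows */
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];            /* result on top */
        }
    }
}
```

Evaluating (2 + 3) * 4 this way performs a decode-and-branch for every opcode; that dispatch loop is the runtime cost the posts above attribute to "VM mode".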

In addition, LLVM takes about 5 times more main memory than GCC:
http://webcache.googleusercontent.com/search?q=cache:UF-ULztaXsgJ:clang-developers.42468.n3.nabble.com/Memory-and-time-consumption-for-larger-files-td683717.html+&cd=3&hl=en&ct=clnk

See that? 5 TIMES the required memory. The kernel pulls drivers into itself, and if a driver needs to run inside a virtual machine, it is going to eat up memory fast. If something takes 1GB to compile in GCC but now takes 7GB to compile with LLVM, how is a Mac that only has 4GB of memory going to come up with that memory?

No amount of memory management will work if there is no memory to manage. Why? Because it is all EATEN UP by the compiler! The system is going to go to the hard drive to offload some stuff so it has some real main memory to work with.

Now the main point in this post is about the baggage Clang left in the LLVM IR. It is more abstracted than an efficient C compiler's intermediate state. Trying to support all that garbage collection, reference counting, etc. removes you so far from the CPU instructions that by the time you get to machine code generation, it ends up NOT faster, which is what the benchmark shows. 400% IS NOT a small problem. It shows up in games, and game performance is a VERY IMPORTANT criterion when people buy computers (especially ones running Windows or OSX).


Some posts seem to disappear (perhaps because they are negative?), so you need to use the Google cache. For example, here is the GOOGLE CACHE of the post showing LLVM needing 8GB compared to 2.6GB for GCC. People like to hide these things, but I feel they are better exposed so you know the tradeoffs and negative aspects, instead of being force-fed only what they want you to see.

Mar 29, 2010; 11:37pm Memory and time consumption for larger files

Hello,
recently I encountered one rather unpleasant issue that
I would like to share with you.
Me and few colleagues are working on a project where we want to
create a development environment for application-specific
processors (web pages, currently not much up to date, are here:
http://merlin.fit.vutbr.cz/Lissom/).
One part of this project is a compiler generator.
To generate instruction selection patterns from our
architecture description language ISAC, one function that describes
semantics of each
instruction is generated. File that contains these functions is then
compiled and contents of functions are optimized, so I get something quite
close
to instruction selection patterns.
For some architectures like ARM, the count of generated functions
is huge (e.g. 50000) and the resulting C file is huge too.

The problem here is that the compilation to LLVM IR using frontend takes
enormous amount of time and memory.

---------------------------------------------------------------------------------

Experiments are shown for C file or size 12 MB, functions have approx. 30
lines each,
preprocessed file can be downloaded here:
http://lissom.aps-brno.cz/tmp/clang-large-source.c.zip

Tests were run on Fedora 11 64-bit, Pentium Quad Core, 2.83GHz, 4GB of
memory.
Latest llvm and clang from llvm, rev. 99810, configured and compiled with
--enable-optimized (uses -O2).
clang version 1.0
(https://llvm.org/svn/llvm-project/cfe/branches/release_26 exported)

Using GCC, gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2), time is only
illustrative,
because also compilation into object file is included:
The top memory is just approximation observed from output of the top
command.


1) g++ -DLISSOM_SEM -O0 -c -o tst.o clang-large-source.c
(time is only illustrative, because object code file is generated)
time: 12m17.064s
top memory approx: 2.6 GB


2) llvm-g++ -DLISSOM_SEM -O0 -c --emit-llvm -o tst.bc clang-large-source.c
time: 6m28.518s
top memory approx: 8 GB

3a) clang -DLISSOM_SEM -DCLANG -c -O0 -o tst.bc clang-large-source.c
time: 11m15.332s
top memory approx 8 GB


Resulting file tst.bc with debug info has 250 MB.
Without debug info (-g0), compilation seems to be even slower, but it was
maybe because
some swapping collision occurred, I was not patient enough to let it
finish,
resulting file for llvm-g++ had 181 MB.

Note also that on a 32-bit machine, the compilation would fail because
of lack
of memory space.


If I run then the opt -O3 on bc file generated with debug info,
it also consumes 7GB of memory and finishes in 9 minutes.

In my opinion, 12 MB of source code is not so much and the compilation
could be almost
immediate (or at least to be less than one minute), because no
optimizations are made.
Especially, what i don't understand, is the big difference between code
size and
the needed memory. After preprocessing, the C file has still 12MB, so
roughly, each
byte from source file needs 660 bytes in memory, 20 bytes in resulting
bytecode
and 100 bytes in disassembled bytecode.

-------------------------------------------------------------------------------------

Maybe there could be some batch mode that would parse the file by
smaller pieces, so the top memory usage would be lower.

If I divide the file into smaller files, compilation takes much less
time.
The question is, whether it is necessary, for example when -O0
is selected, to keep the whole program representation in memory.

time clang -DLISSOM_SEM -DCLANG -c -O0 -o tst.bc cg_instrsem_incl.c

for 2,5 MB file:
g++ (with obj. code generation): 1m 6s
llvm-g++: 7 s
clang: 2m2.501s

for 1 MB file:

g++ (with obj. code generation): 23 secs
llvm-g++: 2.5 s
clang time: 42 secs

Here I do not much understand, why is clang so much slower than llvm-g++.
I checked, that it was configured with --enable-optimized more than once
(does this affect also the clang?).
Testing files can be found here:
http://lissom.aps-brno.cz/tmp/clang_test.zip

------------------------------------------------------------------------------

Probably should this text go into bugzilla, but I thought it would be
better
that more people would see it and maybe would be interested in the reason,
why
clang behaves this way.
Anyway, clang is a great piece of software and I am very looking forward
to
see it replace gcc frontend with its cryptic error messages.
However as the abstraction level of program description is
moving higher and higher, I am afraid it will not be uncommon to generate
such
huge files from other higher-level languages that will use C as some kind
of
universal assembler (as currently is done with Matlab or
some graphical languages).
Such high memory and time requirements could pose problem for using
clang as
compiler for generated C code.


Or, do you have any ideas, when I would like to use clang, how to
make the compilation faster? (and of course, I already ordered more memory
for my computer:).
Also, if anyone would be more interested, what do I need to do with
these files i need
to compile, you can write me an email.

Have a nice day
Adam H.

_______________________________________________
cfe-dev mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev

VinegarTasters
Sep 24, 2012, 04:48 AM
I did both quotes because the poster is very misinformed.

ARC is not garbage collection nor has it ever been garbage collection.

All ARC does is do the reference counting for you when the program is compiled, so you do not have to manually figure it out yourself. It calculates and inserts code for retain, release, and autorelease. It does not do this during runtime, nor is it slower.




So far what you have revealed is a lack of understanding of how this really works.



Once again, misinformed. Let me quote the head of windows graphics dev for windows phone and XNA head developer Shawn Hargreaves.

http://xboxforums.create.msdn.com/forums/p/91616/549344.aspx#549344


So basically XNA is not dead, and if you use Windows 8 classic mode it is still fully supported.

Also, I do not see how the Xbox 360 killed their third-party game business when 99.99 percent of its games are third party.

Your information just seems really off, and I cannot imagine where you got all this misinformation from.

Instead of taking my words out of context, you can try quoting the whole thing. Here you go:

"Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING. The compiler will insert code "thinking" it is the right time to create and release memory. To be safe, it will only release usually when the program exits.
That is what most Java programs do anyways, grab all the memory, and don't run garbage collector until you have used up all virtual memory. It never runs. When it does the game crawls, people notice the bad performance of Java, so the garbage collector essentially does nothing until ALL memory is used up."

How is that misinformed? You pulled one line out of my statement and tried to criticize it by repeating what I said.

About XNA: the head guy just repeated what I said. The "NEW" Metro-style interface doesn't use the XNA library (C# code); it uses C++. Sure, you can fall back on the "classic" old deprecated SLOW XNA technology (C#), but that is not the future.

Yeah OK. There are also lurkers on these forums from Microsoft and LLVM.

Mr. Retrofire
Sep 24, 2012, 06:20 AM
LLVM does use a VM on Mac.

[LLVMdev] A cool use of LLVM at Apple: the OpenGL stack
This is a different version (the LLVM JIT, or LLVM Just-In-Time compiler (http://en.wikipedia.org/wiki/Just-in-time_compilation#Overview)), which is not comparable with the LLVM/Clang that Apple uses for applications, kernel extensions, system frameworks and so on. And I doubt that the memory management of OS X has anything to do with the LLVM JIT/OpenGL (see topic).

Puevlo
Sep 24, 2012, 07:37 AM
Sure it does. Inactive memory is not directly available to applications. OS X will use free memory for disk cache, which then becomes inactive memory. At its discretion (e.g., when free memory is running low), it will release inactive memory back to the free memory pool by doing things like flushing the disk cache.

Unfortunately, Lion and (to a lesser extent) Mountain Lion do not release inactive memory very well. Murphy's Law is that as inactive memory rises, free memory will decline. You just can't have both.

When you run out of free memory, OS X will start swapping memory (page outs) to disk (virtual memory). And when you start paging out, you will start to get things like beach balls (less severe on flash storage/SSD). The only workaround at this point is to reboot your Mac.

Here's before:
Image (http://f.cl.ly/items/2X151N2m3c3V0O2q0A3I/before.png)
Here's after:
Image (http://f.cl.ly/items/1y0P460m0q0K242P2u1b/after.png)
In theory, this after state shouldn't happen.

Nice photoshop but no. Page outs cannot occur when there is inactive memory.

nutmac
Sep 24, 2012, 11:59 AM
Nice photoshop but no. Page outs cannot occur when there is inactive memory.

It's not Photoshopped.

dukebound85
Sep 24, 2012, 12:05 PM
Nice photoshop but no. Page outs cannot occur when there is inactive memory.

lol oooook. Both of us very long time members just get a kick out of trolling I guess

GGJstudios
Sep 24, 2012, 12:15 PM
lol oooook. Both of us very long time members just get a kick out of trolling I guess
They're not photoshopped and you're definitely not trolling, but Puevlo has repeatedly posted nonsense in various threads. I would ignore anything posted by them.

Jenni8
Sep 24, 2012, 11:15 PM
They are trying to move iOS methodology into Mountain Lion, I think. In iOS, when you close programs, their state is saved to flash. So in Mountain Lion, every program doesn't actually clear but gets stuck in memory/virtual memory. So soon everything will fill up to the brim.

Well, if you want, you can always turn off virtual memory (but it may crash when you run out of memory). That way you interrupt the "iOS" behavior. Or you can get a program to act as some sort of garbage collector, grabbing memory until all the other programs get dumped from virtual memory, and then freeing itself.

So what kind of program would work like that without resetting the icons on Launchpad or other OS X areas? I did find an app to "free memory," as there are apparently many, but I don't like how it resets EVERYTHING.

VinegarTasters
Sep 25, 2012, 12:13 AM
So what kind of program would work like that without resetting the icons on Launchpad or other OS X areas? I did find an app to "free memory," as there are apparently many, but I don't like how it resets EVERYTHING.

That is a good question. The new Mountain Lion programming model forces you to tell the OS what to do. If you don't tell it, it doesn't know what to do and will treat your program like iOS does. All the old programs don't have this code to tell Mountain Lion, so you can wait for them to update to a newer version that behaves better under the new model. You probably need a program with root privilege that can undo what Mountain Lion is doing... trying to be like an iOS device.

iOS devices have about 64GB of flash at most (most people get the 16GB model). They also use flash (like an SSD), which is many times faster than regular hard drives. So what happens is that you get tiny programs constrained to 256MB to 512MB of RAM (about a quarter to half of 1GB) that are easily (and quickly) dumped to flash and loaded back.

Now they are trying to use this methodology on OS X, which usually has 8GB or 16GB of RAM and runs on hard drives (many, many times slower than flash because they are mechanical with a rotating disk) that are easily 500GB in size. I am hoping they took performance into consideration. Or will it be like the NeXT computer forcing everyone to use writable optical drives because it was the next big thing, without considering whether performance was critical. Or Corel trying to move WordPerfect into Java.

So you have programs like iMovie, which can easily take up over 8GB of RAM, that are "saved state" into memory, taking up either valuable active memory or inactive memory depending on how you closed the program. Soon, no matter how much memory you have, it doesn't matter. All programs that ran before probably have these "stubs" hanging around (like iOS, where you double-click and see a list of previously run applications) in either active or inactive memory (taking up virtual or real memory). So the OS is constantly swapping back and forth trying to deal with these memory-guzzler apps (unlike iOS apps, which have at most half a gigabyte).

So what does this mean? Probably in the future MacBooks will run on ARM chips and a version of iOS, using iOS methodology. Probably because they are the breadwinners now and dictate OS X's direction... so in the future they can be merged.

So to answer your question, you probably need a new program made for Mountain Lion that has root privilege, understands the new methodology, and can wipe the inactive memory out.

theSeb
Sep 25, 2012, 05:25 AM
A lot of facepalm in this thread. Vinegar taster, what paid agenda are you part of? I hope that it's you actually upvoting your own posts and that it's not someone else agreeing with you.

iOS methodology? Thanks for the laughs.


iOS devices has about 64GB Flash at most (most people get the 16GB). They are also using Flash (like ssd), which is many times faster than regular harddrives.

No, it's actually not. The flash memory in smart phones is nothing like a SSD drive or flash memory in a computer. It's quite slow actually, even compared to a good, old-fashioned mechanical hard drive.

So what does this mean? Probably in the future MacBooks will run on ARM chips, and a version of iOS, using iOS methodology. Probably because they are the breadwinners now, and dictate how the OSX's direction run... so in the future they can be merged.

I don't think you understand the relationship between iOS and OS X at all.

LLVM does use a VM on Mac.


No. LLVM can be used to create a virtual machine, but it has very little to do with the traditional virtual machines you've mentioned, like Java and .NET. LLVM/Clang-compiled code does not run in a virtual machine on OS X or iOS.

some stuff about ARC and Garbage Collection

ARC is not garbage collection like in Java. Garbage collection in Java runs periodically, during runtime, which reduces end-user performance. ARC in Cocoa is done during compilation, so it does not cost any runtime cycles; it is a static type of memory management, since the compiler basically checks through the code and inserts retain and release statements where necessary. Apps with dynamic GC normally use more memory and slow down when GC is invoked. The disadvantage of static GC, like ARC, is that it cannot catch retain cycles (A retains B and B retains A, hence why we use strong and weak pointers in Obj-C for parent-child and child-parent relationships).

And finally to address your rant about Objective C performance.

Objective C is a compiled language like C/C++; it does not run in a virtual machine and isn't interpreted, so as far as programming languages go, it's very much on the fast side of the room. Objective C is a clean superset of C. The big difference is the late method binding, which is similar to what C++ does. Like C++, it uses function pointer tables that are generated by the compiler and need to be read during runtime. Performance takes a slight hit in comparison to C++ here due to the more dynamic nature of Objective C's implementation with the use of the id type.

The general agreement though, if you do some reading, is that this difference is theoretical and not really measurable in real world performance. C is fastest, then C++ and very close behind is Objective C.

VinegarTasters
Sep 25, 2012, 06:07 AM
A lot of facepalm in this thread. Vinegar taster, what paid agenda are you part of? I hope that it's you actually upvoting your own posts and not someone else.

iOS methodology? Thanks for the laughs.



No, it's actually not. The flash memory in smart phones is nothing like a SSD drive or flash memory in a computer. It's quite slow actually, even compared to a good, old-fashioned mechanical hard drive.

I don't think you understand the relationship between iOS and OS X at all.

You can upvote your own posts? Actually I rarely pay attention to those numbers until you brought them up. I got numbed by Google + button and Facebook LIKE numbers. :)

The flash thing: you should get up to speed. MLC, TLC, SLC, they are evolving at super-duper speed, faster than hard drives can keep up with. Even MicroSD has variations for cameras that can outperform hard drives; you just gotta pick the type. Cheap is slower; higher price is faster.

My paid agenda is: I paid for a device, I should get the most value out of it. Preferably fast performance, and bug free.

On the OSX versus iOS. You don't notice the merging of the two? App Store, Launch Pad, Notes, Messages, Facetime, bla bla bla. In fact there was a prototype of a MacBook running on ARM. Come on... get with the times.

As for ARC, garbage collection, and Objective-C: you label only Cocoa as static-type memory management. If you look through the thread, I point out there are LOTS of places in OS X that use dynamic types. The JIT of OpenGL, for example. I provide proof. You can look it up. It's just a few posts up from this one. But I find it strange that every time I put something up, it gets deleted. So you need to use Google Cache. Also, there are LOTS of places in OS X that use "dynamic GC" and garbage collection. In fact, ALL programs created before Mountain Lion was released (just a few months ago!) use garbage collection if they were programmed in Objective-C (default compile parameters). Being deprecated does not mean it is gone. You can still program using GC if you want today. ARC, as I mentioned, is just a band-aid fix. They should just move to C or C++. Message passing is SLOW! Most games lose 15 to 50 percent of their framerate because of message passing. That is why games are faster on Windows than OS X. They don't have Objective-C underneath... They use C++.

theSeb
Sep 25, 2012, 06:19 AM
You can upvote your own posts? Actually I rarely pay attention to those numbers until you brought them up. I got numbed by Google + button and Facebook LIKE numbers. :)

The flash thing, you should get up to speed. MLC, TLC, SLC, they are evolving at super duper speed, faster than harddrives can keep up with. Even MicroSD has variations for cameras that can outperform harddrives, you just gotta pick the type. Cheap, slower, higher price faster.

I keep up with the latest developments just fine. The latest smart phone memory just announced last month by Samsung is said to be 4 times faster than current phone flash memory.

Today even budget personal computer (PC) solid state drives (SSDs) offer read speeds of up to 230 MB/s and write speeds of up to 190 MB/s. Top-of-the-line models can record speeds of up to 492MB/sec for sequential reads and 518MB/sec for sequential writes.

But the NAND memory used in tablets and smartphones has been stuck in the slow lane, largely due to power and space (form factor) constraints. That's why the new Samsung Embedded Multimedia Card (eMMC) Pro Class 1500 is an exciting development, despite speeds that would seem pedestrian by PC standards.

The eMMC modules offer sequential reads of 140 MB/s and sequential writes of up to 50 MB/s. For random reads/writes they can handle 3500/1500 IOPS (input/output operations per second), which Samsung claims is four times as fast as previous solutions.


That means this new memory is slower than a 7200 RPM HDD in sequential writes and faster in random reads and writes. An OCZ Vertex 3 is rated at about 60,000 IOPS.

So where does that put current smart phone flash memory? It's just not very fast at all. That's why you don't need USB 3 on a smart phone since it won't make copying things onto it or from it any faster at this point in time. USB 2 is fast enough.


My paid agenda is: I paid for a device, I should get the most value out of it. Preferably fast performance, and bug free.

On the OSX versus iOS. You don't notice the merging of the two? App Store, Launch Pad, Notes, Messages, Facetime, bla bla bla. In fact there was a prototype of a MacBook running on ARM. Come on... get with the times.

Familiar functionality and apps do not mean much and are not what the discussion is about. iOS, as an operating system and development platform, is simply a subset of OS X. Keeping the last-run apps and data in memory has been a feature of OS X memory handling for as long as I've been using it. That's why an app launches so quickly when you've just closed it. This is not something introduced by iOS-ification. Apple has been pretty clear about the direction of OS X and iOS at WWDC in terms of the underlying technology and where they are heading.

Ledgem
Sep 25, 2012, 02:08 PM
On the OSX versus iOS. You don't notice the merging of the two? App Store, Launch Pad, Notes, Messages, Facetime, bla bla bla. In fact there was a prototype of a MacBook running on ARM. Come on... get with the times.
You're talking about user interface features, many of which make sense to combine for simplicity of user experience. How do you make the jump to think that it means Apple is porting core operating system functions? Do you think they would do that just for the sake of it, even if it meant worse performance?

Believe what you want, but please stop trying to pass your beliefs off as fact.

wd40
Sep 28, 2012, 12:38 PM
As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

Proof? How's this for proof:

I just restarted my MBP (running 10.8.2) and fired up Chrome (which re-opened the previous state of 40+ tabs); 65% of the way through loading all the tabs, the inactive memory jumped from under 100MB to 3GB+.

THIS IS WHAT ALWAYS HAPPENS WHEN CHROME IS LOADED.

Once Inactive Memory shot up above 3GB, Free Memory shot down to under 20MB. At this point, Page Outs began and are now at 1.48GB (with all tabs fully loaded). Swaps used = 3.69GB.

I'm running a 2011 MBP with 8GB RAM.

I have to restart multiple times per day and can't do design work with chrome open.

How much more proof do you need that this is happening to A LOT OF PEOPLE??

Is there a way to clear the 3GB of inactive memory that always appears when Chrome (or any other browser) is loaded? Running "FreeMemory Pro" just makes everything worse.

/sigh

GGJstudios
Sep 28, 2012, 12:48 PM
Proof? How's this for proof:
That's not proof. That's only more claims. I've already specified what would qualify as proof. If you want to troubleshoot your issue, follow every step of the following instructions precisely. Do not skip any steps.
Launch Activity Monitor
Change "My Processes" at the top to "All Processes"
Click on the "% CPU" column heading once or twice, so the arrow points downward (highest values on top).
(If that column isn't visible, right-click on the column headings and check it, NOT "CPU Time")
Click on the System Memory tab at the bottom.
Take a screen shot (http://guides.macrumors.com/Taking_Screenshots_in_Mac_OS_X) of the entire Activity Monitor window, then scroll down to see the rest of the list, take another screen shot
Post your screenshots (http://forums.macrumors.com/showpost.php?p=14126379&postcount=16).

throAU
Sep 28, 2012, 01:00 PM
There are certain situations where it WOULD be desirable to swap even when there is still memory available in the "inactive" pool.

Why?

If you are performing a lot of disk I/O and you have programs in RAM that are INACTIVE, page the non-running program out to disk (it will end up in the disk cache ANYWAY during the swap out) and use its memory to cache other, hotter areas of the disk if required.


Swap is not inherently bad. You should be concerned if your swap disk is doing a massive number of IOs and you are also currently actively swapping (the number in brackets).

Seeing that your Mac HAS swapped at some point is nothing to be alarmed about. It's what the VM subsystem is DESIGNED to do.

These memory reclaim apps, etc are just going to ruin the performance of OS X's disk caching.



In earlier versions of Lion, Apple got the numbers wrong with the VM tuning, and it was trying to write things out to swap too greedily (my bet is it was tested primarily on SSDs). Later versions of Lion and ML have fixed this. Same idea, just slightly less aggressive about swapping out, to favour disk caching.

wd40
Sep 28, 2012, 01:29 PM
That's not proof. That's only more claims. I've already specified what would qualify as proof. If you want to troubleshoot your issue, follow every step of the following instructions precisely. Do not skip any steps.
Launch Activity Monitor
Change "My Processes" at the top to "All Processes"
Click on the "% CPU" column heading once or twice, so the arrow points downward (highest values on top).
(If that column isn't visible, right-click on the column headings and check it, NOT "CPU Time")
Click on the System Memory tab at the bottom.
Take a screen shot (http://guides.macrumors.com/Taking_Screenshots_in_Mac_OS_X) of the entire Activity Monitor window, then scroll down to see the rest of the list, take another screen shot
Post your screenshots (http://forums.macrumors.com/showpost.php?p=14126379&postcount=16).

Thanks for the response - will post screenshot shortly. Sorting by "% CPU" causes the "Process Name" list to change every second. Can I sort by "Real Mem"? That doesn't bounce around as much.

GGJstudios
Sep 28, 2012, 01:30 PM
Thanks for the response - will post screenshot shortly. Sorting by "% CPU" causes the "Process Name" list to change every second. Can I sort by "Real Mem"? That doesn't bounce around as much.
Yes, it will fluctuate, but it's more helpful to sort by CPU.

smithrh
Sep 28, 2012, 04:37 PM
Hmmmmm.....

Ledgem
Sep 28, 2012, 04:40 PM
Proof? How's this for proof:

I just restarted my mbp (running 10.8.2), fired up Chrome (which re-opened the previous state of 40+ tabs) and 65% of the way through loading all the tabs, the inactive memory jumped from under 100MBs to 3GB+.

THIS IS WHAT ALWAYS HAPPENS WHEN CHROME IS LOADED.
...
I have to restart multiple times per day and can't do design work with chrome open.
I don't have this issue with Chrome, but I don't think I ever have more than 10-15 tabs open at a time. 40+ seems like an awful lot. If you close those tabs, do you still have memory issues with Chrome? If not, you may want to consider keeping your tab count low and coming up with an alternate solution (like creating a "tab" folder for your bookmarks and working from there). If it saves you from having to restart multiple times per day, I think that would be worth it.

AlanShutko
Sep 28, 2012, 04:48 PM
Most games lose 15 percent to 50 percent framerate because of message passing. That is why games are faster on Windows than OSX. They don't have objective-C underneath... They use C++.

Usually the source code for the engine and app are the same, and it's in C++. The UI might use Cocoa, but it's more common that it uses OpenGL directly with very few Mac-specific calls. The main problem with framerate is that the graphics drivers are better on Windows than on the Mac.

Valve, for instance, has done a lot of work with Apple and vendors to find performance bottlenecks.

http://store.steampowered.com/news/4211/