This is a good point. However, no matter how long you let it sit there, the GC doesn't run. If the GC is running in a mode where it just sits on the memory it has taken, then the pages weren't freed anyway. If it works by having the OS eventually signal the application to begin GC, then that's a crappy method. The fact that memory does decrease (albeit not to the correct point) as each window vanishes tells me that either no GC is being used, or the memory is being freed anyway. Keep in mind that Cocoa wasn't rewritten for Leopard, and it's not likely that Apple would choose to rewrite it to use GC when that carries additional overhead.

Also, this is in Leopard's "man gcc" page:



It's possible that this is old, however.

Also, previous versions of Mac OSX have this problem.



I submit that it would take more overhead to free a page but mark it as "takeable" by the application allocator and keep it active. That yields the same functionality as the OS allocator anyway, and basically amounts to duplicate code. I've already run experiments showing that when I malloc something and fill it, my RM goes up, and when I free it, my RM drops immediately.
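
For reference, a minimal C sketch of the kind of test I mean (the 256 MB size and the pause-and-check structure are just my choices, not the exact program): malloc a block, fill it so the pages are resident, free it, and watch the process's RSIZE in top or Activity Monitor at each pause.

[CODE]
/* Rough sketch of the malloc/fill/free test described above.
 * Watch the process's RSIZE in top or Activity Monitor at each pause. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t size = 256 * 1024 * 1024;    /* 256 MB, adjust to taste */

    printf("before malloc - check RSIZE, then press Return\n");
    getchar();

    char *block = malloc(size);
    if (block == NULL) {
        perror("malloc");
        return 1;
    }
    memset(block, 0xAB, size);                /* touch every page so it becomes resident */

    printf("after fill - RSIZE should be up by ~256 MB, press Return\n");
    getchar();

    free(block);

    printf("after free - in my tests RSIZE drops right back, press Return\n");
    getchar();
    return 0;
}
[/CODE]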

The GC didn't show up until Leopard, in the Objective-C 2.0 runtime stuff. It doesn't work with malloc - just with Objective-C memory management. My understanding is that Cocoa has been compiled to work both ways.

You argue that memory decreasing, but not all the way, indicates no GC. I don't see how that follows. First, even without GC, there are explanations. For example, each window being opened might add data to class static variables (I don't know what the term is in Objective-C - the methods preceded by "+" :), or add to global data and to local class variables. It may create some small structures (for state) and some big structures (e.g., buffers). Memory might be allocated from a pool for speed, so a big shared chunk gets allocated even when you only ask for 2 words.

Some of that stuff can be deallocated quickly, and some of it should be deallocated only when necessary, in order to avoid speed hits. If my window is using 2 words in a 4K-word pool, I wouldn't want it to deallocate the whole pool, or split off those 2 words just to deallocate them.
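
To make that concrete, here's a toy fixed-size pool in C (the names and sizes are made up, and real pool allocators are far more sophisticated): freeing an object just puts it back on the pool's free list, so the backing chunk stays allocated, and resident, until the whole pool is torn down.

[CODE]
/* Toy fixed-size object pool, purely to illustrate the point above.
 * Objects handed back via pool_free() go on a free list for reuse;
 * the big backing chunk only returns to the OS in pool_destroy(). */
#include <stdlib.h>

#define POOL_OBJECTS  4096
#define OBJECT_SIZE   64        /* bytes per pooled object */

typedef struct pool {
    unsigned char *chunk;       /* one big allocation backing every object */
    void          *free_list;   /* singly linked list threaded through free slots */
} pool_t;

static pool_t *pool_create(void)
{
    pool_t *p = malloc(sizeof *p);                 /* error checking omitted for brevity */
    p->chunk = malloc(POOL_OBJECTS * OBJECT_SIZE);
    p->free_list = NULL;
    for (size_t i = 0; i < POOL_OBJECTS; i++) {
        void *slot = p->chunk + i * OBJECT_SIZE;
        *(void **)slot = p->free_list;             /* push slot onto free list */
        p->free_list = slot;
    }
    return p;
}

static void *pool_alloc(pool_t *p)
{
    void *slot = p->free_list;
    if (slot)
        p->free_list = *(void **)slot;             /* pop a free slot */
    return slot;
}

static void pool_free(pool_t *p, void *slot)
{
    *(void **)slot = p->free_list;                 /* push: the memory stays resident */
    p->free_list = slot;
}

static void pool_destroy(pool_t *p)
{
    free(p->chunk);                                /* only now does the chunk go back */
    free(p);
}
[/CODE]

From the OS's point of view the pool looks "used" even when every slot is free, which is exactly the effect being described.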

As for GC, the GC algorithm might clean up only some of the stuff (for similar reasons as above) and leave the rest for later.
 
Why would you want to open 900 TextEdit windows?
Just kidding, don't get mad at me.

I have to wonder the same thing. I can't agree that "Apple sucks", or Leopard or Tiger for that matter, because at least my Mac is still running as fast as it was the day I got it. Show me a WinPC that does the same.
 
I have to wonder the same thing. I can't agree that "Apple sucks", or Leopard or Tiger for that matter, because at least my Mac is still running as fast as it was the day I got it. Show me a WinPC that does the same.

The one I bought yesterday? :)

Nah, I don't even have a Windows box in the house right now (my sister is borrowing my gaming PC), so I'm with you.
 
Define "any". It's perfectly feasible to have an operating system (e.g. Java-based) which handles fine-grained memory management and releases pages after process teardown via garbage collection. Consider a multiprocessing Lisp machine, where sharing is par for the course. You forgot to read rule 55: "Does it have to be done this way? Does it have to be done at all?"

If you truly believe that anything Java-based could ever be called an OS, then there's no hope for you! Java is a horrible piece of crap. I believe we've argued about this before. I have no issues with it in that it does what it's supposed to do; however, I've found in recent months that it isn't nearly as portable as they claim. Different VMs do different things on different architectures, especially with things the VM tries to abstract away from the hardware, like sockets. It's a shame, really. Java attracts way too many lazy (and thus horrible) programmers. Most of them hail from a certain peninsula, too.

For the second sentence to follow from the first, the leaking routine must be executed sufficiently often; it is quite possible to run software usefully without exercising every code path in proportion to runtime. This is an important distinction, because much of the difficulty in identifying leaks comes down to actually observing them.

This is true. However, what other paths should there be? You open an empty window, and you close the empty window. I agree that there are plenty of cases where you might not want to free memory all the time. But if you really believe that TE leaving behind so much memory after 900 windows come and go is "by design", then either you're very much mistaken or that design is simply bad. Heck, if this problem exists with 900 empty windows, it must be heavily magnified by more complicated things. And assuming I'm right that the problem lives in Cocoa (which I believe it does, since simple C tests don't show any problems with OSX's memory management), all that means is you'd better restart your apps often or you'll start swapping.
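
By "simple C tests" I mean something along these lines (a sketch; the block counts and sizes are arbitrary choices of mine): allocate and free a batch of blocks over and over, and check that the peak resident size reported by getrusage() stops growing after the first round.

[CODE]
/* Sketch of a simple allocate/free stress test: if malloc/free were
 * leaking, peak resident size would keep growing with each round. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

#define ROUNDS      50
#define BLOCKS      1000
#define BLOCK_SIZE  (64 * 1024)

int main(void)
{
    void *blocks[BLOCKS];

    for (int round = 0; round < ROUNDS; round++) {
        for (int i = 0; i < BLOCKS; i++) {
            blocks[i] = malloc(BLOCK_SIZE);
            memset(blocks[i], 1, BLOCK_SIZE);   /* make the pages resident */
        }
        for (int i = 0; i < BLOCKS; i++)
            free(blocks[i]);

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        /* ru_maxrss is in bytes on Mac OS X, kilobytes on Linux */
        printf("round %2d: peak RSS so far = %ld\n", round, (long)ru.ru_maxrss);
    }
    return 0;
}
[/CODE]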

A process either returns all the allocated memory to the operating system, or it hasn't returned to its original "state". Your loose redefinition of "state" is significant, because there are cases when the app proper may act as if things are right back to how they started, but e.g. the runtime knows they aren't - consider garbage collection, or thread pooling.

My assertion is that the problem lies with Cocoa, and there's no reason for it to keep fragments of 900 empty windows behind. While many applications (especially in the embedded world) might allocate and release memory based on usage in order to maintain a higher level of determinism (as opposed to calling malloc() or something similar every time), it makes no sense for the Cocoa framework to do this. I mean, OSX is a UNIX-based OS, which means you want it to run many applications at the same time. If the basic framework for most of your apps is holding on to that much memory, it can't possibly be a good thing. I'm sorry if you feel it is. I don't think even the incompetent boobs at Apple would design software this way, so I'm going to have to call this a bug.

[edit] I've just created a simple physical memory eater. Having opened then closed 899 windows, I observed, by the time I got bored, that TextEdit's RSIZE had gone down from 45 to 43MB - swapping out? (Prove this - these are the questions you need to answer to back up your assertion of poor memory management.) But VSIZE, which was a good 960MB before I started the eater, has flown down to 392MB (points for explaining this), approaching the 358MB VSIZE that TextEdit appears to start up with. Now 393-358 is not far off 45-6 (6 being the startup RSIZE), so it's possible there is a stubborn pool of 35-40MB.

I think your point here is to demonstrate how I might have worded things. (It's the only argument about the argument I'm willing to respond to.) If this is the case, then I simply don't observe this behavior. I just closed all my 901 windows and see 100% CPU usage by TE as it cleans up memory. When the CPU usage stops, RM is at 48.12 MB and VSIZE is at 959.88 MB. VSIZE is back to its original state, but RM is 48 megs. Ick.

So let's play devil's advocate here and assume there's a pooling scheme going on. This would explain VSIZE being back at its original state (I have yet to see it drop to 392 MB). I've now gone ahead and allocated 1024*1024*2 blocks of 1024 bytes and set them all to 0. That's 1.5 of my 2 gigs of RAM. I use pause() at the end so I can monitor things. Despite 1 gig swapping out, TE is still at a VSIZE of 960 megs, but RM dropped to 43.27 MB. OK, that's interesting. Maybe something at a higher level decided to clean up some pages. Weird. So let's put it to the test: I'll take up all 2 gigs of my RAM and see what happens. Down to 29 megs, VSIZE still at 960 megs. I ran it a second time, and it actually dropped to 4.09 MB. That's less than it had going in. This does in fact suggest a pooling mechanism that is aware of the needs of the system, and that's cool.
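
Roughly, the eater looks like this (a sketch from the description above, not the exact program): grab 2*1024*1024 blocks of 1024 bytes, zero them so they become resident, then pause() so TextEdit can be watched under memory pressure.

[CODE]
/* Rough reconstruction of the "physical memory eater" described above:
 * 2*1024*1024 blocks of 1024 bytes, zeroed so they are resident,
 * then pause() so the rest of the system can be watched under pressure. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NBLOCKS    (2 * 1024 * 1024)
#define BLOCK_SIZE 1024

int main(void)
{
    static void *blocks[NBLOCKS];   /* static: too big for the stack */

    for (size_t i = 0; i < NBLOCKS; i++) {
        blocks[i] = malloc(BLOCK_SIZE);
        if (blocks[i] == NULL) {
            fprintf(stderr, "out of memory after %zu blocks\n", i);
            break;
        }
        memset(blocks[i], 0, BLOCK_SIZE);
    }

    printf("holding memory; watch TextEdit's RSIZE/VSIZE, then kill me\n");
    pause();                        /* sit here until a signal arrives */
    return 0;
}
[/CODE]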

Anyway, thanks for the inspiration. I didn't think to try eating all the physical memory. You could have just suggested that in the first place instead of arguing about the argument which I already said was moot with me anyway. Learn your target demographic. :)

The best part is that I feel a lot better about OSX now. I haven't done any Cocoa programming for a while, but I'll assume it was my fault that I was leaking some columns when destroying an NSTableView.

(btw, the last part of this post was a log of me going through the routine as you suggested)

Also, my signature says that not because of my issues with OSX, but because of the crap Apple pulled last year with shipping the 17" C2D MBPs.

So now my biggest gripe is the one I've always had... I've had 3 different 17" laptops from Apple, and each one sometimes has a problem waking up after being used with an external monitor. Several times I'll put it all to sleep, put the computer in my bag, go home, and try to use it on the internal display; it'll wake up, I'll type in my login/pass, and then it'll immediately go back to sleep. I don't take any extra time doing this. I should start another sucks thread. :)
 
The GC didn't show up until Leopard, in the Objective-C 2.0 runtime stuff. It doesn't work with malloc - just with Objective-C memory management. My understanding is that Cocoa has been compiled to work both ways.

I have Leopard, and the man page still says that. I had heard they intended to include an option for GC in Obj-C, but this man page makes me think they didn't finish it. The new C++ standard calls for optional GC as well. (Thank goodness it's optional.)

BTW, I understand what you're saying about sometimes not freeing things right away to avoid speed hits. Cocoa traditionally addressed this with autorelease pools. However, I would submit that the policy for when memory gets freed is best tailored to the specific application rather than left to a generic GC algorithm.
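
To sketch the idea in plain C terms (a loose analogy I'm drawing, not how Cocoa actually implements autorelease pools): instead of freeing immediately, park pointers in a pool and drain it at a point the application chooses, such as the end of each pass through the event loop.

[CODE]
/* Loose C analogy to the autorelease-pool idea: instead of freeing
 * immediately, park pointers in a pool and drain it at a point the
 * application chooses (e.g. the bottom of each event-loop pass). */
#include <stdlib.h>

#define POOL_CAP 1024

typedef struct {
    void  *ptrs[POOL_CAP];
    size_t count;
} deferred_pool_t;

static deferred_pool_t g_pool;

/* Hand a pointer to the pool instead of calling free() right away. */
static void pool_defer_free(void *p)
{
    if (g_pool.count < POOL_CAP)
        g_pool.ptrs[g_pool.count++] = p;
    else
        free(p);                     /* pool full: just free it now */
}

/* Called once per event-loop pass: everything deferred gets freed here. */
static void pool_drain(void)
{
    for (size_t i = 0; i < g_pool.count; i++)
        free(g_pool.ptrs[i]);
    g_pool.count = 0;
}
[/CODE]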
 
Hm. I thought I had resolved this issue. I was showing someone here at work the test (because I had discussed the leak issue with him before), and no matter how many times I rinsed and repeated, TextEdit held onto its "leaked" memory.

So I think I'm wrong about it being leaks. However, I'm not convinced it's a good design, especially when it isn't deterministic.
 
Hm. I thought I had resolved this issue. I was showing someone here at work the test (because I had discussed the leak issue with him before), and no matter how many times I rinsed and repeated, TextEdit held onto its "leaked" memory.

So I think I'm wrong about it being leaks. However, I'm not convinced it's a good design, especially when it isn't deterministic.

GC is definitely in there - if the man pages say otherwise, they are wrong (there have been other complaints about out-of-date man pages).

If whatever you are seeing appears to be non-deterministic, it may very well be GC at work, as I'm sure you know. The GC algorithm may very well depend on things like available memory, page alignment, page fragmentation, etc. Heck, for all I know the memory allocator is affected by the code page randomizer they added to avoid damage from buffer overflows (though I can only see that being an issue if allocation blocks are bigger than pages).
 
GC is definitely in there - if the man pages say otherwise, they are wrong (there have been other complaints about out-of-date man pages).

If whatever you are seeing appears to be non-deterministic, it may very well be GC at work, as I'm sure you know. The GC algorithm may very well depend on things like available memory, page alignment, page fragmentation, etc. Heck, for all I know the memory allocator is affected by the code page randomizer they added to avoid damage from buffer overflows (though I can only see that being an issue if allocation blocks are bigger than pages).

I hadn't heard about the code page randomizer. That's kind of neat. My test does 2*1024*1024 mallocs of 1024 bytes, and I'm seeing different behavior. I would think that if a process needs all the physical memory, it would correctly acquire memory from wherever it's available.
 