Retain logic?

Discussion in 'Mac Programming' started by mdeh, Feb 21, 2009.

  1. mdeh macrumors 6502

    Jan 3, 2009
    I am trying to see the logic of one aspect of retain/release memory management in Cocoa.

    If an object is added to a collection (Dictionary, Array), the retain count (as I understand it) is incremented automatically for that object, and an autorelease message is sent to it.

    Now, when an object is assigned to a variable, the retain count is unchanged. In one of Kochan's examples, when this assignment occurred, a retain message was sent to the object... which I think I understand.

    So, could anyone help me with the **logic** of this approach? I can readily see why a language would automatically retain an object that is added to a collection (if the initial variable that "owned" that object were to be released, the object in the collection would be deallocated too, which would not be good).

    But... what about straight assignment? If it is customary (and I am not sure that it is) to "retain" an assigned object "manually", i.e. by the programmer, why was this not incorporated into the language? In other words, why would one **not** wish to retain an assigned object, and why does the language not do the same as when the object is stored in an array, for example?

    Hopefully this makes sense? If not, it is probably because I do not understand the language properly.

    Thank you, as usual.
  2. Howiieque macrumors regular

    Feb 1, 2009
    An object that is added to a collection gets a retain, so the original owner's release will not affect it. The object now has a retain count of at least 1, held by the collection. So as long as you don't explicitly release the object too many times, the program runs properly.
    Sometimes you retain an object because you want to take ownership of it; you don't want it released by an autorelease pool.
    Also, by retaining instead of copying, the collection occupies less space and execution time than it would with a copy of your object.
    Hope that helps.
  3. mdeh thread starter macrumors 6502

    Jan 3, 2009
    Another contributor suggested I look at CFRetain/CFRelease.

    It does make the rules clear.

    Now that the rules are clear ( :) ), I wonder **why** this design was chosen.
    Perhaps it is as easy as saying that if you (the owner) loan (assign) the object to another person/variable, you cannot expect that entity to be an owner? Which is just another roundabout way of restating the rule, but does not, in my mind, answer the core issue I am asking about.
    So clearly, the rules place a higher price on ownership, say, versus the object having a "say" in its own existence.

    So, not to belabor the point: if I create an object, then assign it to a second variable, under the scheme above the second variable would not be an owner, and runs the risk of holding an invalid address if the first "owner" were to release the object. What inherent advantages does this have over, say, a scheme where the same operation would increase the retain count (the logic being that the object has more than one "observer", and we do not wish to deallocate it as long as someone is still interested in it)?

    All I can say is that as one starts to learn a language, certain things seem like a big deal, but as one uses the language, they just become one of those things that are the way they are... and it will probably be so for me soon. But initially these questions are of interest to me, and **if** the logic is obvious, it's nice to know it and hang one's knowledge on these pegs, for two reasons: it helps to remember the rules in the first place, and it's just fun to know why things are the way they are.
  4. eddietr macrumors 6502a

    Oct 29, 2006
    So a couple of quick thoughts on this:

    It would be, I think, very inconvenient to have automatic retention on assignment (at least in a non-GC environment). For example, look at what happens in event handling code. Do you really want automatic retention when an event is assigned to your local variable? Then however you respond to the event, every method or function in that chain would have to explicitly release that event. That includes not just your own code but all the framework code as well. Just look at a stack trace for any event you handle.

    And remember, this was all designed back in the days when processors came in speeds like 8 MHz, and really fast computers were just coming out at 33 MHz. :)

    Now, other languages like Java do basically what you describe, which is they count assignment as retention. And then the GC cleans up an object when no one has a reference to it. But there are a couple of points there:

    1.) Java was designed when computers were fast enough to actually do such GC scans. And even then, it was not designed as a language to build an operating system framework. That stuff came much later when computers were 100 times as powerful.

    2.) It generally works in Java because the programmer doesn't have to explicitly release objects. So the programmer doesn't (usually) have to track all the assignments he made in every function and make sure they get released.

    3.) But that scheme still has some other issues. Sometimes you want to have a reference to something, but you don't want that reference to actually prevent the object from being freed. These are so-called "weak references", which are something you then need to add to the language to deal with that issue.

    4.) You have the problem of circular references. A references B, which references C, which references A. And so the ring can't be deallocated. The Obj-C method of explicit retain/release allows (or forces depending on your point of view:)) the programmer to be explicit about which references are truly important or persistent and which are really just a passing interest (like in an event handler, for example.)

    Of course, having said all of that, GC (which again is basically built on the premise that assignment == retention) is not a performance issue anymore with 64-bit 2GHz+ dual-core laptops. But GC is not a perfect magic box either. It presents other challenges to the programmer.

    I still see memory leaks in our Java projects all the time. They're just trickier to track down.
  5. lee1210 macrumors 68040


    Jan 10, 2005
    Dallas, TX
    What you are speaking of (keeping track of assignments, but more broadly keeping track of whether you can access an object) is called Garbage Collection. It is now available in Objective-C 2.0. It lets the programmer "forget" about memory management. I don't know exactly how Objective-C handles GC, but Java has always been garbage collected, and essentially keeps an object graph of everything that is accessible. It handles things like two objects that reference each other, but cannot be accessed from elsewhere, etc. It keeps track of each stack frame and what's being accessed in there, etc. When it finds that an object is no longer accessible, it can deallocate it. It doesn't have to right then, there is no guarantee of that, it is up to the garbage collector to decide.

    This takes away a lot of power from the programmer, but it also helps (but doesn't totally remove the risk) with memory leaks. With a non-GC'd language, you can decide when you are done with some memory, and let it go. With Cocoa's retain/release/autorelease/NSAutoreleasePool mechanism, it's sort of a hybrid. You don't say explicitly to destroy an Object, but the reference counting is done explicitly, instead of implicitly like in a GC'd system. In Cocoa you can say you are done with an Object, but until a runtime mechanism actually checks that the retain count is 0 and deallocates, the Object is still around.

    The risk in a GC system is that the programmer gets lazy, and leaks references to Objects (that is, there is still a reference that they forgot about to some Object) which is, in turn, leaking memory. I have heard many times that "You can't leak memory in Java! It's Garbage Collected!", which is the worst sort of attitude to take.

    Another reason that the retain count is not changed by assignment in Objective-C/Cocoa is that the retain mechanism is part of Cocoa, and not part of the language itself, so an assignment statement is just taking the value of a pointer and assigning it to another pointer variable. Cocoa doesn't know anything about this. Obviously in 2.0, with Garbage Collection, that all goes out the window, but that memory management system is divorced from the retain system of Cocoa.

    I am trying to think of some code that would make it very difficult to track how many times you should send release if the retain count *were* incremented when you did an assignment. I'll just do something totally contrived for an example, because I can't think of a "real world" scenario, but I'm sure something could come up.
    - (void)retainTest:(NSString *)stringTest {
      NSString *stringTwo = nil;
      NSString *stringThree = nil;
      if ([stringTest isEqualToString:@"Cow"]) {
        stringTwo = stringTest;
      }
      stringThree = stringTest;
      if ([stringThree length] > 2) {
        stringThree = [stringThree substringToIndex:2];
      }
      // You MIGHT need to send release(s) to what's stored in stringTest here
      stringTest = stringThree;
      NSLog(@"Result is: %@", stringTest);
      // Do releases here...
    }
    So... if we were incrementing the retain count of an object each time it was assigned, there would be the following increases:
    NSString passed in as stringTest: either 1, 2, or 3
    NSString created by substringToIndex:: not created at all, or 2

    In addition, you might need to send releases to the Object originally passed in via stringTest if stringThree doesn't have the same value by the time it is assigned to stringTest. So you would need to manually keep track of how many releases need to be sent. And you would have to keep track of it per Object, not per pointer. This is, as they say, not a good prize.

    With GC, each assignment doesn't need to be specifically watched, but its impact on the Object graph is. For example, a newly created Object might be assigned to a variable that was previously storing the last reference to some other Object. That's significant because there's a new Object, and it is referenced, so it cannot be destroyed yet, while the original Object that the variable was referencing is now inaccessible, so it can be destroyed. This doesn't need to be evaluated at the time of the assignment, but the "world" looks much different to the garbage collector after it happens.


    Edit: Mid-post I had to walk the dogs, etc. so eddietr beat me to the punch, and made very similar points. Oh well.
  6. mdeh thread starter macrumors 6502

    Jan 3, 2009

    Hi eddietr,
    Thank you for pointing out the different approaches. It puts what I have been asking into much greater perspective. Having now looked at it more closely over the last few days, it is, with your help, starting to look more "manageable". So thank you.
  7. mdeh thread starter macrumors 6502

    Jan 3, 2009
    That's a great reason actually, and is another peg to hang this knowledge on.

    That makes sense. Your explanation fits right in with the way I like to approach things too. Learning by rote is OK, but a good bit of logic behind the rote makes it so much more satisfying.

    As always, thank you so much Lee.
