check for duplicates in NSArrayController

Discussion in 'Mac Programming' started by gonche1124, Jun 21, 2010.

  1. gonche1124 macrumors newbie

    Joined:
    Aug 6, 2009
    #1
    I have two NSArrayControllers that manage two different entities (Core Data). When I add an object to the array, I want it to check for duplicates and, if a duplicate is found, merge the properties of the two objects.

    How can I do this?

    Thank you very much

    Andres Gonzalez
     
  2. jared_kipe macrumors 68030

    Joined:
    Dec 8, 2003
    Location:
    Seattle
    #2
    It sounds like you've already written out what you want to do in plain English.

    Compare an object to the others in the array and merge the properties if you find a match.


    So... break it up into pieces and do them one at a time.

    Iteratively search through the array to see whether the same/similar object is already there.
    Merge if found. (A sketch of that scan is below.)
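
    A rough sketch of that linear scan, assumed to live in the controller class that owns the NSArrayController. The "name" matching attribute and the -mergeValuesFrom:into: helper are illustrative stand-ins, not part of the original poster's model:

    Code:
    // Scan the array controller's content for a match, merge on a hit,
    // otherwise add the new object normally.
    - (void)addOrMergeObject:(NSManagedObject *)newObject
    {
        NSManagedObject *existing = nil;
        for (NSManagedObject *candidate in [self.arrayController arrangedObjects]) {
            if ([[candidate valueForKey:@"name"] isEqual:[newObject valueForKey:@"name"]]) {
                existing = candidate;
                break;
            }
        }

        if (existing != nil) {
            // Duplicate found: copy the interesting properties across,
            // then discard the redundant new object.
            [self mergeValuesFrom:newObject into:existing];   // illustrative helper
            [[newObject managedObjectContext] deleteObject:newObject];
        } else {
            // No duplicate: let the array controller insert it as usual.
            [self.arrayController addObject:newObject];
        }
    }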
     
  3. Catfish_Man macrumors 68030

    Joined:
    Sep 13, 2001
    Location:
    Portland, OR
    #3
    Note that that approach will be N^2, which may be relevant with a large number of items. If that's an issue, you could use an NSSet to bring that down: membership tests in a hash-based set are roughly constant time, so the whole pass is closer to linear.
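
    A minimal sketch of that idea, tracking the values that make an object unique in an NSMutableSet so each duplicate check is a membership test rather than a scan. The "name" key and the incomingObjects collection are illustrative assumptions:

    Code:
    NSMutableSet *seenKeys = [NSMutableSet set];
    NSMutableArray *uniqueObjects = [NSMutableArray array];

    for (NSManagedObject *object in incomingObjects) {
        id key = [object valueForKey:@"name"];
        if ([seenKeys containsObject:key]) {
            // Duplicate: merge with the object already kept (merge step not shown).
        } else {
            [seenKeys addObject:key];
            [uniqueObjects addObject:object];
        }
    }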
     
  4. lee1210 macrumors 68040

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #4
    NSMutableDictionary might be a good compromise in terms of functionality and performance. I gather that your objects are not exact duplicates, since you mention "merging" when a dupe is found. As such, you could make another class that holds just the data items that must be unique, or encode the unique items in an NSString, etc., and use that as your key. When adding an item, you'd look up the value for that key. If one exists, you have a collision: you now have the object that must be merged, and you don't need to disturb the layout of the dictionary. If no object is returned, your new object is unique and can be added to the collection. (A sketch follows.)
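
    A rough sketch of that dictionary approach. The -uniqueKeyForObject: helper (which would build the key string from whatever fields must be unique), -mergeValuesFrom:into:, and incomingObjects are illustrative assumptions, not a prescribed API:

    Code:
    // Index objects by a key built from their "must be unique" fields,
    // and check the index before inserting a new object.
    NSMutableDictionary *index = [NSMutableDictionary dictionary];

    for (NSManagedObject *object in incomingObjects) {
        NSString *key = [self uniqueKeyForObject:object];   // e.g. "firstName|lastName"
        NSManagedObject *existing = [index objectForKey:key];
        if (existing != nil) {
            // Collision: this is the object the new one must be merged into.
            [self mergeValuesFrom:object into:existing];     // illustrative helper
        } else {
            // No match: the object is unique, so record it under its key.
            [index setObject:object forKey:key];
        }
    }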

    The implementation of NSDictionary is opaque, but assuming something like a balanced binary tree, a search is log n. An insert is generally log n as well (though it's amortized n if a rebalance is needed). So the worst-case insert is log n to search, fail to find a match, then amortized n to insert; formally that makes the operation n, but in most cases it is log n (log n for the search, plus possibly log n to insert when no rebalance is needed, and 2 log n reduces to log n).

    I think that best case log n, worst case n (so the formal complexity is n, but hopefully the rebalancing case is rare) is pretty good, and in practice this should work out cleanly and still be performant for your usage.

    -Lee

    Note: all complexities are big-O.
     
