Max file opens allowed

Discussion in 'Mac Programming' started by satyam90, Nov 4, 2008.

  1. satyam90 macrumors regular

    Joined:
    Jul 30, 2007
    Location:
    Bangalore, India
    #1
I am using Xcode and Objective-C with Cocoa on 10.4.
I wrote a client/server application that syncs a directory containing 25,000 files. Because there are so many files, I am using threads to open files simultaneously and sync them with the server. But I am getting errors once too many files are open at the same time.
Can anyone tell me how many simultaneous file opens are allowed programmatically?
     
  2. pilotError macrumors 68020

    Joined:
    Apr 12, 2006
    Location:
    Long Island
    #2
  3. idelovski macrumors regular

    Joined:
    Sep 11, 2008
    #3
Based on the code that can be found on page 113 of "Advanced Mac OS X Programming", I tried this on my Mac:

    Code:
   #include <sys/resource.h>   /* getrlimit(), struct rlimit, RLIM_INFINITY */

   struct rlimit  rl;

   /* RLIMIT_NOFILE is the per-process limit on open file descriptors */
   if (!getrlimit(RLIMIT_NOFILE, &rl))  {
      /* rlim_t is an unsigned 64-bit type on Mac OS X, so cast for printf */
      printf ("Max Files - Soft limit: %llu;  ", (unsigned long long)rl.rlim_cur);
      if (rl.rlim_max == RLIM_INFINITY)
         printf ("Hard limit: INFINITE\n");
      else
         printf ("Hard limit: %llu\n", (unsigned long long)rl.rlim_max);
   }
    
The output was: "Max Files - Soft limit: 256; Hard limit: INFINITE". When I printed the RLIM_INFINITY value itself, I got 9223372036854775807.

Anyway, googling for "open files 12288" gives many links, even on Linux and Solaris related sites.
    This might be informative: http://discussions.apple.com/message.jspa?messageID=6925084
     
  4. lee1210 macrumors 68040

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #4
If you can keep your process lean, you could just fork instead of threading. Threads are going to be lighter-weight than processes, but probably not by much. That way you can have your "controller" process get the list of files, then hand each filename to a new process whose sole job is to sync that one file. The controller wouldn't wait for the child to finish; it would continue forking jobs for additional files. A sketch of the idea is below.
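
A minimal sketch of that fork-per-file pattern (syncOneFile() here is a hypothetical stand-in for the actual per-file sync work):

Code:
   #include <stdio.h>
   #include <sys/wait.h>
   #include <unistd.h>

   /* hypothetical per-file sync routine */
   static void syncOneFile (const char *path)
   {
      /* open the file, talk to the server, write the data ... */
   }

   static void syncAll (char **paths, int count)
   {
      int  i;

      for (i = 0; i < count; i++)  {
         pid_t  pid = fork ();

         if (pid == 0)  {             /* child: sync one file, then exit */
            syncOneFile (paths[i]);
            _exit (0);
         } else if (pid < 0)
            perror ("fork");          /* out of processes; skip or retry */
      }
      while (wait (NULL) > 0)         /* reap all children at the end */
         ;
   }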

    -Lee

P.S. Is this for academic purposes, or real-life work? If the latter, you should stick to tried and tested tools like rsync.
     
  5. ChrisA macrumors G4

    Joined:
    Jan 5, 2006
    Location:
    Redondo Beach, California
    #5
The simplest way to write a program like this would be to call "rsync". rsync already does what you need.
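
For example, a minimal sketch of calling rsync from Cocoa via NSTask (the paths and flags here are hypothetical placeholders):

Code:
   #import <Foundation/Foundation.h>

   /* launch rsync and let it do the heavy lifting */
   NSTask *task = [[NSTask alloc] init];
   [task setLaunchPath:@"/usr/bin/rsync"];
   [task setArguments:[NSArray arrayWithObjects:
                          @"-az",
                          @"/path/to/local/dir/",
                          @"user@server:/path/to/remote/dir/", nil]];
   [task launch];
   [task waitUntilExit];
   [task release];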

Your method is not the best use of parallel threads. You should limit the number of threads to no more than the number of CPU cores. Even that may be more than you need, because the bottleneck is going to be the network.

The direct answer to your question is that the number of files that can be opened is configurable and can be changed. So your program needs to handle this limit gracefully and not fail. Check after each open that it worked, and don't treat failure as a fatal error; simply retry later, after a few files have been closed. A sketch of that follows below.

But really, why not simply use "rsync"?
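
A minimal sketch of that graceful handling, assuming a hypothetical retry wrapper around open():

Code:
   #include <errno.h>
   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   /* Try to open a file, retrying when the process has run out of
      descriptors.  EMFILE/ENFILE mean "too many open files"; back off
      and try again on the assumption that other files will close soon. */
   static int openWithRetry (const char *path, int attempts)
   {
      while (attempts-- > 0)  {
         int  fd = open (path, O_RDONLY);

         if (fd >= 0)
            return (fd);
         if (errno != EMFILE && errno != ENFILE)
            break;                    /* a real error - give up */
         sleep (1);                   /* wait for some files to close */
      }
      return (-1);
   }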
     
  6. ChrisA macrumors G4

    Joined:
    Jan 5, 2006
    Location:
    Redondo Beach, California
    #6
That would be very "expensive" and slow. Process creation has considerable overhead compared to thread creation, and there is no reason to create so many processes.

The best way to do this is for the "controller" to create some fixed number of threads. These threads are long-lived, not created for each file. Each thread, as soon as it starts, asks the controller for a "job". The job holds the name of a file and the place it must be copied to. When it finishes, the thread asks for another "job" and continues until there are no more jobs, then terminates. The controller waits for the last thread to terminate and then terminates itself. This is the classic "boss and worker" model, and it works well when the jobs are fairly long and self-contained. A sketch follows below.

The one-thread-per-file model is actually a horrible idea on several levels. If nothing else, think of what 20,000 threads would do to the disk read/write head. Better to read files one at a time, so as to minimize head movement. Or, as I wrote above, don't reinvent this wheel: use rsync.
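
A minimal sketch of that boss/worker model with pthreads; the "job list" here is just a shared index into an array of paths, and syncOneFile() is a hypothetical stand-in for the real work:

Code:
   #include <pthread.h>

   #define NUM_WORKERS  4            /* roughly one per CPU core */

   static pthread_mutex_t  jobLock = PTHREAD_MUTEX_INITIALIZER;
   static char           **jobPaths;
   static int              jobCount, jobNext;

   /* hypothetical per-file sync routine */
   static void syncOneFile (const char *path) { /* ... */ }

   /* each worker repeatedly asks for the next job until none remain */
   static void *worker (void *arg)
   {
      for (;;)  {
         int  i;

         pthread_mutex_lock (&jobLock);
         i = jobNext++;
         pthread_mutex_unlock (&jobLock);

         if (i >= jobCount)
            return (NULL);           /* no more jobs - terminate */
         syncOneFile (jobPaths[i]);
      }
   }

   static void syncAll (char **paths, int count)
   {
      pthread_t  tid[NUM_WORKERS];
      int        i;

      jobPaths = paths;  jobCount = count;  jobNext = 0;

      for (i = 0; i < NUM_WORKERS; i++)
         pthread_create (&tid[i], NULL, worker, NULL);
      for (i = 0; i < NUM_WORKERS; i++)  /* the boss waits for its workers */
         pthread_join (tid[i], NULL);
   }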
     
  7. Catfish_Man macrumors 68030

    Joined:
    Sep 13, 2001
    Location:
    Portland, OR
    #7
    This sounds like a situation where NSOperationQueue would serve well.
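
For instance, a minimal sketch (note NSOperationQueue requires Mac OS X 10.5; self, filePaths, and the syncFile: method are hypothetical):

Code:
   NSOperationQueue *queue = [[NSOperationQueue alloc] init];
   [queue setMaxConcurrentOperationCount:4];   /* bound the parallelism */

   for (NSString *path in filePaths)  {
      NSInvocationOperation *op = [[NSInvocationOperation alloc]
                                     initWithTarget:self
                                           selector:@selector(syncFile:)
                                             object:path];
      [queue addOperation:op];
      [op release];
   }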
     
  8. satyam90 thread starter macrumors regular

    Joined:
    Jul 30, 2007
    Location:
    Bangalore, India
    #8
My application is a client/server one. I am using web services to communicate from client to server and vice versa. When a file has to sync, the server sends an event and the client opens a file to write it on the local Mac. The server is quite fast, and on the local Mac ulimit -n is only 256, so the client cannot open as many files as the server is sending. When I increased ulimit -n on my Mac to 1024 from the Terminal, it works fine. So I want to change it programmatically.
     
  9. idelovski macrumors regular

    Joined:
    Sep 11, 2008
    #9
    Maybe setrlimit() can do the trick: setrlimit(int resource, const struct rlimit *rlp);
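
A minimal sketch of raising the soft limit that way (1024 is just the value from your ulimit experiment; the call can fail, so check the return):

Code:
   #include <stdio.h>
   #include <sys/resource.h>

   struct rlimit  rl;

   /* raise the soft limit on open files, capped at the hard limit */
   if (!getrlimit (RLIMIT_NOFILE, &rl))  {
      rl.rlim_cur = 1024;
      if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
         rl.rlim_cur = rl.rlim_max;
      if (setrlimit (RLIMIT_NOFILE, &rl))
         perror ("setrlimit");
   }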

    Then I tried the code I posted above and found something funny. Here it is again:

    Code:
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    
    int main (int argc, const char * argv[])
    {
       struct rlimit  rl; 
    
   if (!getrlimit(RLIMIT_NOFILE, &rl))  {
      /* rlim_t is an unsigned 64-bit type on Mac OS X, so cast for printf */
      printf ("Max Files - Soft limit: %llu;  ", (unsigned long long)rl.rlim_cur);
      if (rl.rlim_max == RLIM_INFINITY)
         printf ("Hard limit: INFINITE\n");
      else
         printf ("Hard limit: %llu\n", (unsigned long long)rl.rlim_max);
       }
       
       return (0);
    }
    
When I compile and run it from the Terminal, this is the output:

    Max Files - Soft limit: 256; Hard limit: INFINITE

but if I create a Standard Tool project in Xcode and run the same code, I get this:

    Max Files - Soft limit: 10240; Hard limit: 10240

    Well, I give up - maybe someone else can help you. ;)
     
