I have organized my document data into discrete pieces that I selectively write back to the original file when it is saved (none of that namby-pamby safe-write stuff). Specifically, the file is a hand-crafted tarball. The reason for doing this is that some of the pieces may be quite large: if they can be kept low in the file where they remain unaltered, it will not be necessary to write them back every time. So I start writing content (files) at an arbitrary offset (wherever data has changed or been inserted; I am trying to keep directories contiguous) and continue to the end of the file, constructing each header and appending the data to an NSMutableData object, writing it out, then reusing the object for the next piece.

Now, it occurs to me that I could instead build one single data object (essentially a memory image of the file) and push it out in one write, and that seems like it would be somewhat more efficient, at least up to the point where the data starts to crowd out memory. But how much work actually happens when you write? To the best of my knowledge, operating systems have been caching disk writes for quite a while (perhaps less so as solid-state drives become more common), so if I do the single-object write, I will briefly be triplicating the data: in the document, in the NSMutableData object, and in the disk cache.

If I write out discretely (one piece at a time), does disk caching make that comparable in efficiency to a single large write (bearing in mind that the NSMutableData object is being resized for each piece)?
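For concreteness, here is a minimal sketch (in Python rather than Objective-C, but the layout logic is the same) of the two strategies I am weighing: many small writes, one per piece, versus accumulating the whole image and issuing one write. The header builder follows the POSIX ustar layout; the function names are mine, not from any real project.

```python
import time

BLOCK = 512  # tar archives are laid out in 512-byte blocks

def tar_header(name, size, mtime=None):
    """Build a minimal POSIX ustar header block for a regular file."""
    if mtime is None:
        mtime = int(time.time())
    hdr = bytearray(BLOCK)
    def put(offset, value):
        hdr[offset:offset + len(value)] = value
    put(0, name.encode()[:100])                         # file name
    put(100, b"0000644\x00")                            # mode (octal)
    put(108, b"0000000\x00")                            # uid
    put(116, b"0000000\x00")                            # gid
    put(124, ("%011o" % size).encode() + b"\x00")       # size, octal
    put(136, ("%011o" % mtime).encode() + b"\x00")      # mtime, octal
    put(148, b" " * 8)                                  # checksum: spaces while summing
    put(156, b"0")                                      # typeflag: regular file
    put(257, b"ustar\x00")                              # magic
    put(263, b"00")                                     # version
    put(148, ("%06o" % sum(hdr)).encode() + b"\x00 ")   # final checksum field
    return bytes(hdr)

def write_piecewise(f, entries):
    """Strategy 1: one write() per header/payload piece, reusing nothing."""
    for name, payload in entries:
        f.write(tar_header(name, len(payload)))
        f.write(payload)
        f.write(b"\x00" * (-len(payload) % BLOCK))      # pad payload to block boundary

def write_single_image(f, entries):
    """Strategy 2: accumulate a memory image of the file, then one write()."""
    buf = bytearray()
    for name, payload in entries:
        buf += tar_header(name, len(payload))
        buf += payload
        buf += b"\x00" * (-len(payload) % BLOCK)
    f.write(bytes(buf))
```

Both strategies produce byte-identical output (a real archive would also need the two trailing zero blocks); the only difference is how many trips through `write()` the same bytes make, which is exactly what I am asking about.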