
Wie Gehts

macrumors 6502
Original poster
Mar 22, 2007
I never really noticed this before because I never had to copy a folder with so much in it. Anyway, I have a couple of folders containing around 200,000–300,000 MIDI files each. These folders are only about 115–150 MB. Normally, copying an item this size to an external HD would take a few seconds, but these are taking 30+ minutes each, for barely more than 100 megabytes!! :eek::mad:

Why...Oh Why??
 
It's because the Finder copies each file one at a time, rather than block by block. No promises, but you can try this: use the Finder to compress each folder, then transfer the archives, and uncompress them on the other side.

Good luck!
 
Going to guess here that the external drive is a FAT volume. The Finder DOES do block copies (has since OS8, in fact), but isn't nearly as efficient when copying to FAT, if memory serves. Massive numbers of files do take a while to copy due to associated filesystem overhead, but you should see at least a few dozen files copied per second.

If you're transferring them to another Mac, an HFS formatted volume would speed the process significantly. Otherwise zipping them first, as suggested, will help a lot, since it removes the filesystem overhead of dealing with FAT (or any other filesystem, for that matter).

If the volume is already HFS, maybe that's just how long it's going to take. Filesystem corruption or severe fragmentation (which isn't entirely impossible if you're slinging around that many files) could also slow things a bit.
 
nope... no FAT here... all Mac formatted, GUID, journaled, etc...

i did copy to a 1 TB ext USB drive that's almost full, so maybe...

if i need to do this again i'll try the compressing

thanks
 
...1 TB ext USB drive that's almost full...
That could definitely be a factor. The closer the drive is to full, the more likely fragmentation becomes an issue. And if it's mostly full and you've deleted a bunch of files that were scattered across the disk, fragmentation can compound quickly.

No guarantee, but it could definitely be a factor. If you have a second drive available, the easy test would be to copy one folder to a blank drive; that's the quickest way to eliminate fragmentation from the picture.
 
If the sizes are right, it's a very inefficient copy no matter what.

150 MB (call it 150,000,000 bytes) with 300,000 directory entries? The average file is only 500 bytes, so there's no way you're going to get any kind of efficiency out of the copy. This is a corner-case copy due to the small files and large count.
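The arithmetic is easy to confirm in one line:

```shell
# 150 MB spread across 300,000 files works out to 500 bytes per file,
# far below a typical 4 KB filesystem allocation block.
echo $((150000000 / 300000))   # prints 500
```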

Copying a file has a certain amount of overhead: it has to grab the existing inode, learn where the data blocks are, create the new inode, request data blocks, and then copy the data. On average, these files are < 1 block in size.
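You can see the "less than one block" effect yourself. This sketch creates a single 500-byte file (the average size from the numbers above) and compares its logical size to the space the filesystem actually allocates for it:

```shell
# Create one 500-byte file, matching the average computed above.
head -c 500 /dev/zero > tiny.bin

# Logical size: 500 bytes.
wc -c < tiny.bin

# Allocated size: du rounds up to whole filesystem blocks, so even a
# tiny file consumes a full block (exact figure depends on the volume).
du -h tiny.bin
```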

Compressing / uncompressing won't reduce the total time, and could very well increase it. If you're going to compress and uncompress, tar the folder first, then compress the tar file.
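A sketch of that tar-first approach; the folder and archive names are hypothetical, and `-z` here does the tar and gzip steps in a single pass:

```shell
# Bundle the whole tree into one gzip-compressed archive
# (-c create, -z gzip, -f output file). "midi_files" is a placeholder.
tar -czf midi_files.tar.gz midi_files

# After copying the single archive to the destination, unpack it there:
tar -xzf midi_files.tar.gz
```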

See above re: corner case; the base per-file operations are your limiting factor regardless.
 