How full is too full for a hard drive?

Discussion in 'Mac Pro' started by Michael73, Aug 29, 2014.

  1. Michael73 macrumors 65816

    Joined:
    Feb 27, 2007
    #1
    Here's the situation...my nMP has a 1TB internal SSD. With the OS, programs, documents and VMs it has about 550GB used. My Drobo 5D (connected via TB) has my iTunes, iPhoto and iMovie libraries.

    For whatever reason the Drobo will put the disks to sleep if not being used. Every time I listen to music or switch over to iPhoto I can hear the drives spin up, and it takes 8-10 seconds. Also, sometimes when I skip from song to song in the library the response time is a bit slow. The lag is driving me nuts.

    A long time ago I heard that when a drive gets over a certain percentage full, its performance is degraded. I'm wondering: if I move at least my iPhoto and iTunes libraries back to the internal SSD (to get rid of the lag), increasing the usage from 550GB to just over 900GB, will I be reaching that point? BTW, at what point does performance suffer...90%, 95%, 99%?
     
  2. SandboxGeneral Moderator

    SandboxGeneral

    Staff Member

    Joined:
    Sep 8, 2010
    Location:
    Orbiting a G-type Main Sequence Star
    #2
    Typically, it's held that if your free space drops below 10%-15%, your drive begins to suffer a performance loss. This is especially true when the drive hosts the OS and needs swap space when RAM runs low.
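To put the 10%-15% rule of thumb into numbers, here's a minimal Python sketch. The thresholds and the OP's capacity figures come from the thread; treat the 0.15 cutoff as the illustrative convention mentioned above, not a hard limit.

```python
# Sketch of the ~10-15% free-space rule of thumb discussed above.
import shutil

def free_fraction(total_bytes, free_bytes):
    """Fraction of the volume that is free."""
    return free_bytes / total_bytes

def is_low(total_bytes, free_bytes, threshold=0.15):
    """True if free space is below the rule-of-thumb threshold."""
    return free_fraction(total_bytes, free_bytes) < threshold

# The OP's numbers: 1TB SSD with 550GB used -> ~45% free, plenty of headroom.
total, used = 1_000_000_000_000, 550_000_000_000
print(is_low(total, total - used))  # -> False

# Live check of the boot volume (result depends on the machine running this):
u = shutil.disk_usage("/")
print(f"free: {u.free / u.total:.1%}")
```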
     
  3. SuperMatt macrumors 6502

    SuperMatt

    Joined:
    Mar 28, 2002
    #3
    Have you contacted Drobo support about this? I know the 5D has had sleep issues with Macs in the past - so they might have a firmware update or some tips on settings. You might also want to try going to energy saver and deselect "put hard disks to sleep when possible" - although I don't know if that has any effect on the Drobo.
     
  4. GGJstudios macrumors Westmere

    GGJstudios

    Joined:
    May 16, 2008
    #4
    Your drive is nowhere near full enough to impact performance. The general rule for an internal boot drive is to leave about 10% or more free space for caches, swap files, etc. The same rule does not apply to data-only drives. I've used 1TB drives with 50MB available with no impact on performance.

    Your problem has nothing to do with how much data is stored on the drive. It has to do with how slowly the drive spins up. Of course, the ultimate solution would be to use an SSD, as there is no spin-up time and lag is nonexistent. If you're not going the SSD route (they are much more expensive), finding a drive with a faster spin-up time would improve performance.
     
  5. bennibeef macrumors 6502

    Joined:
    May 22, 2013
    #5
    You can select in the Drobo Dashboard to never spin down the drives, which might not be a bad thing at all. If you use the drives quite often, it makes sense to never spin them down; hard drives often die on spin-up, not while "being online". The Drobo might use more power because it has to run the drives all the time.


    Or set the Dashboard option to a longer time period before they spin down.
     
  6. SandboxGeneral Moderator

    SandboxGeneral

    Staff Member

    Joined:
    Sep 8, 2010
    Location:
    Orbiting a G-type Main Sequence Star
    #6
    I don't know if Drobo can do this or not, but you could set up a RAID 0 with several high-capacity drives and short-stroke them to increase their read/write times.

    (It's something I know of, but have never done myself)
     
  7. grahamperrin macrumors 601

    grahamperrin

    Joined:
    Jun 8, 2007
    #7
    In the Drobo support area: Can I change the capacity percentage (set at 95 percent) that triggers Drobo storage device's final usage warnings?

    For optimal performance a figure of twenty percent may be preferable. Note, for example, an answer in Ask Different to Why is it important to keep lots of freespace on OSX? What is the impact?

    If a volume is relatively large, then a relatively lower percentage may be acceptable.

    The impact will depend on what you store in the fragmented free space. For example: if the attributes B-tree or catalog B-tree becomes fragmented, through extension, whilst there is a dearth of free space, there may be a significant impact and simply removing data (creating more free space) may not improve performance.
     
  8. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #8
    Thanks. I just did this. We'll see if it helps. I think the Disk Drive Spindown option is checked by default.

    I'm not too worried about the drives dying as I've invested in NAS drives for the enclosure. :cool:

    I don't see that option available. It seems to be either "on" or "off".
     
  9. flowrider macrumors 601

    flowrider

    Joined:
    Nov 23, 2012
    #9
  10. Macsonic macrumors 65816

    Macsonic

    Joined:
    Sep 6, 2009
    Location:
    Earth
    #10
    Not sure if this helps, but you can try disabling "put hard disks to sleep when possible" in System Preferences > Energy Saver, unless you've done this already.
     
  11. AlphaDogg macrumors 68040

    AlphaDogg

    Joined:
    May 20, 2010
    Location:
    Boulder, CO
    #11
    IIRC, 80% is recommended as the maximum to maintain performance...
     
  12. fredr500 macrumors regular

    Joined:
    Apr 12, 2007
    #12
    Is there a difference in how much free space to maintain between spinning and SSDs?

    I am under the impression that as a spinner gets full, fragmentation can get really bad. The last 10-20% is scattered all over, and there comes a point where defrag can't run.

    That problem doesn't exist with an SSD, as you never need to defrag one and it doesn't matter if the files are fragmented.
     
  13. flowrider macrumors 601

    flowrider

    Joined:
    Nov 23, 2012
    #13
    ^^^^May I suggest you read the link I posted above. The answer to your question is in there. The issues are very different.

    Lou
     
  14. grahamperrin macrumors 601

    grahamperrin

    Joined:
    Jun 8, 2007
    #14
    OT: USB flash drives and fragmentation of HFS Plus

    Off-topic from hard disk drives

    I know from experience that booting OS X from a thumb drive in a USB 2.0 port can be much slower than necessary if the HFS Plus file system is particularly fragmented.

    I don't have experience with SSDs with faster interfaces.
     
  15. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #15
    A seriously fragmented SSD is much slower than a defragged one.

    The OS has to issue a separate I/O for each fragment - so there's CPU and I/O latency cost to fragments. There's no added latency due to moving the heads, so the effect is less than for spinners.

    A fast consumer SSD like the 840 EVO 250 GB can do about 90,000 reads per second, and 66,000 writes per second.

    If a file is in 200,000 fragments - that adds two seconds to the read time and 3 seconds to the write time.

    An occasional manual defrag of an SSD can make Safari snappier.
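The arithmetic in the post above checks out; here is the same back-of-the-envelope calculation in Python. The IOPS figures are the ones quoted for the 840 EVO 250 GB, and the model (one extra I/O per fragment) is the simplification the post uses.

```python
# Back-of-the-envelope check: each fragment costs roughly one extra I/O,
# so the added latency is approximately fragments / IOPS.
def added_seconds(fragments, iops):
    return fragments / iops

read_iops, write_iops = 90_000, 66_000   # 840 EVO 250 GB figures from the post
frags = 200_000

print(f"extra read time:  {added_seconds(frags, read_iops):.1f} s")   # ~2.2 s
print(f"extra write time: {added_seconds(frags, write_iops):.1f} s")  # ~3.0 s
```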
     
  16. Michaelgtrusa macrumors 604

    Michaelgtrusa

    Joined:
    Oct 13, 2008
    Location:
    Everywhere And Nowhere
    #16
    The same applies to an HDD. :)
     
  17. AidenShaw, Aug 30, 2014
    Last edited: Aug 30, 2014

    AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #17
    Some hard data

    True - but I was replying to the comment "problem doesn't exist with an SSD as you never need to defrag and it doesn't matter if the files are fragmented", which is untrue.
    ___

    I just ran some tests - comparing the time to read fragmented and contiguous files on a 1 TB Samsung 840 EVO. Comments are in color, black text is the system output and input.

    Code:
    V:\SSD\Frags> frag-it    [COLOR="SeaGreen"]// alternately append a small file to two different files 300,000 times[/COLOR]
    
    V:\SSD\Frags> contig -a *   [COLOR="seagreen"]// display fragmentation[/COLOR]
    
    V:\SSD\Frags\a.file is in 84518 fragments
    V:\SSD\Frags\b.file is in 84482 fragments
    V:\SSD\Frags\frag-it.bat is defragmented
    
    V:\SSD\Frags> copy b.file c.file   [COLOR="seagreen"]// copy to new files[/COLOR]
            1 file(s) copied.
    
    V:\SSD\Frags> copy c.file d.file
            1 file(s) copied.
    
    V:\SSD\Frags> dir
    
     Directory of V:\SSD\Frags
    
    2014-08-30  09:40       768,200,000 a.file
    2014-08-30  09:40       768,200,000 b.file
    2014-08-30  09:40       768,200,000 c.file
    2014-08-30  09:40       768,200,000 d.file
    2014-08-30  09:21               140 frag-it.bat
    
    V:\SSD\Frags> contig -a *    [COLOR="seagreen"]// check fragmentation[/COLOR]
    
    V:\SSD\Frags\a.file is in 84518 fragments
    V:\SSD\Frags\b.file is in 84482 fragments
    V:\SSD\Frags\c.file is defragmented
    V:\SSD\Frags\d.file is defragmented
    V:\SSD\Frags\frag-it.bat is defragmented
    
    V:\SSD\Frags> rammap     [COLOR="seagreen"]// flush all filesystem caches ("Empty standby list")[/COLOR]
    
    [COLOR="SeaGreen"]// Read the files with 'timeit' enabled[/COLOR]
    
    V:\SSD\Frags> wc -l a.file
       33400000     a.file
      Elapsed (real) time:    0:00:13.40  h:m:s       13.40 secs
      Total CPU time:         0:00:04.49  h:m:s        4.49 secs      33.5 % of elapsed time
    
    V:\SSD\Frags> wc -l b.file
       33400000     b.file
      Elapsed (real) time:    0:00:12.93  h:m:s       12.93 secs
      Total CPU time:         0:00:04.80  h:m:s        4.80 secs      37.2 % of elapsed time
    
    V:\SSD\Frags> wc -l c.file
       33400000     c.file
      Elapsed (real) time:    0:00:04.82  h:m:s        4.82 secs
      Total CPU time:         0:00:04.76  h:m:s        4.76 secs      98.6 % of elapsed time
    
    V:\SSD\Frags> wc -l d.file
       33400000     d.file
      Elapsed (real) time:    0:00:05.37  h:m:s        5.37 secs
      Total CPU time:         0:00:05.32  h:m:s        5.32 secs      99.2 % of elapsed time
    
    
    So, about 5 seconds to read a contiguous file from SSD (almost all CPU time in the application).

    About 13 seconds to read a fragmented file.

    Please, squash the myth that SSDs don't need to be defragmented.
     
  18. Macsonic macrumors 65816

    Macsonic

    Joined:
    Sep 6, 2009
    Location:
    Earth
    #18
    Yep, I still defrag my HDs regularly. Some of my colleagues tell me that with Mac OS one does not need to defrag and that it's only done on Windows, but I still defragment my HDs. Personally, I think defragmenting HDs is important on Macs too.
     
  19. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #19
    But a fixed percentage is nonsense. The OP has a 1TB SSD. With 800 GB = 80% stored on a 1TB drive, most of the stored data will not be touched for long periods. If, say, 700 GB of those 800 GB are not touched in the next month, performance is about the same as for someone using 100 GB of a 300 GB drive.

    The situation is different for a hard drive, because transfer speed on the outer tracks of the drive can be more than twice as high as on the innermost tracks, and the OS will use tracks from the fastest to the slowest as the drive fills. So for maximum performance, you buy a drive that exceeds your needs - like a 3TB drive if you only need 1TB.
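The outer-track effect described above follows from simple geometry: at a constant rotation speed and roughly constant linear bit density, sequential throughput scales with track radius. A quick Python sketch, using illustrative (assumed) radii for a 3.5" platter, not measured values:

```python
# Why outer tracks are faster: at constant angular velocity and roughly
# constant bits-per-mm along the track, throughput is proportional to radius.
# The radii and the MB/s-per-mm factor below are illustrative assumptions.
def throughput(radius_mm, mb_per_s_per_mm=4.0):
    return radius_mm * mb_per_s_per_mm

inner, outer = 20.0, 46.0   # assumed usable inner/outer radii (mm)
ratio = throughput(outer) / throughput(inner)
print(f"outer/inner speed ratio: {ratio:.1f}x")  # -> 2.3x
```

The ratio depends only on the radii, which is consistent with the "more than twice as high" figure in the post.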

    Personally, I think defragmenting is quite pointless. Unless you have an SSD or Fusion Drive; in that case it isn't merely pointless, it will most likely reduce the performance of the drive, so it's a really, really bad idea.
     
  20. grahamperrin macrumors 601

    grahamperrin

    Joined:
    Jun 8, 2007
    #20
    Not entirely nonsensical.

    When I last checked the fixed percentages used by Apple in Mobile Time Machine environments, the percentages for SSDs were no different from those for HDDs.
     
  21. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #21
    But I just showed an example where defragging made the app run almost three times faster!

    Please explain how a defragged drive can be slower than a fragmented drive.
     
  22. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #22
    Your example uses Windows. Plus, it artificially creates two files that are fragmented to a bizarre degree. You'd need to test this on Mac OS X, and you'd have to show a situation where that kind of fragmentation happens in real life.
     
  23. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #23
    You didn't address my question.

    How can a defragged drive be slower than a fragmented drive?
     
  24. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #24
    The problem seems to be that many flash drives are just awfully bad at handling many small files. I found a benchmark for USB 3 flash drives (can't remember where, but one of the reputable sites), and they found that while many drives could read/write large files at a very nice speed, most of them had absolutely ridiculous performance drops with many small files: most were 10 times slower than the fastest drive, and some 100 times slower. One "SanDisk Extreme" 64 GB flash drive was the exception, running at speeds comparable to SSDs, but at twice the cost of the cheapest drives.

    ----------

    Irrelevant. In what real life situation does HFS+ have files that are fragmented to such a degree that it makes a measurable difference?
     
  25. grahamperrin macrumors 601

    grahamperrin

    Joined:
    Jun 8, 2007
    #25
    If I recall correctly, my USB 2.0 booting OS X example was Mac OS X, Lion.

    The reduced performance – an extreme reduction – was almost certainly primarily due to fragmentation of the catalog B-tree.
     