Can anyone actually explain why chaining external SSDs yields faster results?

Discussion in 'iMac' started by vannibombonato, Dec 11, 2012.

  1. vannibombonato macrumors 6502

    Joined:
    Jun 14, 2007
    #1
    Hi all,
    While waiting to receive my iMac, I'm testing my LaCie external Thunderbolt SSD, which is blazing fast.
    I'm a bit puzzled by the claim often made by manufacturers that daisy-chaining several Thunderbolt drives increases read/write speed.

    How is this possible? I don't get it:

    - Let's say I have a 1GB file on drive 1 that I want to transfer, and I achieve a speed of X.
    - How can that speed X increase if I transfer the file from the same drive, but while another drive is daisy-chained?

    Curious, as I'll probably be expanding my SSD storage by chaining different Thunderbolt drives, and I'm wondering how this can actually be true (unless, obviously, the different drives are set up as RAID, which I don't even know is possible).

    Ideas?
     
  2. joe-h2o macrumors 6502a

    Joined:
    Jun 24, 2012
    #2
    They're probably talking about striping them together in a RAID setup, thus increasing speed.
     
  3. jkautosports macrumors regular

    Joined:
    Dec 6, 2012
    Location:
    New York, NY
    #3
    It's faster because you can read/write to multiple drives simultaneously, as opposed to reading and writing to just one (which would essentially limit you to a single drive's maximum read and write speeds).
     
  4. BSoares macrumors regular

    Joined:
    Jun 22, 2012
    Location:
    USA
    #4
    Just making a chain won't make it any faster. But a RAID 0, for example, will, because the data is written half to one disk and half to the other. Then when you read, you can read from both drives at the same time, which means faster speeds.
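    Roughly what that striping looks like, as a toy Python sketch (the 4-byte stripe size and the in-memory "drives" are invented purely for illustration; real RAID 0 operates on disk blocks, not Python lists):

```python
from itertools import zip_longest

# Toy illustration of RAID 0 striping: the file is split into fixed-size
# stripes that alternate between two "drives", so a read can pull data
# from both at once. Stripe size and drives here are made up for the demo.

STRIPE = 4  # bytes per stripe (tiny, just for the demo)

def stripe_write(data: bytes):
    """Split data round-robin across two drives, RAID 0 style."""
    drives = [bytearray(), bytearray()]
    for i in range(0, len(data), STRIPE):
        drives[(i // STRIPE) % 2] += data[i:i + STRIPE]
    return drives

def stripe_read(drives):
    """Reassemble the original data by interleaving the stripes.
    On real hardware the two drives are read in parallel, which is
    where the near-2x sequential speedup comes from."""
    out = bytearray()
    chunks = [[d[i:i + STRIPE] for i in range(0, len(d), STRIPE)]
              for d in drives]
    for pair in zip_longest(*chunks, fillvalue=b""):
        for c in pair:
            out += c
    return bytes(out)

file = b"0123456789ABCDEF!"
halves = stripe_write(file)
assert stripe_read(halves) == file  # round-trips intact
```

    The data itself is never duplicated, which is also why RAID 0 doubles the failure risk: lose either drive and every file loses half its stripes.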
     
  5. g4cube macrumors 6502a

    Joined:
    Apr 22, 2003
    #5
    or, explained another way:
    - The CPU makes a request to a particular drive, and it takes a while for the drive to respond with the data. If the data is in the cache, the response is almost immediate. If it's not in the cache, the CPU may have to wait.
    - Instead of waiting, the CPU can make another request. Another drive may be idle, waiting for a request.

    The requests can be queued: the various drives queue up these requests, and the CPU can then check each drive to see whether the requested data is available. Instead of waiting on a single drive, the CPU keeps as many requests in flight as possible and so responds more quickly.

    To make things even faster, some algorithms also request the subsequent sequential data, so it is already available in case it's needed after the original request. Sometimes this prefetch is done automatically by the drive, but in some cases the CPU knows even better where the next request will come from.

    That is the magic of caching, RAID, and prefetch that maximizes performance from a storage system.

    In the past, CPUs were actually quite slow, but as they got faster they were able to perform more I/O operations per second. The more I/O requests that can be in flight at once, the more data can be prefetched, resulting in faster overall performance.
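    The overlap described above can be sketched in Python (the 10 ms "seek" delay and the two fake drives are invented for illustration; real queue depths live in the drive firmware and the OS I/O scheduler):

```python
import time
from concurrent.futures import ThreadPoolExecutor

DELAY = 0.01  # pretend every request costs 10 ms of drive latency

def read_block(drive: str, block: int) -> str:
    """Simulate one request to one drive: wait out the latency, return data."""
    time.sleep(DELAY)
    return f"{drive}:{block}"

requests = [("A", b) for b in range(4)] + [("B", b) for b in range(4)]

# Serial: the CPU waits for each response before issuing the next request,
# so the eight latencies simply add up (~80 ms total).
t0 = time.perf_counter()
serial = [read_block(d, b) for d, b in requests]
serial_time = time.perf_counter() - t0

# Queued: all eight requests are in flight at once; while drive A is
# "seeking", drive B (and the rest of A's own queue) is serviced too,
# so the latencies overlap instead of adding up.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    queued = list(pool.map(lambda r: read_block(*r), requests))
queued_time = time.perf_counter() - t0

assert serial == queued           # same data either way
assert queued_time < serial_time  # but far less total waiting
```

    This is the same reason a deep command queue (NCQ) helps even a single drive: the individual request latencies overlap instead of adding up.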
     
  6. Stetrain macrumors 68040

    Joined:
    Feb 6, 2009
    #6
    They're talking about combining multiple chained drives into a single striped RAID setup.

    In that case you have two SSDs in a chain, with half of your 1GB file on each.

    Then when you read that file you can read both halves from the two SSDs simultaneously.
     