RAID Throughput Question - Card vs. Internal

Discussion in 'Mac Pro' started by jethrodesign, Aug 6, 2009.

  1. jethrodesign macrumors member

    Joined:
    Aug 6, 2009
    #1
    Hi, I'm trying to get clarity on how to best setup a RAID for our 'new' server. Our needs aren't crazy, we aren't doing any streaming video - mostly file serving. And we don't even need that much total space - we've lived fine for a couple years with only 360GB total. Data security is our main priority.

    Our current server is starting to crash a lot, so it needs to go. It is an old G4, with a Highpoint 1820a RAID card connected to 3 internal SATA drives on a RAID 5. This has been pretty solid for the last few years.

    Our 'new' server will be a G5 1.6Ghz single processor tower (2003). It has 2 internal drive bays, and 3 regular old PCI slots (not PCI-X or PCIe).

    So I'm trying to choose between a few simple setups, wondering about overall throughput mostly:

    Option A) Throw a couple 'enterprise' SATA drives in internal bays and use software RAID (Apple or other) to make RAID 1.
    - If I do this, I'm curious if I should partition the RAID to have the boot drive be on this as well? Any dangers of not being able to boot or fix RAID if there are RAID problems?

    Option B) Throw our old Highpoint RAID card on a PCI slot and use that to connect 2-4 internal SATA drives (using kit to mount extra drives if needed).
    - If we do this, we have a few choices - two RAID 1's (one for boot & one for files); a single RAID 5 (with or without a separate drive for boot); a RAID 10 (probably partitioned for boot & files).

    So there's a lot of different ways to go here, I'm just wanting to see what would be at least as good as what we've had. I mostly don't know how much of a factor the older standard PCI slot plays into overall speed.

    THANKS!
     
  2. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #2
    Get a Sonnet G5 Jive, which brings its total drive count to 5, so that should solve the physical mounting issue for you.

    You could go either way. Of the two array types you mention (5 or 10), 10 would be doable by OS X, and is more redundant than 5, which would require a hardware controller.

    If you go for the software method, you'd use 4 drives for files, and can stuff the OS on a separate drive. It's easier to solve RAID issues this way, as you won't have to reinstall the OS if the array dies outright (rather than just operating in degraded mode).

    You also have the option of doing a 4-disk type 5 set with a separate disk for the OS, or increasing it to all 5 disks and placing the OS on the array (assuming that card will boot OS X; from what I can see on Highpoint's site, it looks like driver support only).

    In either case, you need a backup, and I'll presume you have one, even though it's not mentioned.

    I'd go with 10 if you're after high availability (uptime), and 5 if you need greater throughput than 10 for a 4 disk set.

    Hope this helps. :)
     
  3. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #3
    oO

    You're kidding? :confused:


    For the setup you have and are thinking about, the PCI bus plays very little role, or perhaps none at all - reasonably speaking. It's the drives that will make the biggest difference when we're talking about 4 drives or fewer. And even though you say you don't need a lot of storage space, it's the larger drives that will be the fastest - for several reasons. I also seriously doubt you'll notice any difference at all between RAID cards under about $500. They're all so close to the same that I doubt the throughput differences are measurable at all. If you suddenly come to your senses :)D) and go for an Intel-based Mac like the 2006 for about $1k, the 2008 for about $2k, or the 2009 for about $3k, then there's also no difference in throughput between the embedded RAID controller that comes with the machine(s) and just about any card under $500 or so. Again, the differences won't be measurable.

    It's really all about the drives when talking small implementations. Every other aspect has overhead to spare. The difference in drive size (platter density) can be dramatic in terms of seek speed (latency), read/write speeds, and throughput. For example, a 3-drive RAID5 composed of 300GB disks probably averages 60 MB/s (as a file server), and when there is 500GB of files stored on it, the average access latency for the day's work will be somewhere between 10 and 20 milliseconds. 300GB drives usually have 4MB or 8MB caches, so throughput is not significantly increased.

    In comparison, a 3-drive RAID5 composed of 1.5TB drives will probably average 150 MB/s to 200 MB/s (for file serving), and when there is 500GB stored on that RAID, access latencies in a multi-user environment will average around 2 to 5 milliseconds over the course of an average work day. This is because 500GB on those drives occupies less than 1/4 the physical platter space that the 300GB RAID uses, and the data additionally occupies only the outer sectors, which pass under the heads faster than the inner sectors the 300GB drives need for the same data. 1.5TB drives typically come with 32MB caches. There are three drives, so that's 96MB of cache space. This is enough to significantly improve throughput, up to about 600 MB/s (for data sizes of up to 100MB or so), and reduce latency even further than I estimated above. During cache I/O we're talking latencies in the 0.x and 0.0x millisecond range. :D This can happen for the 300GB drives too, but when someone accesses a couple of 4 or 8MB files, that's all blown away, so cache hits - and thus speed - are reduced dramatically. This is one reason any good RAID controller card worth its salt (>$500) comes with a LARGE cache, sometimes user-upgradable.
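    This outer-track effect can be sanity-checked with a toy model (the bit-density figure below is an assumption picked to land near real-world speeds, not a drive spec): at a fixed RPM and roughly constant linear bit density, a track's throughput scales with its radius.

```python
# Toy model of platter geometry: at fixed RPM and roughly constant
# linear bit density, sustained throughput scales with track radius.
# BITS_PER_MM is an assumed figure, not from any drive's datasheet.

RPM = 7200
BITS_PER_MM = 2.4e4

def track_mb_per_s(radius_mm):
    """MB/s streamed off a single track at the given radius."""
    track_bits = 2 * 3.14159 * radius_mm * BITS_PER_MM
    revs_per_s = RPM / 60
    return track_bits * revs_per_s / 8 / 1e6   # bits -> megabytes

outer = track_mb_per_s(45)   # near the outer edge of a 3.5" platter
inner = track_mb_per_s(20)   # inner tracks
print(f"outer ~{outer:.0f} MB/s, inner ~{inner:.0f} MB/s")
```

    Same idea as short-stroking: keep the data on the outer quarter of a big drive and you stay on the fast tracks.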

    Anyway, all that just to say that you will find the answers to most of your concerns in the drives themselves and not in the controller. The equation changes a bit when you start looking at SAS drives or RAID configurations composed of 5 drives or more.
     
  4. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #4
    Thanks for the replies! This helps a bit.

    We had looked into getting a new Mac Pro, but for what we use it for (mostly file serving, some VPN, and FTP), it just seems like it would be so overkill. We haven't had processing speed concerns with our ancient G4 ;) And we have this G5 just sitting here not being used. So the price difference is over $2k.

    And the disk size issue does make sense, thanks. We've actually been using 200GB drives in our current 3-drive RAID 5, but they're mostly full. So I suppose with anything larger we should notice a bit better performance (not to mention the G5 system architecture). The thing about having huge amounts of space available is that it can make backing up more complex (or expensive), and we can get lulled into a false sense of security thinking we have so much space, instead of being diligent about archiving.

    We do have a backup system using Retrospect to 2 rotating external FW drive sets. So this should be covered.

    So it seems, from what you're saying, that with these types of setups (and using SATA drives, not SAS), the PCI bus's throughput limit will still be greater than the performance we might get from the RAID attached to the card (whether RAID 5 or 10). If this is true, then that at least takes one thing out of the equation.

    - So if we wanted to use 4 internal drives in a RAID 10 (no card), does 10.4 Server support this natively? And are there 4 SATA ports on this older G5?

    - And if we go with the HP card, we can choose either RAID 5 or 10. But you're suggesting that we keep the boot drive separate (which we've always done). Would we just need to keep a clone of this boot drive in case it crashes (using SuperDuper), or should we just setup two RAID 1s? I'm concerned about losing all the setup done on the server.

    - What would be someone's guess as to performance differences between the internal software RAID 10 using 4 drives vs. a hardware RAID 10 using the HP card? Negligible? What about vs. a hardware RAID 5 using the same 4 drives?

    Thanks a bunch. I think I've almost got it figured out.
     
  5. gugucom macrumors 68020

    gugucom

    Joined:
    May 21, 2009
    Location:
    Munich, Germany
    #5
    The G5 Power Mac had only 2 SATA ports; optical drives were ATA on Macs until 2009. So for a 4-disk RAID you will have to purchase a card suitable for your bus. G5s had PCI, PCI-X, or PCIe depending on the year of manufacture. You can get RAID cards for all of those buses, but you must know which one you need.
     
  6. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #6
    OK, thanks. So we'll most likely use our existing Highpoint RocketRAID 1820A for the ports. It has 8 internal SATA ports and works on PCI or PCI-X. And we'll probably get the G5 Jive to mount a couple of extra drives in the computer.

    So let's say we have 4 SATA drives internally (maybe WD RE3 drives) on our HP RAID card. The next thing is to determine the best RAID setup.

    What would the best RAID setup be, then, for mostly file serving (no video streaming) to 4-6 users across Gb Ethernet? Our priorities are: 1) security/reliability/redundancy; 2) performance; 3) capacity.

    A) Two RAID 1s (one for boot; one for storage)
    B) RAID 5 (probably using 3 drives, then stand-alone for boot)
    C) RAID 10 (with either partition for boot drive, or small 5th drive for boot)

    Would we notice any real-world differences? Would it mostly be when opening/saving/closing files?

    THANKS! We won't have much time to experiment around, we'll have to set this up and transfer everything over as quickly as possible, so we just want to do it once. Can't afford down-time.
     
  7. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #7
    Given this description, go with 10, and use a separate boot disk.
     
  8. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #8
    Well, $1k, for the 2006 model.



    I hear ya. Just tell everyone to back up anyway, and remember that your backup drive(s) only need to be the size of the data, not the size of the media. Also keep in mind that any single drives or RAID sets used for daily work should ideally always stay at least about 50% empty. Past about the 50%-full mark, wear and tear on the drive increases and performance decreases.


    Yup, true.



    Right. And this, along with software compatibility, is why I'm shocked by anyone wanting a G5 for use in a secure area to house important business data. UB builds are starting to fade from the scene at this point, and the faster, better Snow Leopard is Intel-only.



    Either way is fine; it doesn't matter when examining performance, especially if the machine is solely used as a file server and not as a user box. If it were me, I would just put everything on the RAID - OS X and all. Without active users, OS X is like 40 or 50 gigs, and 10.6 is like 15 gigs, but again you'll need an Intel Mac for that. If your daily backups are incremental in nature (meaning they only back up new and changed files), then after the 1st time it won't add much time to the backup process. Keep in mind that you (should) still have to back up the boot drive too in the case where the boot is a separate drive.



    Not measurable. No difference. Assuming of course we're talking cheap (<$500) RAID controllers. In Apple's RAID Card's case I'm told the card is actually slower. :eek:



    4-drive RAID0 = fastest,
    4-drive RAID5 = next fastest (about the same as a 3-drive RAID0)
    4-drive RAID10 = slowest (about the same as a 2-drive RAID0 but a tad faster at a few things.)

    EDIT:
    Both RAID10 and RAID5 are very safe - scheduled (automatic) weekly backups should be fine. With RAID0 given your description so far, you would need to backup daily without missing a beat - again easily done automatically if needs be.
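    The ranking comes from a simple streaming model (idealized; real controllers and Fake RAID fall short of it, and the per-drive rate below is an assumed figure, not a spec):

```python
# Idealized sequential throughput per RAID level, ignoring controller
# overhead. SINGLE_DRIVE is an assumed per-drive rate, not a spec.

SINGLE_DRIVE = 85  # MB/s (assumption)

def raid0(n):  return n * SINGLE_DRIVE         # stripes across all n drives
def raid5(n):  return (n - 1) * SINGLE_DRIVE   # one drive's worth goes to parity
def raid10(n): return (n // 2) * SINGLE_DRIVE  # each mirror pair writes as one

for name, rate in [("RAID0", raid0(4)), ("RAID5", raid5(4)), ("RAID10", raid10(4))]:
    print(f"4-drive {name}: ~{rate} MB/s")
```

    Which is why a 4-drive RAID5 paces a 3-drive RAID0, and a 4-drive RAID10 paces a 2-drive RAID0.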
     
  9. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #9
    OK, thanks for all the advice. I think we're leaning toward using our existing Highpoint RocketRaid 1820a in this computer to at the very least give us the extra SATA ports we'll need (it has 8). I'm thinking we'll get an adapter to port a couple of the internal SATA ports to become eSATA ports for our backup drives as well.

    So I've also been leaning toward using a 4-drive RAID 10 for our storage, using WD RE3 500GB drives. Then maybe we'll use a small 5th HD for our OS and clone it every now & then. It just seems that this would be a simpler RAID setup (vs. the RAID 5 parity), and we don't need huge amounts of storage.

    - But I've been wondering, if we're just accessing this server to mostly just open/save/close files, using gigabit ethernet from 4-5 client machines, would we really notice a difference between RAID 1 & RAID 10? Would it just make really large files a bit quicker to open/save/close?

    - And now I would feel pretty safe with our data from a hard-drive failure standpoint, but what about the dangers of the RAID card failing? If this goes, do you lose access to all data until you can replace/repair the card? Would a RAID 1/RAID 10 be safer in this instance than a RAID 5?

    THANKS! I'm pretty close to being able to make an informed purchase, I think our needs may be just a bit different than most threads I read where folks are mostly concerned about the absolute maximum performance or size.
     
  10. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #10
    Assuming the 1G Ethernet could actually run flat out, you'd only be able to hit a max of 125MB/s. With the number of drives you're planning to use, you'd be able to saturate the Ethernet connection.
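    That 125MB/s is just the line rate divided by 8 bits per byte; a real TCP transfer lands a bit lower (the overhead fraction below is a rough assumption):

```python
# Gigabit Ethernet ceiling: 1 Gb/s line rate divided by 8 bits per byte.
line_rate_bps = 1_000_000_000
raw_mb_s = line_rate_bps / 8 / 1e6
print(f"raw ceiling: {raw_mb_s:.0f} MB/s")

# Assume roughly 10% lost to Ethernet/IP/TCP framing and protocol chatter.
overhead = 0.10
print(f"after overhead: ~{raw_mb_s * (1 - overhead):.0f} MB/s")
```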

    RAID 10 is faster, but it also gives you a little additional fault tolerance, so let that issue determine your path, since the performance differences may not be noticed.

    If the card dies, you will lose access to the data until a replacement can be obtained.

    In terms of fault tolerance, both RAID 1 and RAID 10 (fault tolerance is the same for both of these) are better suited than RAID 5.
     
  11. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #11
    OK, great. Thanks for the response.

    So we'll determine if the extra 2 drives in a RAID 10 are worth the advantages that we may or may not see (I understand the additional fault tolerance to an extent, but we would also have 4 drives that could go bad instead of only 2).

    I'm probably a bit more concerned about being locked into a proprietary RAID card in case it goes bad.

    - If we're just running a RAID 1 or RAID 10, would we be better off just using Apple's software RAID or SoftRAID instead of the Highpoint utility? Is this even possible? Would this be better protection against failure?

    - I've heard that the lower-end RAID cards are a lot of the time just software RAID anyway. Anyone know about this card?

    Thanks. I was hoping that with a RAID 1 (or equivalent) we'd be able to access either individual drive independent of the RAID card if necessary.
     
  12. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #12
    Actually, the additional drives in this case (array type) give you improved odds. That's the whole point of fault tolerance.

    You're better off with a RAID card than with software-based solutions (proper cards, not Fake RAID controllers). The reason is you get more features (things you can't do with software), plus a fundamental difference that may be of importance: you can transfer the array (& card) from one system to the next, provided it's capable of accepting it. It also gives you the potential of using it with multiple OSes. So the details matter, and they are card-specific.

     
  13. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #13
    Profile your system via the activity log or in a monitoring app. Then you'll know. Guessing is not a good idea. ;)


    No.

    And keep in mind a RAID1 is exactly 2 drives, no more and no less. RAID5 is better than RAID10 for cost, performance, and maybe safety too. If it's not better as far as security goes, then it's very, very close. The downside of RAID5 vs. RAID10 is that rebuilding takes longer, and you have degraded performance after a drive dies and during the rebuild. But the chances of a drive failing are pretty fragging nil.
     
  14. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #14
    The fault tolerance of RAID 1 is actually better than RAID 5, but in the end the result is the same: either can handle the failure of 1 drive, no more. But that's a result of the fact that a RAID 1 is comprised of a pair of drives.

    It's when it's applied to a nested set that the differences show up. It takes 4 drives minimum to create a type 10 array, and you can lose 2 drives and still operate. Unfortunately, it's a little tricky, as it does NOT mean any 2 drives. I think of it as a pair of halves (first half = 1, second half = 0). It can take the loss of 1 drive per half, and the data is still intact. But if both drives in a single half die, you're screwed. Restoration from backup media is the only way to get it back.
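    This "pair of halves" rule is easy to check by brute force: enumerate every two-drive failure in a 4-drive type 10 set (the drive numbering below is just for illustration; 0/1 mirror each other, as do 2/3):

```python
from itertools import combinations

# 4-drive RAID 10: mirror pairs (0,1) and (2,3), striped together.
MIRRORS = [(0, 1), (2, 3)]

def survives(failed):
    """The array lives iff every mirror pair keeps at least one drive."""
    return all(any(d not in failed for d in pair) for pair in MIRRORS)

outcomes = {pair: survives(set(pair)) for pair in combinations(range(4), 2)}
ok = sum(outcomes.values())
print(f"{ok} of {len(outcomes)} possible two-drive failures survive")
```

    Four of the six two-drive combinations survive; the two that kill the array are exactly the ones that take out a whole mirror pair.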

    So we're both right, but it wasn't clear as to why. ;) :p
     
  15. flatfoot macrumors 65816

    Joined:
    Aug 11, 2009
    #15
    Just about mounting your additional drives within the PowerMac G5:

    When I had my G5, instead of buying a G5 Jive, I just took a 2-drive cage from an old PC enclosure and mounted it beside the graphics card using a few adhesive pads (see picture below; I don't know if it fits alongside your (longer?) SATA card, though - for mine it was OK). If you add the G5 Jive (it sits in front of the fans, lower left in the picture), you have a total of 7 HD bays!

    For an 8th bay, take out the optical drive, which can be attached via FireWire or USB if needed. You'd have to sneak the SATA cable up there and affix the HD somehow.

    Hm, but then, two of your SATA-ports are still vacant... Those you could sneak out using a 2-port eSATA PCI-bracket and attach two more drives using eSATA enclosures.

    And there you go: 10 SATA HDs in/on one PowerMac G5! ;)
     


  16. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #16
    OK, just to clarify a question I have about RAID 1:
    - As a RAID 1 is simply a mirror of one drive to another, and as in our case it will just be basic file storage, could one of the drives be connected straight to a SATA port (no RAID controller) to retrieve the data if necessary?

    I know in a RAID 5 or RAID 0 this wouldn't be possible, because the data is spread across multiple drives, but I was under the impression that a RAID 1 is kind of like automated, immediate cloning.

    If this is the case, we may stick with RAID 1 so that we aren't stuck if the RAID controller dies, or software crashes. This has always concerned me about our current RAID 5 setup, and I've also heard of people losing data when rebuilding a RAID 5 with certain cards. Just seems more complex than we need. Drives are huge and cheap these days, so costs aren't the concern.

    And thanks for the ideas about mounting. I think our RAID card would be too long to fit drives in front. My biggest concern about the G5 Jive (or any other solution to add more drives) is the extra heat it will generate. I wouldn't want premature failure due to heat.

    Maybe I'll run a test on our current RAID setup to see what kind of throughput we're currently getting. Then, if I have time, I'll try to quickly setup a RAID 1 in the new computer and see how it compares. What's the best utility for checking HD performance?
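    In the meantime, a crude sequential-throughput check can be scripted (a rough sketch; the 256MB size and /tmp path are arbitrary choices, and OS caching will inflate the read figure unless the test file is much larger than RAM):

```python
import os
import time

PATH = "/tmp/throughput_test.bin"   # point this at the volume under test
SIZE_MB = 256                       # arbitrary test size
CHUNK = b"\0" * (1024 * 1024)       # write in 1MB chunks

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # force data to disk before stopping the clock
write_mb_s = SIZE_MB / (time.time() - start)

start = time.time()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_mb_s = SIZE_MB / (time.time() - start)

os.remove(PATH)
print(f"write ~{write_mb_s:.0f} MB/s, read ~{read_mb_s:.0f} MB/s")
```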
     
  17. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #17
    It depends. If it was set up as software RAID, then yes (it's already attached to the board). If on a card, I don't know for sure (RAID cards usually operate a little differently than software-based RAID). I don't think moving a drive from a card to the board is likely to work, but I've never actually tried taking a RAID 1 built on a card and placing a member on a native/system-controlled SATA port (logic board) to verify what would happen.

    Sort of, but the way the drive is operated by a card is definitely different (it doesn't get its own drive letter, so both drives are duplicated - low-level stuff). That's why I don't think the card-to-logic-board transfer would work. It's different between card-based and software-based operation.

    I wouldn't worry about a RAID card failure. The only time I've seen one go was when it was a casualty of a blown PSU. They tend to outlast the rest of the computer, and most certainly the drives. :D

    Checking for the heat and throughput is a good idea, as it can at least give you a base line for comparison purposes.

    I've no idea of current temps in that machine, so any data you can post would help.

    Good luck, and keep me posted if you don't mind. :)
     
  18. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #18
    Great, thanks.

    Well I ran an Xbench test on our current server to see what we're used to. I posted it on another thread about results, but I'll put it here too as I'm curious to see what you think about what we're currently getting for performance on this old 3-drive RAID 5.

    This might help in determining how much to expect from a different setup. I think as long as it's as good or better than what we now experience, we'll be happy.

    Code:
    Results	27.98	
    	System Info		
    		Xbench Version		1.3
    		System Version		10.4.11 (8S169)
    		Physical RAM		1024 MB
    		Model		PowerMac3,1
    		Processor		PowerPC G4 @ 0 MHz
    			Version		7455 (Apollo) v3.2
    			L1 Cache		32K (instruction), 32K (data)
    			L2 Cache		256K
    			L3 Cache		2048K
    			Bus Frequency		100 MHz
    		Video Card		ATY,RV250
    		Drive Type		RR182x RAID 5 Array
    	Disk Test	27.98	
    		Sequential	84.41	
    			Uncached Write	70.04	43.00 MB/sec [4K blocks]
    			Uncached Write	83.94	47.49 MB/sec [256K blocks]
    			Uncached Read	59.16	17.31 MB/sec [4K blocks]
    			Uncached Read	232.99	117.10 MB/sec [256K blocks]
    		Random	16.77	
    			Uncached Write	5.06	0.54 MB/sec [4K blocks]
    			Uncached Write	43.48	13.92 MB/sec [256K blocks]
    			Uncached Read	95.89	0.68 MB/sec [4K blocks]
    			Uncached Read	134.97	25.04 MB/sec [256K blocks]
    
    And maybe I'm also a bit too paranoid about the RAID card failing. It's partially because this card is discontinued and would be difficult to replace. But the idea of software RAID allowing direct access to the drives is appealing. I'm wondering if we just use the HP RAID card for the additional SATA ports, but use either Apple software RAID or SoftRAID to configure the RAID, whether we might be safer (RAID not tied to the card's BIOS). I've heard that on these low-end cards it's really just software RAID anyway.

    Maybe I'll run a quick test, unless there's already one posted somewhere, showing the performance of a software RAID 1 using similar drives to what we're trying (WD RE3). I'd be curious to see how that compares to the numbers we're getting above.
     
  19. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #19
    Keep in mind, the software RAID method does mean the array must stay connected to a system running the same OS (OS-based software RAID can only read what it creates, nothing created by a different OS).

    Hopefully, you'd get an improvement in speed, as you're using faster drives. But if it's attached to the RR1820A, it's only PCI-X, which is 64bit operation @133MB/s max throughput. Fast enough for NAS, but not speedy by any means. It's also just Fake RAID, so there's no separate controller or memory to help. Given this fact, it may be another reason to go with a type 10 array, as there's no parity calculations imposed on the CPU (less load to handle). Software RAID 5's run slower than their true hardware counterparts.

    BTW, is the NAS a single 1G port, or are you teaming a pair?

    If you've got a few RE3's already, test it out. But it should be similar to the software RAID installs out there. Keep in mind, though, that you have to be aware of the SATA port speed (1.5Gb/s or 3.0Gb/s) of the system. The 1.5Gb/s spec will produce slower results with that drive.

    The RR1820A claims it handles SATA II (3.0Gb/s), but that could be misleading, as the drives should step down to 1.5Gb/s when connected to 1.5Gb/s ports. As that's an older model, this is more of what I'd expect.
     
  20. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #20
    Would using SoftRAID have the same issue? It seems to me that if it was supported by the OS, it would transfer OK. But I'm not sure we'll even upgrade this server to 10.5, and it can't run 10.6, so it may not be an issue. And with SoftRAID or Apple software RAID, would we be able to transfer the RAID to an external solution if we chose to at some point (with the appropriate eSATA ports, which we're buying an adapter for)?

    Actually, the 1.6Ghz G5 is only PCI, not PCI-X. But someone earlier said this is still enough bandwidth to support fast RAIDs.

    Well, we're not running an actual NAS per se. I was just referring to the typical use of our server (unless that's what you're meaning). This server isn't doing anything processor intensive, so I'm not too concerned about processor overhead.

    I went ahead and ordered the G5 Jive, and the adapter to port 2 internal SATA ports to eSATA. So I'm just waiting to figure out which drives to purchase. Probably either 2 750GB or 1TB drives for a RAID 1, or 4 500GB drives for a RAID 10. The larger drives have 32MB cache vs. 16MB for the 500 - would this make any real difference?
     
  21. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #21
    Yes. The software/drivers are proprietary, that's why they won't work with one another.

    That's the advantage of having a proper RAID card. You can transfer it and the drives from one system to another, assuming the new system has a compatible slot. If you have to get a new card, you can get the same brand, as the drives usually work (the same manufacturer tends to use the same methodologies). Otherwise, you recreate the data from backups and get whatever card best fits your needs.

    One of the many uses for the backup. ;)

    I took a quick glance at the specs for that card, and hadn't realized it was stuck in a PCI slot. Unfortunately, that slot only runs at 33MHz, and is slow. So slow, in fact, that it's going to be a bottleneck with the new drives (no way they'll run at full speed).

    You really need a PCI-X slot for that card (which obviously doesn't exist), or a different system, preferably with PCI-e in it.
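    The slot's ceiling falls straight out of the bus math: 32 bits per transfer at 33MHz, shared by every card on the bus.

```python
# Peak bandwidth of a plain PCI slot: clock times bus width, in bytes.
clock_hz = 33_333_333   # 33MHz PCI clock
width_bits = 32         # standard PCI (not PCI-X) slot
peak_mb_s = clock_hz * width_bits / 8 / 1e6
print(f"32-bit/33MHz PCI peak: ~{peak_mb_s:.0f} MB/s (shared, before overhead)")
```

    So ~133MB/s theoretical for the whole bus, and noticeably less in practice once arbitration and any other cards take their cut.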

    Either way works, really (no need to nitpick the details), as the data is sent over the Ethernet, and it can potentially be teamed (assuming the NICs support it, i.e. a 3rd-party set of cards).

    Cache does help, and platter density is even more important. Unfortunately, the use of this specific card is an issue. No way the RE3's will top out on that card, given the limited bandwidth it uses to connect with the system (I hadn't realized it was stuck in a PCI slot, not PCI-X, previously). :eek:
     
  22. jethrodesign thread starter macrumors member

    Joined:
    Aug 6, 2009
    #22
    OK, thanks for all the replies!

    It appears we have some bottlenecks with this setup, although I'm sure the bottlenecks we currently have might be smaller ;)

    A) The PCI bus (33MHz) may be an issue, although Tesselator didn't seem to think so.

    B) The SATA I support of the motherboard (which I'm assuming throttles down the SATA II support of the RAID card)

    C) Most access coming through ethernet, not directly working on machine (though again, I've heard the bandwidth may be plenty here as well).

    D) Not a 'true' RAID card (i.e. no cache or processor).

    So this being the case, it kind of goes back to my initial question and decision. Whether we should just go with 2 large drives in a RAID 1, or try for something more complex/more speedy. If it won't really make much of a difference, due to our limitations, it won't be worth the extra effort & costs. I just haven't bought the drives yet to test, as I'm waiting to determine if we should get 2 bigger drives, or 4 smaller drives.

    - Can anyone tell from the Xbench scores I posted if we could expect anything better with any of the solutions we've thought of? The test was done on a setup with probably the same or greater bottlenecks (same RAID card, PCI slot, gigabit ethernet card, but older drives & much older/slower system).
     
  23. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #23
    A. It seems to me that it is, if you figure in the drive count and the fact that it's Fake RAID. Take a look at this (8-drive RAID 5 on a G4).
    A 128K stripe size only produced ~115MB/s (sequential reads), for 8 drives (74GB Raptors, BTW). That's not wonderful by any stretch of the imagination. :(

    Newer drives are faster, but I'm not sure they're 2x as fast once you figure in the calculations (why you may be better off with 10 than 5). RE3's would get you to ~85MB/s each (single-drive performance, depending on specific capacity; the 1TBs would be quickest, while the 500GB would hover in the 85MB/s range). Two of those in a stripe would be able to run at ~170MB/s (under hardware conditions that can handle it), and similar for RAID 5 or 10 on a hardware controller (5 usually runs slower on software RAID due to the parity calculations, which don't exist in 0/1/10). But this card can't seem to get the full potential out of the drives (calculation overhead). As I mentioned, this may be less of an issue with 10.

    If you've got a pair of RE3's (or newer consumer models), it would be a good idea to make a few experimental runs in the G5. Though the bus is the same, the processors should be able to perform the calculations quicker, but I can't say how much of an impact that will have (not that familiar with PPC Macs).

    B. The step up/down won't actually matter, as that's handled by the RR1820A. The OS only sees data delivered via the PCI bus.

    C. If you've been fine with the Ethernet connection as is, it's not going to be an issue. I was just wondering if it was a single connection, or was teamed (paralleled NIC's) to get a higher bandwidth. I was just curious to the details, as that would help me deduce the needed throughput. So at this point, I'm figuring ~100MB/s or so. The RR1820A is capable of it, but it may require more than 4 drives.

    D. Correct. :) Fake RAID is nothing more than a controller board and perhaps some firmware to allow it to be booted from. The RR1820A is such a card. No processor, no cache. It does have the firmware that allows BIOS boot support (no OS X booting), and the system resources are used in conjunction with the drivers (which provide the instructions on how to implement the RAID type you set it for). A very basic card, to say the least, but they can get the job done on very limited budgets. The compromise, of course, is speed and features.
     
  24. patpro macrumors newbie

    Joined:
    Aug 5, 2009
    #24
    I seriously doubt that. Do you have some benchmarks?

    Nope, RAID 5 is not "very safe". In fact it can be a nightmare. And it's always less safe than a RAID 10.
     
  25. patpro macrumors newbie

    Joined:
    Aug 5, 2009
    #25
    RAID 1 is as many drives as you want, starting at 2.

    On performance I'm not sure, but on safety I'm sure it's not.
    On RAID 5, you can lose only one HD, no matter how many you have, before losing all your data.
    Say you have a 4-HD RAID 5: you can lose only one HD.
    On RAID 10, you can lose every HD but one per RAID 1 array before losing all your data.
    Say you have a 4-HD RAID 10: you can lose one HD in each RAID 1 array -> you have 2 RAID 1 arrays, so you can lose up to 2 HDs.

    With 12 HDs, you can still lose only 1 HD in RAID 5. In a RAID 10 made of 4 striped arrays of 3 mirrored HDs, you can lose up to 8 HDs before losing your data.
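    That 12-HD arithmetic can be checked in a few lines (the 4 x 3 grouping and drive numbering below are just for illustration):

```python
# Twelve drives arranged as four striped groups of 3-way mirrors.
GROUPS = [range(0, 3), range(3, 6), range(6, 9), range(9, 12)]

def survives(failed):
    """Data survives iff every mirror group keeps at least one live drive."""
    return all(any(d not in failed for d in g) for g in GROUPS)

# Best case: lose 2 of 3 drives in every group (8 dead) and keep running.
best_case = {0, 1, 3, 4, 6, 7, 9, 10}
print("8 failures, one survivor per group:", survives(best_case))

# Worst case: any 3 failures that wipe out a single group kill the array.
print("3 failures in one group:", survives({0, 1, 2}))
```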
     
