I seriously doubt that. Do you have some benchmarks?

Yes I do! And this site is just chock-full of them. Mine and others. Have a search.



Nope, RAID 5 is not "very safe". In fact it can be a nightmare. And it's always less safe than a RAID 10.

Wikipedia and the professional video community disagree with you, but it would not be the first time that the larger group was wrong and a knowledgeable individual was right (in fact that's somewhat the norm here at Mac Rumors). So educate us. How and why is RAID5 "not very safe"?

Thanks.

EDIT:
Or did you mean only this:
Performance, I'm not sure; safety, I'm sure it's not.
On RAID 5, you can lose only one HD, no matter how many you have, before losing all your data.
Say you have a 4-HD RAID 5: you can lose only one HD.

Sounds right.

On RAID 10, you can lose every HD but one per RAID 1 array before losing all your data.
Well, no. But you clarify below so... kewl.

Say you have a 4-HD RAID 10: you can lose one HD in each RAID 1 array -> with 2 RAID 1 arrays, you can lose up to 2 HDs.

Yeah... But saying that's safer is an odd sort of stretch IMHO. What are the odds of that happening in just that way? Like a trillion to one or something?

With 12 HDs, you can still lose only 1 HD in RAID 5. In a RAID 10 made of a stripe across 4 mirror sets of 3 HDs each, you can lose up to 8 HDs before losing your data.

No sane person or SOHO user would ever set up a RAID like that. They still need a backup anyway - please remember. So that's just absurd - unless you're a bank or something, and then it's RAID 50 or RAID 60 anyway.
 
Yes I do! And this site is just chock-full of them.

I must admit, it really depends on what kind of performance you're talking about. I'm a sysadmin, I'm "server side" (DB...), and my typical workload is 30% random reads / 70% random writes.
RAID 5 has abysmal performance when it comes to random writes. RAID 10 is far better for random writes (no parity calculation).
For sequential access, RAID 5 appears to be a bit better than RAID 10.

EDIT: you might want to check this http://www.xbitlabs.com/articles/storage/display/400gb-raid0510.html
it's a benchmark comparison between RAID 0/5/10 under various simulated workloads. On page 3 you'll find random writes (database simulation), where RAID 10 is better than RAID 5. On page 6 you'll find sequential read/write, where RAID 5 is better than RAID 10.
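To put that parity penalty in rough numbers, here's a minimal sketch under textbook assumptions (classic read-modify-write parity updates, identical drives, no controller cache hiding the cost); the 150-IOPS-per-drive figure is just an illustrative placeholder, not a measurement:

```python
# Rough small-random-write throughput for an array of identical drives.
# RAID 5's read-modify-write costs 4 disk I/Os per logical write (read old
# data, read old parity, write new data, write new parity); RAID 10 costs 2
# (one write per mirror); RAID 0 costs 1.
def random_write_iops(drive_iops, n_drives, level):
    raw = drive_iops * n_drives
    penalty = {"raid0": 1, "raid10": 2, "raid5": 4}[level]
    return raw / penalty

for level in ("raid0", "raid10", "raid5"):
    print(level, random_write_iops(drive_iops=150, n_drives=4, level=level))
# raid0 600.0, raid10 300.0, raid5 150.0 -- hence the gap on database-style loads
```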


Sounds right.

And you can add to this the infamous RAID 5 write hole (causing silent data corruption). But it's very unlikely, compared to the loss of 2 HDs.

Yeah... But saying that's safer is an odd sort of stretch IMHO. What are the odds of that happening in just that way? Like a trillion to one or something?

It's just statistics :)
4 drives in RAID 10, you lose one: if you happen to lose a second one, each remaining drive has a 1/3 chance of being that one.
So you have roughly a 1/3 chance of losing the drive-you-must-not-lose.
On RAID 5, the odds of losing the drive-you-must-not-lose are 1/1 - any second failure is fatal.
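If you want to check that 1/3 by brute force, here's a minimal sketch; it only covers the 4-drive case (two 2-drive mirror sets) and assumes the second failure hits any surviving drive with equal probability - which, as noted just below, real correlated failures don't respect:

```python
# Label the 4 drives by (mirror_set, position): two 2-drive mirror sets.
drives = [(m, d) for m in range(2) for d in range(2)]

fatal = total = 0
for first in drives:                  # the drive that already died
    for second in drives:             # candidate second failure
        if second == first:
            continue
        total += 1
        if second[0] == first[0]:     # same mirror set -> both copies gone
            fatal += 1

print(fatal / total)  # 0.333... for RAID 10; on RAID 5 any second loss is fatal (1.0)
```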

My experience tells me that when a RAID loses one drive, you can be pretty sure a second (and maybe a third) drive will die very soon. Especially on RAID 5, because rebuilding is quite an intensive process, and most of the time the drives are the same age.

No sane person or SOHO user would ever set up a RAID like that. They still need a backup anyway - please remember. So that's just absurd - unless you're a bank or something, and then it's RAID 50 or RAID 60 anyway.

That was just a mathematical demonstration ;)
Yep, no RAID can replace a backup. And I think banks are beyond RAID 50/60 now.
But a RAID 1 with more than 2 drives is still possible, and it offers very good redundancy if you can afford it.
 
Nope, RAID 5 is not "very safe". In fact it can be a nightmare. And it's always less safe than a RAID 10.
If you mean in terms of the "write hole" issue, then yes, but ONLY on software/Fake RAID implementations, as they don't have a way to fix it.

Proper hardware RAID cards include an NVRAM solution, so it's been solved. Just keep up with replacing the battery as needed.

RAID 1 is as many drives as you want, starting at 2.
RAID 1 is composed of only 2 drives.

Even if you're thinking of duplexing, it's still only 2 drives per set. It's a minimum of 2x RAID1's on separate controllers. It adds the ability to give fault tolerance to the controller as well.

Performance, I'm not sure; safety, I'm sure it's not.
On RAID 5, you can lose only one HD, no matter how many you have, before losing all your data.
Say you have a 4-HD RAID 5: you can lose only one HD.
On RAID 10, you can lose every HD but one per RAID 1 array before losing all your data.
Say you have a 4-HD RAID 10: you can lose one HD in each RAID 1 array -> with 2 RAID 1 arrays, you can lose up to 2 HDs.

With 12 HDs, you can still lose only 1 HD in RAID 5. In a RAID 10 made of a stripe across 4 mirror sets of 3 HDs each, you can lose up to 8 HDs before losing your data.
No one does RAID 10 like that. They'd skip it and choose another array type, likely a nested type (50 or 60). Or distribute the data over different arrays (separate), depending on needs.
 
That's strange, because I've just created a RAID 1 with 3 HDs on a spare XServe (and also because Wikipedia, among other online references, agrees with me).

Well, it depends on the controller. In most cases that I know of it doesn't support more than 1 mirror unless you nest, and that's just crazy for any number of reasons. Some controllers will allow you to chain the mirror X number of drives deep. If Apple allows that then that's kinda kewl in a weird techy kinda way. Weird because it's fairly pointless and nonsensical for normal users.

Also I believe the strictest definition of RAID1 does not permit this behavior and ONLY two volumes can be used (sans nesting). Could you quote the Wiki page that says otherwise? And maybe supply a link. Thanks.
 
Also I believe the strictest definition of RAID1 does not permit this behavior and ONLY two volumes can be used (sans nesting). Could you quote the Wiki page that says otherwise? And maybe supply a link. Thanks.

from http://en.wikipedia.org/wiki/RAID5#RAID_1

  • A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks.
  • Since all the data exists in two or more copies...
  • Read performance can be further improved by adding drives to the mirror.

But well, that does not help answer the original question "RAID Throughput Question - Card vs. Internal" ;)
 
Wow, those are some great in-depth conversations that you guys have all night ;)

So, given that we want to keep things a bit more 'simple' and will use a maximum of 4 drives in our RAID setup, I'm still wondering if we would notice much of a difference in real-world usage between a two-drive software RAID 1 on the internal SATA ports (w/ two 1TB RE3 drives) vs. a RAID 10 on our 'fake' RAID card using four 500GB RE3 drives?

We're not running a database either, so the server won't be doing lots of constant small read/write requests. Our needs would be more for opening/saving/closing/transferring larger design & photo files (20MB - 200MB+), as well as allowing access to this through FTP and VPN (where I'm assuming our internet connection would be the bottleneck).

I ran an Xbench test and posted the results earlier for our existing RAID 5 in our old G4. It looks like only the larger sequential reads do well (which may be more our needs), everything else looks fairly slow. We'd be using the same card, but newer HDs, newer system, and possibly newer setup (my main question).

- Would there be any better (and more technical) way for me to test real-world conditions beyond copying a very large file over the network and watching it with a stopwatch?

We just haven't ordered the drives yet, because we're trying to determine what size/quantity to get. And setting up different RAIDs & running tests may be tough with how busy we are. We ultimately need the transfer to go very quickly, so there is no down time, hence me trying to make a more 'educated' decision about what direction to go.
 
So the comparison is between:

2-drive 1 TB RAID1 (firmware), and
4-drive 0.5 TB RAID10 (hardware)

Of course the RAID10 will be faster. And noticeably so too!

For drive size you want the largest you can afford, even if you don't need that much space. 1) Larger HDDs are faster, 2) you shouldn't be filling any drive to more than about 50% of its capacity (at 98% to 99% you're actually asking for breakage), and 3) WTH? They're only $100 a pop - jeez, that's one pizza night for me. Larger drives are faster because of platter density. Some drives, like the Samsung HD154UI ones I'm using, now offer 500 GB per platter - which is the current densest. :) They're 1.5 TB each and cost me $97 each. I think they're $110 in the USA right now. The WD 1TB Black drives also have a good reputation.

Quantity is up to you of course, but I'm in love with RAID0 as a 1st choice. All RAIDs need to be backed up anyway, and incremental backups only take about 15 minutes a day after the first cloning operation. So my personal preference is three 1.5TB drives in a RAID0 and a 2TB drive as an external (eSATA) backup. My second choice would be a four-drive RAID5 with the same backup unit. And finally a RAID6 if my data was hyper-critical! I would personally never consider a RAID1 or a RAID10. Both to me are a complete waste of time and money.
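For what that "clone once, then short incrementals" routine can look like in practice, here's a minimal sketch; the volume paths are hypothetical, and rsync (which ships with OS X) re-copies only what changed, so after the initial clone each pass is short:

```python
import subprocess

SOURCE = "/Volumes/RAID/"     # hypothetical RAID volume; trailing slash = copy contents
DEST = "/Volumes/Backup/"     # hypothetical eSATA backup volume

# -a preserves permissions/timestamps, --delete keeps the backup an exact
# mirror of the source. Add your rsync build's extended-attribute flag if you
# need Finder metadata preserved.
subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
```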



PS: Pizza Night =
3 Large $25 pizza pies,
1 $10 6-pack of beer,
1 $3 bottle of coke for the kids,
3 DVD rentals for $5.

Total = $93 or one 1.5TB HDD. :D



 
So the comparison is between:

2-drive RAID1 (firmware), and
4-drive RAID10 (hardware)

Of course the RAID10 will be faster. And noticeably so too!

Well, the 2-drive RAID 1 would be using either the Disk Utility feature, or SoftRAID. And it would use the larger drives with larger cache (1TB vs. 500GB; 32MB vs. 16MB).

The 4-drive RAID 10 would use our RR 1820a, which I've heard is not a true 'hardware RAID' as most like to consider it - it has no cache or processor. And this would probably be set up using the card's own BIOS/firmware, unless it would be better/safer for us to still use SoftRAID or the OS with it.

I was just wondering if it would still be noticeable considering our 'bottlenecks'. If so, we may just go for it, especially if it helps with large file writes/reads.

- And lastly, I'm still curious to know if a RAID 1 might be 'safer' in the sense that if the RAID array itself gets messed up somehow (through card failure or software glitch, not HD failure), we'd have a better chance at recovering our data as it would exist in complete form on both HDs, instead of 'split up' by the striping element of RAID 10?
 
I revised my post in case you missed it.


Well, the 2-drive RAID 1 would be using either the Disk Utility feature, or SoftRAID. And it would use the larger drives with larger cache (1TB vs. 500GB; 32MB vs. 16MB).

Yeah, hmm, it might be a toss-up. <shrug> If we were talking about the same drives the 4-drive RAID10 would be faster. It's hard to determine with those differences. Benchmarking is needed in order to actually know I think - but I'm still going with my original choice.

The 4-drive RAID 10 would use our RR 1820a, which I've heard is not a true 'hardware RAID' as most like to consider it - it has no cache or processor. And this would probably be set up using the card's own BIOS/firmware, unless it would be better/safer for us to still use SoftRAID or the OS with it.

I was just wondering if it would still be noticeable considering our 'bottlenecks'. If so, we may just go for it, especially if it helps with large file writes/reads.

I'm not exactly sure what you mean here but I'm convinced that there's no performance difference between the embedded Apple (Intel) chip-RAID and any card you can buy for under $500.


- And lastly, I'm still curious to know if a RAID 1 might be 'safer' in the sense that if the RAID array itself gets messed up somehow (through card failure or software glitch, not HD failure), we'd have a better chance at recovering our data as it would exist in complete form on both HDs, instead of 'split up' by the striping element of RAID 10?

Sounds the same to me. But as I added to my post above, I dislike RAID1 and RAID10 both. At least I can see a reason for RAID1 on a SOHO system. I can't see a purpose for RAID10 under any circumstances. If I apply common sense, logic and simple budget book-keeping, RAID10 always comes out as a poor choice.
 
That's strange, because I've just created a RAID 1 with 3 HDs on a spare XServe (and also because Wikipedia, among other online references, agrees with me).
I was thinking you meant in terms of increasing the capacity beyond a single disk. (late night).

In the sense of 1+n, where n = number of redundant copies, I can understand what you meant.

But as Tesselator pointed out, not all controllers will let you do this. And at that point, I'm more inclined to skip it and go straight to duplexing to add the card to the redundancy aspect.

Yeah, hmm, it might be a toss-up. <shrug> If we were talking about the same drives the 4-drive RAID10 would be faster. It's hard to determine with those differences. Benchmarking is needed in order to actually know I think - but I'm still going with my original choice.
In this case, 10 would definitely be able to outrun 1, just given the drive quantity. 2 vs. 10. Both can derive their performance from 2 drives, but the 10's is a stripe. Sequential is faster. Random, I'd have to test it with the specific disks to be sure.

I'm not exactly sure what you mean here but I'm convinced that there's no performance difference between the embedded Apple (Intel) chip-RAID and any card you can buy for under $500.
Sort of. At that low a price, performance gains aren't spectacular, but it would only apply to array types the software RAID supports.

Even cheap hardware (proper, not Fake) has some important advantages. They can offer array types the software method doesn't (say 5 & even 6 compared to OS X), and in the case of parity-based arrays, offer an NVRAM solution to the write hole issue. Some other features may be available that make recovery easier.

Sounds the same to me. But as I added to my post above, I dislike RAID1 and RAID10 both. At least I can see a reason for RAID1 on a SOHO system. I can't see a purpose for RAID10 under any circumstances. If I apply common sense, logic and simple budget book-keeping, RAID10 always comes out as a poor choice.
Here's how I think of it.

OS X and other software RAID implementations don't offer RAID6, so 10 can allow you better than a single drive failure (2 drives) and still be able to recover (assuming the failed drives don't make up an entire mirror set).

Inexpensive for the level of redundancy it offers, as there's no additional hardware needed. So it does have its uses. ;) :p
 
Sounds the same to me. But as I added to my post above, I dislike RAID1 and RAID10 both. At least I can see a reason for RAID1 on a SOHO system. I can't see a purpose for RAID10 under any circumstances. If I apply common sense, logic and simple budget book-keeping, RAID10 always comes out as a poor choice.

Here's how I think of it.

OS X and other software RAID implementations don't offer RAID6, so 10 can allow you better than a single drive failure (2 drives) and still be able to recover (assuming the failed drives don't make up an entire mirror set).

Inexpensive for the level of redundancy it offers, as there's no additional hardware needed. So it does have its uses. ;) :p

I understand that logic but it seems flawed to me. Right? OK, here's the outlay:

RAID0: Drives: 4, Cost: $500, Space: 6TB, Advantages: Speed ×4, Size: full

RAID10: Drives: 4, Cost: $500, Space: 3TB, Advantages: Speed ×2, Size: −2 drives, Redundancy: 1 drive

RAID5: Drives: 3, Cost: $550, Space: 3TB, Advantages: Speed ×3, Size: −1 drive, Redundancy: 1 drive, Extra Connections: 4, Added RAID levels: 2 or 3
Where does RAID10 fit? IMO it just doesn't. It's a bastard child. ;) Heck, it's not even an official RAID level. :p
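For reference, the Space figures in that outlay follow from the usual capacity formulas; a quick sketch, using the 1.5 TB drive size mentioned earlier (the dollar figures above are taken as given and not recomputed here):

```python
def usable_tb(n_drives, size_tb, level):
    if level == "raid0":
        return n_drives * size_tb            # stripe only, no redundancy
    if level == "raid10":
        return n_drives * size_tb / 2        # two-way mirrors, then striped
    if level == "raid5":
        return (n_drives - 1) * size_tb      # one drive's worth goes to parity
    raise ValueError(level)

print(usable_tb(4, 1.5, "raid0"))    # 6.0 TB, tolerates 0 failures
print(usable_tb(4, 1.5, "raid10"))   # 3.0 TB, tolerates 1 (sometimes 2) failures
print(usable_tb(3, 1.5, "raid5"))    # 3.0 TB, tolerates exactly 1 failure
```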


Also again, to me redundancy in a SOHO environment is not an advantage at all. We still need to back up the system. And restoring takes about the same time as rebuilding the 10 or the 5. We have the backup in all cases, so we can continue working with that in an emergency time-pinch. Then there's the extra drive conundrum. To rebuild or restore any of these you need an extra identical drive just laying around doing nothing, ready to replace the broken one. Who does that? Almost no one in SOHO I would guess. Certainly not the OP here - heck, he's trying to use some ancient 500GB drives from yesteryear just to save some pizza money. :D
 
I understand that logic but it seems flawed to me. Right? OK, here's the outlay:

RAID0: Drives: 4, Cost: $500, Space: 6TB, Advantages: Speed ×4

RAID10: Drives: 4, Cost: $500, Space: 3TB, Advantages: Speed ×2, Redundancy: 1 drive

RAID5: Drives: 3, Cost: $550, Space: 3TB, Advantages: Speed ×3, Redundancy: 1 drive, Extra Connections: 4, Added RAID levels: 2 or 3

Where does RAID10 fit? IMO it just doesn't. It's a bastard child. ;) Heck, it's not even an official RAID level. :p
I understand where you're coming from. ;)

My reasoning was based on Redundancy = Primary Characteristic though, not speed, capacity, or future expansion. :)

It has its uses, but they're for special cases, such as no RAID card and small capacity requirements (whatever can be achieved via the board's ports). Then there's the cost savings (card) by being able to use a software implementation and still achieve the redundancy characteristics of RAID1.

I think of it this way: RAID1 + additional capacity, and no card needed. So best-case redundancy on a limited budget. ;)

As I love my speed, I usually skip that though, and would look at a minimum of RAID6 for such a redundancy requirement. :D Of course that means a decent hardware controller, which I usually try to use anyway (I've fallen in love with the ability to rebuild w/ PT backups on the card). :p
 
It's no cost savings though. A RAID card can be had for about the price of one drive. Remove one from the RAID10 and buy the card. Voilà!

So that kinda blows away your entire position. :D
 
It's no cost savings though. A RAID card can be had for about the price of one drive. Remove one from the RAID10 and buy the card. Voilà!

So that kinda blows away your entire position. :D
How so?

Spend the $500 on the card, now what are you going to do for drives?

Unless you've some lying around you can retask for RAID duty. ;) Though I was thinking of a first-time RAID installation, where there's typically only a single drive available. :)
 
Thanks again for the replies!

Also again, to me redundancy in a SOHO environment is not an advantage at all. We still need to back up the system. And restoring takes about the same time as rebuilding the 10 or the 5.

Our need for redundancy does not negate our need for backup (or vice-versa). We back up to 2 different backup sets each week (one on Tuesdays, the other on Thursdays). This is after hours so it doesn't disrupt anything and gets the cleanest backup (no files are open and being worked on). Redundancy (RAID 1, 5 or 10) just gives us an instantaneous 'backup' so we don't lose a day or two of work (which could be priceless, depending on the situation).

Certainly not the OP here - heck he's trying to use some ancient 500GB drives from yesteryear just to save some pizza money.

I WISH I could do pizza (not so good on my stomach) :( We're looking into brand new Western Digital RE3 HDs. I was under the impression these were some of the best (for SATA RAID) without going to SAS drives (which are probably beyond our needs right now). But I'm definitely open to other suggestions. Again, reliability is #1, performance #2.

So the pricing I was getting was:
- About $320 for two 1TB drives if we wanted to just do a RAID 1.
- About $360 for four 500GB drives to do a RAID 5 or 10.

So the difference is negligible, and the space is the same (unless we go for RAID 5). But either way, it's way more space than we need. Our current total space used is about 340GB. And this is after over 3 years on our current 400GB RAID 5 setup. We're good at archiving :D

So that's why we were considering a RAID 10 against a basic RAID 1: same price but faster speed (much faster 'in theory' correct?).

And concerning RAID 10 against a RAID 5: slightly higher price for quantity, but with a slightly lower chance of errors or data lost on rebuild, slightly less overall complexity of array, and faster write times (much faster I've heard without a 'true' hardware RAID card with cache memory).

And our Highpoint card supports RAID 0, 1, 5, 10 natively.
 
Our need for redundancy does not negate our need for backup (or vice-versa). We back up to 2 different backup sets each week (one on Tuesdays, the other on Thursdays). This is after hours so it doesn't disrupt anything and gets the cleanest backup (no files are open and being worked on). Redundancy (RAID 1, 5 or 10) just gives us an instantaneous 'backup' so we don't lose a day or two of work (which could be priceless, depending on the situation).
Yep. :D Backups are always needed, RAID or just single drive use. No RAID can act as a substitute. ;)

I WISH I could do pizza (not so good on my stomach) :( We're looking into brand new Western Digital RE3 HDs. I was under the impression these were some of the best (for SATA RAID) without going to SAS drives (which are probably beyond our needs right now). But I'm definitely open to other suggestions. Again, reliability is #1, performance #2.

So the pricing I was getting was:
- About $320 for two 1TB drives if we wanted to just do a RAID 1.
- About $360 for four 500GB drives to do a RAID 5 or 10.

So the difference is negligible, and the space is the same (unless we go for RAID 5). But either way, it's way more space than we need. Our current total space used is about 340GB. And this is after over 3 years on our current 400GB RAID 5 setup. We're good at archiving :D
I like the RE3's for enterprise SATA models, and trust them over Seagate at this point (the 7200.11 consumer models' issues did affect the ES.2 models).

As for speed, if you keep the capacity at 50% or less of the total the set is capable of, you'd get the highest throughput, as you stay off the inner tracks.

So that's why we were considering a RAID 10 against a basic RAID 1: same price but faster speed (much faster 'in theory' correct?).

And concerning RAID 10 against a RAID 5: slightly higher price for quantity, but with a slightly lower chance of errors or data lost on rebuild, slightly less overall complexity of array, and faster write times (much faster I've heard without a 'true' hardware RAID card with cache memory).

And our Highpoint card supports RAID 0, 1, 5, 10 natively.
Given that card, I'd stay away from RAID5, as it does NOT have a solution for the write hole issue associated with parity based RAID. So a type 10 is safer in that regard for your intentions. Unless you go with a different card (more money of course).

So I still say skip 1 & 5, and go with a 10. As you discovered, the drive cost is negligible, and it's safer. As it happens, if you keep the data to 500GB or less, you'd also stay on the outer tracks, giving you better overall throughput. :)
 
Thanks again for the replies!



Our need for redundancy does not negate our need for backup (or vice-versa). We back up to 2 different backup sets each week (one on Tuesdays, the other on Thursdays). This is after hours so it doesn't disrupt anything and gets the cleanest backup (no files are open and being worked on). Redundancy (RAID 1, 5 or 10) just gives us an instantaneous 'backup' so we don't lose a day or two of work (which could be priceless, depending on the situation).



I WISH I could do pizza (not so good on my stomach) :(

Yep. :D Backups are always needed, RAID or just single drive use. No RAID can act as a substitute. ;)


Yeah, I'm just saying "for me". And I'm assuming that includes most (but not all) SOHOs too.

On another note... WHAT??? No pizza??? Man! Sucks to be you! :D
 
Hmmm, well maybe we'll give the RAID 10 a shot. Then we'll have to find a small drive for the OS. I certainly hope this 5-drive internal setup doesn't get too hot, but other people seem to be using it. And this computer has a much smaller processor setup, so maybe that will help.

And we'll also consider having a spare on hand in case something goes down, although we've done OK for the last 3+ years with our Seagate 200GB drives on 24/7. Either way, it doesn't take long to get replacement drives these days.

So here's a separate question I'm wondering if you'll know by chance:
- I also want to port a couple of the internal SATA ports off the RR 1820a to become eSATA ports for backup drives. I already found the adapter for this, but I'm wondering if they'll be able to be mounted/unmounted at will like our current FW drives? I know they can be recognized & used independently by the card. Any experience here?

Thanks again for all the insight!
 
So here's a separate question I'm wondering if you'll know by chance:
- I also want to port a couple of the internal SATA ports off the RR 1820a to become eSATA ports for backup drives. I already found the adapter for this, but I'm wondering if they'll be able to be mounted/unmounted at will like our current FW drives? I know they can be recognized & used independently by the card. Any experience here?

Thanks again for all the insight!

FW is too slow for me, so I've never had one, so I dunno what "like FW drives" would mean exactly. But with any SATA or eSATA drive that isn't the startup drive, it can be "ejected" which will spin it down and dismount it. You can mount it again after that in any number of utilities including Disk Utility.app. This of course is NOT the same as hot-swappable connections unless your card supports that. After spinning it down and dismounting it you're technically not supposed to disconnect it physically from the bus while the computer is turned on. I guess if there's a separate power switch on the housing unit you could turn it off.

If it's off when the computer boots, you will not be able to mount it later. We need a device re-scanner and initializer for OS X, but I've never seen one. There are some for SCSI-based Macs. :D
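If it helps, the eject/re-mount dance can also be scripted instead of going through Disk Utility.app. A minimal sketch using OS X's diskutil command; the /dev/disk3 identifier is hypothetical (check `diskutil list` for the real one), and note this won't rescan the bus for a drive that was powered off at boot - it only works on disks the system already knows about:

```python
import subprocess

BACKUP_DISK = "/dev/disk3"   # hypothetical identifier for the eSATA backup drive

def unmount_backup():
    # Unmount every volume on the disk so it can be spun down safely.
    subprocess.run(["diskutil", "unmountDisk", BACKUP_DISK], check=True)

def mount_backup():
    # Re-mount the disk's volumes before the next backup run.
    subprocess.run(["diskutil", "mountDisk", BACKUP_DISK], check=True)

unmount_backup()
```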
 
But with any SATA or eSATA drive that isn't the startup drive, it can be "ejected" which will spin it down and dismount it. You can mount it again after that in any number of utilities including Disk Utility.app. This of course is NOT the same as hot-swappable connections unless your card supports that. After spinning it down and dismounting it you're technically not supposed to disconnect it physically from the bus while the computer is turned on. I guess if there's a separate power switch on the housing unit you could turn it off.

If it's off when the computer boots, you will not be able to mount it later. We need a device re-scanner and initializer for OS X, but I've never seen one. There are some for SCSI-based Macs.

Hmmm, I don't think our card supports hot-swapping. I was just hoping to be able to connect and fire up an eSATA drive for doing backup, then shut it down and put away when done. We could shut it down and leave it connected, I suppose, but we have multiple drives we'd need to be able to swap in at some point. We are getting a NewerTech Voyager, which allows us to 'plug' a raw drive into it like a toaster and connect by eSATA. So we can swap raw drives but leave this connected if it helps.

Do dedicated eSATA cards/solutions handle this differently? Or is eSATA really designed to 'plug into one device and leave it'? I guess I just assumed it might work like FW or USB drives where you can unmount, shut down, and reconnect at will.
 
Just a note--I wouldn't really bother spending the money on the mounting plate. All you really need is a couple washers, nuts, and some cheap metal from a hardware store. Seriously, when mine came my first thought was "really?".
 
Hmmm, I don't think our card supports hot-swapping. I was just hoping to be able to connect and fire up an eSATA drive for doing backup, then shut it down and put away when done. We could shut it down and leave it connected, I suppose, but we have multiple drives we'd need to be able to swap in at some point. We are getting a NewerTech Voyager, which allows us to 'plug' a raw drive into it like a toaster and connect by eSATA. So we can swap raw drives but leave this connected if it helps.

Do dedicated eSATA cards/solutions handle this differently? Or is eSATA really designed to 'plug into one device and leave it'? I guess I just assumed it might work like FW or USB drives where you can unmount, shut down, and reconnect at will.
Some eSATA devices can do this, but the card has to, at a minimum, support hot swapping. It also needs the ability to hot plug in some cases (doesn't have its own separate PSU for the drive).

Unfortunately, the RR1820A is not a hot swappable model. No listing of support on their product page at any rate.

You could just leave the drive(s) plugged in, and powered. Then mount/unmount as needed.
 