RAID 0 for backups? That makes no sense. You don't need the speed and the risk is high even if the RAID array is not your only backup.

Look up SNAFU.....because that is what you will have.

S-
what risk would that be? and yes RAID0 for backups could be a solution. is it so bad that i want REDUNDANCY for my backups? :cool:

It's odd... Every week we hear from people who are thinking of using a RAID for backup. When will people start to learn that RAID isn't an archiving solution? :confused:
Redundant Array of Independent Disks. need i say more?

It's fine if you've a second backup of the same data, and as it's an external, you can store it separately from the system, such as a fire resistant safe (won't help you in floods though :p). It can even work as the only backup system, but I would NOT recommend it. Period. If you do, be aware of the risks.

Also keep in mind, if you go with a stripe set, and circumvent these settings, your risks go up to that of a primary stripe (platters are always spinning, even if the heads are stationary).

Again, I don't recommend it, as it's dangerous. You don't want to play "high risk" with your backup strategy. Speed's not that important. You set it, and walk away (just make sure the system's active if it's set on a schedule, which I would recommend as well).
that makes sense, we wouldnt want the platters spinning all the time i guess. its the redundancy part that caught my eye.


Again, don't do this. It's like storing data on a loaded shotgun, and throwing it into a fire. :eek: :p
where do you pull these from haha?


Yes, I'm sure. There's no mention of an independent processor or cache.
schweeta. so i still need to use Disk Utility to setup the array, switching that switch just informs the device of what to be ready for ;)



Yes, the data is stored sequentially (in terms of capacity). Fills A, then B,...
great :D just making sure i knew.

But if you find yourself in that situation, you can use the other drive for something else until your backup capacity reaches a level where you need to use a larger capacity drive or multiple drives (single disk operation, JBOD,...).
i dont think that will be a problem really. i always find ways to use up data :)


Speed certainly isn't your primary concern here. Capacity for the lowest cost is.
most certainly! the limitations of the network and the fact that im not doing anything extensive with the drives indicates that, so id be quite happy with green powered drives - even in a JBOD situation.

The board's SATA ports make for the least expensive solution, but consider your primary capacity needs as well. You may end up needing a SATA card anyway (move the backups to the card, and new drives for primary usage on the board).
got it. if i have to move the drives to a different SATA card (or even different computer), is this possible? do you have to tell the computer or is there an autodetect feature?


Run a SMART test on each, and see what it comes up with. If the data doesn't make sense, there are online resources available (some tools are easier to understand as well, but it's not hard anyway).
i shall run SMART - nothing came up last time i ran it though (few months ago) so i assume they are all fine.

I presume there's no squeals, grinds,... to cause concern. If so, definitely run the SMART test ASAP, and prepare to replace them immediately (seriously, if this is the case, DO NOT drag your feet or data = vapor).
haha no none of that.
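For anyone who wants to script that check later, here's a rough sketch using smartmontools' smartctl (an assumption on my part: smartctl is installed, e.g. via MacPorts, and the device paths are placeholders):
[CODE]
# Rough sketch: poll the overall SMART health verdict for a few disks.
# ASSUMPTIONS: smartmontools is installed and these device paths are
# placeholders -- substitute your own (see "diskutil list" on OS X).
import subprocess

DISKS = ["/dev/disk1", "/dev/disk2", "/dev/disk3"]

for disk in DISKS:
    # "-H" asks smartctl for the overall health assessment (PASSED/FAILED).
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    print(disk)
    print(result.stdout)
[/CODE]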

As per the 6.0Gb/s, it's not a "classed" system. It's the distance from point A to point B. You're into networking, you should know there's distance requirements. :rolleyes: :p
wow getting ahead of yourself a bit there! 6Gb/s! dont worry, i get them mixed up too. oh, and of course i know there are distance requirements! over here we dont class them or anything, you made it sound like the ISP gave you a ranking or something.

Well, RAID is used to backup primary RAID's (and redundant types are used for the backups as well as the primaries). Other than software scheduling, they're not "connected". ;)
it will be a secondary primary backup :p i can take precautions if i want!! leave me alone haha

But it's not the same as the questions that are usually seen on MR. :D :p
thats a good thing right? different is good? lol

thanks again everybody.

news: quite possible that a 3x1TB JBOD system will be implemented now as that is what's available in the computer. i might think of getting 2x1TB drives later on down the track for extra backups/storage - this will be either connected via another SATA connector, or be put into my spare dual bay FW enclosure (again, JBOD seems the likely implementation).
 
DoFoT9,

You don't know what you are talking about which will result in a SNAFU and a FUBAR.

RAID 0 for a backup solution? RAID 0 has no redundancy. One more time, RAID 0 has no redundancy. RAID 0 is all about speed. If you don't need the speed, why would you take the risk of losing an entire array of data if one drive fails? If you need the capacity of several disks as a single volume, use JBOD instead. At least you have a chance of recovering some data. It's still not the best idea.

What are you looking for? Data security? Speed? High availability? One large backup volume made up of several disks? None of the above? All of the above?

S-
 
what risk would that be? and yes RAID0 for backups could be a solution. is it so bad that i want REDUNDANCY for my backups? :cool:
RAID 0 /= Redundant. Period.

It gives speed, but NO redundancy. You have to go to other levels for that, and each is different. i.e. 10 can take the loss of 2 disks, as can 6. 5 and 1 can lose 1 disk. Check the details (i.e. re-read wiki and other sources, as you're missing what's going on).

Redundant Array of Independent Disks. need i say more?
A stripe set (type 0) is the "Bastard Child" of RAID, as it's only giving increased performance, NOT redundancy. One drive goes, and the whole thing is TOAST (all the data's gone). Fix the array, and restore the data from backups. No other choice.

Backups are critical with any drive (single disk operation or otherwise), but it's even more so with a stripe set, as multiple drives = more crap to fail with no redundancy. Think riding a Ducati at full throttle down a really twisty, high mountain road, with no handle bars. :eek: :eek: At some point, you're Road Kill. You get a one-way trip in the "meat wagon", and straight to the morgue. Do not pass Go, or collect $200. :D :p

That's why the other levels exist. Some, such as type 1 (mirror), are purely redundant, while most others are a balance of performance and redundancy (10/5/6/50/60 for example; there's a couple of others that aren't as common like 3/4/and even 7 (rare - usually proprietary)).
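If the risk levels are hard to picture, a back-of-the-envelope sketch might help. The per-disk failure probability here is purely an illustrative assumption; the point is only how the risk scales for a stripe versus a mirror:
[CODE]
# Back-of-the-envelope risk comparison, assuming independent disk failures.
# The 5% per-disk failure chance is an illustrative assumption only.
p = 0.05   # assumed chance a single disk dies over some period
n = 2      # disks in the set

single = p                   # single disk: data gone if it fails
stripe = 1 - (1 - p) ** n    # RAID 0: ANY one failure kills the whole set
mirror = p ** n              # RAID 1: only losing BOTH disks kills it

print(f"single disk: {single:.2%}")
print(f"RAID 0 ({n} disks): {stripe:.2%}")   # roughly double the single-disk risk
print(f"RAID 1 ({n} disks): {mirror:.2%}")   # a tiny fraction of the single-disk risk
[/CODE]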

schweeta. so i still need to use Disk Utility to setup the array, switching that switch just informs the device of what to be ready for ;)
In the FW enclosure, NO. It's going to be the switch on the back. Had it not been there, you'd have to go into the drivers (utility included as part of the driver package).

As per the board, you could actually try to set up the array (especially if you go with 0/1/10) via the board's firmware first (i.e. Intel's ICH line of chips has firmware access). But OS X's Disk Utility would also work. You don't want to use the same array for both Windows and OS X either. JBOD is easily accomplished via the OS it's to work under. Just remember, with any OS implementation, it's only good for that OS. Windows can't read OS X, and vice versa (it's not the same as a single drive).
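For the JBOD-under-OS-X case, something roughly like this should do it. Treat it as a sketch only: the set name and disk identifiers are placeholders, and the exact diskutil RAID verb differs between OS X releases (older versions use "createRAID", newer ones "appleRAID create"), so check "man diskutil" on your machine first.
[CODE]
# Sketch: build a concatenated (JBOD) set under OS X with diskutil.
# ASSUMPTIONS: "disk2"/"disk3"/"disk4" and "BackupJBOD" are placeholders,
# and the appleRAID verb/format name should be verified against
# "man diskutil" for your OS X version. WARNING: this wipes the members.
import subprocess

members = ["disk2", "disk3", "disk4"]
cmd = ["diskutil", "appleRAID", "create", "concat",
       "BackupJBOD", "JHFS+"] + members

print("About to run:", " ".join(cmd))
subprocess.run(cmd, check=True)   # raises CalledProcessError if diskutil fails
[/CODE]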

got it. if i have to move the drives to a different SATA card (or even different computer), is this possible? do you have to tell the computer or is there an autodetect feature?
If you move it to another system, it may not appear as a JBOD. It depends on whether or not it's the same OS (it is possible if the array is also the primary one, and just physically transferred from one system to another). If they're not identical though, you may have problems due to the different components on the board, cards,... (i.e. changing to the correct device drivers if it will boot). If it's set up in the board's firmware, it will NOT transfer, as the new board will want to re-initialize when you go to create it. This is one of the advantages of removing the JBOD settings before transferring. The data stays intact, but you get individual drives (you then re-create the JBOD if it's not the OS location).

It's best practice to backup in a manner that makes transfers easier. RAID cards will transfer as well, but again, if the system is different, it can be more difficult.

it will be a secondary primary backup :p i can take precautions if i want!! leave me alone haha
It's your system, you can do what you want. But you've been given some pertinent information, and be aware you are taking a risk with a stripe set for use as a backup.

news: quite possible that a 3x1TB JBOD system will be implemented now as that is what's available in the computer. i might think of getting 2x1TB drives later on down the track for extra backups/storage - this will be either connected via another SATA connector, or be put into my spare dual bay FW enclosure (again, JBOD seems the likely implementation).
It's the least expensive way to go. You can add a SATA card later if you need more ports. Or even a true RAID card, once you figure out the limitations of software implementations (including the set size due to available ports on the board/card used to connect the drives desired for an array).
 
DoFoT9,

You don't know what you are talking about which will result in a SNAFU and a FUBAR.

RAID 0 for a backup solution? RAID 0 has no redundancy. One more time, RAID 0 has no redundancy. RAID 0 is all about speed. If you don't need the speed, why would you take the risk of losing an entire array of data if one drive fails? If you need the capacity of several disks as a single volume, use JBOD instead. At least you have a chance of recovering some data. It's still not the best idea.
RAID 0 /= Redundant. Period.
omg. how embarrassing. i seriously had no idea what i was thinking! i think i actually was talking about RAID1, but kept using RAID0. mybad! sorry :eek:

i do know what im talking about.. even though it doesnt look like it ;)

What are you looking for? Data security? Speed? High availability? One large backup volume made up of several disks? None of the above? All of the above?

S-
data security, meh
speed, meh,
high availability, meh
one large backup volume made up of several disks, yup :D that is why i originally suggested RAID5. but as has already been discussed, it isnt safe under software implementation and OSX cannot do it. RAID10 or RAID6 might be nice, but hardly doable in my situation.

but i agree though, i hardly consider JBOD a safe option, but there isnt much else to choose from.


It gives speed, but NO redundancy. You have to go to other levels for that, and each is different. i.e. 10 can take the loss of 2 disks, as can 6. 5 and 1 can lose 1 disk. Check the details (i.e. re-read wiki and other sources, as you're missing what's going on).
not missing. just had a teeney weeney blonde moment :mad: :p


In the FW enclosure, NO. It's going to be the switch on the back. Had it not been there, you'd have to go into the drivers (utility included as part of the driver package).
right, got it.

As per the board, you could actually try to set up the array (especially if you go with 0/1/10) via the board's firmware first (i.e. Intel's ICH line of chips has firmware access). But OS X's Disk Utility would also work. You don't want to use the same array for both Windows and OS X either. JBOD is easily accomplished via the OS it's to work under. Just remember, with any OS implementation, it's only good for that OS. Windows can't read OS X, and vice versa (it's not the same as a single drive).
my board said it had no RAID support :eek: the JBOD/array would only be used under OSX. it might be connected to via windows clients however, but that will not cause any issues.


If you move it to another system, it may not appear as a JBOD. It depends on whether or not it's the same OS (it is possible if the array is also the primary one, and just physically transferred from one system to another). If they're not identical though, you may have problems due to the different components on the board, cards,... (i.e. changing to the correct device drivers if it will boot). If it's set up in the board's firmware, it will NOT transfer, as the new board will want to re-initialize when you go to create it. This is one of the advantages of removing the JBOD settings before transferring. The data stays intact, but you get individual drives (you then re-create the JBOD if it's not the OS location).
ok, i see. i dont think i will be moving the drives from the computer at all, nor reinstalling the system for a long time or anything. it wont be the primary partition though, the primary partition will be a separate HDD (running on the spare SATA slot on the mobo, most likely).

It's best practice to backup in a manner that makes transfers easier. RAID cards will transfer as well, but again, if the system is different, it can be more difficult.
so backing up everything to another drive/set of drives would be the easiest option, then copy them back on. that seems very smart :)


It's your system, you can do what you want. But you've been given some pertinent information, and be aware you are taking a risk with a stripe set for use as a backup.
point noted. no stripes. no mirrors either (too costly).


It's the least expensive way to go. You can add a SATA card later if you need more ports. Or even a true RAID card, once you figure out the limitations of software implementations (including the set size due to available ports on the board/card used to connect the drives desired for an array).
for the mean time i think this setup will be very nice. adding a RAID card later on would be a very nice addition. hopefully by then 4TB drives are out :D

thanks sammich for clarifying that i did indeed mean RAID1. sorry once again everybody for the idiot coming out in me. (still so embarrassed).
 
omg. how embarrassing. i seriously had no idea what i was thinking! i think i actually was talking about RAID1, but kept using RAID0. mybad! sorry :eek:
:cool: NP. :)

one large backup volume made up of several disks, yup :D that is why i originally suggested RAID5. but as has already been discussed, it isnt safe under software implementation and OSX cannot do it. RAID10 or RAID6 might be nice, but hardly doable in my situation.
RAID6 is also parity based, and wouldn't be worth trying on a software implementation if it were possible either. It's a little safer than 5 as it can take an additional drive failure, but the write hole is still there, and just as deadly. Which is why you need a proper card to do it (NVRAM implementation to cover your butt).
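If it helps to see what the parity actually is (and why an interrupted write leaves it inconsistent), here's a toy sketch. It's only an illustration of the idea; a real controller does this per stripe across the member disks:
[CODE]
# Toy illustration of RAID 5-style parity: parity is the XOR of the data
# blocks, so any single lost block can be rebuilt from the survivors.
from functools import reduce

def xor_parity(blocks):
    return reduce(lambda a, b: a ^ b, blocks)

data = [0b1010, 0b0110, 0b1100]   # blocks on three data disks
parity = xor_parity(data)         # what gets written to the parity disk

# Lose disk 0: rebuild its block from the other blocks plus parity.
rebuilt = xor_parity([data[1], data[2], parity])
assert rebuilt == data[0]

# The "write hole": new data gets written, then power dies before the parity
# is updated. Parity no longer matches, so a later rebuild produces garbage.
data[0] = 0b0001                  # data block rewritten...
# ...crash here, parity never refreshed:
assert xor_parity([data[1], data[2], parity]) != data[0]
[/CODE]
(Which is exactly the gap a proper card's NVRAM covers.)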

but i agree though, i hardly consider JBOD a safe option, but there isnt much else to choose from.
It's the same as a single disk in terms of failure rate.

Safer (increased redundancy), means 1 or 10 via software. In the case of a mirror (type1), you'd want to go with 2TB disks. Out of budget I presume. Then there's 10. Not possible with less than 4 drives, and still more expensive than a 2 or 3 disk JBOD. Not as much as the 2TB drives (based on 1TB disks), but still an extra disk or two.

my board said it had no RAID support :eek: the JBOD/array would only be used under OSX. it might be connected to via windows clients however, but that will not cause any issues.
The lack of firmware array creation isn't a death knell. You can set it up under the OS. Windows allows this as well.

ok, i see. i dont think i will be moving the drives from the computer at all, nor reinstalling the system for a long time or anything. it wont be the primary partition though, the primary partition will be a separate HDD (running on the spare SATA slot on the mobo, most likely).
No need to worry about it until such time it has to happen, if ever.
 
*hangs head in shame*


RAID6 is also parity based, and wouldn't be worth trying on a software implementation if it were possible either. It's a little safer than 5 as it can take an additional drive failure, but the write hole is still there, and just as deadly. Which is why you need a proper card to do it (NVRAM implementation to cover your butt).
i seriously dont think i will ever need RAID5/6 or even RAID50/60 because i dont think i could use the speed increases. i dont do that much intensive stuff. the redundancy would be nice but price isnt really justified by the things that i would be doing (just storing crap/backups).


It's the same as a single disk in terms of failure rate.
true, and if i get my hands on a good set of drives then i should be set for at least 5+ years :D

Safer (increased redundancy), means 1 or 10 via software. In the case of a mirror (type1), you'd want to go with 2TB disks. Out of budget I presume. Then there's 10. Not possible with less than 4 drives, and still more expensive than a 2 or 3 disk JBOD. Not as much as the 2TB drives (based on 1TB disks), but still an extra disk or two.
afraid 2TB disks are out of the question here, would love to though - maybe for my external FW enclosure ;)

a RAID10 with 4 drives still only gives me 2TB total though, sure speed and redundancy are good but as explained above speed isnt really an issue.


The lack of firmware array creation isn't a death knell. You can set it up under the OS. Windows allows this as well.
that makes it a software RAID though? i wouldnt want to set it up via windows at the moment. do you have a preference of OS when setting up a JBOD type of RAID? are there any more secure/stable?

once again thanks for your help nano, your input is invaluable. :)
 
i seriously dont think i will ever need RAID5/6 or even RAID50/60 because i dont think i could use the speed increases. i dont do that much intensive stuff. the redundancy would be nice but price isnt really justified by the things that i would be doing (just storing crap/backups).
Not now anyway, and if you think 5/6 is costly, 50/60 is worse. More drives at a minimum (dual sets in 5 or 6, then stripe the two together). Then you can get into multiple controllers running in tandem. Much like port teaming with Ethernet. Only way more expensive, and not the simplest thing to do.

true, and if i get my hands on a good set of drives then i should be set for at least 5+ years :D
Yep. You'd be fine with WD Blacks or Seagate if you dare (I'm just not over the Boot of Death mess with the ES.2 drives). :D :p

afraid 2TB disks are out of the question here, would love to though - maybe for my external FW enclosure ;)
I figured that, and with the prices you get stuck with there in Australia, it would be awful.

a RAID10 with 4 drives still only gives me 2TB total though, sure speed and redundancy are good but as explained above speed isnt really an issue.
This one is a bit unique. You can choose it with redundancy only, or performance in mind. It's the only way to survive a 2 disk failure and remain operational without a RAID card to safely create parity based arrays. By far less expensive. So even if you don't need the speed, it's still valid if you need that level of redundancy at a lower cost of implementation. This is software RAID's advantage.

that makes it a software RAID though? i wouldnt want to set it up via windows at the moment. do you have a preference of OS when setting up a JBOD type of RAID? are there any more secure/stable?
Software based arrays can be generated with an OS or drivers, but the simplest way to define it, is it uses the system's resources to do the calculations. There's no independent processor, controller, or cache to take the load off the system. (A hardware card is technically running software too, as it's in the firmware, but it doesn't rely on the system to do the work, or the OS to create it.)

SATA/eSATA cards can use drivers to create software based arrays, as the processing is handled by the system, just as it is in one created in the OS.

Windows isn't a problem. Nor is OS X. It's just between them, they don't play well together. No array built in one OS can be read (or written to) by another OS. It's proprietary. A Windows GUID /= OS X GUID /= Linux,....

Make sense?
 
Not now anyway, and if you think 5/6 is costly, 50/60 is worse. More drives at a minimum (dual sets in 5 or 6, then stripe the two together). Then you can get into multiple controllers running in tandem. Much like port teaming with Ethernet. Only way more expensive, and not the simplest thing to do.
i dont think i want to know the price! anyway, if the person chooses those sort of arrays then they will of course need the speed.

Yep. You'd be fine with WD Blacks or Seagate if you dare (I'm just not over the Boot of Death mess with the ES.2 drives). :D :p
had a look at the seagate comments from our local store. 8/10 of the comments reported that they had problems/drive failing! so i shall not go with seagate. WD blacks are the wd1001FALS i take it? they cost $130Aus, compared to the green versions at $103Aus. apparently the greens benchmark at 111MB/s, have 3 platters (333GB) and idles at 2.8W! the blacks benchmark at 106MB/s and consume 6.5W at idle.

i know what one i will be purchasing!


I figured that, and with the prices you get stuck with there in Australia, it would be awful.
if only shipping was cheaper! halp halp!!


This one is a bit unique. You can choose it with redundancy only, or performance in mind. It's the only way to survive a 2 disk failure and remain operational without a RAID card to safely create parity based arrays. By far less expensive. So even if you don't need the speed, it's still valid if you need that level of redundancy at a lower cost of implementation. This is software RAID's advantage.
a very handy and cheap way to get good performance and data redundancy. i think thats my most favourite for the moment.


Software based arrays can be generated with an OS or drivers, but the simplest way to define it, is it uses the system's resources to do the calculations. There's no independent processor, controller, or cache to take the load off the system. (A hardware card is technically running software too, as it's in the firmware, but it doesn't rely on the system to do the work, or the OS to create it.)

SATA/eSATA cards can use drivers to create software based arrays, as the processing is handled by the system, just as it is in one created in the OS.
so the basic idea is that its not where the RAID is made, its what resources are used to calculate/compute the data portion of the array? seems like the lowest (dumbest) sort of description.

Windows isn't a problem. Nor is OS X. It's just between them, they don't play well together. No array built in one OS can be read (or written to) by another OS. It's proprietary. A Windows GUID /= OS X GUID /= Linux,....
oh i see! so its not going to be solved by installing MacFuse or MacDrive or anything? because the arrays use their own individual ways of working, the other OS's do not have access.

over the network though, access is granted correct? (via SMB/AFP and whatnot)

Make sense?
i think so yes :D

its now officially thursday here. i will call it for the night and be back in the morning (about 8 hrs away hopefully :D). once again thanks to all for your input!

have a nice day/night.

DoFoT9

*computer starts indexing* :rolleyes:
 
had a look at the seagate comments from our local store. 8/10 of the comments reported that they had problems/drive failing! so i shall not go with seagate. WD blacks are the wd1001FALS i take it? they cost $130Aus, compared to the green versions at $103Aus. apparently the greens benchmark at 111MB/s, have 3 platters (333GB) and idles at 2.8W! the blacks benchmark at 106MB/s and consume 6.5W at idle.
Yes, the Caviar Black = WD1001FALS

I'd like to see where you got the numbers on the Green's though. I've only seen them get ~77MB/s. Makes sense too, as they're spinning slower. Now if the platter density is much higher, it would start to make sense. So the link would help (gives the model # I hope). ;)

a very handy and cheap way to get good performance and data redundancy. i think thats my most favourite for the moment.
It is with many others as well. :D

so the basic idea is that its not where the RAID is made, its what resources are used to calculate/compute the data portion of the array? seems like the lowest (dumbest) sort of description.
It really is that simple though. If the system is used to handle the load, it's software. No matter if the drives are attached to the main board, or a SATA card (Fake RAID).

If the card has its own resources (processor, cache) to keep the load off the CPU, then it gets classified as a true hardware implementation.

oh i see! so its not going to be solved by installing MacFuse or MacDrive or anything? because the arrays use their own individual ways of working, the other OS's do not have access.
Yes. If you tried to partition an array, and use one for OS X and the other for Windows, one of them is going to get blown (usually Windows it seems, from what gugucom ran into). Granted, it was in a MP, but it would still happen. OS X would over-write the Windows Partition Table.

over the network though, access is granted correct? (via SMB/AFP and whatnot)
Yes, assuming you have permission. It's not accessing the drives directly, so the partition scheme or file system won't matter.

Now whether or not the OS can read the file is another story ("Unknown File Type" might pop up on occasion). :p
 
morning!
Yes, the Caviar Black = WD1001FALS

I'd like to see where you got the numbers on the Green's though. I've only seen them get ~77MB/s. Makes sense too, as they're spinning slower. Now if the platter density is much higher, it would start to make sense. So the link would help (gives the model # I hope). ;)
sure ill just bring them up. all the numbers were from Tom's hardware review. i was as surprised as you were.

this link is for the WD1001FALS, they have this exact model at our store. apparently it spins at 5400RPM and as i said before peaks at 106MB/s. funnily enough, the next link i will provide says that this drive is 7200RPM. how odd

this link is for the WD10EACS. note that our store stocks the WD10EADS model, which is the latest and fastest (the differences are 3 platters vs the older 4 platters, which make for the speed increases). 111MB/s are recorded apparently, but give 91MB/s in their tests.


It is with many others as well. :D
such as?


It really is that simple though. If the system is used to handle the load, it's software. No matter if the drives are attached to the main board, or a SATA card (Fake RAID).

If the card has its own resources (processor, cache) to keep the load off the CPU, then it gets classified as a true hardware implementation.
i hate how the hardware side of RAIDs get so expensive. why cant they just be the same price as like a firewire card (i.e. $10). then i would snap on it!


Yes. If you tried to partition an array, and use one for OS X and the other for Windows, one of them is going to get blown (usually Windows it seems, from what gugucom ran into). Granted, it was in a MP, but it would still happen. OS X would over-write the Windows Partition Table.
im sure that if i did it with my hackintosh under OSX that it would be fine. but i dont think i would ever try to partition the JBOD because that sort of ruins the whole idea of it in the first place! i might as well have just kept the 3 drives in single partitions (not that i have them yet lol).


Yes, assuming you have permission. It's not accessing the drives directly, so the partition scheme or file system won't matter.
networking is what i do ;) there WILL be permission!

Now whether or not the OS can read the file is another story ("Unknown File Type" might pop up on occasion). :p
im sure it will be fine haha.

i have another question (that will probably go for another 3 pages haha).

if i wish to eventually take my current hackintosh and put it in another case, and fill my current case with a bunch of drives (im thinking about 15 or so), what methods of connection are there to connect to my hackintosh? im guessing the card needed to handle 15 or so HDDs wouldnt be cheap? would this connect via a singular cable to the hackintosh? (eSata? FW? LightPeak :p ?).
 
this link is for the WD1001FALS, they have this exact model at our store. apparently it spins at 5400RPM and as i said before peaks at 106MB/s. funnily enough, the next link i will provide says that this drive is 7200RPM. how odd
It is a 7200rpm unit. WD's page.

Here's the performance data you need to be looking at (for sequential throughputs). Note that the 106MB/s figure is the Max throughput, not the average, which is 85.1MB/s. :eek:

this link is for the WD10EACS. note that our store stocks the WD10EADS model, which is the latest and fastest (the differences are 3 platters vs the older 4 platters, which make for the speed increases). 111MB/s are recorded apparently, but give 91MB/s in their tests.
Again, you've been looking at the wrong information.

First, that 111MB/s figure is from WD. It's highly inflated (manipulated), and not to be trusted. They claimed 118MB/s with the smaller RE3 line! What a joke! All drive makers do this unfortunately, so that's why you want to look at independent reviews.

Here's WD's page on the WD10EADS. And here's the page you need to be looking at for sequential throughputs for it on Tom's. They got 76.8MB/s as the average, which is right where I've seen them (remember ~77MB/s? ;)).

Umbongo and Sidewinder are just a couple IIRC, plus plenty of people I've worked with over the years when software RAID was used.

i hate how the hardware side of RAIDs get so expensive. why cant they just be the same price as like a firewire card (i.e. $10). then i would snap on it!
Dream on. The components are of higher counts, and cost more. Then there's far more engineering involved... :eek: No way it's going to get that cheap. :p

if i wish to eventually take my current hackintosh and put it in another case, and fill my current case with a bunch of drives (im thinking about 15 or so), what methods of connection are there to connect to my hackintosh? im guessing the card needed to handle 15 or so HDDs wouldn't be cheap? would this connect via a singular cable to the hackintosh? (eSata? FW? LightPeak :p ?).
It can be done (save the single cable part unless you're willing to live with FW), and there's multiple ways to do it. RAID card/s or multiple eSATA would be strong candidates (each drive gets a port, and there are cables that carry 4 ports, but nothing larger). But you have to be careful with the cable lengths and adapters used. SATA's passive specification is only 1.0m total. That's not much, as it has to include the internal cables in the case as well. Even pro boxes go over, and it's critical. Too long, or too much contact resistance can destabilize an array easily, resulting in "drop-out" madness. :eek: :rolleyes:

Even the exact cards make a difference, as the signal voltages may be stepped up a tad (i.e. SATA controllers will do better at longer lengths with SATA drives than a SAS controller). It may seem odd, but it's what happens in practice (I'm not going into the details).

You'd want the existing case to have the shortest depth possible. So full towers + passive SATA is basically guaranteed to be a problem. ;) SAS is another story, as it can go 8.0m (much higher voltages).
 
It is a 7200rpm unit. WD's page.

Here's the performance data you need to be looking at (for sequential throughputs). Note that the 106MB/s figure is the Max throughput, not the average, which is 85.1MB/s. :eek:


Again, you've been looking at the wrong information.

First, that 111MB/s figure is from WD. It's highly inflated (manipulated), and not to be trusted. They claimed 118MB/s with the smaller RE3 line! What a joke! All drive makers do this unfortunately, so that's why you want to look at independent reviews.

Here's WD's page on the WD10EADS. And here's the page you need to be looking at for sequential throughputs for it on Tom's. They got 76.8MB/s as the average, which is right where I've seen them (remember ~77MB/s? ;)).
ahh typical for me to be looking at the wrong thing, thanks for that.

hmm. 85.1MB/s vs 76.8MB/s. hardly a speed increase by any means, cant really justify an extra $90 for 10MB/s more speed. i think i will stay with the greens for the time being.


Umbongo and Sidewinder are just a couple IIRC, plus plenty of people I've worked with over the years when software RAID was used.
google reveals that Umbongo is apparently a kids juice drink, and sidewinder is a mouse by microsoft ;) i will research more a bit later when i get out of bed!


Dream on. The components are of higher counts, and cost more. Then there's far more engineering involved... :eek: No way it's going to get that cheap. :p
blast. looks like i shall have to slave away a bit more then at work *sigh*


It can be done (save the single cable part unless you're willing to live with FW), and there's multiple ways to do it.
i might be able to live with FW. FW3200 ;):rolleyes: that is due out soon isnt it? allows for 400MB/s, more than i could ever use.
RAID card/s or multiple eSATA would be strong candidates (each drive gets a port, and there are cables that carry 4 ports, but nothing larger).
interesting. eSata would allow for 2m max length (according to wiki), is that correct? a 12 or 16 drive array wouldnt be out of the question then.
But you have to be careful with the cable lengths and adapters used. SATA's passive specification is only 1.0m total. That's not much, as it has to include the internal cables in the case as well. Even pro boxes go over, and it's critical. Too long, or too much contact resistance can destabilize an array easily, resulting in "drop-out" madness. :eek: :rolleyes:
i would have both boxes right next to each other, 1m would suffice only just i reckon.

Even the exact cards make a difference, as the signal voltages may be stepped up a tad (i.e. SATA controllers will do better at longer lengths with SATA drives than a SAS controller). It may seem odd, but it's what happens in practice (I'm not going into the details).
yes please dont go into details lol i only just got up! i imagine it plays a very big role though.

You'd want the existing case to have the shortest depth possible. So full towers + passive SATA is basically guaranteed to be a problem. ;) SAS is another story, as it can go 8.0m (much higher voltages).
hmmm. would you recommend against doing this then? it sounds more trouble than its worth.

what about a headless computer (4 SATA onboard, then another 4SATA connected via PCIe), that is then shared via the network? speed would be fairly low but storage capacity would be very nice (thats the main idea).

decisions decisions... both of those situations seem pretty silly to implement, any better ones?

p.s. i see you're now a 601! congrats :) im moving up to 603 soon :D
 
hmm. 85.1MB/s vs 76.8MB/s. hardly a speed increase by any means, cant really justify an extra $90 for 10MB/s more speed. i think i will stay with the greens for the time being.
It can matter in array types that use parallelism of drives for performance. But in the case of a backup drive, you don't need it. So the Greens are more attractive for the lower cost, whether lower power bills are applicable or not (i.e. enterprise & their massive quantity requirements).

i might be able to live with FW. FW3200 ;):rolleyes: that is due out soon isnt it? allows for 400MB/s, more than i could ever use.
USB 3.0 is actually going to be faster (sustain 400MB/s+). :eek: Likely less expensive too (the chip's not as complex vs. FW3200). ;)

interesting. eSata would allow for 2m max length (according to wiki), is that correct? a 12 or 16 drive array wouldnt be out of the question then.
The page you pulled is misleading though, as there's missing information (stupid omissions).

SATA's spec:
1.0m = Passive signals (card to a disk)
2.0m = Active signals (card to a Port Multiplier board then to the disk). The PM boards have separate power off the enclosure's PSU, and it stabilizes, allowing for the increased distance.

In passive situations, it can, and does go past the 1.0m, and usually hits the limit of ~1.5m (@ ~600mV). But on SAS, it's a weaker signal derived off the SAS signal (20V for SAS, dropped ~400mV). That 200mV difference matters in terms of distance before the signal degrades to the point it's unstable. And it's one of the reasons cabling is critical when running SATA drives on SAS cards. (Adapters create contact resistance and this is also an issue, as it reduces the voltage even further over the same distance).

i would have both boxes right next to each other, 1m would suffice only just i reckon.
You'd need to keep the external cables at 1.0m or less, as there's cables internal to the case holding the drives. It would exceed 1.0m, and the longer it gets, the less stable it will be (drop outs). So you really do have to be careful. This is one of the reasons you have to test thoroughly before trusting your data to the setup. ;)
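To put the budget in concrete terms, a trivial sketch (the segment lengths are placeholders; the 1.0m passive / 2.0m active figures are the spec limits discussed above):
[CODE]
# Quick sanity check of a SATA cable run against the passive 1.0m budget.
# Internal and external segments all count toward the same limit.
PASSIVE_LIMIT_M = 1.0   # card straight to disk
ACTIVE_LIMIT_M = 2.0    # via a port multiplier board

segments_m = {                       # placeholder lengths -- measure your own
    "card to eSATA bracket (internal)": 0.3,
    "external eSATA cable": 0.5,
    "enclosure backplane to drive": 0.3,
}

total = sum(segments_m.values())
print(f"total run: {total:.1f} m")
if total > PASSIVE_LIMIT_M:
    print("over the passive spec -- expect drop-outs; shorten the run or "
          "use a port multiplier (active signalling)")
[/CODE]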

what about a headless computer (4 SATA onboard, then another 4SATA connected via PCIe), that is then shared via the network? speed would be fairly low but storage capacity would be very nice (thats the main idea).
This is a nice way to go, and there's multiple ways of doing it.

iSCSI
ATA over Ethernet
NAS

iSCSI and AoE are faster than NAS, as they skip the file-sharing layers and present a block device (AoE doesn't even use IP or TCP; iSCSI runs over TCP/IP but is still leaner than a NAS). Definitely worth a look.

You can use Linux or Open Solaris to make the drives in a ZFS/Z-RAID/Z-RAID2 pool if you wish. Rather nice, and doesn't add to the cost at all. Just the time to set it up is all. :)
 
It can matter in array types that use parallelism of drives for performance. But in the case of a backup drive, you don't need it. So the Greens are more attractive for the lower cost, whether lower power bills are applicable or not (i.e. enterprise & their massive quantity requirements).
lower power bills are the main idea here, we had a 33% price rise in electricity costs recently so the lesser the watts the better. either way, the bottleneck is going to lie with Time Machine as it never goes above 15MB/s or 20MB/s anyway (too busy processing and comparing i think).


USB 3.0 is actually going to be faster (sustain 400MB/s+). :eek: Likely less expensive too (the chip's not as complex vs. FW3200). ;)
i know i know, it will be faster but the CPU still has to handle everything. i like the idea of a dedicated controller like FireWire. i wonder what sort of CPU usage we would expect when handling 400MB/s of data!?

apparently Light Peak will be incorporated into USB3.0 now, so meh i dont know


The page you pulled is misleading though, as there's missing information (stupid omissions).

SATA's spec:
1.0m = Passive signals (card to a disk)
2.0m = Active signals (card to a Port Multiplier board then to the disk). The PM boards have separate power off the enclosure's PSU, and it stabilizes, allowing for the increased distance.
so the port multiplier is the same as a booster or repeater (in networking terms). ;)

In passive situations, it can, and does go past the 1.0m, and usually hits the limit of ~1.5m (@ ~600mV). But on SAS, it's a weaker signal derived off the SAS signal (20V for SAS, dropped ~400mV). That 200mV difference matters in terms of distance before the signal degrades to the point it's unstable. And it's one of the reasons cabling is critical when running SATA drives on SAS cards. (Adapters create contact resistance and this is also an issue, as it reduces the voltage even further over the same distance).
You'd need to keep the external cables at 1.0m or less, as there's cables internal to the case holding the drives. It would exceed 1.0m, and the longer it gets, the less stable it will be (drop outs). So you really do have to be careful. This is one of the reasons you have to test thoroughly before trusting your data to the setup. ;)
these drop outs, are they dealt with by the computer? will it ask for the data to be "resent" such as with sending data over a TCP network? or does something else happen?


This is a nice way to go, and there's multiple ways of doing it.

iSCSI
ATA over Ethernet
NAS

iSCSI and AoE are faster than NAS, as they skip the file-sharing layers and present a block device (AoE doesn't even use IP or TCP; iSCSI runs over TCP/IP but is still leaner than a NAS). Definitely worth a look.
now THAT is cool! AoE definitely looks the real winner in this case (doesn't have to go above the ethernet layer). the performance of NAS is pathetic, i would never consider one of those.

for both iSCSI and AoE, the computer must have say Linux/windows/osx that is running in order to share the drive? or doesnt it need an active OS?

You can use Linux or Open Solaris to make the drives in a ZFS/Z-RAID/Z-RAID2 pool if you wish. Rather nice, and doesn't add to the cost at all. Just the time to set it up is all. :)
hmm interesting idea. never dealt with ZFS but have heard good things about it. i only wish apple didnt drop designing it!

very interesting :D
 
lower power bills are the main idea here, we had a 33% price rise in electricity costs recently so the lesser the watts the better. either way, the bottleneck is going to lie with Time Machine as it never goes above 15MB/s or 20MB/s anyway (too busy processing and comparing i think).
Keep in mind, that backup drives spend most of their time spun down, so the power draw is very small most of the time (board is still partially active), and what's used during activity is still rather low. ~12W under full operational load per disk (for a 7200rpm unit).

It's when they're active (used for primary data, not backups), and you're running thousands of them it makes a significant difference (large Corporate Data Centers for example).
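If you want to put rough numbers on the power side, here's a quick sketch using the idle figures quoted earlier in the thread (2.8W Green vs 6.5W Black). The electricity tariff is a placeholder, so plug in your own rate, and remember a backup disk that's spun down most of the day will cost less than this worst case:
[CODE]
# Rough annual running cost at idle, using the idle draws mentioned earlier.
# ASSUMPTION: the tariff below is a placeholder -- use your own AU$/kWh rate.
PRICE_PER_KWH = 0.25
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts):
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

for name, watts in [("WD Green (idle)", 2.8), ("WD Black (idle)", 6.5)]:
    print(f"{name}: ~${annual_cost(watts):.2f}/year")
[/CODE]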

i know i know, it will be faster but the CPU still has to handle everything. i like the idea of a dedicated controller like FireWire. i wonder what sort of CPU usage we would expect when handling 400MB/s of data!?
It won't be much, say ~1% on one core. :rolleyes: :p

so the port multiplier is the same as a booster or repeater (in networking terms). ;)
No. A switch, as it switches up to 5 disks on a single SATA port.

these drop outs, are they dealt with by the computer? will it ask for the data to be "resent" such as with sending data over a TCP network? or does something else happen?
I was referring to Direct Attached Storage (DAS), but it's similar. In software based arrays, the system handles the error recovery. But with a RAID card, the card takes on that function. It also plays by a different set of rules than drives under OS control due to the parallelism of drives, and that's why the firmware timeout values are different between consumer and enterprise model drives.

now THAT is cool! AoE definitely looks the real winner in this case (doesn't have to go above the ethernet layer). the performance of NAS is pathetic, i would never consider one of those.

for both iSCSI and AoE, the computer must have say Linux/windows/osx that is running in order to share the drive? or doesnt it need an active OS?
AoE is my personal favorite for home use, but I don't share it with other systems (so I'm selfish :p). Easy and inexpensive. The OS is free, so that helps quite a bit.

It's a computer, so it does need an OS. It acts as a server per se, but to a single computer. If you need multiple systems on the network to access the unit, then you have to go NAS or iSCSI.

hmm interesting idea. never dealt with ZFS but have heard good things about it. i only wish apple didnt drop designing it!
The Z-RAID/Z-RAID2 functions are similar to RAID 5/6 respectively, but there's no write hole issue. Its functional approach is different, and eliminates that problem. So you can run it off the system, and not have to spend $$$ on a RAID card. Large drive quantities are also possible via an HBA (Host Bus Adapter). It's basically a large port count card, without the RAID functions (cheaper than the RAID versions).
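If you do go the Linux/OpenSolaris route, creating the pool is about a one-liner. A rough sketch (the pool name and device paths are placeholders, and zfs/zpool obviously have to be present on the box):
[CODE]
# Sketch: build a single-parity RAID-Z pool from three disks.
# ASSUMPTIONS: device names are placeholders (Linux-style here; Solaris uses
# c0t0d0-style names), and this needs root privileges.
import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
subprocess.run(["zpool", "create", "backup", "raidz"] + disks, check=True)

# "raidz2" in place of "raidz" gives dual parity (the RAID 6 analogue).
subprocess.run(["zpool", "status", "backup"], check=True)
[/CODE]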
 
Keep in mind, that backup drives spend most of their time spun down, so the power draw is very small most of the time (board is still partially active), and what's used during activity is still rather low. ~12W under full operational load per disk (for a 7200rpm unit).
true. the hardware review i listed before didnt have a full load power example, only idle (2.8watts if im reading correctly ;)).

It's when they're active (used for primary data, not backups), and you're running thousands of them it makes a significant difference (large Corporate Data Centers for example).
haha i most certainly wont be doing that....yet :p

It won't be much, say ~1% on one core. :rolleyes: :p
oh, really? i see. i was hoping that it would impact it more. *hatred for USB comes out*. :p


No. A switch, as it switches up to 5 disks on a single SATA port.
of course. no idea what came over me. interesting concept. i imagine there would be a pretty big performance hit when all drives are operating under full load?

ahh. interesting
wiki said:
This means that realistically only around 3 drives can be connected before the data from the drives saturates the controller port.

I was referring to Direct Attached Storage (DAS), but it's similar. In software based arrays, the system handles the error recovery. But with a RAID card, the card takes on that function. It also plays by a different set of rules than drives under OS control due to the parallelism of drives, and that's why the firmware timeout values are different between consumer and enterprise model drives.
yea the timeout values are optimised for optimal performance and least errors etc, makes sense.


AoE is my personal favorite for home use, but I don't share it with other systems (so I'm selfish :p). Easy and inexpensive. The OS is free, so that helps quite a bit.
selfish tsktsk. seems a really simple, effective and cheap setup.

It's a computer, so it does need an OS. It acts as a server per se, but to a single computer. If you need multiple systems on the network to access the unit, then you have to go NAS or iSCSI.
but can it still be shared over a network via file sharing? or does that sort of defeat the purpose? i thought the whole idea of it working on the ethernet layer was for it to be able to easily transfer data throughout networks etcetc.


The Z-RAID/Z-RAID2 functions are similar to RAID 5/6 respectively, but there's no write hole issue. Its functional approach is different, and eliminates that problem. So you can run it off the system, and not have to spend $$$ on a RAID card. Large drive quantities are also possible via an HBA (Host Bus Adapter). It's basically a large port count card, without the RAID functions (cheaper than the RAID versions).
interesting. i must read up on the different RAID combinations with ZFS. looks very tempting :D

but for now, the 3TB JBOD looks very promising.

short term plans (over next 4 months): purchase 2x1.5TB or 2x1TB drives and setup another JBOD for my external FW enclosure.

long term plans (2/3 years): save up for a way to implement AoE or something similar.

im getting excited!! :D
 
of course. no idea what came over me. interesting concept. i imagine there would be a pretty big performance hit when all drives are operating under full load?
PM's typically top out at 250MB/s for throughput (no matter the drive count), so yeah, the overhead can be a bit costly in terms of throughput. So it's good for 3x mechanical drives before it's going to throttle you often (it would still happen in peak throughput situations, but it's fine as the average is far more common). So ~80 - 85MB/s *3 = 240 - 255MB/s.
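The same arithmetic in sketch form, if you want to play with the drive count (the ~250MB/s ceiling and the per-drive average are the figures from above; treat them as ballpark numbers):
[CODE]
# How many drives before a ~250MB/s port multiplier link becomes the
# bottleneck? Figures are the ballpark averages discussed above.
PM_CEILING_MBPS = 250
PER_DRIVE_AVG_MBPS = 80   # ~80-85MB/s average sequential per mechanical disk

for drives in range(1, 6):
    demand = drives * PER_DRIVE_AVG_MBPS
    note = "  <- throttled by the PM link" if demand > PM_CEILING_MBPS else ""
    print(f"{drives} drive(s): {demand} MB/s{note}")
[/CODE]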

but can it still be shared over a network via file sharing? or does that sort of defeat the purpose? i thought the whole idea of it working on the ethernet layer was for it to be able to easily transfer data throughout networks etcetc.
AoE = No, as it can't be routed.

Now if you have it hooked to one system, and let that system act as a server, Yes. But it's not direct, and requires 2 systems. It's not cost effective, and eats additional power as well. Definitely a bigger drain on your wallet.

If you built a separate system as a file server that must have Network access capability, NAS or iSCSI (faster than NAS, btw) make sense. AoE doesn't.
 
PM's typically top out at 250MB/s for throughput (no matter the drive count), so yeah, the overhead can be a bit costly in terms of throughput. So it's good for 3x mechanical drives before it's going to throttle you often (it would still happen in peak throughput situations, but it's fine as the average is far more common). So ~80 - 85MB/s *3 = 240 - 255MB/s.
something like this seems alright, dont you think? 3 internal SATA ports. no need for any more on the one board otherwise you will be limited by the card itself.

Now if you have it hooked to one system, and let that system act as a server, Yes. But it's not direct, and requires 2 systems. It's not cost effective, and eats additional power as well. Definitely a bigger drain on your wallet.
so the AoE is basically just for the one machine, such as a workstation or whatever. it connects via ethernet? a fibre implementation of this would be waaaayyyy too costly.

If you built a separate system as a file server that must have Network access capability, NAS or iSCSI (faster than NAS, btw) make sense. AoE doesn't.
iSCSI seems alright :) not really fond on NAS - it just seems too tacky!
 
something like this seems alright, dont you think? 3 internal SATA ports. no need for any more on the one board otherwise you will be limited by the card itself.
If your system has PCI-X (limit of 1064MB/s), then it can handle 4x SATA mechanical drives with ease. Current SSD's (Intel's) would hit the limit. With future (faster) versions, you'd get throttled.

so the AoE is basically just for the one machine, such as a workstation or whatever. it connects via ethernet? a fibre implementation of this would be waaaayyyy too costly.
Yes. AoE is fast though, as you can run it on teamed ports, and it can be run distances you can't do with other connections.

iSCSI seems alright :) not really fond on NAS - it just seems too tacky!
Both have their place, and both are network capable. Choose which one you want, and go with it. :p
 
If your system has PCI-X (limit of 1064MB/s), then it can handle 4x SATA mechanical drives with ease. Current SSD's (Intel's) would hit the limit. With future (faster) versions, you'd get throttled.
sounds like a good way to go then :D

Yes. AoE is fast though, as you can run it on teamed ports, and it can be run distances you can't do with other connections.
the beauty of it i guess :D a fibre implementation would be ever so lovely but too expensive argh. what is your current setup for your AoE?

Both have their place, and both are network capable. Choose which one you want, and go with it. :p
when the time comes i shall choose what seems the best. expect me to come back asking in a few years (you will be around then right? ;)).

well i guess this brings an end to this thread. the information has been invaluable (adds thread to bookmarks ;)). thanks to all for their input :D

if anybody happens to read this down the track, feel free to add/ask anything they think might need adding - i am always on MR and will answer pretty swiftly.

DoFoT9
 
sounds like a good way to go then :D
I presume that's a good price there.

the beauty of it i guess :D a fibre implementation would be ever so lovely but too expensive argh. what is your current setup for your AoE?
FC isn't cheap, and no way you'd use it unless you need that level of bandwidth.

My AoE started life as a P4 based Dell. If you don't have anything to start from, look at AMD based systems (likely what I'd do the next time). You can do a lot with them without spending lots of $$$.

Like I said, you don't need that much, as Linux is a lightweight OS. Especially if you strip it down to the bare minimum.

when the time comes i shall choose what seems the best. expect me to come back asking in a few years (you will be around then right? ;)).
No idea. :p
 
I presume that's a good price there.
not currently, but i hope to change that once i finish uni :rolleyes:


FC isn't cheap, and no way you'd use it unless you need that level of bandwidth.
who knows where i will end up in a few years :p probably the gutter. :(

My AoE started life as a P4 based Dell. If you don't have anything to start from, look at AMD based systems (likely what I'd do the next time). You can do a lot with them without spending lots of $$$.

Like I said, you don't need that much, as Linux is a lightweight OS. Especially if you strip it down to the bare minimum.
never had an AMD funnily enough. i might give it a go though! the system wont need to be powerful of course.

do you see any new technologies taking advantage of Light Peak? i wonder if its worth checking out some of the new technologies from that...

naww. :eek: not sick of me asking constant questions yet? :rolleyes: ill be here for a while, i guess.
 
never had an AMD funnily enough. i might give it a go though! the system wont need to be powerful of course.
A dual core would suffice for most things in terms of AoE, NAS, or iSCSI. One core for the OS, another for processing the Z-RAID1/2 calculations.

do you see any new technologies taking advantage of Light Peak? i wonder if its worth checking out some of the new technologies from that...
I'm going to wait yet, and see what "shakes out", as I want to see the costs. I know it's far less expensive than FC optical for example, but in the end, cost will be the determining factor for most (adoption rate by various device makers, not just consumers).

10G Ethernet on the next MP is a good example. Nice to see it, but it's not going to mean much, as the switches are over $6kUSD right now. If those devices fall to say what 100M devices are now ($$$), then it might take off. However, not that likely. 1G devices aren't even considered inexpensive. Not by home users anyway. ;) :p
 
A dual core would suffice for most things in terms of AoE, NAS, or iSCSI. One core for the OS, another for processing the Z-RAID1/2 calculations.
i guess the calculations could get quite intense. low end dual core AMD it is ;)


I'm going to wait yet, and see what "shakes out", as I want to see the costs. I know it's far less expensive than FC optical for example, but in the end, cost will be the determining factor for most (adoption rate by various device makers, not just consumers).
the fact that intel and intel alone will be the ones marketing and selling the product gives me reason enough to believe that it will be quite expensive to begin with. it may even be that way for a very long time unfortunately.

10G Ethernet on the next MP is a good example. Nice to see it, but it's not going to mean much, as the switches are over $6kUSD right now. If those devices fall to say what 100M devices are now ($$$), then it might take off. However, not that likely.
10G on the MP is long overdue, but as you say very expensive to be utilised by the hardware.

1G devices aren't even considered inexpensive. Not by home users anyway. ;) :p
tried to understand what you mean here but there are so many negatives its not possible at this time of day! :eek: are you saying 1G devices are considered to be expensive? our house is running 1G ethernet and it wasnt all that expensive, around $200 including the cables and an enterprise-ish grade switch. but then again that doesnt include the computers that are capable of running at that speed :D
 