Originally posted by Jeff Harrell
A clarification.
There are ways to improve the reliability of a stripe set, by using parity data calculations to increase the number of disks by one (rather than by doubling them), but those aren't supported by Disk Utility at this time. When you hear about RAID-3 and RAID-5, that's what they're talking about. (There's a RAID-4, but nobody uses it.)

Just a clarification.


As long as we're nitpicking, it's RAID 3 that nobody uses. RAID 4 is in common use, and you need look no further than NetApp's filesystem to find it.
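For anyone following along, the parity idea both posts are referring to can be sketched in a few lines of Python. This is a toy model (the disk contents are made up), but it shows how a single parity disk, computed as the bytewise XOR of the data disks, lets you rebuild any one failed member:

```python
from functools import reduce

# Toy contents of three data disks (4-byte "blocks", values made up).
data_disks = [b"\x0a\x0b\x0c\x0d", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# The one extra disk stores the XOR of all the data disks.
parity = xor_blocks(data_disks)

# If any single disk dies, XORing the survivors with the parity block
# reproduces the lost contents exactly.
recovered = xor_blocks([data_disks[0], data_disks[2], parity])
assert recovered == data_disks[1]
```

That's why parity RAID only costs one extra disk instead of doubling the disk count the way mirroring does.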
 
Originally posted by bretm
Having a RAID doesn't mean your material is backed up. That is only if you stripe your setup as mirrored.



having a mirror doesn't mean your material is backed up. I've seen mirrors fail, bad data from one set get copied to the good mirror, multiple simultaneous disk failures, etc.

Having level 0 backups on tape stored in multiple geographically-diverse locations and checked for readability and integrity on a regular basis means your material is backed up.

Having a mirror just means you're hoping a disk or three won't fail.
 
Re: Re: Sound very logical!

Originally posted by Jeff Harrell

In a perfect world, you'd have efficient software RAID-3, but Disk Utility currently doesn't support that. If you want to rah-rah Apple for something, cheer for software RAID-3, not for hardware FCAL support.


How, exactly, is software-based RAID preferable to hardware-based RAID? It's certainly not more efficient, and I would hesitate to apply the term "efficient" to software-based RAID.

As for RAID 3, see my previous post on the matter. It's not in use. Even if we were discussing RAID 4 (the version of that approach that IS in use), you'd certainly want hardware-based RAID 4, not software-based. Software-based RAID 4 would introduce unnecessary overhead on every write, and would place an unnecessary burden on the system's CPU.

Hardware-based RAID is in widespread use. Why cripple a system, performance-wise, by doing in system memory what's commonly and relatively cheaply done in firmware?
 
Fibre-channel on motherboard.

This is just IMHO. I think built-in Fibre-channel makes perfect sense, coming from Steve Jobs/Apple. The PowerMac is not marketed as a home desktop computer. The eMac and flat-panel iMac are. The PowerMac is meant to be a pro (and, might I add, high-end specialized multimedia-capable) computer. Remember when Apple first started shipping PowerMacs with built-in FireWire? How much was an add-on FireWire PCI card back then? Several hundred dollars... How common and affordable were FireWire-equipped peripherals? The only ones were high-end, expensive miniDV and pro DV cameras. This was how many years ago? Maybe 4? And now... Hey, you can even buy an MP3 player with FireWire! ;)

People, stop thinking like the ones who created the Y2K fiasco. And stop expecting Apple to "make perfect marketing sense". Apple innovates, and the rest of the world follows. Not the other way around.

I think the person who mentioned FireWire over Fibre-channel is on to something. This would play right into Apple's style. They did develop FireWire, after all. And if they could manage a 3.2 Gb/s network connection, that would be a phenomenal marketing advantage. This might also explain the absence of an Ethernet port. Maybe they've done away with it. Want 10/100/1000 Ethernet? Plug an adapter into one of the Fibre-channel ports. Like they replaced VGA with ADC. Hook up two PowerMac 970's directly, and you've got a 3.2 Gb/s network. What more could you want in a video/audio workstation? And I was thinking (fantasizing?), if this thing has two Fibre-channel ports (as MacBidouille suggests), for a small network you wouldn't even need a hub. Just daisy-chain the machines together. Cool. :D
 
Isn't RAID-5 the common one?

Originally posted by mcl
As for RAID 3, see my previous post on the matter. It's not in use. Even if we were discussing RAID 4 (the version of that approach that IS in use

Isn't RAID-5 the most common parity-based RAID? My hardware RAID boxes do 0, 1 and 5 (plus combinations like 10, 01, 50, ...).

Google:

RAID-3 - 21,500 hits
RAID-4 - 17,700 hits
RAID-5 - 155,000 hits

Look at http://www.jadlog.demon.co.uk/raid3.htm - RAID-4 not even mentioned.
 
Re: Re: Rog's post

Originally posted by ZeeOwl
Companies do occasionally abandon a project under development.

No they don't. My Power Express, my Orient Express, my Hyperbolic and my StarMax 6000 are all on their way. Really.
 
I still think it's possible. No one has yet countered my point that building it into their chipsets would reduce the cost significantly compared to the $500 add-on boards. This is what they did with Gigabit Ethernet as well.

In fact, I would not be surprised if Apple's new southbridge would include:

- Surround Sound
- Dual Gig Ethernet
- ATA-133 (Serial ATA is rumored to be a revision away)
- USB 2
- Fibre Channel

There are many PC motherboards (from Tyan, for example) for high-end applications that already include dual Gigabit Ethernet, USB 2, and S-ATA RAID along with dual Opteron support. I don't see it being too hard for Apple to develop the chip above.

In fact, developing it allows Apple to bring the features to as many machines as quickly as possible, while reducing overhead. If they add the features to the Xserve, they might as well put them in the PowerMac. It costs way too much for a company of Apple's size to develop multiple southbridge chips to be in use simultaneously.
 
Re: Isn't RAID-5 the common one?

Originally posted by AidenShaw
Isn't RAID-5 the most common parity-based RAID? My hardware RAID boxes do 0, 1 and 5 (plus combinations like 10, 01, 50, ...).

Yes, it is. However, we were discussing (the absence of) RAID 3.


Look at http://www.jadlog.demon.co.uk/raid3.htm - RAID-4 not even mentioned.


Yes. That's because the RAID 3 they describe IS RAID 4. RAID 3 has a different approach to parity writes.

Perhaps the industry started referring to RAID 4 as RAID 3 when I wasn't looking, and stopped mentioning RAID 4 altogether. However, I assure you that RAID 4 (as distinguished from RAID 3) is what's in use today when people claim they're using RAID 3. Again, one need look no further than NetApp's filesystem.
And that information comes from one of the people who wrote WAFL, and was backed up by quite a bit of evidence.


RAID 3 simply wouldn't ever actually be used. The performance sucks, because the drives all do synchronous writes. It kills transaction-based I/O, and is therefore fairly worthless for real-world use.

If you doubt the existence of RAID 4, I refer you to "A Case for Redundant Arrays of Inexpensive Disks (RAID)" Patterson, Gibson, & Katz (1988), presented at the 1988 conference of the ACM Special Interest Group on Management of Data (SIGMOD). That's the paper that defined RAID.


And, once again, I'm not saying anything about RAID 5. It exists, I've used it for years, it's perfectly suited to certain tasks, and unsuitable for others. However, the topic of conversation is RAID 3/4, and which one is in use today, not 5. Nobody's questioning the existence or widespread use of RAID 5.
 
Re: Re: Re: Sound very logical!

Originally posted by mcl
How, exactly, is software-based RAID preferable to hardware-based RAID?
If you're more interested in bandwidth than IOPS, the difference between hardware and software RAID is not significant. Compare a hardware RAID from, say, HDS to an XVM on IRIX, both with RAID 0+1. The XVM is faster. I've done the tests myself.

As for RAID 3, see my previous post on the matter. It's not in use.
Uh, sorry, dude. You're just plain wrong about this one. Stone+Wire is software RAID-3. Arguably there's no more high-performance sequential read-write filesystem in the world.

Hardware-based RAID is in widespread use. Why cripple a system, performance-wise, by doing in system memory what's commonly and relatively cheaply done in firmware?
Because it's neither more common nor cheaper to do it with a hardware controller. I mean, if you want to use an incredibly crappy controller, sure, but if you're comparing a reliable system to another reliable system, compare software RAID 0+1 or RAID 3 to hardware RAID 0+1 or RAID 3. For servers, sure, think hardware storage management. For desktop (or at least single-user) use, think software RAID. Dollar per unit value, that's how it works out.
 
Re: Isn't RAID-5 the common one?

Originally posted by AidenShaw
Isn't RAID-5 the most common parity-based RAID?

In my experience, RAID-3 is more common. It depends on whether you're doing sequential read/write or sparse. If you're talking about a file or database server, RAID-5 is a good choice. If you're talking about video playback or editing, RAID-3 wins.
 
Re: Re: Isn't RAID-5 the common one?

Originally posted by mcl
Yes. That's because the RAID 3 they describe IS RAID 4. RAID 3 has a different approach to parity writes.
Um. No. RAID 4 and 5 distribute parity data across all drives in a stripe set. Four and five differ in the units they use for calculating parity. RAID 3 on the other hand writes data to (for example) 1, 2, 3, and 4, and parity to 5. RAID 4/5 uses striped parity, while RAID 3 uses a dedicated parity drive. The differences can be really confusing.
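Since the disagreement here is really about where the parity lives, the textbook taxonomy (as usually summarized from the Berkeley paper) can be modeled in a few lines. This is a toy sketch, not any vendor's actual layout: RAID 3 and 4 are described as using a dedicated parity drive, while RAID 5 rotates parity across the set:

```python
def parity_disk(raid_level, stripe_index, num_disks):
    """Return which disk holds parity for a given stripe (toy model)."""
    if raid_level in (3, 4):
        # Dedicated parity: the same disk every time (the usual
        # textbook description of levels 3 and 4).
        return num_disks - 1
    if raid_level == 5:
        # Distributed (interleaved) parity: rotates across the set.
        return (num_disks - 1 - stripe_index) % num_disks
    raise ValueError("only levels 3, 4, 5 modeled here")

disks = 5
print([parity_disk(4, s, disks) for s in range(5)])  # [4, 4, 4, 4, 4]
print([parity_disk(5, s, disks) for s in range(5)])  # [4, 3, 2, 1, 0]
```

Under this taxonomy, the 3-vs-4 distinction is the striping unit (bytewise vs blockwise), not parity placement; only level 5 spreads parity across the drives.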
 
Re: Re: Isn't RAID-5 the common one?

Originally posted by Jeff Harrell
In my experience, RAID-3 is more common. It depends on whether you're doing sequential read/write or sparse. If you're talking about a file or database server, RAID-5 is a good choice. If you're talking about video playback or editing, RAID-3 wins.


RAID 5 for databases?!? Never! The performance hits on writes make it ridiculously slow for high-TPS DBs!

If you're going to run a DB with RAID storage, you should go 0+1.
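The write-penalty argument can be put in back-of-envelope numbers. All figures below are made-up assumptions for illustration, using the commonly cited costs of 4 physical I/Os per RAID 5 small write (read old data, read old parity, write both back) versus 2 per mirrored write:

```python
# Back-of-envelope small-write throughput (all numbers are assumptions).
RAID5_IOS_PER_WRITE = 4    # read old data + read old parity + 2 writes
RAID01_IOS_PER_WRITE = 2   # write the block to both sides of the mirror

SPINDLE_IOPS = 120         # assumed random IOPS per disk
DISKS = 8                  # assumed array size

raid5_write_iops = DISKS * SPINDLE_IOPS / RAID5_IOS_PER_WRITE
raid01_write_iops = DISKS * SPINDLE_IOPS / RAID01_IOS_PER_WRITE
print(raid5_write_iops, raid01_write_iops)  # 240.0 480.0
```

With those assumptions, the same spindles deliver twice the random-write throughput as 0+1 than as RAID 5, which is the usual case made for 0+1 under high-TPS database loads.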
 
Re: Re: Re: Isn't RAID-5 the common one?

Originally posted by Jeff Harrell
Um. No. RAID 4 and 5 distribute parity data across all drives in a stripe set. Four and five differ in the units they use for calculating parity. RAID 3 on the other hand writes data to (for example) 1, 2, 3, and 4, and parity to 5. RAID 4/5 uses striped parity, while RAID 3 uses a dedicated parity drive. The differences can be really confusing.


Look, you're simply wrong. If you want to debate what RAID 3, 4, and 5 do, go read the paper I cited earlier. It's the authoritative source. Then come back and tell me RAID 4 doesn't use a dedicated parity disk.

The differences are only confusing when you mistake marketing pap for facts. Stop listening to the people in the ties, and start listening to the people that write the code, build the boards, and run the systems.
 
Re: Re: Re: Re: Sound very logical!

Originally posted by Jeff Harrell
If you're more interested in bandwidth than IOPS, the difference between hardware and software RAID is not significant. Compare a hardware RAID from, say, HDS to an XVM on IRIX, both with RAID 0+1. The XVM is faster. I've done the tests myself.

Differences in hardware vs. software RAID for 0, 1, 1+0 and 0+1 are negligible. Your results are entirely unsurprising. I'd like to see figures supporting such a claim for RAID 4 or RAID 5, however.

I don't actually expect you to produce them, because I know you can't demonstrate faster transactions for software RAID 4 or 5 vs. hardware. The hardware-based solution will win when there are parity computations to consider. Either you know this and you're trying to be deceptive, or you don't know this and you're arguing from atop a very shaky parapet.


Uh, sorry, dude. You're just plain wrong about this one. Stone+Wire is software RAID-3. Arguably there's no more high-performance sequential read-write filesystem in the world.

Really? Then why do they also offer hardware RAID 5 in the same platform, and tout it as higher-performance than their so-called "RAID 3"?

Since Discreet's website layout leaves something to be desired, please point me to a tech note, manual, or white paper that describes the parity writes of their RAID 3, and I'll show you the part in the Berkeley paper that classifies that particular use as RAID 4, and I'll even explain to you why it's not RAID 3. Until then, the only evidence I see supporting your claim that it's RAID 3 is that Discreet calls it RAID 3, therefore it must be true.

Oh, and why don't I see Stone+Wire RAID 3 storage attached to the mission-critical servers in any of the world's largest financial houses, digital movie studios, financial markets, Fortune 500 companies, etc?

If they're that good, people would be stumbling all over each other to buy and use them. They're not.

(and when I say "why don't *I* see," I do mean me. I have had occasion to work with the folks who spec, buy, install, and maintain that equipment, or to work on it directly. I have first-hand knowledge of the systems, both desktop and production server, at Pixar, Dreamworks, and ILM, as well as many of the largest, most well-known high-tech companies in the Valley, and several out East and in the Midwest.)


Oh, and I'm not your "dude". Is faux-familiarity somehow supposed to bolster your argument and distract from a glaring lack of supporting evidence?

For servers, sure, think hardware storage management. For desktop (or at least single-user) use, think software RAID. Dollar per unit value, that's how it works out.


Yes, I'll agree that it's more cost-effective to use software-based RAID on the desktop. However, neither you nor I were arguing from that position. We were both discussing enterprise-class solutions. Whipping out software-based RAID on the desktop as justification for your position at this late point in the discussion is disingenuous at best.
 
Re: Re: Isn't RAID-5 the common one?

Originally posted by mcl
Yes. That's because the RAID 3 they describe IS RAID 4. RAID 3 has a different approach to parity writes.

Actually I think they are correctly describing both RAID 3 and RAID 4. The difference between the two being the way data is striped, either at the block level or the byte level. This fact is described elsewhere, including the paper you reference:
Page 113, regarding RAID 4's changes over RAID 3: "We no longer spread the individual transfer information across several disks, but keep each individual unit in a single disk."

The parity is calculated differently, but the site doesn't get into the algorithm, and the layout of disks they show could be either 3 or 4.

The only thing that would imply they are actually talking about RAID 3 rather than RAID 4 is that they recommend this type of RAID for Video/image editing etc. RAID 4's advantage over RAID 3 is small file read access, a quality that would not benefit those applications.

edit: grammar.
 
Re: Re: Re: Isn't RAID-5 the common one?

Originally posted by andyduncan
Actually I think they are correctly describing both RAID 3 and RAID 4. The difference between the two being the way data is striped, either at the block level or the byte level. This fact is described elsewhere, including the paper you reference:
Page 113, regarding RAID 4's changes over RAID 3: "We no longer spread the individual transfer information across several disks, but keep each individual unit in a single disk."

The parity is calculated differently, but the site doesn't get into the algorithm, and the layout of disks they show could be either 3 or 4.

The only thing that would imply they are actually talking about RAID 3 rather than RAID 4 is that they recommend this type of RAID for Video/image editing etc. RAID 4's advantage over RAID 3 is small file read access, a quality that would not benefit those applications.

edit: grammar.


Untrue. RAID 3 also suffers in comparison to RAID 4 on writes, because RAID 3 writes are all disk-synchronous, so writes (always a short-wait event in the kernel, and thus problematic if delayed) are delayed until the data for all disks is ready to be written. Thus, each RAID 3 write takes more time than each equivalent RAID 4 write. In almost every case, substantially more time. RAID 3 is the worst-performing of all the RAID levels in terms of writes, and this is why it is not in use today.


As for the parity computations, RAID 3 computes bytewise parity across all disks, whereas the parity on RAID 4 is a delta of blockwise parity -- a much more efficient algorithm.
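The "delta" update described above can be sketched concretely. This is a toy Python model (block contents invented): instead of re-reading the whole stripe, the new parity is computed by XORing the old parity with the old and new versions of just the changed block:

```python
def update_parity(old_parity, old_block, new_block):
    """Small-write parity update: fold the block's delta into the parity."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

# A three-disk stripe with made-up contents.
d0, d1, d2 = b"\x01\x01", b"\x02\x02", b"\x04\x04"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))  # full-stripe parity

# Overwrite just d1; only the old block, new block, and old parity
# need to be touched -- not the rest of the stripe.
new_d1 = b"\xff\x00"
parity = update_parity(parity, d1, new_d1)

# The incrementally updated parity matches a full recomputation.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

This is why a blockwise-parity level can service a small write with a handful of I/Os, where a bytewise, synchronous-spindle scheme has to involve every drive.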



Having said all of this, and realizing that most of you are arguing from the point-of-view of video editing, whereas I'm arguing from a position of general enterprise-class mission-critical application, I think it's important to point out that both RAID 3 *AND* RAID 4 suffer from bottlenecking at the spindle of the parity disk. Interleaved parity, such as that found in RAID 5, offers better performance than either for reads. Issues regarding filesize can be addressed easily by increasing the default block read and write size on the filesystem written on top of the RAID layout. I wouldn't be surprised if several of you have been evaluating disk performance without ever changing the default filesystem parameters (and where possible, physical data layout on the platter, a la AIX) to more closely match the application to which the disks will be put.

I'm tired of arguing. The vendors have obviously been marketing "RAID 3" to A/V production houses, for whatever reason. I can't argue with marketing...it is what it is. Further, without concrete, detailed information on the striping and parity computations involved, such arguments quickly become futile, because it's impossible to debate without that necessary common ground.

If someone's got some hard, real-world (not based on the 1990 Berkeley simulator, and not ripped from a RAID vendor's marketing materials) data demonstrating RAID 3 (synchronous parallel reads/writes with bytewise parity on a dedicated disk) performance on small and large block reads and writes, for software and hardware RAID implementations, outperforming RAID 4 and/or RAID 5, I welcome you to introduce it.

If all you've got is a paper touting the benefits of RAID 3, without any meaningful comparison to the other RAID levels, or without the software/hardware comparison, or without the comparison on small-block v. large-block reads/writes, don't bother.
 
Posters

off-topic:

There is clearly an abundance of intelligent posters on the boards for once.

I don't know anything about RAID at all, but I'd just like to ask everyone to try and be a little less heated - this all feels a little too bitter at the moment.

Peace to all (I don't often feel like this):D
 
If this is FC there are VERY few reasons to use this over SCSI 160 or 320 unless you are running a SAN cluster in Active/Active/...

There really isn't that much more throughput to an individual machine when using FC for storage.

Now if Apple is just bringing FC to the masses as they did GbEnet to bring prices down then great! However, I don't see too many consumers spending ~$30,000 on a switched FC storage network. Also, if there is only 1 port and 1 channel then it isn't very useful anyhow, as nearly everyone wants redundancy (multiple switches) when putting FC into place.

Also, the comments regarding software RAID being as fast as hardware RAID are just plain wrong. Try and get 600Mb/sec transfer with software RAID on a heavily loaded machine.
 
Originally posted by macosr
If this is FC there are VERY few reasons to use this over SCSI 160 or 320 unless you are running a SAN cluster in Active/Active/...

More than 100 devices per loop. Cheap optical cables. Multi-kilometer cable runs. Switches for storage consolidation. There are lots of reasons why Fibre Channel is neat other than performance. It may or may not be what you need, but it's not a one-to-one comparison with SCSI.

Now if Apple is just bringing FC to the masses as they did GbEnet to bring prices down then great!

Problem is, Apple's thrown its weight behind FireWire, which pretty much is a one-to-one comparison with FCAL. I know there's FireWire 3200 gear out there, although I don't know if it's prototype-stage or incredibly-expensive-production-model stage. Both run over fiber optic cables (cheap and far!), both support switches and hubs for managed topologies, both are about the same speed, give or take a gigabit per second. Why would Apple build FCAL into the machine when they'd prefer FireWire rise to challenge FCAL?

Try and get 600Mb/sec transfer with software RAID on a heavily loaded machine.

Both been there and done that. Maybe you're comparing hardware RAID to crappy software RAID. XVM on IRIX is the bee's knees. I've personally seen a filesystem write (write!) more than 2 GB/s. That's bytes with a capital B. Really big system, lots of disks. Not a hardware controller in sight. The configuration was RAID 0+1. In my own lab, smaller machines were comparable, doing more than 95 MB/s reads and writes off of eight FC disks (Seagates, if I remember correctly) through XVM. You can't get much faster on a single FCAL loop.

Is Apple's striping/mirroring implementation anywhere near that good? Don't know. But it's not safe to naturally assume it isn't, either.
 
Originally posted by Jeff Harrell
More than 100 devices per loop. Cheap optical cables. Multi-kilometer cable runs. Switches for storage consolidation. There are lots of reasons why Fibre Channel is neat other than performance. It may or may not be what you need, but it's not a one-to-one comparison with SCSI.



Problem is, Apple's thrown its weight behind FireWire, which pretty much is a one-to-one comparison with FCAL. I know there's FireWire 3200 gear out there, although I don't know if it's prototype-stage or incredibly-expensive-production-model stage. Both run over fiber optic cables (cheap and far!), both support switches and hubs for managed topologies, both are about the same speed, give or take a gigabit per second. Why would Apple build FCAL into the machine when they'd prefer FireWire rise to challenge FCAL?



Both been there and done that. Maybe you're comparing hardware RAID to crappy software RAID. XVM on IRIX is the bee's knees. I've personally seen a filesystem write (write!) more than 2 GB/s. That's bytes with a capital B. Really big system, lots of disks. Not a hardware controller in sight. The configuration was RAID 0+1. In my own lab, smaller machines were comparable, doing more than 95 MB/s reads and writes off of eight FC disks (Seagates, if I remember correctly) through XVM. You can't get much faster on a single FCAL loop.

Is Apple's striping/mirroring implementation anywhere near that good? Don't know. But it's not safe to naturally assume it isn't, either.

FireWire is not a direct competitor to FibreChannel. FibreChannel is designed for storage networks only. FireWire is more general purpose, and does not yet push such high speeds. Even FW800 doesn't come near in disk performance, especially when you go with dual-channel 2 Gbps FibreChannel.
 
Assuming this is the motherboard for the new PowerMac (or whatever it will be called), this connector is more likely a TosLink optical connector, allowing digital sound and DTS/5.1 output to be sent to an amplifier. I'm hoping on this one!! Why would Apple build fibrechannel into a motherboard when only a few people would use it and when PCI cards are readily available to do the same thing?
 
Originally posted by Jeff Harrell
More than 100 devices per loop. Cheap optical cables. Multi-kilometer cable runs. Switches for storage consolidation. There are lots of reasons why Fibre Channel is neat other than performance. It may or may not be what you need, but it's not a one-to-one comparison with SCSI.

Problem is, Apple's thrown its weight behind FireWire, which pretty much is a one-to-one comparison with FCAL. I know there's FireWire 3200 gear out there, although I don't know if it's prototype-stage or incredibly-expensive-production-model stage. Both run over fiber optic cables (cheap and far!), both support switches and hubs for managed topologies, both are about the same speed, give or take a gigabit per second. Why would Apple build FCAL into the machine when they'd prefer FireWire rise to challenge FCAL?

Both been there and done that. Maybe you're comparing hardware RAID to crappy software RAID. XVM on IRIX is the bee's knees. I've personally seen a filesystem write (write!) more than 2 GB/s. That's bytes with a capital B. Really big system, lots of disks. Not a hardware controller in sight. The configuration was RAID 0+1. In my own lab, smaller machines were comparable, doing more than 95 MB/s reads and writes off of eight FC disks (Seagates, if I remember correctly) through XVM. You can't get much faster on a single FCAL loop.

Is Apple's striping/mirroring implementation anywhere near that good? Don't know. But it's not safe to naturally assume it isn't, either.

I completely agree with your assessment on FC and the other features. Here is the real question..."will all home users that will use these features please raise your hand." NONE

Firewire != FC

Regarding RAID throughput, I won't argue it... I was just pointing out what is true. BTW, we get the ~600Mb/sec on RAID 0+1 with a total of 4 drives. If you have some secret way of doing software RAID then maybe you should contact Dell, Sun, HP, etc. so they could save some money on their hardware RAID setups :D
 
Fiber is NOT RAID!!

<rant> altogether now, people, say it- FIBER does NOT equal RAID.
or, to put it another way, you don't eed fiber to set up a raid, so will everyone posting saying that it makes sense for apple to put fiber on a powermac mb so that people can use RAID arrays please stop it! yr makin' us mac users look like dips. setting up a raid array on a mac does NOT require fiber, it just requires at least 2 identical capacity HD's. that's it. so go buy 2 drives, turn 'em into a RAID with OSX disk utility, and forget about having fiber on your next powermac mb, 'cause that just ain't gonna happen. </rant>

personally, i think this means that if this rumored mb does exist, it must be headed for a new xserve. that's the only thing that makes sense. either that or it's all hooey.
 