Re: Re: Sound very logical!

Originally posted by Jeff Harrell
In a perfect world, you'd have efficient software RAID-3, but Disk Utility currently doesn't support that. If you want to rah-rah Apple for something, cheer for software RAID-3, not for hardware FCAL support.
I agreed with everything you said up until that point. If you're doing video editing or anything which requires high speed write performance you don't want to be using any flavor of RAID that calculates parity... That's the problem with RAID flavors 3, 4, and 5. The parity calculations are a bitch and you take a huge performance hit on writes.
 
Re: Re: Re: Sound very logical!

Originally posted by illumin8
I agreed with everything you said up until that point. If you're doing video editing or anything which requires high speed write performance you don't want to be using any flavor of RAID that calculates parity... That's the problem with RAID flavors 3, 4, and 5. The parity calculations are a bitch and you take a huge performance hit on writes.
That isn't necessarily true. (See, for example, the aforementioned Stone+Wire.) And the extra work necessary to handle parity is more than worth it when you consider the alternative. Let's say you've got 32 drives attached to your machine for storing HD video. They're striped. You lose one disk. Now you have to rebuild and repopulate the entire array. The other option is to double the number of drives to 64 and mirror them, which would cost a fortune.

Suddenly, RAID-3 sounds like a pretty good idea.
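The trade-off is easy to see with the XOR parity that RAID-3/4/5 use. A minimal sketch in Python (toy "stripes" of single integers, not real disk blocks):

```python
from functools import reduce

def parity(stripes):
    """XOR all stripes together; this is the parity RAID-3/4/5 store."""
    return reduce(lambda a, b: a ^ b, stripes)

data = [0b1010, 0b0110, 0b1100]      # three data disks
p = parity(data)                     # one extra parity disk

# Disk 1 dies: XOR the survivors with the parity stripe to rebuild it.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```

The write penalty comes from having to read and re-XOR parity on every small write, which is exactly the complaint above; the win is that one dead disk out of N+1 is recoverable without mirroring all N.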
 
Ahh... the first insightful post

Originally posted by Postal
Here's a question: is there actually a Fibre Channel chipset on the mainboard, or are they only seeing connectors? It's entirely possible that the mainboard has a pass-through for when a Fibre Channel card is actually installed.

MB never specifically said that there was an integrated Fibre Channel chipset, so there could be a daughter card. And yes, this could be a specific PowerMac/Xserve board for the people who choose Fibre Channel, rather than the one that everybody gets. It could even be just a test board that includes Fibre Channel as a matter of course.
Wow, that's a very insightful post. You see, there is a definite possibility that Apple would want to include a FC-AL controller that would have higher integration with the motherboard. Just to give you an idea of the bandwidth requirements:

A 64-bit, 33 MHz PCI slot can push roughly 266 MB/s of bandwidth. These are the slots currently used on the PowerMacs. The new FC-AL controllers are 2 Gbps, or roughly 250 MB/s; however, they are full duplex, so you could be pushing 500 MB/s through one if you are reading and writing at the same time. That would clog your PCI bus. You could switch to 64-bit, 66 MHz PCI, which would double your bandwidth to roughly 533 MB/s; however, I'm guessing that Apple might decide to do something better:

Chain an FC-AL controller directly off the Southbridge on the motherboard. Give it a dedicated channel to the CPU through the chipset, just like AGP graphics. This eliminates bottlenecks on your PCI bus, and because the controller can ship as a daughterboard on the existing PowerMac motherboard, it doesn't increase the cost too much for low-end users who don't need to push 500+ MB/s out to disk.
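The arithmetic above is easy to check. These are nominal peak figures; the 256/512 MB/s numbers sometimes quoted are rounded, and real FC also loses roughly 20% to 8b/10b encoding, which is ignored here:

```python
def pci_peak_mb_s(width_bits, clock_mhz):
    """Nominal peak PCI throughput: bus width in bytes times clock."""
    return (width_bits // 8) * clock_mhz

fc_one_way = 2000 // 8            # 2 Gbps link ~= 250 MB/s each direction
fc_full_duplex = 2 * fc_one_way   # ~500 MB/s reading and writing at once

pci_33 = pci_peak_mb_s(64, 33)    # ~264 MB/s (266 at the true 33.33 MHz)
pci_66 = pci_peak_mb_s(64, 66)    # ~528 MB/s

# A single full-duplex 2 Gbps FC controller can saturate 64-bit/33 MHz PCI.
assert fc_full_duplex > pci_33
```

That last assertion is the whole argument: one busy full-duplex controller already exceeds the slower PCI flavor, so it either needs 66 MHz PCI to itself or a dedicated path through the chipset.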

This is starting to make sense now. It's still awfully powerful hardware that probably less than 1% of Apple's market needs, but for the video professionals who do need it, it is essential.
 
Originally posted by mcl
The production beta-testing I did on EMC's FC-AL driver for Solaris back in '96 was a hallucination.
Perhaps that's why you perceive FC as being buggy. If you were working on beta drivers for EMC, I can imagine you might think that.
I never said that FC wasn't in common use. I said it was still buggy. And for external FC-AL (not the internal FC-AL which is common on high-end workstation and server gear, and which is not germane to this conversation, because this conversation is dealing with EXTERNAL FC-AL connectors), it still is.
This is not true. Do you think banks, telcos, airlines, and other Fortune 100 companies would trust their largest databases and applications to storage that was buggy? Sure, there are occasionally issues, but I personally work on Sun servers at a data warehouse controlling 20 TB (that's Terabytes) of FCAL storage and they have not had unscheduled downtime for the last 3 years. I also work on Sun servers at a Telco using 10 TB of FCAL storage. Do you think these companies would trust their most mission-critical apps (for the telco it's their billing database, tracking every cellphone number dialed by every customer) to an unproven and "buggy" technology?
Yes, that fits my definition of "short-haul". My definition of "long-haul" spans oceans and continents.
F-cking ridiculous... Who on earth wants their disks to be sitting on a different continent than their server? There is no need for such technology.
 
Unfair comparison

Originally posted by mcl
In years past I managed the systems for the nation's largest nuclear power company, with terabytes of FC storage, and we experienced unexpected LIPs quite regularly, particularly when the server was a Sun, and the storage was non-Sun (typically, but not always, EMC).
Just for the record I think it's unfair to make a comparison between today's various Fibre Channel implementations and those from years ago. As with any new technology, there have been bugs and they have been resolved. In my experience it doesn't pay to be on the bleeding edge of any technology.

I too have seen issues with using Sun servers on EMC storage, and they can usually be resolved by one of four things:

1. Update the firmware on your HBA.
2. Update the HBA's driver in Solaris.
3. Update the microcode on the EMC array.
4. Patch the OS or edit your /etc/system file; for example, SET_LWP_STACK_SIZE being set incorrectly by Veritas can definitely cause the system to panic.

As always, if you follow the manufacturer's recommendations you will probably be fine. Now that Sun has an enterprise storage offering, I highly recommend that you check it out (obligatory plug). It is based on Hitachi storage and has been tested to work extremely well with our servers.

BTW, did you work for Entergy? Just curious, I know a couple IT guys that work there.
 
Cool!

Originally posted by Jeff Harrell
I've personally seen a filesystem write (write!) more than 2 GB/s. That's bytes with a capital B.

Cool! I've never seen a filesystem write that fast before. I'm just curious how you accomplished all that bandwidth out to disk?

Since the fastest FC-AL controllers right now are 2 Gbps (gigabits, not gigabytes), it would have taken what, at least 8 of them in the same server connected to the same array to accomplish this? Not to mention PCI bandwidth.

I'm not saying I don't believe you, I'm just curious to know the details, being kind of a server geek myself.
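The "at least 8" figure checks out on nominal link rates (this ignores encoding overhead and assumes the load spreads evenly across controllers — both assumptions, not facts about the system described):

```python
import math

link_mb_s = 2000 // 8          # one 2 Gbps FC link ~= 250 MB/s
target_mb_s = 2000             # "more than 2 GB/s" of sustained writes
links_needed = math.ceil(target_mb_s / link_mb_s)
assert links_needed == 8       # at least 8 controllers, before PCI headroom
```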
 
Re: Ahh... the first insightful post

Originally posted by illumin8
Just to give you an idea of the bandwidth requirements:

A 64-bit, 33 MHz PCI slot can push roughly 266 MB/s of bandwidth. These are the slots currently used on the PowerMacs. The new FC-AL controllers are 2 Gbps, or roughly 250 MB/s; however, they are full duplex, so you could be pushing 500 MB/s through one if you are reading and writing at the same time. That would clog your PCI bus. You could switch to 64-bit, 66 MHz PCI, which would double your bandwidth to roughly 533 MB/s; however, I'm guessing that Apple might decide to do something better:
Currently the X-Serves/Cluster Nodes are running...

X-Serves - Two full-length 64-bit, 66 MHz PCI slots and one half-length 32-bit, 66 MHz combination PCI/AGP slot

Cluster Node - Two 64-bit, 66MHz PCI slots
 
Re: Re: Ahh... the first insightful post

Originally posted by Sun Baked
Currently the X-Serves/Cluster Nodes are running...

X-Serves - Two full-length 64-bit, 66 MHz PCI slots and one half-length 32-bit, 66 MHz combination PCI/AGP slot

Cluster Node - Two 64-bit, 66MHz PCI slots
Do you know how many PCI buses they have? If both 64-bit slots are tied to the same PCI bus, then they are sharing that roughly 533 MB/s of bandwidth.

My point is that Apple seems to be tying more and more system-critical functions into the Southbridge: audio, Gigabit Ethernet, USB, FireWire, and now FC-AL. This would make sense, because then the most common peripherals have access to a lot more bandwidth, and your Avid or Pro Tools hardware can have the PCI bandwidth all to itself.
 
Re: Cool!

Originally posted by illumin8
Cool! I've never seen a filesystem write that fast before. I'm just curious how you accomplished all that bandwidth out to disk?

Since the fastest FC-AL controllers right now are 2 Gbps (gigabits, not gigabytes), it would have taken what, at least 8 of them in the same server connected to the same array to accomplish this? Not to mention PCI bandwidth.

I'm not saying I don't believe you, I'm just curious to know the details, being kind of a server geek myself.
I don't remember the precise configuration. It was an Origin 2000 with 64 CPUs. I think it was 64; it was four racks, so it could have been as few as 16 or as many as 64. But I think it was fully populated.

The disk was Clariion FC RAID. I forget the precise model; it was the one SGI OEM'd, so we just called it FC RAID.

I think we had 12 loops, with XLV running across all of them. Each unit was a RAID, and we used XLV to stripe across all 12. So, in today's terms, it was a RAID-0 of RAID-5s.

It was part of a filesystem test-bed system. Kind of a how-far-can-we-push-it-before-it-breaks sort of thing.
 
Re: Re: Re: Ahh... the first insightful post

Originally posted by illumin8
Do you know how many PCI buses they have? If both 64-bit slots are tied to the same PCI bus, then they are sharing that roughly 533 MB/s of bandwidth.

My point is that Apple seems to be tying more and more system-critical functions into the Southbridge: audio, Gigabit Ethernet, USB, FireWire, and now FC-AL. This would make sense, because then the most common peripherals have access to a lot more bandwidth, and your Avid or Pro Tools hardware can have the PCI bandwidth all to itself.
The Gigabit Ethernet card is on one PCI channel with the 33 MHz slot. (AGP slot on the PowerMac)

The Fibre card is on another PCI bus, shared with the two PCI-to-ATA bridges (the four Apple drive modules), the low-bandwidth I/O (USB/serial/CD-ROM), and the two PCI slots the Fibre card sits in. (PCI slots and KeyLargo on the PowerMac)

The rest of the high-bandwidth I/O -- FireWire, Ethernet, and such -- is on the UniNorth chip. (Basically the same as the PowerMac)

---

Apple put the high bandwidth I/O on the Northbridge (UniNorth chip) and the low bandwidth on the Southbridge (KeyLargo chip).

But the latest PowerMac dropped USB off the KeyLargo chip -- this was the "does my Mac have USB 2" discussion.
 
Re: Unfair comparison

Originally posted by illumin8
As always, if you follow the manufacturer's recommendations you will probably be fine. Now that Sun has an enterprise storage offering I highly recommend that you check it out (obligatory plug:) . It is based on Hitachi storage and has been tested to work extremely well with our servers.

At the time, the manufacturer's recommendation (EMC) was, "Only use our GBICs at both ends of the loop." The other manufacturer's recommendation (Sun) was, "Only use our GBIC on the server side."

Of course, I could regale you with horror stories of vendor recommendations, from the time we were asked to keep quiet about the loss of a 1TB array (which was extremely large at the time...consumer hard drives were about 10GB tops) because the vendor supplied us with 1TB of faulty SCSI drives: the spindle lubricant would suddenly polymerize without warning, causing the platters to seize. And so forth.

These days, my need for storage is more about pushing massive amounts of data over pure fiber links (it can only be fiber, because the RF noise must be near-zero) to and from custom in-house filesystems. However, it must also be done cheaply, so FC-AL is out of the picture (so is anything else that isn't commodity, to offset the cost of the custom boards we must design, build, and install).


BTW, did you work for Entergy? Just curious, I know a couple IT guys that work there.

Nope. ComEd (greater Chicago tri-state area). I believe they still own/run the highest number of nuclear power plants in the country (12?), though the NRC usually has about half of them shut down due to various problems at any given time.
 
If we see any optical port, I believe it will be an optical audio port, not FC. That makes far more sense than an FC port.
 
I was right!

It IS optical audio, via a Toslink connector. If you could only see my smug face!

Pity the G5 is so darn expensive, though.
 