Typically you always have a TX (transmit) and an RX (receive) fiber. The reason there are two pairs of such cables is that a SAN (e.g. one built around Xserve RAIDs), when done right, has two independent switching fabrics, typically referred to as Fabric X and Fabric Y, or Fabric A and Fabric B. This means every host has a connection to two switches. Disk arrays have connections to the same two fabrics as well.

Just to clarify....

DMP (Dynamic Multi-Pathing) uses two fabrics, but each fabric has its own RX/TX pair. That's why you have 4 fibres - two more-or-less independent pairs of fibre connections to storage (people paranoid about availability wouldn't trust a dual-port HBA - they'd insist on two HBAs).

The goal is that if a fabric fails (due to a broken cable, a failed HBA (Host Bus Adapter - the Fibre Channel PCI (/-X/e) card), a failed switch or whatever) you can transmit and receive on the other fabric.

Ethernet over Cat5 also has separate transmit and receive pairs - they're just bundled so it looks like one wire. Fibre (and nobody has mentioned that the orange cables mean they're using MMF, not SMF) isn't condensed into a single connector/cable that way (unless you're running Fibre Channel over copper).
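If it helps to picture the failover idea, here's a toy sketch in Python - the names and structure are mine, nothing like the real DMP internals, which live in the kernel and track far more state:

```python
# Toy sketch of dual-fabric failover -- hypothetical names, purely illustrative.

class Path:
    def __init__(self, hba, fabric):
        self.hba = hba        # ideally two separate HBAs, not one dual-port card
        self.fabric = fabric  # "A" or "B" -- the two independent fabrics
        self.healthy = True

def pick_path(paths):
    """Return any healthy path; I/O survives a cable, HBA, or switch
    failure as long as one path to the storage is still up."""
    for p in paths:
        if p.healthy:
            return p
    raise IOError("all paths to storage are down")

paths = [Path("HBA0", "A"), Path("HBA1", "B")]
paths[0].healthy = False            # Fabric A loses a switch or a cable
print(pick_path(paths).fabric)      # -> B: traffic moves to the other fabric
```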

This cluster is a bit amateurish - 750 TB spread across 40 boxes? Check out the 500 TB IBM disk system (http://www-03.ibm.com/systems/storage/disk/ds8000/index.html).

Instead of 40 boxes that add up to 750 TB, why not two boxes that are 1000 TB?

But of course, the audience is video editors, not storage architects.
 
Not to get off topic but that Cinema display looks quite glossy..:confused:

[image: nab-apple-2007-1_400.jpg]
 
I see nobody made a point about the 20+ Barcelona chairs in the Apple lounge, those suckas are handmade and you have to buy them in pairs even if you only want one. I'm not sure on the pricing, but I think it's about $1,400 per chair... and of course you can't buy just one!

Designed by Mies van der Rohe.
 
I can clarify

What are all the cables here? I can't identify some of them :confused:

(apologies for big pic)

The orange ones are LC multimode duplex fiber optic cables connecting the workstation to the Xsan.

The black ones are SDI/HD-SDI in/out of the Kona 3 card they have in the tower. That's a video capture/monitoring/accelerator card used by folks in the post-production/broadcast industries to get high quality digital audio/video in/out of the Mac Pro. HD-SDI is a 4:2:2 10-bit stream that can carry uncompressed digital audio as well. You can also get 4:4:4 uncompressed video in/out by using two of the HD-SDI ins/outs as dual-link HD-SDI.

If that all means nothing to you -- don't worry about it. :)
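For anyone curious where those numbers come from, here's the back-of-envelope arithmetic (a rough sketch assuming a ~30 fps, 1080-line picture; exact figures vary by format):

```python
# Rough arithmetic for a single HD-SDI link (SMPTE 292M runs at 1.485 Gb/s).
width, height, fps = 1920, 1080, 30    # assumed ~30 frames/s of 1080-line video
bits_per_pixel = 2 * 10                # 4:2:2 = luma + one chroma sample per pixel, 10 bits each
video_bps = width * height * bits_per_pixel * fps
print(f"active video: {video_bps / 1e9:.2f} Gb/s")   # ~1.24 Gb/s
# The remaining link capacity is blanking, which is where the embedded
# uncompressed audio rides. 4:4:4 needs 3 samples per pixel, hence two
# links (dual-link HD-SDI).
```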
 
So what is this Xserve farm for?

I see a bunch of Final Cut Studio 2 machines running, but do they use the Xserves as background render machines or something? If so, this would give the false impression that Final Cut Studio is faster than it really is.
 
xserve cluster is beeeeeautiful!

That is a nice photo... really says something about how important exterior design is... imagine if your corporate server room were in a glass room instead of buried in the cellar.

Edit: I was so distracted by the photo, I forgot to ask my question: WTF is all that needed for at a conference? Is it just for show or are they actually crunching several PB of data??

Edit2: And what is that box underneath the fiber switch that says "Exabyte" on it... looking at their website, is it tape storage?? I didn't know tapes were still used in modern systems.
 
They're served up by Google. They sometimes have no idea who they're aiming at.

I was reading a Tech forum yesterday (trying to figure out how to get my Apple Bluetooth Keyboard working on my HP laptop), and at the bottom of the page are three ads (not Google ads, though)...

1) MCSE certification
2) PORN
3) Windows2000 Servers

I couldn't freakin believe that they had porn advertisements right there in the middle of the page. Luckily there weren't pictures; it just said something like "Hot Girllz Click Here"... still, I was shocked.
 
Lol! That's impressive. They could demonstrate rendering a Shrek quality movie on that setup.
 
So what is this Xserve farm for?

I see a bunch of Final Cut Studio 2 machines running, but do they use the Xserves as background render machines or something? If so, this would give the false impression that Final Cut Studio is faster than it really is.

Likely a mix of Final Cut Server, an encoding farm, and Xsan metadata controllers. Apple isn't trying to fake anything; they're trying to show off what their solutions can do for customers in this space.
 
This cluster is a bit amateurish - 750 TB spread across 40 boxes? Check out the 500 TB IBM disk system (http://www-03.ibm.com/systems/storage/disk/ds8000/index.html).

Instead of 40 boxes that add up to 750 TB, why not two boxes that are 1000 TB?
I think you misunderstand IBM's DS8x00 storage system. The base processing frame (a full rack) can only hold 128 disk drives (8 disk enclosures with 16 drive slots each), which yields either 9.3 TB or 38.4 TB of storage depending on the disks you use. To go beyond that you have to add expansion frames (each a full rack) which hold 256 disk drives apiece (16 disk enclosures with 16 drive slots each), and each expansion frame adds another 18.6 to 76.8 TB of storage depending on disk drives.

So to get 320 TB of storage you have to install 3 full-size racks, for a total of 40 disk enclosures, all funnelling through 2 redundant 4-way processor units (somewhat of a bottleneck for the kind of video streams Apple is trying to support in this situation).

If you wanted to match the storage that Apple appears to have online you would need about 8 full-size racks (over 90 disk enclosures) for storage alone using the IBM solution. The IBM solution would cost you more than the same amount of storage in a cluster of Xserve RAIDs, and it would involve more disk drives (the Xserve RAID supports 750GB disk drives, i.e. 10.5TB in a single Xserve RAID).

In other words "one box" doesn't nearly get what you think unless by a "box" you mean 8 or so full size racks (or 16 in the case of your 1000TB comment). :)

Of course I would use IBM (HP, etc.) storage solutions in data centers way before I would a cluster of Xserve RAIDs but for video work flows which need to support many high-data rate streams a cluster of Xserve RAIDs mixed with Xsan (or similar products from other vendors) does nicely.

(I used to work in HP storage division on small to high-end Fibre Channel based solutions and related software.)

I should note that the "3/4 Petabytes" comment would have to include storage in the Xserves in addition to the Xserve RAID units (40 Xserve RAIDs * 14 drive slots = 560 drives * 750GB disks = 420TB of storage in the Xserve RAIDs)
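For anyone who wants to check the arithmetic, here's a quick sketch (the 500GB DS8x00 drive size is my assumption, inferred from the 640-drive/320TB figures above):

```python
import math

# Xserve RAID side: 40 units * 14 drive slots * 750GB drives
xsan_tb = 40 * 14 * 0.75
print(f"Xserve RAID pool: {xsan_tb:.0f} TB")           # -> 420 TB

# DS8x00 side: base frame holds 128 drives, each expansion frame 256.
# Assuming 500GB drives (that's what makes 640 drives come out to 320 TB).
target_tb = 750
drives = math.ceil(target_tb / 0.5)                    # 1500 drives
expansion_frames = math.ceil(max(drives - 128, 0) / 256)
print(f"{target_tb} TB needs {drives} drives = "
      f"{1 + expansion_frames} full-size racks")       # -> roughly 7 racks
```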
 
Lol! That's impressive. They could demonstrate rendering a Shrek quality movie on that setup.

I'd like to see that 1100-node cluster that ranked 5th on the supercomputer list a few years back, refitted with new 8-core Mac Pros. I mean, just how powerful is 26,400 GHz of combined processor grunt? I'd like to see Shrek rendered on that.
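For what it's worth, that figure is just clock speeds summed up:

```python
# Where 26,400 GHz comes from -- a naive sum of clock speeds, which says
# more about marketing than about real rendering throughput.
nodes, cores, ghz = 1100, 8, 3.0
print(f"{nodes * cores * ghz:,.0f} GHz combined")   # -> 26,400 GHz
```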
 
sweet

the setup is no doubt really impressive, and Final Cut Studio looks really nice.... And people were worried Apple was loosing its focus on computers when they changed their name ;)

First, in the original post, "comprising of", and now "loosing"? You are a f'n mod for christ's sake.

There is never an excuse for the lose/loose screwup. Sorry, but it's true.

eh, whatever
 
Many thanks to Yoursh, shawnce, birdsong, AidenShaw, and Nicky G for explaining about the cables.

The black ones are SDI/HD-SDI in/out of the Kona 3 card they have in the tower. That's a video capture/monitoring/accelerator card used by folks in the post-production/broadcast industries to get high quality digital audio/video in/out of the Mac Pro. HD-SDI is a 4:2:2 10-bit stream that can carry uncompressed digital audio as well. You can also get 4:4:4 uncompressed video in/out by using two of the HD-SDI ins/outs as dual-link HD-SDI.

If that all means nothing to you -- don't worry about it. :)

It does mean a bit - I'm still learning about this, but I expect to be working with this stuff for art films in a few years' time as it becomes more mainstream/cheaper. That Kona 3 is a nice card, the first I've heard of it - only about £2,000 in the UK - I remember when this stuff used to cost £50,000.

http://www.creativevideo.co.uk/public/view_item_cat.php?catalogue_number=aja_kona-3
 
So, while you can transmit and receive on the same piece of fiber, such a system is rarely used (I think Verizon FiOS does this). Typically you always have a TX (transmit) and an RX (receive) fiber.

Using passive splitters that allow full duplex on single fibers is not uncommon for metro and backbone networks. The passive optics are cheap; extra fibers and single-mode GBICs are not, so if you don't need a bunch of lambdas it makes sense. But inside the data center you can use dirt-cheap multi-mode fiber networking since the distances are so short, which obviates any benefit.
 
I see nobody made a point about the 20+ Barcelona chairs in the Apple lounge, those suckas are handmade and you have to buy them in pairs even if you only want one. I'm not sure on the pricing, but I think it's about $1,400 per chair... and of course you can't buy just one!

Designed by Mies van der Rohe.

completely off topic, but...
I NOTICED!!!!! Wow, I just studied that in my AP Art History class. lol :D
 
I work for Sun in their UK sales centre and I'm used to selling some SERIOUS kit on a day-to-day basis. E25K clusters utilising 64 USIV+ processors hooked up to SL8500 tape libraries etc. are not uncommon... but seriously, this takes the biscuit. Sun sell some serious server technology and I'm telling ya, the server world is a fairly small sector of Apple's business, but holy cow... this is just barmy. 750TB on site? At an expo?! Madness. My geek side wants to punch the air. Go Apple! I am so tired of the unresponsive crap we operate on at work - Sun Ray ultra-thins hooked up to who knows what at the data centre. All I do know is that it is SLOW SLOW SLOW!

Anyways. Nice kit. If we can get ZFS into Leopard and do some nice Sun-style energy efficiency work, then I can see that my personal and work lives will be working well together!
 
Why all this at a show? I'll tell you.

NAB is not just a "conference," it is one of the largest video/broadcast expos in the world. This is where EVERYONE in the industry shows off their stuff, and major purchasing decisions are made. This clearly was the year Apple wanted to show that the Era of Avid is over.

The servers and RAIDs were doing many different things -- Xsan, Final Cut Server, PictureReady ingest points, render clusters, etc. etc. etc. It was very impressive, and Apple clearly had one of the best setups at NAB. It was packed the entire time, but frankly, it was the new applications that stole the show, not the hardware (even though it was impressive). Final Cut Pro 6, Motion 3, Soundtrack Pro 2, Compressor 3, and Color, not to mention Final Cut Server, were pretty much overwhelming everyone who checked them out. These were not minor updates, they were major, major updates, and everyone knew it.

Things that people had been praying for for years became a reality -- a mixed-format timeline in FCP, 3D in Motion, a workgroup asset manager for FCStudio workgroups, a REALLY high-end color-grading and finishing solution, and don't forget ProRes, which will probably become the de facto post-production format for editors who would typically want to work in uncompressed 10-bit 4:2:2 YUV (you now get essentially the same quality in one-sixth of the storage space, and with one-sixth the bandwidth requirements -- this is HUGE.)
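If you want to sanity-check that one-sixth figure yourself (bit rates here are nominal, and ProRes is variable bit rate, so treat this as ballpark):

```python
# Ballpark check on the "one-sixth" claim for ProRes vs. uncompressed video.
uncompressed_mbps = 1920 * 1080 * 20 * 30 / 1e6   # 10-bit 4:2:2 at ~30 fps: ~1244 Mb/s
prores_hq_mbps = 220                               # nominal ProRes 422 HQ at 1080
print(f"~{uncompressed_mbps / prores_hq_mbps:.1f}x smaller")   # ~5.7x, i.e. roughly 1/6th
```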

I'll be honest, I'm still recovering from NAB -- Vegas is simply the weirdest place in the world IMHO, and it takes a little while to re-adjust to the "real world." ;-)
 