alfismoney said:
First of all, Xsan is set up to run off the Xserve RAID, which Apple (and all other RAID makers) ships with redundant power supplies, controller cards, and disks to prevent drive crashes from bringing down a system, and which also features cell phone/email/pager/big-flashing-network alert systems that fire automatically at the hint of any problem. While none of these is foolproof, they add up to a system that, when properly set up and administered, is infinitely more reliable than any hard disk the average consumer has ever used. There's data security built in that works fairly well, along with a lot of software administration.

No argument at all, but what does that have to do with my comments about the XServe falling seriously behind the curve for I/O capabilities - even considering the limitations of entry-level 1U servers?


alfismoney said:
More importantly, Xsan is designed to relay everything across a Fibre Channel hub that moves MUCH faster than gigabit Ethernet, and is only meant to serve high-end systems. When you drop 2 x $13,000 for two RAID arrays, plus $4,000 for an Xserve, plus $4,000 for a switch, plus $5,000 in backup parts, and another $14,000 for a tape backup system, you don't really need to worry about the tiny expense of buying a $500 card for each G5 you hook up to it.

Look again, I was replying (in my other post, not the one that you quoted) to someone who seemed to think that Xsan would pool all the free space on all your Macs into one uber-filesystem.

I quoted some prices to point out to him that Xsan is not low-end freeware - I agree that you'll have very serious amounts of money invested in order to use it. I ordered two 16-port Brocade 2 Gbps Fibre Channel switches today, plus 20 more dual-port 2 Gbps HBAs, and more than $5000 worth of fibre cables to connect them. Fibre Channel is serious money....


alfismoney said:
Not to mention the fact that you have a standards-compliant technology as opposed to 10Gbit, which still hasn't been finalized and won't be for a few years yet.

Sorry, but you are wrong here. 10GbE is IEEE 802.3ae, adopted in 2002. 10GbE is real, and is available from many manufacturers today. (e.g. http://www.foundrynet.com/products/l23wiringcloset/edgeiron/24GS_48GS_8x10G.html or http://news.com.com/HP+signs+on+high-speed+networking+start-up/2100-1010_3-5477022.html)

If you were paying attention to 10GbE, you'd see that the hot buzz is that it's moving from a network backbone role (your gigabit ethernet switches link to each other with 10GbE) to a server to switch role.


alfismoney said:
As for the PCIExtreme addition, realize that you've got a million other drive limitations, such as ATA only running at 150MBps, plus the channel limitations of fibre and ethernet, that are much more limiting than your PCI bus, so while I'd like to see apple make the change I think it's a little bit less pressing than simple processing power and room for 16 gigs of RAM...

It's PCI Express.... Was that a mistake, or an attempt to be cute?

Plus, SATA's link rate is 150 MB/s per disk, so a 7-disk array can theoretically saturate the bandwidth of PCI-X. Only 7 disks.... In practice, even a 6-disk array would be bandwidth-limited. (The media transfer rate wouldn't be the problem, but transfers to and from those big caches on the disks would be affected. That means more latency.)
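A quick sketch of that bandwidth budget (the 85% bus-efficiency figure below is my own assumption for PCI-X protocol overhead, not a measured number):

```python
# Rough bandwidth budget: SATA disks vs. a 64-bit/133 MHz PCI-X bus.
# The 0.85 efficiency factor is an assumed allowance for PCI-X
# protocol overhead, not a measured value.
import math

SATA_MBPS_PER_DISK = 150          # SATA 1.5 Gb/s link rate after 8b/10b
PCIX_PEAK_MBPS = 133.33 * 8       # 64-bit bus at 133 MHz, ~1066 MB/s
BUS_EFFICIENCY = 0.85             # assumption: usable fraction of peak

usable_bus = PCIX_PEAK_MBPS * BUS_EFFICIENCY
disks_to_saturate = math.ceil(usable_bus / SATA_MBPS_PER_DISK)

print(f"usable PCI-X bandwidth ~ {usable_bus:.0f} MB/s")
print(f"disks to saturate it   ~ {disks_to_saturate}")
```

With those assumed numbers, seven disks' worth of SATA links already cover the usable bus bandwidth, which is the point being made above.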

This is very application dependent. A 1U NFS server probably won't be PCI-X limited - unless you put a quad port 2Gbps FC HBA or a 10GbE or IB card into it. Even then, it would probably run out of CPU power before hitting hard PCI-X limits.

But think about the Xserve poster child - the VA Tech super-cluster. PCI Express is 3 times faster than PCI-X for InfiniBand, with 20% less latency. Instead of 7th on the supercomputer list, it might have been in the top 5 if the Xserve had current technology for I/O.

And please DO NOT underestimate the importance of lower latency.... Too often people focus on sustained bandwidth without realizing that latency is the real problem. Some applications need MB/sec, but many others need IOs per second.

"Latency" usually means that you're waiting for something. It doesn't matter how fast your CPU or memory bus happen to be - everyone waits at the same speed!
 
Sorry, I replied to another posting at the start of my quoted response to you; I wasn't paying much attention when I was writing (PCIExtreme and all).

Good point on the latency issue; I hadn't considered that in my numbers, and you're completely right there. It will make a big speed difference. Of course, that doesn't change my view that flat-out bus speed isn't the real issue here; only latency is. Allow me to explain.

If you hand-build your own LVD SCSI cables out of gold (I've seen it done in labs) for maximum efficiency and use 15,000 rpm drives, you're still not going to come close to 100 MB/s in regular usage. Yes, you can get burst rates that are faster, but in sustained use it just doesn't happen.

Over Serial ATA, striped across seven drives, you start to get higher throughput, but you're not going to move enough data through the RAID controller to max it out, nor are you going to max out your Fibre Channel connection. Sure, it will happen on occasion, but not regularly. This is why you need a full 7-disk setup to handle uncompressed high-resolution video: the lower latency will improve drive response time, but it won't fix the poor sustained throughput you get from each individual drive.

In the end, my argument is that you're never going to move more than 800 MB/s over this particular PCI bus until there's more data moving over your RAID array, which Apple's current Fibre Channel implementation just doesn't deliver; nor do most people need it. The upgrade is currently worthwhile only for the latency, not the raw megabytes per second.
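The arithmetic behind the 7-disk figure can be sketched like this; the 10-bit 4:2:2 pixel format and the 25 MB/s sustained per-disk rate are my assumptions about typical hardware of the era, not numbers from the post:

```python
# Why uncompressed HD needs a wide stripe: per-disk *sustained*
# throughput, not burst rate, is the limit. The pixel format and the
# sustained per-disk rate are era-typical assumptions.
import math

WIDTH, HEIGHT, FPS = 1920, 1080, 30        # 1080-line HD
BYTES_PER_PIXEL = 2.5                      # 10-bit 4:2:2 = 20 bits/pixel

stream_mbps = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e6
SUSTAINED_PER_DISK = 25                    # MB/s real-world, not burst

disks_needed = math.ceil(stream_mbps / SUSTAINED_PER_DISK)
print(f"uncompressed HD stream ~ {stream_mbps:.0f} MB/s")
print(f"disks at {SUSTAINED_PER_DISK} MB/s sustained: {disks_needed}")
```

At ~155 MB/s for the stream and ~25 MB/s sustained per drive, the stripe comes out to seven disks, matching the setup described above.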
 
MacBandit said:
Well, yeah, a Mac Plus would be fine for text. I'm not surprised that a 1 GHz increase in clock rate shows a speed increase in everyday use. I did say comparable clock for clock, after all.

I guess the question I still have is: are there advantages the G5 has over the G4 at the same clock rate? It just seems like such a radically different chip that there must be more to it than MHz.
 
AidenShaw said:
But think about the Xserve poster child - the VA Tech super-cluster. PCI Express is 3 times faster than PCI-X for InfiniBand, with 20% less latency. Instead of 7th on the supercomputer list, it might have been in the top 5 if the Xserve had current technology for I/O.
Your other points are quite valid, but I doubt that the VA Tech cluster would be much higher even with PCI Express. The reason I doubt this is that the VA Tech cluster (and the Spanish cluster, also based on the PowerPC 970) has the highest performance per processor of all top-20 systems except the Earth Simulator. If VA Tech were held back by PCI-X, then the G5 processors would have to be even more superior than the list currently suggests. I know the G5 is a good processor, but it isn't THAT good.
 
But AidenShaw ... where did you find the latency numbers for PCI Express vs. PCI-X? I think I've read that PCI Express was faster than AGP in throughput but had higher latency, so I assumed that PCI Express would also have higher latency than PCI-X, but it seems I was wrong.
 
gekko513 said:
But AidenShaw ... where did you find the latency numbers for PCI Express vs. PCI-X? I think I've read that PCI Express was faster than AGP in throughput but had higher latency, so I assumed that PCI Express would also have higher latency than PCI-X, but it seems I was wrong.

http://www.mellanox.com/products/shared/InfiniHost_III_Ex_HCA_PO_050.pdf

"PCI Express offers a dramatic jump in I/O bandwidth capabilities with two to four times the peak bandwidth of PCI-X.

The latest performance results demonstrate that InfiniHost Express is delivering almost 3 times the bandwidth of PCI-X and 25 times the bandwidth of gigabit Ethernet.

Additionally, this bandwidth is delivered with a 20% reduction in latency.

This performance will enable faster as well as more powerful clustering and storage solutions for the summer of 2004 as the new PCI Express Servers arrive in the market."



Assuming that results for one architecture (AGP vs. PCI-Express) translate to another (PCI-X vs PCI-Express) is risky, and in this case false.
 
gekko513 said:
Your other points are quite valid but I doubt that the VA Tech cluster would be much higher even with PCI Express.

I think that you're right - the VA Tech cluster might have moved from 7th to 6th, but the Itanium cluster at #5 is much faster than VA Tech. A boost from the communications wouldn't have been enough to jump that gap and beat the Itanium cluster.

Note, however, that real application performance on a super-cluster depends heavily on the bandwidth and latency of the interconnects. The LINPACKD test used for the Top500 ranking is less dependent on bandwidth and latency than many useful applications, so on real cluster applications the benefits of PCI Express might be more dramatic.

InfiniBand's big advantage is that for its relatively low cost ($1500 to $2000 or so per connection) it's more or less in the same league as the expensive special-purpose cluster interconnects.

Intel systems with PCI-Express have an advantage in bandwidth and latency right now. In this spring's Top500 listing, don't be surprised if Xeon 64-bit clusters take over several of the spots in the top 5 or top 10. (Not because of InfiniBand on LINPACKD, but because it will help real applications scale to larger clusters - and coincidentally do well on LINPACKD.)


gekko513 said:
I know the G5 is a good processor but it isn't THAT good.

Right - it isn't THAT good on real applications, but on LINPACKD it IS that good. ;)

Even IBM says that the PPC970 performs unrealistically well on LINPACKD....

"In practice, only a small portion of peak capacity is achieved because a processor is rarely scheduled to do simultaneous “multiply and adds” in double precision.

However, the LINPACK benchmark, which is often used to rank supercomputers (the Top500 Supercomputer list), makes extensive use of simultaneous multiply and add."
 
sjl said:
...In order to avoid filesystem corruption, CXFS (and almost certainly Xsan) use a "metadata server". Each system allocates storage through the metadata server, but actually accessing that storage is done directly...

Wonderful post. I was going to do a write-up, but I came into this thread a little late, and your post said it all.
 
X-Serve Dual Power Supplies

What the X-Serve needs is dual power supplies like the X-RAID has, though that's hard to fit in a 1U enclosure. Perhaps 2U, or some very creative engineering. Backup power supplies in the box are mandatory for the enterprise data systems where I work. We could get a lot of X-Serves if they just had that second power supply. Xsan running on X-Serves and X-RAIDs would be perfect for our server migration.
 
Re: X-Serve Dual Power Supplies

The Red Wolf said:
What the X-Serve needs is Dual Power supplies like the X-RAID, hard to fit in a 1U enclosure.

HP, IBM, Dell and even SuperMicro have 1U systems with dual power.


But you're right - Apple needs a 2U or 3U system with more expansion slots and more redundancy. A 2U with 6 disks and 3 PCI-X slots and dual power would give the Xserve better traction in many IT departments. (Not as much traction as a quad-CPU 4U unit with 6 slots, but better than an entry 1U.)
 
swissmann said:
I guess the question I still have is: are there advantages the G5 has over the G4 at the same clock rate? It just seems like such a radically different chip that there must be more to it than MHz.

Well, yes, it does. The two most obvious are that the PPC970 has a real FPU and an FSB that is at least five times faster.
 
but does that matter in real life?

MacBandit said:
Well, yes, it does. The two most obvious are that the PPC970 has a real FPU and an FSB that is at least five times faster.


Funny that most of the charts at http://www.barefeats.com/imacg5.html show that the difference in performance between the G4 and the G5 is more or less the frequency difference between the two.

For example, on one iMovie4 test:

- 83 sec - Dual 1.42 PM G4
- 62 sec - Dual 2.00 PM G5

Just on clock rate, you'd expect 59 sec for the G5 - but it scaled worse than the clock rate.

So much for "real FPU" and a 5x bus....

As always, the only benchmark that is really useful is the one that runs the programs that you run. LINPACKD on VA Tech's supercluster doesn't tell you how fast your iMac will rip your CDs! Random architectural features like "real FPU" and "double-pumped hyper-transformer" don't tell you much either.
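The clock-scaling expectation above is simple arithmetic; here is a sketch of it, using the BareFeats iMovie numbers quoted in this post:

```python
# If performance tracked clock rate exactly, the new machine's time
# would be old_time * (old_clock / new_clock).

def expected_time(old_time_s, old_clock_ghz, new_clock_ghz):
    """Runtime predicted from clock-rate scaling alone."""
    return old_time_s * old_clock_ghz / new_clock_ghz

g4_time, g4_clock = 83, 1.42      # Dual 1.42 GHz PM G4, iMovie4 test
g5_clock = 2.00                   # Dual 2.00 GHz PM G5

predicted = expected_time(g4_time, g4_clock, g5_clock)
print(f"clock-scaled prediction: {predicted:.0f} sec (measured: 62 sec)")
```

The prediction comes out to ~59 seconds, so the measured 62 seconds means the G5 scaled slightly worse than its clock rate on this test.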
 
AidenShaw said:
Funny that most of the charts at http://www.barefeats.com/imacg5.html show the difference in performance between the G4 and the G5 is more or less the frequency difference between the two.

For example, on one iMovie4 test:

- 83 sec - Dual 1.42 PM G4
- 62 sec - Dual 2.00 PM G5

Just on clock rate, you'd expect 59 sec for the G5 - but it scaled worse than the clock rate.

So much for "real FPU" and a 5x bus....

As always, the only benchmark that is really useful is the one that runs the programs that you run. LINPACKD on VAtech's supercluster doesn't tell you how fast your Imac will rip your CDs! Random architectural features like "real FPU" and "double-pumped hyper-transformer" don't tell you much either.

Yet in a number of results the G5 showed a larger-than-clock-rate increase in overall performance.

1.42 GHz to 2.0 GHz = 1.41x the clock rate

iTunes 4, convert 34 minutes of audio:
G5/2.0 = 61 secs
G4/1.42 = 112 secs
= 1.84x faster

FileMaker 7, 12 actions:
G5/2.0 = 170 secs
G4/1.42 = 253 secs
= 1.49x faster

Also in gaming performance the FSB really shows a benefit.

In any case, results are only as good as your original hardware comparison. You have to have similar hard drives and video cards. In most cases BareFeats fails to do this and is notorious for showing the results it wants to. I would take any result they publish with a grain of salt the size of a G5 itself.
 
MacBandit said:
In any case results are only as good as your original hardware comparison.


I didn't cite the iTunes result, because that's often a test of the CD reader as well as the CPU. iMovie, however, is brute CPU.

If something like "FileMaker" is using files, then the faster SATA drives on the G5 could be a major reason for the better than clock result.

Anyway, BareFeats is pretty good at the "how does this generation Mac compare to the last generation". Maybe they don't isolate all the variables - but neither will the end user. With Apple's limited configuration choices, it's very hard to do strict comparisons with only one variable.

The end user just wants to know how one system compares to another, and it doesn't really matter to the end user if the improvement is due to the new disk or the double-pumped hyper-transformer. The question is simply "is this new Mac significantly faster than the last Mac?"
 
AidenShaw said:
I didn't cite the iTunes - because that's often a test of the CD reader as well as the CPU. iMovie, however, is brute CPU.

If something like "FileMaker" is using files, then the faster SATA drives on the G5 could be a major reason for the better than clock result.

Anyway, BareFeats is pretty good at the "how does this generation Mac compare to the last generation". Maybe they don't isolate all the variables - but neither will the end user. With Apple's limited configuration choices, it's very hard to do strict comparisons with only one variable.

The end user just wants to know how one system compares to another, and it doesn't really matter to the end user if the improvement is due to the new disk or the double-pumped hyper-transformer. The question is simply "is this new Mac significantly faster than the last Mac?"


The original question was based on a CPU-to-CPU comparison, not an end-user computer-to-computer one.
 
MacBandit said:
The original question was based on a CPU to CPU comparison....


Exactly why I picked iMovie and not an I/O-heavy test... :)


Anyway, to the original question:

swissmann said:
are there advantages the G5 has over the G4 at the same clock cycle?

I'd make two additional comments....

1) swissmann - you have a 1 GHz G4 and a 2 GHz G5. Run your own timings: is the 2 GHz machine more than twice as fast as the 1 GHz one?

2) I think that many tests have concluded that for most applications the primary benefit of the G5 is clock speed - the CPI (cycles per instruction) for the two chips is more or less the same, with perhaps a small advantage to the G4.

It will be very interesting when Moto^H^H^H^H Freescale comes out with new "G4" chips with fast busses, especially when they are dual core and have low power consumption!!
 
not so, not so

MacBandit said:
That defeats showing off one of the benefits of the G5 which is in/out data throughput due to a vastly superior FSB.


Not at all, the FSB is what moves data between memory and the CPU -- therefore helping CPU-intensive jobs get to memory.

The FSB isn't involved in I/O at all - the I/O goes from memory to the I/O busses (HyperTransport <-> PCI-X). DMA transfers the data to/from memory without the CPU (FSB) being in the path.

Only after the I/O system has moved the data into memory can it go over the FSB to the CPU. (and vice-versa for output)
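A toy model of that data path may help; it is purely illustrative (the transaction counts are a simplification, not a model of any real chipset). With programmed I/O the CPU drags every word across the FSB, while DMA lets the device write straight to memory:

```python
# Toy model of the I/O path: programmed I/O (PIO) drags every word
# across the FSB through the CPU; DMA moves data device <-> memory
# directly and only interrupts the CPU once. Purely illustrative.

class Bus:
    def __init__(self):
        self.fsb_transactions = 0     # CPU <-> memory traffic

def pio_transfer(bus, words):
    # CPU reads each word from the device and writes it to memory:
    # two FSB crossings per word.
    bus.fsb_transactions += 2 * words

def dma_transfer(bus, words):
    # Device writes straight into memory; the CPU sees one interrupt,
    # costing (say) a single FSB transaction to service.
    bus.fsb_transactions += 1

pio_bus, dma_bus = Bus(), Bus()
pio_transfer(pio_bus, words=1024)
dma_transfer(dma_bus, words=1024)
print(f"PIO FSB transactions: {pio_bus.fsb_transactions}")
print(f"DMA FSB transactions: {dma_bus.fsb_transactions}")
```

In this sketch the FSB only gets involved again once the CPU actually touches the data in memory, which is the point being made above.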
 