Thanks for the info. I see that Scan has just got the 5580 in stock (I guess they're giving it priority given the recent review in Custom PC), and one of the Supermicro boards is now in stock too, though as it's a server rather than a workstation board it's not much good.
If you want an enthusiast board for OC'ing the CPUs, I'm afraid you'll have to wait a while. :( Perhaps the prices will have dropped slightly by then, but I wouldn't expect by much.

If you want to get an idea of what ASUS does in this space, try searching for the Z7S WS. It's their enthusiast DP board for the Xeon 5400 series. IIRC, they became available in Oct 2008.
 
I'll see what I can post later tonight.

ASUS P6T6 WS Revolution
Core i7 920, OC'd to 4.11GHz
6GB OCZ Platinum DDR3 @ 1600MHz (it didn't do as well with Crucial UDIMM ECC, due to slower CAS=7, so it's stored for later)
ATI Radeon HD 4870 1GB
300GB Velociraptor (boot)
8*WD 320GB RE3 RAID 5 (ARC-1231ML)
Lian Li PC-V2010 Full Tower (Silver)
2x Lian Li EX-H34 Hot Swap bays
Corsair HX1000W PSU
Noctua NH-U12P SE1366 cooler
Noctua NF-P12 fans (4)
Yate Loon 140mm (2)
Vista 64 Ultimate
Linux

Basically, the i7-965 with ECC. ;)

What is write performance like with the Areca cards? I know reads are fast and writes are bad with software solutions, but how does this card do?
 
The 35xx is a Xeon-branded i7 chip; it can use ECC, and that's about it.

Here are Cinebench scores, no point in posting Geekbench as the results from Windows to OS X are not directly comparable.


CINEBENCH R10
****************************************************

Tester :

Processor : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
MHz : 3800
Number of CPUs : 8
Operating System : WINDOWS 64 BIT 6.0.6001

Graphics Card : GeForce GTX 275/PCI/SSE2
Resolution : <fill this out>
Color Depth : <fill this out>

****************************************************

Rendering (Single CPU): 5352 CB-CPU
Rendering (Multiple CPU): 21949 CB-CPU

Multiprocessor Speedup: 4.02

Shading (OpenGL Standard) : 6800 CB-GFX


****************************************************

so, based on that, yours scored better than all on the single CPU, and only a couple beat you on the multiple CPU?

that's crazy.

I'll see what I can post later tonight.

ASUS P6T6 WS Revolution
Core i7 920, OC'd to 4.11GHz
6GB OCZ Platinum DDR3 @ 1600MHz (it didn't do as well with Crucial UDIMM ECC, due to slower CAS=7, so it's stored for later)
ATI Radeon HD 4870 1GB
300GB Velociraptor (boot)
8*WD 320GB RE3 RAID 5 (ARC-1231ML)
Lian Li PC-V2010 Full Tower (Silver)
2x Lian Li EX-H34 Hot Swap bays
Corsair HX1000W PSU
Noctua NH-U12P SE1366 cooler
Noctua NF-P12 fans (4)
Yate Loon 140mm (2)
Vista 64 Ultimate
Linux

Basically, the i7-965 with ECC. ;)

so can you have dual W3570s in one machine?
 
What is write performance like with the Areca cards? I know reads are fast and writes are bad with software solutions, but how does this card do?
It gets as high as 1.2GB/s. :eek: Cache of course, but 575MB/s on avg. :D
so can you have dual W3570s in one machine?
No, AFAIK. It doesn't have the second QPI channel used in DP boards, and I don't recall seeing that you could. You can, however, use a DP Xeon 55xx in an SP board, as it can shut its second QPI off. Sort of a waste though, as they're more expensive. Might be worth considering if you plan to get a DP board and a second CPU at a later time. You only lose the SP board's cost, minus anything you get if you sell it. ;)
 
It gets as high as 1.2GB/s. :eek: Cache of course, but 575MB/s on avg. :D

No, AFAIK. It doesn't have the second QPI channel used in DP boards, and I don't recall seeing that you could. You can, however, use a DP Xeon 55xx in an SP board, as it can shut its second QPI off. Sort of a waste though, as they're more expensive. Might be worth considering if you plan to get a DP board and a second CPU at a later time. You only lose the SP board's cost, minus anything you get if you sell it. ;)

oh ok. so i might as well just get the i7 965 then?
 
oh ok. so i might as well just get the i7 965 then?
If you're using an SP board, use an i7 if you don't need/want ECC, or a W35xx if you do. You're the only one who can answer that one. ;) :D

For DP boards, you'd be better off sticking with 55xx parts. You can upgrade to a DP system at a later time if you choose, assuming you can't manage it from the beginning. It's an option if you're on a tight budget ATM.
 
It gets as high as 1.2GB/s. :eek: Cache of course, but 575MB/s on avg. :D

No, AFAIK. It doesn't have the second QPI channel used in DP boards, and I don't recall seeing that you could. You can, however, use a DP Xeon 55xx in an SP board, as it can shut its second QPI off. Sort of a waste though, as they're more expensive. Might be worth considering if you plan to get a DP board and a second CPU at a later time. You only lose the SP board's cost, minus anything you get if you sell it. ;)

That is an insane transfer rate. It eats SSD :) I went with the Green Drives because I wanted quiet for my storage space and don't really need the speed for writing images or mp3's to disk. May I ask what kind of software you run that requires that kind of transfer speed?
 
That is an insane transfer rate. It eats SSD :) I went with the Green Drives because I wanted quiet for my storage space and don't really need the speed for writing images or mp3's to disk. May I ask what kind of software you run that requires that kind of transfer speed?
EDA
 
If you're using an SP board, use an i7 if you don't need/want ECC, or a W35xx if you do. You're the only one who can answer that one. ;) :D

For DP boards, you'd be better off sticking with 55xx parts. You can upgrade to a DP system at a later time if you choose, assuming you can't manage it from the beginning. It's an option if you're on a tight budget ATM.

what exactly is ECC?

No, you buy an i7 920 and overclock it. There's no point going higher unless you have money to waste.

but wouldn't i be able to overclock an i7 965 even more? and get more years of use out of it?
 
what exactly is ECC?



but wouldn't i be able to overclock an i7 965 even more? and get more years of use out of it?

Not enough to justify the price. 3.6GHz to 4.0GHz is about the norm for these chips on quiet air cooling. The 965 can't really do more.

As for longevity, nobody knows. Both will require additional voltage, but at $280 a chip, who cares? By the time my chip dies (if it even does), I'm sure something better will be available.

As for ECC, see http://en.wikipedia.org/wiki/Error_control

I wouldn't be too concerned about it.
 
what exactly is ECC?
It's a memory type that has the ability to detect and correct errors, and results in a more stable system. :)

Read here for a more detailed explanation. ;)

but wouldn't i be able to overclock an i7 965 even more? and get more years of use out of it?
You can get a higher clock out of it, but you have to be careful, as either too much heat or voltage can kill it. You also would run into stability issues, and have to drop it back anyway.

If you do decide to OC, just take the time to test it. There are a few programs available that help check the stability of the CPU, HDDs, graphics card(s), memory,... :) You'll have a lot fewer headaches if you do. ;)
 
Not enough to justify the price. 3.6GHz to 4.0GHz is about the norm for these chips on quiet air cooling. The 965 can't really do more.

As for longevity, nobody knows. Both will require additional voltage, but at $280 a chip, who cares? By the time my chip dies (if it even does), I'm sure something better will be available.

As for ECC, see http://en.wikipedia.org/wiki/Error_control

I wouldn't be too concerned about it.
I dropped mine back to 4.12GHz (i7 920 on air) for stability, and haven't had any issues. :)

I figured I'd rather roast a CPU that went for just under the $300 mark before going with a much more expensive part (W3570). :D

As for ECC, it's not needed for most tasks, but is needed in a few cases. I'd prefer to have it, as the calculations performed have to be accurate. One bit off can throw the output drastically, and it has to be re-done. Assuming I even catch it. :eek: :p
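To make the "detect and correct" idea concrete, here's a toy Hamming(7,4) sketch: 4 data bits get 3 parity bits, and any single flipped bit can be located and fixed. This is an illustration of the principle only; real ECC DIMMs use a wider SECDED code over 64-bit words, not this exact code.

```python
# Toy Hamming(7,4): the idea behind ECC memory, at miniature scale.
# Parity bits sit at codeword positions 1, 2 and 4; data at 3, 5, 6, 7.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def correct(c):
    """Fix at most one flipped bit, then return the 4 data bits."""
    c = list(c)
    s = 0                       # syndrome = 1-based position of the error
    for mask in (1, 2, 4):
        parity = 0
        for pos in range(1, 8):
            if pos & mask:
                parity ^= c[pos - 1]
        if parity:
            s |= mask
    if s:
        c[s - 1] ^= 1           # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                      # simulate a single-bit memory error
print(correct(cw) == data)      # True: error detected and corrected
```

This is why one flipped bit in non-ECC RAM silently corrupts a result, while ECC RAM fixes it on the fly.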
 
thanks for the replies.

so ya'll think it's my hard drives then?

Very, very likely. I suppose different codecs could also have something to do with it, but as your procs weren't much faster than mine (in total), I suspect not much.


so, with RAID 0, i double my chances of drive failure, correct?

No. I see that false math all over the place. The failure rate MTBR/MTBF for the drives is exactly the same. That math is technically correct but it's basically 99.9% bullshate. It would be like saying that if you have a rock in each hand, the odds that you'll drop one of them doubles. No, sorry, I'm not going to drop either till I'm good and ready to set it down. :D

I wish the math geek that propagated this rather disingenuous proposition had never done so. It's caused billions of needless questions and spawned billions of wrong answers. This math also means that in a 100 or 1000 drive array we would have one drive failing every month or something crazy like that. Doesn't happen. In a 1000 drive array ya get a lemon or two right away, and then, kept properly, the system operates for years on end - no problem.
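For reference, the "textbook" independence model being disputed here goes like this: if each drive fails within some period with probability p, a RAID 0 set of n drives loses data when any one of them fails. The failure rate p below is illustrative, not a real drive spec.

```python
# The independence model behind "RAID 0 doubles your failure risk".
# Whether its assumptions (independent, identical drives) hold in
# practice is exactly what the post above is arguing about.

def array_failure_probability(p_drive: float, n_drives: int) -> float:
    """P(at least one of n independent drives fails) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_drive) ** n_drives

p = 0.03  # assumed 3% annual failure rate per drive, for illustration
for n in (1, 2, 3, 10):
    print(n, round(array_failure_probability(p, n), 4))
```

For small p this is roughly n * p, which is where the "doubles" claim for two drives comes from.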


also, are you using just a software raid?

In my case yes. 3-Drive RAID Level 0. I get burst speeds of +400 MB/s and sustained speeds around 300 MB/s over the first 15% ~ 20% of the platter surfaces. Cached I/O is also in the gigs per second on a software RAID but you're limited to the megs (accumulated) of the drives in the array while a hardware RAID is often upgradable. For iMovie I guess it's asking for an average sustained rate of like 260 MB/s for 1080p and will use seeks of zero if it can get them. 3 or 4 Drive RAID seeks are pretty high (== slow). You can figure out your frame sizes and bandwidth requirements fairly easily. Random seek (in ms) seems to be a major factor here as I'm hearing my RAID set actuations during scrubs - meaning seeks are probably very important for shuttle/jog and selections. This is really where an SSD will shine. They aren't that much faster than RAID at throughput (regardless of what people in these threads seem to think) but your seek speeds will be just about zero - which is awesome!

SSD will be better than a RAID for your purposes, and at everything except price per gig. So I guess just keep the setup you have now and move the files onto and off of the SSD for editing. That will give you about the same throughput speed (250 MB/s "bandwidth") as a 3 drive RAID 0, about an hour of footage (capacity) at 80GB, the same MTBR, and lightning fast seeks (I/O latency). At ~$350 it's the same cost investment as the RAID too - only it's 80GB instead of 2.8 TB. :p
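The "~260 MB/s for 1080p" figure above can be roughly sanity-checked from frame sizes, as the post suggests. The frame format here (uncompressed 8-bit RGBA at 30 fps) is my assumption; iMovie's actual internal format may differ.

```python
# Back-of-envelope bandwidth for an uncompressed 1080p stream.

def frame_bytes(width: int, height: int, bytes_per_pixel: int) -> int:
    """Size of one uncompressed frame in bytes."""
    return width * height * bytes_per_pixel

def stream_mb_per_s(width: int, height: int, bpp: int, fps: int) -> float:
    """Sustained rate in decimal MB/s for fps frames per second."""
    return frame_bytes(width, height, bpp) * fps / 1e6

print(frame_bytes(1920, 1080, 4))                     # 8294400 bytes/frame
print(round(stream_mb_per_s(1920, 1080, 4, 30), 1))   # 248.8 MB/s
```

That lands within spitting distance of the ~260 MB/s quoted, which is why a 3-drive RAID 0 or an SSD is about the minimum for smooth uncompressed 1080p scrubbing.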


 
thanks for the replies.

i still can't decide between a pc and a new mac pro.

from what i gather, an i7 pc will be faster in every way over a 2.26 octo mac pro, correct? unless i'm running a bunch of virtual machines or something?

and since i already have a mac pro, the new mac pro won't be much of an upgrade.

but here's the thing. since my mac pro now is running as a server and i'm using it as a normal machine, i don't get the real benefits from it.
 
thanks for the replies.

i still can't decide between a pc and a new mac pro.

from what i gather, an i7 pc will be faster in every way over a 2.26 octo mac pro, correct? unless i'm running a bunch of virtual machines or something?

and since i already have a mac pro, the new mac pro won't be much of an upgrade.

but here's the thing. since my mac pro now is running as a server and i'm using it as a normal machine, i don't get the real benefits from it.

Well, there's really only two things to consider, and on average they scale (or we can, for most purposes, pretend that they scale) linearly.

There's clock speed. Some have called this "pep" which works for me.
And then there's bandwidth. Number_of_Cores X Clock_Speed = bandwidth.

That's it. We can benchmark particulars but generally speaking, that's all there is. Pick the balance between the two (plus price) which you think you want and go for it. I mean if you're set on buying a new system.

Anything else is just an order of complexity added in that probably won't matter much if at all, in practice.

Another consideration is to get a $200 used system and run it as your server. And trust me a $200 used system these days ROCKS! as a server!

EDIT: something like this:

Price $250
Clock 3.8GHz with HT
3GB Ram
73GB (SCSI/15000rpm)
COMBO Optical drive
Quadro FX3400
Windows XP Pro
KB/Mouse
etc. http://page19.auctions.yahoo.co.jp/jp/auction/x68836645

Sorry for the Japanese but that's where I live. Anyway systems like this are all over the place in almost every country - in great abundance. They make great personal servers! I ran one off of a DEC Alpha 233MHz running Digital Unix up until just a little while ago. ;)
 
Well, there's really only two things to consider, and on average they scale (or we can, for most purposes, pretend that they scale) linearly.

There's clock speed. Some have called this "pep" which works for me.
And then there's bandwidth. Number_of_Cores X Clock_Speed = bandwidth.

That's it. We can benchmark particulars but generally speaking, that's all there is. Pick the balance between the two (plus price) which you think you want and go for it. I mean if you're set on buying a new system.

Anything else is just an order of complexity added in that probably won't matter much if at all, in practice.

Another consideration is to get a $200 used system and run it as your server. And trust me a $200 used system these days ROCKS! as a server!

EDIT: something like this:

Price $250
Clock 3.8GHz with HT
3GB Ram
73GB (SCSI/15000rpm)
COMBO Optical drive
Quadro FX3400
Windows XP Pro
KB/Mouse
etc. http://page19.auctions.yahoo.co.jp/jp/auction/x68836645

Sorry for the Japanese but that's where I live. Anyway systems like this are all over the place in almost every country - in great abundance. They make great personal servers! I ran one off of a DEC Alpha 233MHz running Digital Unix up until just a little while ago. ;)

thanks for the reply.

yes, i could just buy another computer to run as the server. i could also just buy a mac mini for this purpose, or a pc and run linux on it. again, that would be another hard decision.

i'm just not sure what to do really
 
thanks for the reply.

yes, i could just buy another computer to run as the server. i could also just buy a mac mini for this purpose, or a pc and run linux on it. again, that would be another hard decision.

i'm just not sure what to do really

I dunno! But I can tell ya it's usually NOT a good idea to run dedicated server facilities on a machine you use as a workstation! And $200 for a system like that - what's the hard part? That's about the price of one HDD or something. :p And the one I linked to will just cream a Mac mini!!! Plus all the cool crap it comes with... FireWire, USB, Ultra320 SCSI, ATA, etc. :p

Here's the manual BTW.. ftp://ftp.software.ibm.com/systems/support/system_x_pdf/88p9138.pdf
 
No. I see that false math all over the place. The failure rate MTBR/MTBF for the drives are exactly the same. That math is technically correct but it's basically 99.9% bullshate. It would be like saying that if you have a rock in each hand the odds that you'll drop one of them doubles. No, sorry, I'm not going to drop either till I'm good and ready to set it down. :D I wish the math geek that propagated this rather disingenuous proposition had never done so. It's caused billions of needless questions and spawned billions of wrong answers. This math also means that in a 100 or 1000 drive array we will have one drive failing every month or something crazy like that. Doesn't happen. In a 1000 drive array ya get a lemon or two right away and then kept properly the system operates for years on end - no problem.
It can get confusing. MTBF/MTBR doesn't change, but the UBE does. For example, a 10 drive stripe will change the UBE (entire set) by an order of magnitude, so a consumer set would drop to 1E13, and an enterprise set to 1E14.
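The UBE scaling described here can be sketched numerically: if each drive sees one unrecoverable bit error per `per_drive_ube` bits read, a stripe of n drives read as one unit sees roughly one per `per_drive_ube / n` bits. The independence assumption is the same simplification discussed earlier in the thread.

```python
import math

# Unrecoverable bit error (UBE) rate scaling across a stripe set.

def set_ube(per_drive_ube: float, n_drives: int) -> float:
    """Effective bits-per-error for the whole stripe."""
    return per_drive_ube / n_drives

def p_ure(total_bytes: float, per_drive_ube: float, n_drives: int) -> float:
    """Approx. probability of at least one URE while reading total_bytes."""
    bits = total_bytes * 8
    rate = 1.0 / set_ube(per_drive_ube, n_drives)     # errors per bit
    return -math.expm1(bits * math.log1p(-rate))      # 1 - (1 - rate)^bits

print(set_ube(1e14, 10))   # consumer 10-drive set: 1e13, as above
print(set_ube(1e15, 10))   # enterprise 10-drive set: 1e14
```

Reading a full terabyte from that consumer set already gives a better-than-even chance of hitting at least one URE, which is why rebuilds on big consumer-drive arrays get scary.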

I certainly wouldn't want to have to deal with a 1000 drive array of any type. :eek: :p Failures during a rebuild would drive me nutz. ;)

If it's managed by software RAID, it's likely no big deal, as it has the ability to spin the drives down, and it's likely not run 24/7. In a small array, it shouldn't be noticed until the drives reach "old age". So at least 3 years, and possibly 5. (Given some of the recent issues, and warranty changes, I'd figure on the low side.) ;) But at say $50/drive per year (over 3 yrs), it's cheap enough that it wouldn't warrant any complaints. :D

Not so when a hardware controller is involved though. It may not have a MAID feature to spin down drives (which can cause instability on a hw controller anyway). Add in additional heat (arguably) and vibration tolerance, and it's a rather hard life for drives. For a high availability system it matters, as they're expected to spin 24/7 for a few years. A consumer drive has a bad habit of keeling over dead under those conditions. :p
 
It can get confusing. MTBF/MTBR doesn't change, but the UBE does. For example, a 10 drive stripe will change the UBE (entire set) by an order of magnitude, so a consumer set would drop to 1E13, and an enterprise set to 1E14.

I certainly wouldn't want to have to deal with a 1000 drive array of any type. :eek: :p Failures during a rebuild would drive me nutz. ;)

Hehehehe... I was exaggerating to make a point but I dunno... it might actually be fun - I mean if we got paid for it of course. :D

If it's managed by software RAID, it's likely no big deal, as it has the ability to spin the drives down, and it's likely not run 24/7. In a small array, it shouldn't be noticed until the drives reach "old age". So at least 3 years, and possibly 5. (Given some of the recent issues, and warranty changes, I'd figure on the low side.) ;) But at say $50/drive per year (over 3 yrs), it's cheap enough that it wouldn't warrant any complaints. :D

Yeah, true, but this is different than "double the chances of [catastrophic] drive failure". Most people (I guess) won't know what 1E14, 1E13, etc. actually means in practice. I think the math is confused (every time I read posts) to mean the drive has twice the chance of breaking - boom - no more worky worky. Most people don't know or don't want to know about internal (and typically silent) error handling - they just know it works or not. Ya know. :)
 
Hehehehe... I was exaggerating to make a point but I dunno... it might actually be fun - I mean if we got paid for it of course. :D
It would have to be a really BIG check. :D :D

With No Guarantee on data security, let alone availability. ;) :p

Yeah, true, but this is different than "double the chances of [catastrophic] drive failure". Most people (I guess) won't know what 1E14, 1E13, etc. actually means in practice. I think the math is confused (every time I read posts) to mean the drive has twice the chance of breaking - boom - no more worky worky. Most people don't know or don't want to know about internal (and typically silent) error handling - they just know it works or not. Ya know. :)
I know what you mean. ;) The math never takes into account things like usage patterns, drive wear, environmental conditions,... Then there's defect rate. :eek: :p

Too much to make an accurate model I guess, or they'd be available. ;) :D
 
I dunno! But I can tell ya it's usually NOT a good idea to run dedicated server facilities on a machine you use as a workstation! And $200 for a system like that - what's the hard part? That's about the price of one HDD or something. :p And the one I linked to will just cream a Mac mini!!! Plus all the cool crap it comes with... FireWire, USB, Ultra320 SCSI, ATA, etc. :p

Here's the manual BTW.. ftp://ftp.software.ibm.com/systems/support/system_x_pdf/88p9138.pdf

yes, it is not a good idea to use a server as a workstation, which is what i'm currently doing.

but do i need 2 mac pros? then i'd always have a backup.

would it be safe to assume that an octo would do better with VMs?
 
yes, it is not a good idea to use a server as a workstation, which is what i'm currently doing.

but do i need 2 mac pros? then i'd always have a backup.

would it be safe to assume that an octo would do better with VMs?

Well, someone who has actually administered high volume commercial servers would know more, but I ran a 500 member FTP, hosted HTTP that got about 1000~2000 unique IP hits per month, and served up an IRC network with, umm, I think there were about 20 channels at the end. The FTP served out about 90 gigs a month and received around 30 gigs a month, the HTTP was around 30 gigs a month outbound, and I don't remember how the IRC profiled. The system stress was so light that I also used it for my email and some web surfing. Even with all of that, I don't recall the CPU ever hitting 30% or higher for more than a second or two. The HDDs were not stressed either, and the traffic (when I capped FTP at 75k per connection) was well within the ISP line speeds, which of course was not a problem AT ALL for the link speeds - 100BaseT at the time.

All this was not even a challenge for the little Digital Multia with its VX42 Alpha 21066A 233MHz processor and its 128 megs of memory. So IMHO no, you do not need two Mac Pros to serve up files over a web connection or two (PS: yes, you can use multiple lines. ;)).

An octo might do better running VMs, yes. But I think that's primarily because it becomes a bandwidth issue when you're assigning 2, 3, or 4 CPUs to each VM. To me, though, VMs are a convenience feature and not really meant for dedicated operation. If you need that OS up all the time, it's far better, in my opinion, to get a cheap used box and stash the sucker under your desk and just leave it. KVM switches are wonderful for stuff like that! Or if it's just one extra box, then just the second input on your monitor. VNC also works great if you want neither, and it gives you cut&paste + drag&drop between your OS's.

 
Well, someone who has actually administered high volume commercial servers would know more, but I ran a 500 member FTP, hosted HTTP that got about 1000~2000 unique IP hits per month, and served up an IRC network with, umm, I think there were about 20 channels at the end. The FTP served out about 90 gigs a month and received around 30 gigs a month, the HTTP was around 30 gigs a month outbound, and I don't remember how the IRC profiled. The system stress was so light that I also used it for my email and some web surfing. Even with all of that, I don't recall the CPU ever hitting 30% or higher for more than a second or two. The HDDs were not stressed either, and the traffic (when I capped FTP at 75k per connection) was well within the ISP line speeds, which of course was not a problem AT ALL for the link speeds - 100BaseT at the time.

All this was not even a challenge for the little Digital Multia with its VX42 Alpha 21066A 233MHz processor and its 128 megs of memory. So IMHO no, you do not need two Mac Pros to serve up files over a web connection or two (PS: yes, you can use multiple lines. ;)).

well i didn't mean did i need 2 mac pros to run a server, more like do i need 2 mac pros period.

if i had 2, i would always have a backup machine if something went wrong. remember, the info on my server is priceless (especially the data that is stored in mysql dbs)

also, i do serve up some videos, so the bandwidth can get way up there. but that was so slow, almost not usable. (well over 500 GB in a month though, but again, just testing really).

what about VM's? wouldn't 8 cores be better for that than 4 cores?
 