I unfortunately don't know enough about how the command-line programs operate. With Arc, I'm *guessing* based on everything I read that it only reads in what it needs at that moment, so it'll be random reads. With Igor, it loads everything into RAM and stops -- if it runs out of RAM, it yells at you that it can't do anything else.

I *think* ISIS goes to/from the disk during its tasks, including writing multi-GB temp files (this latter part I know, the former part I'm guessing just based on the fairly small RAM profile). The crater code does everything in RAM, hence the hogged RAM resources I mentioned in earlier posts.


I've spent the last hour or so reading up on various websites' opinions on the i7-2600K versus the i7-3820 and -3930K. And I've emerged somewhat more confused than when I started. Most say the difference between the 2600K and 3820 is pretty small (except when the 3820 is overclocked), and unless you're making use of multithreaded applications, the quad-channel memory the LGA 2011 platform offers doesn't really help much over dual-channel. And the motherboards are ~$100 more expensive. And the processors don't have on-board GPUs, so the money I thought I'd be saving by not getting a GPU won't happen 'cause I'll need one so I can hook up a monitor.

On the other hand, the LGA 2011 platform lets me go up to 64 GB of RAM, and the processors support the RAM at 1600 MHz versus 1333. The motherboards also offer native PCIe 3.0, but I don't see a need for that ATM for my usage.

So many things to consider ... :rolleyes:
 
I guess the thing to remember is that you will always be bottlenecked somewhere; changing the spec will just move the bottleneck.

If you've currently got the app running on a Windows box, you can get some performance metrics out of perfmon.

You'll need to play with it to get a feel for what metrics are available, but total CPU time (under Processor), page faults/sec (under Memory) and the disk queue lengths / % idle time (under PhysicalDisk) would be a good place to start.
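If you'd rather script it than click around in perfmon, something roughly equivalent can be done with Python and the psutil package (an assumption on my part that you have Python handy - this is just a sketch, not a replacement for the real counters):

```python
# Rough stand-in for the perfmon counters mentioned above.
# Assumes Python with psutil installed (pip install psutil). Ctrl+C to stop.
import time
import psutil

psutil.cpu_percent(interval=None)            # prime the CPU counter
prev = psutil.disk_io_counters()
while True:
    time.sleep(1)
    cpu = psutil.cpu_percent(interval=None)      # total CPU % since last sample
    cur = psutil.disk_io_counters()
    reads = cur.read_count - prev.read_count      # disk read ops in the last second
    writes = cur.write_count - prev.write_count   # disk write ops in the last second
    print(f"cpu={cpu:5.1f}%  reads/s={reads:5d}  writes/s={writes:5d}")
    prev = cur
```

Run one of the heavy jobs while this is ticking over and you'll quickly see whether it's CPU-bound or hammering the disks.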
 
@OP: Just curious - you want to run Windows, Linux/UNIX and you like Macs. Why not simply buy a new Mac and use Boot Camp? The reason I say this is that I have various Windows-based data acquisition programs running in my lab, but we use Macs where possible because they tend to be more reliable than PCs. Indeed, I think the Mac is one of the best machines for running Windows.

If you build a PC, you may just find that your PC software does not actually run any more quickly....
 
For this reason:

Some stuff will ONLY work on Windows, and Windows machines are much more customizable, upgradable, and generally cheaper due to more competition parts-wise.
 
The 3820 seems to beat the 2600K by around 5-10% on average, though for some tasks it does considerably better. The biggest selling point for the 3820 in your case is the memory bandwidth and the maximum memory of the socket 2011 controller. If you get a board with 8 RAM slots, that's 64GB you can handle.

Given what you're saying about many of these programs being single-core only, you can run virtual machines, let all those cores loose, and use all that RAM with multiple tasks running simultaneously.

Now I do agree with throAU that Xeon and ECC RAM are the most reliable and expandable. Given the cost, I was recommending something close to your original price that still allows far more expansion and capability. Overclocking is a bad idea for what you are doing, though. The 3820 can be overclocked since the boards support it; it's just that the multiplier is locked, unlike on K-series CPUs, so you have to spend time fiddling with other multipliers rather than just the CPU multiplier and VCore settings.

You mentioned RAID 10 in one of your posts. RAID 5 is fine. There is also no reason not to use the SATA II ports for the RAID 5 or 10 on the data drives, as regular hard drives come nowhere close to saturating the SATA II bus.

On that ASUS board with all the SATA 6Gb/s ports, the reason only the two SATA III ports and four SATA II ports support RAID is that that is all the Intel X79 controller will support. The additional SATA III ports and the eSATA port are run by a secondary third-party controller.

They also fail more quickly. You get what you pay for.


Let's not forget the infamous GeForce 8600M in the MacBook Pro, blown capacitors in the iMac G5, Intel iMacs with screens that are too yellow, paint rubbing off PowerBook G4s, case chipping on the MacBook, antenna issues with the iPhone 4, &c.

Apple uses the same hardware as everyone else. With Apple or any other OEM you get the lowest bidder with sufficient manufacturing ability. The only thing making Apple more reliable than other OEMs is that they only build high-end computers. They are no more reliable hardware-wise than other similarly priced systems from other manufacturers.

Custom built computers are as reliable as you can get. When building yourself you get to pick and choose all the top customer reviewed, professionally reviewed and benchmarked parts for the price. The ones built by computer shops get the same formula of cheapest parts possible as any OEM computer.
 
Thanks, velocity 4G. I'm kinda thinking along the same lines, but yeah, I do want to keep the cost kinda down.

With RAID 5 for the data drives, I'd still need 3 disks. And if I wanted to RAID 1 the OS/software, I'd need another 2. Unless I stick everything on the same HDDs and forego an SSD.

Here's an example build with the latest of what I'm thinking: http://pcpartpicker.com/p/6KGw . It's somewhat above what I was hoping to spend on this. I could go down a notch on the motherboard and do the Extreme6/GB and save $60. I already went down from a $250 graphics card to the $115 listed. But yeah, it's getting 3 HDD for a RAID 5 for data and 2 SSD for a RAID 1 for OS/software that's kicking up the price. I'd be putting data for images I'm working on at that moment on the SSD and then offloading them for storage onto the HDD.

Thoughts at this point? And do I need an extra CPU cooler? I assume I need to add a wireless card somewhere, too?

And thoughts on whether you (and others) agree with my supposition that if I go with a Sandy Bridge LGA 2011 that the Ivy Bridge ones will come out in a year-ish?


... and I really don't want to get into a Mac vs PC war at the moment. I'm a Mac fanboy as much as the next guy - secretary and VP of the Mac club in college, many free t-shirts, going to store openings, Apple windbreaker, and Apple tapestry hanging in my office. But I'm also not blindly devoted.
 
They also fail more quickly. You get what you pay for.

Half of that statement is true: you get what you pay for. I've got a PC from 2004 still running fine, and the only reason other boxes I've had previously were thrown out was low spec, not failure.

Hard drives fail in any hardware; Apple isn't somehow immune to hardware failure. Granted, the service on failed hardware is better, but to claim that all PC hardware is somehow unreliable is completely false.


edit:
Depending on your IO patterns (I'm assuming the worst - that they are quite random and reasonably write intensive, as you're dealing with millions of data points), RAID5 may be a bad idea performance-wise.

Why? It is fine for read-heavy workloads, but...

It performs very badly on writes: one write IO turns into several (essentially, it "costs" more of your disk pool's IO) - the controller has to read the old data and the old parity (in addition to whatever read got the data into RAM), recalculate the parity, and then write the new data and new parity, even if only 1 byte in the stripe is modified. RAID 10 simply writes the changed data to the two mirrored disks; no extra reads are needed to calculate parity.

These days disk is cheap, and RAID10 performs much better for writes (for the cost of one extra drive). RAID10 rebuilds are also much faster (RAID5 degraded performance is much worse), and if the "right" drives fail - one from each mirror - it can survive 2 drive failures at the same time.
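To put rough numbers on that write penalty, here's a back-of-the-envelope sketch (the ~80 IOPS per disk and the 70/30 read/write split are assumptions for illustration, not measurements of your workload):

```python
# Back-of-the-envelope effective IOPS for a 4-disk RAID5 vs RAID10 pool.
# Assumptions (not measurements): ~80 random IOPS per 7200rpm disk and a
# 70% read / 30% write mix. Write penalties: RAID10 = 2 (two mirror writes),
# RAID5 = 4 (read old data + read old parity + write data + write parity).
def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
    raw = disks * iops_per_disk
    writes = 1.0 - read_fraction
    return raw * read_fraction + raw * writes / write_penalty

print("RAID10:", effective_iops(4, 80, 0.7, write_penalty=2))  # ~272 IOPS
print("RAID5: ", effective_iops(4, 80, 0.7, write_penalty=4))  # ~248 IOPS
# The gap widens as the mix gets more write-heavy.
```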

Personally (if you aren't sure of the application's IO profile) I'd get 4 drives and, before you get too settled, test a 4-drive RAID5 and a 4-drive RAID10 back to back to see which is faster (I'd personally just set up a single RAID10 pool and create partitions within that for system/data).

If your workload is NOT write heavy (and it's mostly all reads) RAID5 will be fine, but again... disk is cheap, and RAID5 rebuilds/degraded performance suck.

Again though, if your workload is IO intensive, forget using the SSD for boot/system - use it (or a 60GB SSD) as cache for a RAID array instead. Who cares how fast your box boots if it spends ages grinding away doing the actual work on slow disks? Make use of the SSD to speed up your workload - you'll typically reboot the box once a month anyway.
 
One of the four applications I'll use is read/write very heavy (the ISIS program). Arc would be used 40%, Igor 40%, crater detection 15%, ISIS 5%. (These are ROUGH guesses - but Arc and Igor are the main two I use daily, the other two are more done in spurts. ISIS and crater detection would be used as frequently as each other, but the crater detection takes much longer to run.) Arc is just reading into RAM, very little disk I/O except when opening up. Igor is the same, it reads stuff into RAM and then mainly goes from that with little disk I/O except when saving and opening. The crater detection also just reads into a ******-load of RAM and then writes out a text file that's on the order of a few megabytes. It's ISIS that has all the disk I/O.

That's why I'm now thinking that a way to optimize while saving money is to have the two small SSDs and load the ISIS data onto them when I need to do processing there. Otherwise, all the main datasets for Arc and Igor can be on the hard drives; I'll offload ISIS files onto the hard drives when they're done and run the crater detection from them, too.

So then, would I even need SSDs? Would a 4-HDD-disk RAID 10 be just as good as a 3-HDD-disk RAID 5 for data with 2-SSD-disk RAID 1 for scratch and OS/applications?

I apologize if I'm not getting what you're saying and I'm asking stupid questions -- again, I'm learning all of this for the first time, and I only started Monday morning. :eek:
 
The problem with using the SSD for system/scratch is that you'll be manually moving data around and may run out of space to do what you want. Windows 7 also keeps every single update ever installed on the box (so you can roll back), so your Windows install will grow. Also, with 32GB or more of RAM, you'll potentially use quite a lot of swap space (unless you turn swap off, which isn't a great idea, as Windows and other modern OSes can use swap to keep more RAM free for caching, etc.).

By using an SSD for cache on a RAID array, you're effectively giving up the need to do your own data management, and allowing the disk controller to make the best use of your SSD to automatically cache "hot" data, whether it is the OS, your data, etc - and this will update on the fly as different data is in flight.

Here's an overview of what I'm talking about:
http://www.intel.com/content/www/us/en/servers/raid/raid-ssd-cache.html

It's kinda like the idea of the hybrid drives available from Seagate, but with much more SSD cache (up to 64GB).

Rather than figuring out what you need on your SSD and manually moving stuff about as required, the OS/RAID controller can do it at the block level automatically (i.e., if part of a file is "hot", only those blocks will be on the SSD rather than a copy of the entire file), making more efficient use of your space.
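If it helps to picture the block-level idea, here's a toy sketch (nothing to do with the actual Intel/ASUS implementation, just the concept): a cache keyed by block number, so only the blocks that keep getting hit live on the fast tier.

```python
# Toy illustration of block-level caching (NOT the real Intel/ASUS code):
# the cache holds individual blocks, so only the "hot" blocks of a big file
# end up on the fast tier, never the whole file.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block number -> data, in LRU order

    def read(self, block_no, read_from_disk):
        if block_no in self.cache:            # hit: serve from the fast "SSD" tier
            self.cache.move_to_end(block_no)
            return self.cache[block_no]
        data = read_from_disk(block_no)       # miss: go to the slow "HDD" tier
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:   # evict the least recently used block
            self.cache.popitem(last=False)
        return data

# e.g. cache = BlockCache(capacity_blocks=15_000_000)  # ~60 GB of 4 KB blocks
```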


In terms of not needing SSD - sure, you don't necessarily "need" it, but even a single SSD is much much faster than a RAID.

For example, I have a 16-disk RAID array here at work that is currently struggling to do 2,000 IOPS. A single desktop-class SSD can do 8,000 without breaking a sweat. That 16-disk array cost over AU$50,000 four years ago.

They are that much faster on non-sequential IO....


For the cost of a 60gb SSD to speed up your RAID (what, 100 bucks or less on a system worth a few grand?), I reckon it would be money well spent.


edit:
If I sound a bit like a storage nerd, it's because I've been chasing down IO problems here recently with our vSphere cluster and have done a lot of research for our new SAN purchase...

For a bit of an explanation of how to get an idea of how the different disk types and RAID levels affect your storage subsystem, check out this article:

http://www.yellow-bricks.com/2009/12/23/iops/

The big eye-opener for many is that the theoretical max transfer speed of a disk is nothing like what you will get in reality (that figure is for streaming a single contiguous file). In the real world you have multiple processes running at the same time, competing for disk access, so the IO becomes much more random. A 7200rpm disk that can do, say, 100 megabytes per sec sequential may only manage around 300KB per sec of random 4k reads (an extreme case - most applications would never do reads that small - but it illustrates the point that "it depends" on what your application does). This is the big win with SSDs: as they have no seek time, random IO is almost as fast as sequential...

The random IO improvement is what makes them so fast in general use; the higher peak throughput is nice, but it's not the major reason they rock...
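If you want to see where a number like that 300KB/sec comes from, the math is roughly this (the seek and rotational figures are typical ballpark values for a 7200rpm drive, not specs for any particular model):

```python
# Ballpark math for a single 7200rpm drive doing 4 KB random reads.
# Latencies are typical assumed values, not specs for a particular drive.
avg_seek_ms = 9.0                               # average seek time
rotational_latency_ms = 0.5 * 60_000 / 7200     # half a revolution ~= 4.17 ms

iops = 1000 / (avg_seek_ms + rotational_latency_ms)  # ~76 random IOPS
throughput_kb_s = iops * 4                             # 4 KB per read

print(f"~{iops:.0f} random IOPS -> ~{throughput_kb_s:.0f} KB/s of 4 KB reads")
# versus ~100 MB/s sequential: the seeks are the killer, which is why an
# SSD with effectively zero seek time pulls so far ahead on random IO.
```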
 
I'm thinking now I have a fundamental misunderstanding of SSD for caching and what drives can be used in a RAID array. That link was quite helpful. And I watched NewEgg's video on the ASUS P9X79 Pro page. I was under the impression that in a RAID array, the disks needed to be the same. If I were to just get 4 drives, say 2 120 or 128 GB SSDs and 2 1TB HDDs, can I use those 4 drives in the same RAID 10 setup and get that benefit you're talking about? As in, the motherboard (e.g., the Asus P9X79) will be able to figure all that stuff out? Or would I need a separate RAID controller that would be able to figure that out? Or am I still misunderstanding ... :(.
 
All drives in a single RAID group need to be the same size. Well, they don't *need* to be, but if you mix and match sizes only the smallest amount of space will be used - i.e., in a RAID5 with 500s and 1tbs, your array will only be able to use the first 500gb on the 1tb drives.

e.g., with 2x500s and 2x1tbs (just as an example) you could do:

- a RAID1 of the 2x500 and a second RAID1 of the 2x1tb (1.5tb total space)
- a single RAID10 using all 4 drives, but with 500gb wasted, unusable space on the 1tb drives (better IO than above, but only 1tb usable space)
- a single RAID5 using 500gb on each drive (1.5tb space, decent read speed, but slow writes)
- a RAID0 on the 2x500s and a RAID1 on the 2x1tb drives (2tb usable)

etc...
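If it's easier to see that capacity math in code, here's a rough sketch (it assumes every member of a group gets truncated to the smallest drive in the group, which is how the controller behaves here; sizes in GB):

```python
# Rough usable-capacity math for the layouts listed above (sizes in GB).
# Every drive in a group is limited to the size of the smallest member.
def raid0(drives):  return min(drives) * len(drives)
def raid1(drives):  return min(drives)                      # mirror: one drive's worth
def raid5(drives):  return min(drives) * (len(drives) - 1)  # one drive's worth of parity
def raid10(drives): return min(drives) * len(drives) // 2   # stripe of mirrored pairs

d500, d1000 = [500, 500], [1000, 1000]
print(raid1(d500) + raid1(d1000))   # 1500 - two separate RAID1 pairs
print(raid10(d500 + d1000))         # 1000 - RAID10 across all four, 500GB per 1TB drive wasted
print(raid5(d500 + d1000))          # 1500 - RAID5 using 500GB on each drive
print(raid0(d500) + raid1(d1000))   # 2000 - RAID0 pair + RAID1 pair
```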

With the SSD caching (a new Intel chipset feature), the SSD won't be part of the RAID group, so with 4x 1TB and a 60GB SSD you could do (this would be my choice):

- 4x 1TB in RAID10 (2TB usable) + a separate 60GB SSD as cache. The OS sees one logical disk (which you can partition), but as far as Windows is concerned the SSD is invisible - the RAID controller uses it to speed things up.


In your example of mixing 2x 120GB SSDs and 2x 1TB disks in a RAID10 (a stripe across 2 mirrors - 4 drives required), you would get 240GB of usable space - only the first 120GB of each drive would be used...

Also, the array would be slowed down to spinning-disk speed (the SSDs would be waiting on the spinning disks to catch up to keep everything in sync)... so whilst you COULD do that, it would be a waste - the SSDs would show no benefit.
 
Okay, I think/hope I got it now. I'm also watching ASUS's explanation of SSD Caching which is helping. So ...

... I can do what I think you were telling me to do hours ago: Have an SSD, such as a 120 GB drive, hooked up to the specific SSD Cache SATA3 port on the motherboard. This drive would not count towards the data I can store. I could then do, e.g., a RAID 5 with 3 1TB drives on 3 of the 4 SATA2 ports on the motherboard. In a RAID 5, I'd have 2 TB of usable space (according to the formula on Wikipedia).

The benefit of this is that as I use applications and they pull data, frequently accessed data from the HDD RAID will be stored in the SSD cache, so it'll be pulled from that as opposed to from the HDDs -- for the applications that don't keep everything in RAM, anyway.

Now do I have it right?
 
Yup, that's right.

The only caveat is that the intel SSD cache controller can only use 60gb or so max (whatever SSD you buy, this is a chipset limitation).

So, you have a choice:

Buy a 60gb SSD (or 30gb, 40gb or whatever) and commit the entire thing to cache
OR
Buy a bigger SSD, commit 60gb of it to cache and the remaining 60gb (or whatever) for something else (another "drive" in windows).

If it was me, I'd just go for a 60GB SSD (they're cheap) and not faff about with the small 60GB leftover-space "drive" you'd get on a 120, but that's a decision for you to make - you may have a use for 60GB worth of guaranteed fast (as opposed to cached RAID) scratch space?
 
Yup, that's right.

The only caveat is that the intel SSD cache controller can only use 60gb or so max (whatever SSD you buy, this is a chipset limitation). ...

Is this true even with the ASUS boards? I thought they were saying there's no limit.
 
Given your RAM requirements I'd go for socket 2011. Don't worry about losing the integrated video of the 1155 chips; even the Ivy Bridge IGP will be taken down by $75 video cards.
 
I do see two glaring problems in your list. Unless you are dedicating one of those 1TB drives or that 60GB SSD as a boot drive, you are going to need one. You cannot use a RAID 5 array as a boot drive; only RAID 0 or 1 can be used for booting. If you try to use part of a 1TB drive as a boot drive and part for the RAID, you will suffer major performance bottlenecks. Nor would I use part of the 60GB SSD as a scratch disk and part for booting/apps, as there is not enough space.

The other is that I don't see a copy of Windows 7 OEM in there. If this is connecting to a domain server, it must be Windows 7 Professional OEM for domain support.

As for some price-improving suggestions:

This RAIDMAX is 80plus Gold certified and modular. The Corsair you listed is less energy efficient, more expensive and not modular. It is about $10 cheaper to boot!

You can save nearly $100 on the case with the Antec Three Hundred. It is well built, roomy and has plenty of cooling.

You can save $50 with the ASRock X79 Extreme6. It actually has far better customer feedback than the ASUS P9X79.

You can save $20 with two sets of Corsair XMS 32GB memory.
 
I do see two glaring problems in your list. Unless you are dedicating one of those 1TB drives or that 60GB SSD as a boot drive, you are going to need one. You cannot use a RAID 5 array as a boot drive; only RAID 0 or 1 can be used for booting. If you try to use part of a 1TB drive as a boot drive and part for the RAID, you will suffer major performance bottlenecks. Nor would I use part of the 60GB SSD as a scratch disk and part for booting/apps, as there is not enough space.

I plead the 34th Amendment -- ign'ance? :) I did not know you can't boot from a RAID 5. So ... add maybe a 32GB SSD for boot or something like that? Edit: Apparently 32s are out - 60 or 64GB? Price per GB is $1 for decently-rated SSDs in both 60 and 120 GB capacities. So ... maybe go back to sort of my original idea with a 60GB SSD for cache, a 120 GB SSD for OS and software, and then the 3 1TB drives in RAID 5 for data storage?


The other is that I don't see a copy of Windows 7 OEM in there. If this is connecting to a domain server, it must be Windows 7 Professional OEM for domain support.

I can get a copy of Windows from my university, so I wasn't including that in there.

Edit: Curses! I went to double-check this and only students, not faculty/staff, can get Windows for personal use. Looking at the Wikipedia entry, it looks like I'd need Professional so I can access the RAM I'd be installing. Do I need Ultimate so I can also run Linux, or can I do something like partition whatever disk I get for startup and just install Linux on that?


This RAIDMAX is 80plus Gold certified and modular. The Corsair you listed is less energy efficient, more expensive and not modular. It is about $10 cheaper to boot!
Sounds good. I think someone somewhere recommended a Corsair and so I just went with that. Edit: Just noticed that it (the RAIDMAX) is also 3 lbs heavier, but I don't think that really matters.

You can save nearly $100 on the case with the Antec Three Hundred. It is well built, roomy and has plenty of cooling.
But that doesn't have cool blue or red LEDs. :( But seriously, so I don't need a "full" tower, just a "mid" tower? That changes things, and I can look for one that I like aesthetically. :) Other than the number of "external" drive bays - and I don't EVER see having >2 optical drives - I do notice that that one would not have USB3 ports on the front.

You can save $50 with the ASRock X79 Extreme6. It actually has far better customer feedback than the ASUS P9X79.
I was originally going to go with that motherboard, then with the Extreme9. My comparison was the Asus P9X79 /Pro/Deluxe. I then read three separate sites that were comparing X79 motherboards, and on all of them, the ASRock performed pretty much the worst. That's why - in addition to the SSD caching that Asus has on the motherboard - I was thinking of going with the Asus P9X79 Pro instead. Do you disagree with that reasoning?

You can save $20 with two sets of Corsair XMS 32GB memory.
I just spent 5 minutes trying to figure out the difference between that and the one I chose. It looks like the XMS is $20 cheaper than the Vengeance ($230 on Amazon for the XMS, $250 on NewEgg for the Vengeance), has a CAS latency of 11 versus 10, and doesn't have the "fan" heatsink on top. Is it worth the $40 savings (2 sets of 4x8GB) for that? I don't know enough about CAS to judge.
 
One thing I did not think about is whether your variant of Linux has drivers for all the new hardware. Make sure they are all available, especially for the RAID controller, as you would not be able to see the array without the proper RAID drivers. This should not be a problem if you virtualize Linux from Windows.

You don't need a full tower unless you need a whole lot of hard drives or optical drives, or were trying to install a heavy-duty self-contained liquid cooling system. The weight of the PSU doesn't matter either, unless you are going to be carrying the computer around.

I wasn't aware of those reviews of the ASUS and ASRock. I was just going by customer reviews, which are my guiding light when two pieces of hardware are very close. As for SSD caching, according to ASRock the Extreme9 supports it too, which I think is an X79 chipset feature, so I can't think of why there would be any real performance difference between the two. But looking at Tom's Hardware, there actually is a difference. You may as well get the ASUS.

As for the CAS latency, the lower the better, though the two would make little difference. Considering what you are looking to pay right now, a 1% cost difference is nothing even if it only gains you 1% to 3% performance.
 
One thing I did not think about is whether your variant of Linux has drivers for all the new hardware. Make sure they are all available, especially for the RAID controller, as you would not be able to see the array without the proper RAID drivers. This should not be a problem if you virtualize Linux from Windows.
Okay, I'll start with virtualization then. That's what the folks where I work do for the crater software. Re: RAID controller -- Do I need this? I thought the motherboard did it.

You don't need a full tower unless you need a whole lot of hard drives or optical drives, or were trying to install a heavy-duty self-contained liquid cooling system. The weight of the PSU doesn't matter either, unless you are going to be carrying the computer around.
Okay, mid tower it is.

As for the CAS latency, the lower the better, though the two would make little difference. Considering what you are looking to pay right now, a 1% cost difference is nothing even if it only gains you 1% to 3% performance.
Okay, Vengeance RAM it is, CAS 10.

I may buy this weekend if I don't hear anything else from other people. I'm meeting with a guy hopefully tomorrow morning who should have some insight, and he's the guy who maintains the crater detection software.
 
What about this build? [EDITED]

Part list permalink / Part price breakdown by merchant / Benchmarks

CPU: Intel Core i7-2600K 3.4GHz Quad-Core Processor ($279.99 @ Microcenter)
CPU Cooler: Scythe SCMG-3000 74.2 CFM CPU Cooler ($42.85 @ NCIX US)
Motherboard: ASRock P67 Extreme4 Gen3 ATX LGA1155 Motherboard ($149.99 @ Newegg)
Memory: G.Skill Ripjaws X Series 32GB (4 x 8GB) DDR3-1333 Memory ($209.99 @ Newegg)
Hard Drive: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($94.99 @ Best Buy)
Hard Drive: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($94.99 @ Best Buy)
Hard Drive: Seagate Barracuda 1TB 3.5" 7200RPM Internal Hard Drive ($94.99 @ Best Buy)
Hard Drive: Sandisk Ultra 120GB 2.5" Solid State Disk ($129.99 @ Microcenter)
Video Card: Asus GeForce GTX 550 Ti 1GB Video Card ($119.99 @ Newegg)
Case: Antec DF-85 ATX Full Tower Case ($150.05 @ NCIX US)
Power Supply: Corsair 800W ATX12V Power Supply ($119.99 @ Microcenter)
Total: $1487.81
(Prices include shipping and discounts when available.)
(Generated 2012-04-12 21:00 EDT-0400)
 
Thanks, but I don't think you've quite followed the discussion, and some of those choices aren't mutually compatible. For example, the graphics card alone -- and I don't need that kind of card for my work -- calls for a minimum 600W power supply, which leaves nothing to spare from the PSU you selected. I also already purchased an optical drive (the sale was ending), Windows Home Premium can only address 16GB of RAM, and if I'm doing a butt-load of disk I/O, a 5400RPM drive is not the correct choice for me.
 
Even better then. You can save more money.
 
Okay, I'll start with virtualization then. That's what the folks where I work do for the crater software. Re: RAID controller -- Do I need this? I thought the motherboard did it.

The X79 (C600) chipset on the motherboard has the RAID controller built into it. To actually detect the array, the operating system must have the drivers for that integrated controller so it can speak to it; single drives, on the other hand, work with generic drivers.

The Linux distribution you choose may not have those drivers.
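If you do end up on bare-metal Linux at some point, a quick sanity check from a live CD looks something like this (just a sketch - the exact module and array names depend on the distro; Intel's onboard RAID usually shows up as an md array built from its IMSM metadata, but that's an assumption about your setup):

```python
# Quick check from a Linux live environment: does the kernel see any
# software/firmware RAID arrays and which block devices are visible?
from pathlib import Path

mdstat = Path("/proc/mdstat")
if mdstat.exists():
    print(mdstat.read_text())        # md arrays the kernel has assembled
else:
    print("no /proc/mdstat - kernel md (software RAID) support not loaded")

print(Path("/proc/partitions").read_text())   # raw block devices the kernel can see
```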
 