Okay, between you and Nano it sounds like an SSD may be a good option for a boot disk.

SSDs have really dropped in price... and continue to do so.

Look for a drive with the Sandforce controller. OWC, OCZ, etc.

I'll second the previous suggestion to go light on the "future proofing" issue. The next gen MPs will likely be a big update. Also, very few programs take advantage of more than one core at this time. That will obviously change in the future, but buying for future use in the computer world rarely makes sense.

You will do just fine with a Quad or Hex, an SSD boot drive, and a couple of large HD's in a SW RAID 0.

good luck with whatever you get.
JohnG
 
This is why I want a RAID card. Any recommendations on a better card?

You don't need a RAID card right now. The whole point of the Mac Pro was that you could put in additional hardware later when you needed it.

This is the usual RAID card sales pitch wind-up around here: pass RAID 0, 1, and 10 off as doom and gloom. Gotta have RAID 5 or higher or else the sky will fall.

With the constraints you started with....

i. only internal storage is 'good' (external storage is 'bad')

ii. you only had a "quad"-like workload... and have already drifted toward getting a hex-core unit. You already have two more cores than you generally need right now. Software RAID is generally low overhead for a small collection of disks, but it is de facto zero overhead if you have more cores than you need right now. Therefore, there is no offloading advantage at all to a "real" RAID card.

I'm dubious of any RAID card that is going to online-migrate you from a 4-disk RAID 10 to a 4-disk RAID 5 or 6. If you start with 4 disks, the "online expansion" thing is a no-op since you're already maxed out at 4 disks (barring stuffing disks into the optical drive bays... which is another whole tech-gadget sales pitch that ignores several factors ***).

If and when your Mac Pro starts to get even close to being CPU-starved in a reasonable number of workloads, that's when you can consider offloading the software RAID onto a card. However, you can buy four 2TB drives for the price most of these recommended "real" RAID cards come in at. A year or two from now you'll be able to buy four 4TB drives for the same exact price. If it costs you $300 to buy a card so that you can buy one less $120 drive, that is not a good trade-off.

The other factor not covered so far is archiving, which, like backups, is best done externally. If you RAID 10 four 2TB drives you'd end up with 4TB of user space. For someone just mucking around that is huge. It will stay huge if you migrate off abandoned stuff (i.e., things you don't look at and/or work on for a long time. Start a project on Uncle Fester's vacation video. After a year... time to offload that to some non-spinning disk.) That cuts down your backup time and expenses.
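
For anyone who wants the arithmetic spelled out, here is a rough sketch of the usable-capacity math behind that (my own back-of-envelope numbers; it assumes equal-size drives and ignores filesystem overhead):

Code:
# Rough usable-capacity math for a small drive set -- back-of-envelope only.
# Assumes equal-size drives and ignores filesystem/RAID metadata overhead.

def usable_tb(level, drives, size_tb):
    if level == "RAID 0":    # striping: all capacity, no redundancy
        return drives * size_tb
    if level == "RAID 1":    # n-way mirror: one drive's worth of space
        return size_tb
    if level == "RAID 10":   # striped mirrors: half the raw capacity
        return (drives // 2) * size_tb
    if level == "RAID 5":    # single parity: lose one drive's worth
        return (drives - 1) * size_tb
    if level == "RAID 6":    # double parity: lose two drives' worth
        return (drives - 2) * size_tb
    raise ValueError(level)

for level in ("RAID 0", "RAID 1", "RAID 10", "RAID 5", "RAID 6"):
    print(level, usable_tb(level, drives=4, size_tb=2), "TB usable from 4x 2TB")

# RAID 10 across four 2TB drives -> 4TB of user space, as above.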

Many people drive themselves into "I gotta have an extra super-duper RAID solution" because they fragment up the OS file system so much that read/write latency is killing them. The simpler solution to that is: don't frak up the file system. Don't fill it up to the brim with stuff. Archiving stuff off will help keep the percentage of empty space higher. That leaves the file system room to do its job of incrementally managing fragmentation.




*** A boot/application SSD in the second optical drive space isn't really a huge consumer of bandwidth. It is primarily killing off latency, not necessarily consuming tons of sustained bandwidth. A RAID 10 of 4 spindle drives plus one SSD may put some very short transient peak loads on the internal controller, but it won't bottleneck for extended periods unless you're doing something quirky with the boot/app drive.
 
SSDs have really dropped in price... and continue to do so.

So will spinning hard drives ( for much larger capacity levels ). It is not that all hard drives will go away. Nor will hybrids remain limited offerings.


The more critical factor is that the size of OS + apps is not growing very rapidly (it is growing, but not as quickly as the lower-priced SSDs are growing in capacity), as opposed to user data, which tends to grow faster (bigger files coupled with a tendency to 'pack rat' more stuff).
Since the amount of program data is relatively fixed, SSDs become a useful option when their capacity starts to cross the line used for boot/OS drives in the past.


Look for a drive with the Sandforce controller. OWC, OCZ, etc.

It doesn't have to be SandForce. It just needs to be one that is not solely dependent on the "TRIM" command for good long term write performance. Sandforce is one of the better ones but they are not the only one. Intel isn't clueless. Neither is Crucial/Micron although they don't have lots of deployed versions yet. The other vendors tend to be much more opaque as to which controller is under the covers.

What you want to stay away from are "old" designs ( stuff from 1.5-2 years ago. ). For a while, some designs were drifting toward believing "TRIM" was the answer. It isn't. Especially on Mac OS X. The drive controller needs a garbage collector.
 
I'm dubious of any RAID card that is going to online-migrate you from a 4-disk RAID 10 to a 4-disk RAID 5 or 6.
I can't think of a card that can do this. Online Expansion of the existing array, fine. But shifting to another level without starting over (no need to restore data), No.

If and when your Mac Pro starts to get even close to being CPU-starved in a reasonable number of workloads, that's when you can consider offloading the software RAID onto a card. However, you can buy four 2TB drives for the price most of these recommended "real" RAID cards come in at.
If sticking with 10 (or any other level Disk Utility can manage), then a non-RAID HBA would be an option for cost reasons (i.e. a 4-port version for $130 <ARC-1300-4i>, or say an 8-port version for $400 <ATTO H608>).

Given the described usage and nothing more than a capacity number, a level 10 array on the ICH should be sufficient for the immediate need (especially as the system isn't being used to earn a living; such cases are where a parity-based DAS system is warranted, as the throughput saves enough time that additional jobs can be taken on). As a hobbyist, it's not critical (walking away while a process completes is acceptable in order to save funds IMO). If the system will be used to earn a living at a later date, but not that far into the future, then going ahead and getting a RAID card and the other necessary hardware (enterprise disks, and any mounts/enclosures needed) may be acceptable (it depends on how long this will be, as there's new gear coming out all the time = other options, or the ability to do it cheaper).

Not to say a RAID card isn't a nice thing to have for other reasons (i.e. features), but the really desirable cards aren't cheap. $300 won't get you much (say 4 port model with a modest processor, such as the ARC-1210 <4 port model based on an IOP332>).
 
snip................
It doesn't have to be SandForce. It just needs to be one that is not solely dependent on the "TRIM" command for good long term write performance. Sandforce is one of the better ones but they are not the only one. Intel isn't clueless. Neither is Crucial/Micron although they don't have lots of deployed versions yet. The other vendors tend to be much more opaque as to which controller is under the covers.

snip.............

Very true: I've been very pleased with the long term performance of the Crucial C300 drive in my MBP. The GC appears to work.

Anyway, I did go with the new OCZ drive (2VTX120G) for the MP because it was available in a 3.5" form factor. Bolted (screwed) right to the MP drive sled and was in the machine in 60 seconds. Easy peasy.................. plus the read/write performance is outstanding (slightly better than the C300).

regards
JohnG
 
:cool: NP. :)


If you're looking at RAID cards and all the associated equipment, be prepared to wet yourself. :eek: :p

Starting from scratch can exceed $3000USD without a lot of effort, but it can be done for less as well. It all depends on the specifics (port count, internal or external, and drive capacities used).

Keep in mind that it can be cheaper to get more ports and the ability to install them (enclosures/mounts), than a smaller physical setup using large capacity drives (i.e. 1.5 or 2TB enterprise disks). 1TB disks are currently the "sweet spot" for enterprise disks.

Okay I've wet myself. Looking at this RAID card http://www.attostore.com/sas-sata-raid-adapters/6gb-express-sas-r644.htm and an external SAN to tie to it. Cost-wise I'm at $1095 for the RAID controller, $400 for an external SAN / SATA tower, and $300 per enterprise disk. Plus about $150 for the RAID software GUI, and another $150 for the SAN cable. If I went this route would I still need a data backup solution?
So here's another question. What is RAID getting me that, say, an external DROBO array isn't? Looks like DROBO is covering both data backup and some sort of pseudo-RAID disk protection. DROBO also looks a LOT easier to manage and scales to a 16TB storage solution.

I'm starting to wonder if an iMac i7 and DROBO isn't a better route to go.

My head is starting to spin.
 
Okay I've wet myself. Looking at this RAID card http://www.attostore.com/sas-sata-raid-adapters/6gb-express-sas-r644.htm and an external SAN to tie to it. Cost-wise I'm at $1095 for the RAID controller, $400 for an external SAN / SATA tower, and $300 per enterprise disk. Plus about $150 for the RAID software GUI, and another $150 for the SAN cable. If I went this route would I still need a data backup solution?
So here's another question. What is RAID getting me that, say, an external DROBO array isn't? Looks like DROBO is covering both data backup and some sort of pseudo-RAID disk protection. DROBO also looks a LOT easier to manage and scales to a 16TB storage solution.

I'm starting to wonder if an iMac i7 and DROBO isn't a better route to go.

My head is starting to spin.
You can find the ATTO ExpressSAS for $829.19 from provantage (here). They're a good company to deal with (I use them often).

As per disks, what are you trying to use?
I ask, as $300 (assuming mechanical) would correlate to a 2TB model. It's actually cheaper to get more smaller disks (1TB), and it's also faster (performance is based on parallelism). I wouldn't advise using consumer grade SSDs in a parity array due to the write cycle limitations of MLC NAND flash (SLC is better suited, but it's also much more expensive).
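
To put placeholder numbers on the parallelism point (the per-disk prices and the ~100 MB/s sustained figure below are illustrative assumptions for a quick comparison, not quotes for any specific model):

Code:
# Rough cost/throughput comparison: more small disks vs. fewer big ones.
# Prices and the per-disk ~100 MB/s sustained rate are placeholder
# assumptions for illustration only.

def compare(disks, size_tb, price_each, mbps_each=100):
    raw_tb = disks * size_tb
    striped_mbps = disks * mbps_each   # ideal striping; ignores controller limits
    return raw_tb, disks * price_each, striped_mbps

for label, cfg in {"4x 1TB": (4, 1, 130), "2x 2TB": (2, 2, 300)}.items():
    tb, cost, mbps = compare(*cfg)
    print(f"{label}: {tb} TB raw, ${cost}, ~{mbps} MB/s striped")

# Same raw capacity either way, but the 4-disk set is cheaper here and
# streams roughly twice as fast because I/O is spread over more spindles.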

Where are you getting the $150 for RAID software?
The card does the work, so you don't need it (if you wanted to do a software implementation, you'd want to stick with 0/1/10, and either the built-in SATA ports (ICH) or a non-RAID Host Bus Adapter, i.e. an ATTO H6xx model).

The cables needed to get from external ports are included in the Sans Digital enclosure (external to external).

Yes, you still need a backup solution (no matter if it's a single drive or the most complicated RAID system possible). I'd go with a NewerTech eSATA card that supports Port Multiplier enclosures, and an eSATA box with up to 10x bays (limit of 5x disks per port via PM chips). It's the cheapest way to go.

Drobo's are usually software implementations as well, so be aware of that.
 
Okay I've wet myself.
So here's another question. What is RAID getting me that, say, an external DROBO array isn't? Looks like DROBO is covering both data backup and some sort of pseudo-RAID disk protection. DROBO also looks a LOT easier to manage and scales to a 16TB storage solution.

I'm starting to wonder if an iMac i7 and DROBO isn't a better route to go.

My head is starting to spin.

Drobo is a form of RAID... They use their own proprietary parity scheme I believe... Pseudo RAID is as good a term as any.

Why do you need so much storage? Or RAID at all? Or a Drobo? As far as I can tell from your OP, you are using just 160GB of storage! I haven't seen any evidence you need data-center-level storage. Why not just get a Mac Pro with a 1 or 2TB drive and buy another (or even a Time Capsule) for Time Machine backups?... and change your pants. :p.
 
Drobo is a form of RAID... They use their own proprietary parity scheme I believe... Pseudo RAID is as good a term as any.
That works. :D And it's a software implementation for most of the units (there is one that's hardware IIRC).

But I prefer to build my own NAS using Linux for a software implementation (using RAID-Z or RAID-Z2; designed in a manner that there is no write hole).
 
That works. :D And it's a software implementation for most of the units (there is one that's hardware IIRC).

But I prefer to build my own NAS using Linux for a software implementation (using RAID-Z or RAID-Z2; designed in a manner that there is no write hole).

In delving into Drobo in more detail, it's utterly bizarre, but I guess it works.

From wikipedia...

BeyondRAID

Data Robotics, Inc. implements a storage technology that they call BeyondRAID in their Drobo storage devices. While not a true RAID ISO spec extension, it does provide for using up to 8 SATA hard drives in the devices and consolidating them into one big pool of storage. It has the advantage of being able to use multiple disk sizes at once, much like a JBOD unit, while providing redundancy for all disks and allowing a hot-swap upgrade at any time. Internally it uses a mix of techniques similar to RAID 1 and RAID 5. Depending on the amount of data stored on the unit in relation to the installed capacity, it may be able to survive up to three drive failures, if the "array" can be restored onto the remaining good disks before another drive fails.

The amount of usable storage in a Drobo unit can be approximated by adding up the capacities of all the disks and subtracting the capacity of the largest disk. For example, if a 500, 400, 200, and 100 GB drive were installed, the approximate usable capacity would be 500+400+200+100-(500)=700 GB of usable space. Internally the data would be distributed in two RAID 5-like arrays and one RAID 1-like set:

Code:
          Drives
 | 100 GB | 200 GB | 400 GB | 500 GB |

                            ----------
                            |   x    | unusable space (100 GB)
                            ----------
                   -------------------
                   |   A1   |   A1   | RAID 1 set (2× 100 GB)
                   -------------------
                   -------------------
                   |   B1   |   B1   | RAID 1 set (2× 100 GB)
                   -------------------
          ----------------------------
          |   C1   |   C2   |   Cp   | RAID 5 array (3× 100 GB)
          ----------------------------
 -------------------------------------
 |   D1   |   D2   |   D3   |   Dp   | RAID 5 array (4× 100 GB)

Note how they break the drives up into multiple partitions each belonging to a different RAID set! :eek:
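
Here's a quick sketch of that usable-space approximation from the quoted description (just the "sum of all disks minus the largest" rule of thumb, not Drobo's actual algorithm):

Code:
# Approximate BeyondRAID usable space per the Wikipedia description above:
# add up all disk capacities and subtract the largest disk.

def beyondraid_usable_gb(disks_gb):
    return sum(disks_gb) - max(disks_gb)

print(beyondraid_usable_gb([500, 400, 200, 100]))      # 700 GB, as in the example
print(beyondraid_usable_gb([2000, 2000, 2000, 2000]))  # 6000 GB from 4x 2TB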
 
Is looking for a machine that will last 5+ years really a wise choice or practical?

I've never really understood this buying behavior with computer technology that changes so fast. My personal philosophy is to buy a modest computer every 2-3 years. This way, the computer I own is in the sweet spot of meeting my needs a higher percentage of the time.

It's a reasonable question to ask, although the counterpoint is that the pragmatic rate of change has levelled off versus historical norms, so a 5+ year lifespan really isn't all that unreasonable of an expectation.

Also, you can't just look at core count and CPU clock speed to determine if that will be a relevant computing platform for you in 5 years. What about I/O? Particularly with disk storage performance going through the roof and nearly doubling every year or two?

True, although right now, I/O is our bottleneck, particularly for spinning disks. Hence, the interest in RAID-0 variations and SSD's.

SSDs alone promise to obsolete the 2010 Mac Pros within the next year as their performance outstrips the SATA and ICH capabilities of current models. Sure you can cook up some kind of bootable PCIe solution, but even that would be temporary at best. Today's platform is just not going to be up to the disk storage performance coming down the pike. SSDs are already exposing serious bottlenecks that are going to get worse. Do you want those bottlenecks around your neck for 5 years?

Problem is that anything you buy today is going to have that I/O bottleneck, so all you're effectively advocating is to buy nothing until SATA-3 (6Gb/s) is on the motherboard...unless of course you believe that it might be possible to get a PCIe expansion card sometime in the future :rolleyes:

Ditto for USB-3, if and whenever. Hence the OP is considering a Mac Pro instead of an iMac.

FWIW, I'm in pretty much the same buyer's type of situation...


You can get a top performing 120GB drive for just over $200. As for whether the performance is worth it... it's a no brainer. As I said, it's the single biggest improvement you can make to the performance of your system.

...
With this drive you can probably house your OS/Apps and even your home directory. Only your raw video and image files may need to be stored elsewhere.

Agreed. My thoughts have been to keep the Apple OEM HDD in place, buy an SSD (maybe two for RAID-0) and use the SSD as the boot & apps drive...and keep the original HDD around as a backup mirror of the OS & apps. For data, two of the three remaining bays would be for a RAID-1 of whatever large size makes sense at the moment. Bay 4 is TBD, although it would probably be for another SSD as a dedicated Photoshop scratch disk.

Another thing to consider... do you really need 4TB of HD storage?
From what you posted so far, it doesn't sound like you are going to fill up that kind of storage any time soon, but there may be something I don't know, or missed? If you are like me and use your Mac Pro for hobby related video/image editing, you may want to consider a different approach.

Yes, as he mentioned RAW (still photography) files. My personal iPhoto library of still photos is only around 30K images of mixed JPG and RAW ... and is closing in on 200GB (thanks in part to upgunning to a Canon 7D this spring).

What differs is the horizon for the data storage needs - I expect that the OP's thoughts are along the lines of "keep everything organized in basically one place" (a common hobbyist attitude), in which case the "couple of TB" range will be reached within a few years ... i.e., within the 5 year lifespan expectation.

Plus if there's also any ripping of DVDs (or worse, BD) anywhere under consideration, that will make a difference too.

However, the real value-added observation for the OP is that he probably doesn't need to buy a huge amount of storage right now today. As such, he may be able to wait awhile before dropping the bucks for some 2TBs, since HDD cost per GB will continue to improve over time.

Similarly, the plan needs to also consider remote storage backup of that personal photo data. For the average hobbyist, the simplest thing here is to use ~3 (varying sizes OK) "frugal" 3.5" HDDs (perhaps existing ones) that a backup can be written to every 3-4 weeks; then the HDD can be taken into work and thrown into your desk drawer, and one of the two that are already in the desk drawer gets removed & taken home, to be used for the next backup cycle.
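
If it helps keep that rotation straight, here is a trivial sketch of the round-robin cycle (the three disk labels are just made-up examples):

Code:
# Round-robin offsite rotation sketch for the scheme described above:
# three backup disks, one written every 3-4 weeks, the freshly written
# disk goes to the office and the oldest one there comes home for reuse.

from itertools import cycle

disks = ["Backup-A", "Backup-B", "Backup-C"]   # example labels only

for n, disk in zip(range(1, 7), cycle(disks)):
    print(f"Cycle {n}: write to {disk}, take it to work, bring the oldest one home")

# Either site can be lost (house or office) and at least one reasonably
# fresh copy still survives at the other location.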


-hh
 
True, although right now, I/O is our bottleneck, particularly for spinning disks. Hence, the interest in RAID-0 variations and SSD's.
The I/O bottleneck exists to keep the costs low (single disk = cheap, and the current gen of I/O Controller Hub was designed around mechanical, not SSD's - which are now showing the limitations). But the user can solve it with either a software (up to the bandwidth limit of the ICH) or hardware RAID implementation (can exceed the ICH's bandwidth, as well as disk count).

Application software OTOH, is another story (a specific application suite, as a competitor's offering may beat it with say n-core multi-threaded support). It's up to the developer to add that capability (or any other feature the users may be after), assuming it's even possible, as not all applications can utilize it (word processing, or any other that relies on the user for input, wouldn't benefit, and could cause problems if the OS can't schedule around the resources properly).

Given the user can't deal with both problems, that puts the larger problem on the software IMO.
 
Problem is that anything you buy today is going to have that I/O bottleneck, so all you're effectively advocating is to buy nothing until SATA-3 (6Gb/s) is on the motherboard...

That's not what I'm advocating.

What I'm advocating is that it's wisest to buy a modest system today that's well balanced in terms of I/O, CPU, Memory, and Storage. Buying a system that's unbalanced and highly skewed to one area with bottlenecks in another is what I'm advocating against.

If you buy a 12 core machine today, expecting to grow into it in 3-4 years, that's foolish in my opinion. The I/O limitations of your 3-4 year old system will be killing you by then. And, of course, by the time you can use 12 cores effectively, they will cost much less (as top of the line systems will have 24-32 cores) and the mid-line systems as a whole will be more balanced with improved I/O, denser memory modules for the same money, etc.

Buy a $5K machine every 5 years, or a $2.5K machine every 2-3 years?... that's the question. I'm advocating the latter.
 
Buy a $5K machine every 5 years, or a $2.5K machine every 2-3 years?... that's the question. I'm advocating the latter.
Assuming the $2.5k system every 2-3 years will work, or if it does, that it's the best solution (there are areas where a true workstation really is needed, such as ECC, and more cores than currently exist on a single CPU).

But situations like this aren't common (means the software is already capable of n core multi-threading, the additional power is necessary vs. a single CPU, and the application is sensitive to errors, such as scientific simulation).

Another exception would be 3D work with enough jobs that can justify the performance of a DP system (more profit generated as a result of being able to take on more jobs in a period of time), yet ECC isn't truly necessary (software doesn't use recursive calculations). ;)
 
Assuming the $2.5k system every 2-3 years will work, or if it does, that it's the best solution (there are areas where a true workstation really is needed, such as ECC, and more cores than currently exist on a single CPU).

But situations like this aren't common (means the software is already capable of n core multi-threading, the additional power is necessary vs. a single CPU, and the application is sensitive to errors, such as scientific simulation).

Another exception would be 3D work with enough jobs that can justify the performance of a DP system (more profit generated as a result of being able to take on more jobs in a period of time), yet ECC isn't truly necessary (software doesn't use recursive calculations). ;)

I'm assuming the stated use of the OP... hobby photo and video editing.
 
I'm assuming the stated use of the OP... hobby photo and video editing.
Makes sense to me as well (he clearly stated it's not for professional use), but I threw it out there. :p

If I were in his position, I would stick with a SP Hex model if performance is a higher priority and the budget will allow it along with any upgrades.
 
snip............

Buy a $5K machine every 5 years, or a $2.5K machine every 2-3 years?... that's the question. I'm advocating the latter.

Right now I'm very happy to be in the latter camp. For me, there was no sense in going hex as the state of the SW intended for the MP (FCS) is rather pathetic. I do expect that situation to change... hopefully by mid 2011, but even if FCS used 100% of all multi-cores the current $/performance trade didn't quite seem right to me. I expect the next rev of the MP (~early-mid 2012?) to have enough bumps in the architecture to justify an upgrade for me. Might even go hex at that point.

Funny how we all come to different answers to justify our decisions. :)

cheers to all the MP mini camps
JohnG
 
Right now I'm very happy to be in the latter camp. For me, there was no sense in going hex as the state of the SW intended for the MP (FCS) is rather pathetic. I do expect that situation to change... hopefully by mid 2011, but even if FCS used 100% of all multi-cores the current $/performance trade didn't quite seem right to me. I expect the next rev of the MP (~early-mid 2012?) to have enough bumps in the architecture to justify an upgrade for me. Might even go hex at that point.

Funny how we all come to different answers to justify our decisions. :)

cheers to all the MP mini camps
JohnG

Well after a few more discussions today... I think I know the system I'm going for.

The Mac Pro single-processor hex-core, with 8GB of memory, internal RAID 10 (software RAID), four 1TB HDDs, one 240GB SSD, and a DROBO 4 or 5 bay system for backup.

When I grow to more storage, I'll bump up to 2TB HDDs and bump up my DROBO drives. If I grow beyond a 4TB requirement with RAID 10 internal drives, I'll get the RAID card and have an 8TB internal array and an 8TB external array. My DROBO can scale to 16TB, which is around 10+TB of usable storage / backup.

So I think this will be a platform that will last me 5+ years, with plenty of speed and performance. We'll see how quickly HD Video, RAW Format Photos, and the iTunes library fills up my disk space.
 
You can find the ATTO ExpressSAS for $829.19 from provantage (here). They're a good company to deal with (I use them often).

Yeah and their website / products were easier to navigate and understand. When I'm ready for the RAID Card I'll more than likely go with ATTO.

As per disks, what are you trying to use?
I ask, as $300 (assuming mechanical) would correlate to a 2TB model. It's actually cheaper to get more smaller disks (1TB), and it's also faster (performance is based on parallelism). I wouldn't advise using consumer grade SSDs in a parity array due to the write cycle limitations of MLC NAND flash (SLC is better suited, but it's also much more expensive).

Well I was originally thinking that I'd go with 2TB enterprise disks in all four bays day one. Whenever I config a router I always max out the PVDM and HWIC slots day 1... no upgrades later. But the cost of disk space is making me go the 1TB enterprise disk path. I'll upgrade when I need to and hopefully the cost is much lower. I won't pretend that I knew about the performance issues.

I'm only looking at an SSD as a Boot Drive to house my OSX, Apps, and work on projects. The RAID array will be where I archive files.

Where are you getting the $150 for RAID software? The card does the work, so you don't need it (if you wanted to do a software implementation, you'd want to stick with 0/1/10, and either the built-in SATA ports (ICH) or a non-RAID Host Bus Adapter, i.e. an ATTO H6xx model).
There is an ATTO GUI-based RAID manager CD that was a suggested add-on to the RAID card. It looks like it shows drive status and performance for managing the array; I didn't know if it was necessary or not.


The cables needed to get from external ports are included in the Sans Digital enclosure (external to external).

So these are standard cables then? I'd never seen or heard of a mini SANS cable before. But again the ATTO website recommended either a Mini-SAN to SAN cable or a Mini-SAN to 4xSATA cable. I guess the external SATA enclosure type would determine which cable would be required?


Yes, you still need a backup solution (no matter if it's a single drive or the most complicated RAID system possible). I'd go with a NewerTech eSATA card that supports Port Multiplier enclosures, and an eSATA box with up to 10x bays (limit of 5x disks per port via PM chips). It's the cheapest way to go.

Drobo's are usually software implementations as well, so be aware of that.

I think I want to keep it simple and easy to manage. The DROBO looks like it works with minimal config required by me. So that sounds good. Plus the array capacity and individual disk status is very easy to understand. So DROBO looks very appealing to me. I've got enough to learn with the RAID config on the Mac. ;)
 
Yeah and their website / products were easier to navigate and understand. When I'm ready for the RAID Card I'll more than likely go with ATTO.
Areca makes a good product as well, and it has a better price/performance ratio (performance is equal or better than the ATTO, it includes internal cables other brands do not, and is cheaper).

BTW, the performance is similar as they tend to take the same approach (based on the same parts and reference designs).

There are a couple of differences, most notably how they're interfaced with (ATTO uses their own software, while Areca uses an IP address : port in a browser). The other difference I can think of off the top of my head is the number of drives supported via SAS expanders (256 for ATTO, 128 for Areca). This won't apply to you, so don't worry about it. ;)

Well I was originally thinking that I'd go with 2TB Enterprise Disks in all four bays day one. Whenever I config a router I always max out a PVDM slots HWIC slots day 1.........No upgrades later. But the cost of Disk Space is making me go the 1TB Enterprise Disk path. I'll upgrade when I need to and hopefully the cost is much lower. I wont pretend that I knew about the performance issues.
Unfortunately, the 2TB disks are still too expensive/GB to make it the best solution when starting from scratch like this.

They can work out for a future expansion if the system is full (bays and ports are all used).

I'm only looking at an SSD as a Boot Drive to house my OSX, Apps, and work on projects. The RAID array will be where I archive files.
You could even keep working projects on the array (you can test out your exact usage both ways, and see which is faster).

Just keep in mind, if you're doing a lot of writes on MLC based SSD's (which I suspect you will), you'll burn out the cells faster as it's not an empty SSD (fewer cells to rotate between for wear leveling).

If you want to use SSDs for working data (and especially scratch), you could consider using a separate SSD or a small array strictly for this. More cells to use for wear leveling, and when they die, replace them (OWC has a 40GB unit for $118 last I checked). Figure ~1 - 1.5 years as a replacement cycle (MTBR) under heavy writes. It opens up new configurations that weren't viable prior to its introduction due to cost reasons.
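
For a rough sense of where a figure like that can come from, here's a back-of-envelope endurance estimate (the P/E cycle count, write amplification, and daily write volume are all illustrative assumptions on my part, not specs for any particular drive):

Code:
# Back-of-envelope MLC SSD endurance estimate for a dedicated scratch drive.
# All inputs are illustrative assumptions, not specs for a specific model.

def years_until_worn(capacity_gb, pe_cycles, writes_gb_per_day, write_amplification):
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / writes_gb_per_day / 365.0

# 40GB MLC drive, ~3000 P/E cycles, ~100GB of scratch writes per day,
# write amplification of ~2 on a mostly full drive:
print(round(years_until_worn(40, 3000, 100, 2.0), 1), "years")   # ~1.6 years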

This may be a harder thing to deal with though, as you've stated its not being used to earn a living. Just another "log on the fire". :eek: ;) :p


There is an ATTO GUI-based RAID manager CD that was a suggested add-on to the RAID card. It looks like it shows drive status and performance for managing the array; I didn't know if it was necessary or not.
It will come with the ATTO card, as you need it to interface with it (access the settings). Their software is only meant to work with their own products, not those from other vendors, so don't waste your money buying it separately.

Areca's will use a browser.


So these are standard cables then? I'd never seen or heard of a mini SANS cable before. But again the ATTO website recommended either a Mini-SAN to SAN cable or a Mini-SAN to 4xSATA cable. I guess the external SATA enclosure type would determine which cable would be required?
It's MiniSAS (Serial Attached SCSI).

Internal end = SFF-8087
External end = SFF-8088

There's also a Fan Out version, which has 4x separate SATA/SAS connectors attached to a MiniSAS end (internal and external versions).

There's even other ends that can be used (SFF-8470 for example).

The Arecas will come with 1x SFF-8087 to 4x SATA cable per internal port = internal MiniSAS fan-out (so an 8-port card will have 2x cables, and so on). As for ATTO cards, I've never seen one come with any cables. If you go externally, you won't need them anyway, and if you use the HDD bays internally, the MaxUpgrades kit (which allows you to use the HDD bays) will have the internal cabling you need to get the drive signals to the card (internal port).

The Sans Digital enclosures will have 1x SFF-8088 to SFF-8088 cable per connector on the back (each connector has 4x ports in it). So the TR4X will have 1x cable, and the TR8X will have 2x of them.

I think I want to keep it simple and easy to manage. The DROBO looks like it works with minimal config required by me. So that sounds good. Plus the array capacity and individual disk status is very easy to understand. So DROBO looks very appealing to me. I've got enough to learn with the RAID config on the Mac. ;)
They're slow, so they're great for backups, but not so much for performance.

You've a lot to digest here, and I've no idea how crazy you're actually willing to go. But as you seem to really be willing to consider good RAID cards, I'd recommend skipping the Drobo for one (better recovery control and performance).
 