
JulianBoolean

macrumors regular
Original poster
Aug 14, 2010
142
5
Hi all,

I have a shiny new 6 core on the way, woot woot! After reading a few other threads and a little research, here's what I think might work for my particular workflow and the drives I can pull from my old G5. Would greatly appreciate any and all suggestions, alternatives, opinions, scathing critique, cheerful chides, etc.

Primary Use: Creative Photo Retouching & Illustration. Sometimes a single image can grow to 8 gigs or more in the working stage before the layers are simplified or flattened upon client approval.

Primary Speed Bump: incremental saves throughout the day. With large files, a save (on my G5) can take 20-30 minutes. Puts me in a catch-22: I'm stressed that a crash will evaporate all the work I've done in the last hour, but a save might cost me 30 minutes.
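(As a rough sanity check on those save times, here's a small Python sketch of the implied disk throughput. The 400 MB/s figure for a two-SSD stripe is an assumed round number, and real Photoshop saves also spend CPU time on compression, so treat the SSD estimate as a best case.)

```python
# Implied write throughput of an 8 gig save that takes 20-30 minutes,
# vs. an assumed ~400 MB/s sustained write for a two-SSD RAID 0.
file_size_mb = 8 * 1024

for minutes in (20, 30):
    rate = file_size_mb / (minutes * 60)
    print(f"{minutes} min save -> ~{rate:.0f} MB/s effective")

# Best case on the assumed stripe, ignoring Photoshop's own overhead:
print(f"same file at 400 MB/s -> ~{file_size_mb / 400:.0f} seconds")
```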

Configuration & Upgrades:

24GB RAM (3 x 8GB), 4th slot empty - $1,095.99

Optical Bay: OS + Apps
60GB OWC Extreme Pro SSD - $179.99

Bay 1: Current Job + Scratch
First of two 50GB OWC SSDs in RAID 0 - $209.99

Bay 2: Current Job + Scratch
Second of two 50GB SSDs in RAID 0 - $209.99

Bay 3: Completed Jobs
1.5TB 7,200 RPM HD pulled from old G5

Bay 4: Completed Jobs
1.5TB 7,200 RPM HD pulled from old G5

External 1: Backup
Existing OWC Mercury Elite 1.5TB FireWire

External 2: TM Backup
Existing OWC Mercury Elite 1.5TB FireWire
Small partition for bootable clone of OS + Apps

Thanks in Advance! - JB
 

sboerup

macrumors 6502
Mar 8, 2009
416
2
Well thought out, and to me it looks like a really awesome setup for what you are doing. It should last you years with incredible performance.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
I do PS and design for a living; my largest files tend to be about 1 gig or so

one thing: cache/scratch and working files on the same disc is not as efficient as separating them. something to think about, especially when writing huge files, since it is basically scratching to the same disc it's writing to!

I am not a big fan of long hard work on a RAID 0. I use it for some things, but not long drawn-out work; I would rather be working on a safer system should something happen

if I have to redo 4 hours of PS work, that means the 4 hours I lost, the 4 hours to redo it, and the extra hours I push off my next client to get caught back up!
at my $ per hour it's cheaper to have more HDs than to lose that time

just a thought :)
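(His cost argument as back-of-envelope arithmetic; the hourly rate and drive price below are made-up placeholders, not figures from the thread.)

```python
# One RAID 0 failure vs. the price of extra redundant disks
# (hypothetical numbers, purely to illustrate the trade-off above).
hourly_rate = 100   # assumed billing rate, $/hr
hours_lost = 4      # work that dies with the stripe
hours_redo = 4      # time spent redoing it
cost_of_failure = (hours_lost + hours_redo) * hourly_rate
extra_drive = 120   # assumed price of one more HD for a safer array
print(f"one failure: ${cost_of_failure} in time vs. ${extra_drive} per drive")
```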


the 40 gig at $99: thinking of trying them for cache/scratch in RAID 0 and using my 100 gig RE for other things
I currently have two 100 gig OWC RE SSD drives, one for cache and one for boot
plus an Areca 1222x with 8 750GB drives set up as RAID 6 for working files, and two standalone RAID 5 boxes, one for backup and one for Time Machine, plus other offline/offsite setups

I might say get a good card and some externals so you can have fast storage for your working files without as much chance of going down

to me, if you have a RAID 0 and a drive dies, it's no different than having a single HD that dies; either way you are hosed. so I am not against RAID 0 as a RAID setup, I am against betting on not losing time and work!!!!
I prefer RAID 1/0, or even RAID 1 or RAID 6, or another keep-working-if-a-drive-dies setup that still has some speed
some here hate RAID 6, but for some of us who deal with lots of large files and don't want the disc-space loss of RAID 1/0 versus how many drives we have to run,
I think RAID 6 on the Areca gives me the speed I want, and with the battery backup module there's one more small safety against write issues that could happen


one other setup thought: I might say get 4 2TB WD Blacks in RAID 1/0 using Disk Utility (those go in the sleds),
take out my DVD, put it in an external, and then use those two ports for SSDs, one for boot and one for cache/scratch,
with external backup


sounds like a fine setup, but my thought for you would be to try comparing your scratch and working files on the same disc vs. separate; you might find you are OK with them together

if you get the setup you mentioned, I would try splitting those two 50s: use one for cache/scratch and one for main working files, and compare the times

the SSDs are a time saver! this way you can do some testing: the setup you wrote up, and the two drives separate

I have found I take advice, then try it, make notes, and compare it to my own findings
I would not take my word or anyone else's here as the only thing to do, but as good advice to start with and test for yourself! :)
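(A minimal sketch of that test-it-yourself advice: time a large sequential write to each candidate volume. The volume paths are hypothetical placeholders; point them at your real scratch and working disks.)

```python
# Time a large sequential write to compare candidate scratch/working
# volumes (a crude stand-in for a big Photoshop save).
import os
import time

def time_write(path, size_mb=2048, chunk_mb=64):
    """Write size_mb of data to path, fsync, and return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hit the disk
    os.remove(path)
    return size_mb / (time.time() - start)

# Hypothetical mount points; substitute your own volumes.
for path in ("/Volumes/Scratch/test.bin", "/Volumes/Working/test.bin"):
    print(path, f"~{time_write(path):.0f} MB/s")
```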
hope this helps you out
 

Giuly

macrumors 68040
You know that the Extreme Pro RE is just a really expensive last-gen SandForce-1200 drive, and even OWC doesn't state/know why you should prefer it over the normal 60GB Extreme Pro for RAID, other than a "RAID-READY Enhanced!!1" from the marketing department?
The only thing going for it is the 5 years of warranty instead of 3.

Other than that, the setup is state of the art.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
You know that the Extreme Pro RE is just a really expensive last-gen SandForce-1200 drive, and even OWC doesn't state/know why you should prefer it over the normal 60GB Extreme Pro for RAID, other than a "RAID-READY Enhanced!!1" from the marketing department?
The only thing going for it is the 5 years of warranty instead of 3.

and 28% enterprise-class over-provisioning vs. 7% over-provisioning
in real words, 28% vs. 7%

and there are some who question whether this over-provisioning really works out to the drive's benefit?
in theory, with a drive used as cache/scratch filling up and emptying out a bunch, it might help?

so I do agree it might depend on use to decide which edition to get and how it's used?

and the 40s at $99: two of them in RAID 0 would be better for cache/scratch than a single 100 IMHO :) and my plan this week is to add two of these to my system and move scratch off the single SSD
using two SSDs for scratch/cache I did gain a very small bump in some things, and the fact is $198 for 80 gigs is not too much
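(For reference, the over-provisioning difference in plain arithmetic; the raw-NAND size below is an assumption for illustration, not an OWC spec.)

```python
# Usable space left after over-provisioning, for the two figures above.
raw_gb = 128  # assumed raw NAND in the drive, for illustration only
for name, op in (("RE (28% enterprise OP)", 0.28), ("standard (7% OP)", 0.07)):
    usable = raw_gb * (1 - op)
    print(f"{name}: ~{usable:.0f} GB usable, {raw_gb - usable:.0f} GB reserved")
```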
 

JulianBoolean

macrumors regular
Original poster
Aug 14, 2010
142
5
one thing: cache/scratch and working files on the same disc is not as efficient as separating them. something to think about, especially when writing huge files, since it is basically scratching to the same disc it's writing to! .....continued

Thanks for the detailed response, Honumaui, much appreciated! :) I've always had my scratch disk on its own, but I'm considering putting my scratch and only my currently-working files on the same two-disk SSD RAID 0 array because:

A. I could be wrong, but I don't think Photoshop is ever writing to the scratch drive and saving the current file at the same time, because saving your file is a one-thread deal; you can't do anything else in PS while it's saving. I need really fast Photoshop incremental saves (writes), and 2 SSDs seem like a good fit for that concern.

B. Because my currently-working image can demand disk space anywhere from 400 megs to 25 gigs, it's a way to have some flexibility in how much scratch space I have available, vs. having fixed partitions or disk sizes. Photoshop will just use whatever is left over for scratch. I don't think fragmentation will be an issue with at least 75 gigs of free space at all times.

C. I will get 2x the speed of an already-fast SSD for both tasks.

As for the increased danger of data loss with RAID 0, I'm thinking that the external FW drive with Time Machine will save my bacon, with auto saves every hour on the hour.
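(Both halves of that bet, sketched with assumed numbers; the per-drive failure rate is illustrative, not a spec. Striping doubles exposure to a drive failure, while hourly Time Machine runs cap the unsaved-work window.)

```python
# RAID 0 risk vs. the hourly-backup safety net described above.
afr_single = 0.03                       # assumed annual failure rate per drive
afr_stripe = 1 - (1 - afr_single) ** 2  # either of 2 drives kills the stripe
print(f"stripe loss odds/yr: ~{afr_stripe:.1%} vs. {afr_single:.1%} for one drive")

# With hourly Time Machine backups, the worst case is roughly the last
# hour of work plus whatever time the restore itself takes.
print("max unsaved-work window: <= 60 minutes, plus restore time")
```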

Thoughts? Counterpoints? -JB
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
at least in what I have tested, writing MY large files hits the scratch

I have most likely not had the SSD long enough to really put it through its PS tests :)


I used to use short-stroked discs: create 4 25-gig partitions, then use those in RAID 0 for my scratch, and it helped massively when writing large files to a large 8-disc RAID 1/0 setup! I used this for a long time and it was quite fast compared to a single HD
but this was a while ago, not on 64-bit, and not on more modern gear


so the big question will be: does the scratch get in the way of writing? I still think it does :) and writing to my boot shows that, I think


I think if you get the two 50s and use them for storage, I might say get two 40s at the $99 special and use them as a dedicated scratch for PS, for $198 extra


I have two of the 100 REs and plan on getting two of the 40s at this price for scratch/cache files
my main files will stay on my RAID though, as it does quite well
if anything, one of the SSDs will head over to the wife's computer, and I might buy her a 40 gig for scratch also!
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
OH, doing those tests I mentioned, my scratch gave me a disc-full error when writing
I am sure the testing with those size files took over the 100 gig disc I had!

anyway, just checked: I only have 17 gigs of LR cache files on the disc, so this gives you an idea that PS took over almost all of the extra 83 gigs

might help you decide how big a scratch disc you are going to need?

my PS I have to point to my boot as next in line

but the fact is, if your scratch fills up, you can't write files to it; that might throw a wrench in the setup you are thinking of!!!! since I hit it with a 2.8 or so gig file and only had 17 gigs of other stuff on!!!

that really surprised me, as I have never seen that! but then again, I never write my files to my scratch

good reason not to do it in the real world though :)
 

Giuly

macrumors 68040
so my raid my replicate the speed of two SSD in raid 0
Which would be 570MB/s read and 550MB/s write for the OWC SSDs we are talking about; you showed ~375-400MB/s.

An SSD in RAID 0 is as fast as 2.5 top-of-the-line 7200RPM drives under optimal conditions.
If I understood you correctly, you run 8 disks in RAID 0+1? That would be 4 drives in RAID 0 plus mirroring. A RAID 0 of two OWC SSDs is about 120-150MB/s faster (well, unless you have VelociRaptors or 15,000RPM SAS drives in your RAID).

OP: You can install 2 2.5" drives in the optical bay; you'd just need a PCIe SATA-II card. I'd rather use 4x40GB in RAID 0 than 2x50GB (or 4x25GB if OWC adds that to the Pro RE line), because there is enough space to accommodate the drives. Wasting one SATA-II port on a slow DVD drive isn't really elegant, either; put the drive inside an external enclosure and use the port more efficiently.
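(The aggregate-throughput arithmetic behind that comparison, with assumed round per-drive figures: ~110 MB/s for a fast 7200 RPM disk, ~275 MB/s for one of these SandForce SSDs.)

```python
# Ideal RAID 0 scaling: aggregate throughput ~= n * per-drive rate.
hdd_seq = 110   # assumed MB/s, fast 7200 RPM drive
ssd_seq = 275   # assumed MB/s, one SandForce SSD
for n in (2, 4, 8):
    print(f"{n}x HDD stripe -> ~{n * hdd_seq} MB/s")
for n in (2, 3, 4):
    print(f"{n}x SSD stripe -> ~{n * ssd_seq} MB/s")
# So ~825 MB/s takes roughly 8 HDDs, or about 3 of these SSDs.
```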
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
meant to say might not my :)

if you read what I wrote, I said I used to do RAID 1/0; that was some time ago, and I posted what I use now

so you said:
"Which would be 570MB/s read and 550MB/s write for the OWC SSDs"
was this from AJA? or what program did you use? or did you get these somewhere?

using AJA on the 128 file size, my RAID does this:
write 1324.0
read 825.4
that's way faster than two SSDs in RAID 0? numbers can show lots of things; I am sure you know each system, and depending on the program used, can give many results that are all over the place!
and I bet QuickBench would give me different results?

so why do I say they are close enough?

MY SSDs in RAID 0 using Xbench, and my RAID 6 using Xbench, just the final numbers (can post more, but why?):
Results 457.16

my RAID setup:
Results 501.65

I know Xbench is not so good? but I did some tests in it when I had them set as RAID 0, so it gives an idea compared to my RAID for this comparison


so I guess that since I took the time to do some PS tests for the OP, with my system which I use to make a living doing PS, and with file sizes they work with, that will give the person a better real-world idea

the fact is I do this for a living, have SSDs, and have tried many setups, so I have very good real-world experience to share
 

Giuly

macrumors 68040
meant to say might not my :)

using AJA on the 128 file size, my RAID does this:
write 1324.0
read 825.4
.....continued
The point is: I don't really understand you. Numbers show a lot, yes, if you assign them to identifiers. What do the numbers say? MB/s? MBit/s? Seconds it took to copy those files? Is RAID 0/1 supposed to be RAID 0+1, or RAID 0, or RAID 1? Do your SSDs have a SandForce-1200 controller? Indilinx? Intel? Micron?
825.4MB/s is what 8 hard drives in RAID 0 would do, or 3 of these SSDs; that's why I suggested a RAID 0 of 4 smaller SSDs rather than 2 bigger ones.
 

CaoCao

macrumors 6502a
Jul 27, 2010
783
2
Hi all,

I have a shiny new 6 core on the way, woot woot! After reading a few other threads and a little research, here's what I think might work for my particular workflow and the drives I can pull from my old G5. .....continued
Remember, you can stuff 4x 2.5" drives in the optical bay; a six-drive RAID 0+1 sounds fun.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
The point is: I don't really understand you. Numbers show a lot, yes, if you assign them to identifiers. What do the numbers say? MB/s? MBit/s? Seconds it took to copy those files? Is RAID 0/1 supposed to be RAID 0+1, or RAID 0, or RAID 1? Do your SSDs have a SandForce-1200 controller? Indilinx? Intel? Micron?
you sure seemed to understand me when you said:
"Which would be 570MB/s read and 550MB/s write for the OWC SSDs we are talking about; you showed ~375-400MB/s."

WOW, so you do understand what I wrote and what SSDs I have :)


how about this: you post up some real-world times for the OP using PS and your SSD drives, rather than trying to pick on someone who was helping them

I really think real-world times with PS using OWC SSD drives will help the OP more than you worrying about a few MB/s you got off some chart
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
Primary Use: Creative Photo Retouching & Illustration. Sometimes a single image can grow to 8 gigs or more in the working stage before the layers are simplified or flattened upon client approval. .....continued
Unfortunately, I wouldn't go this route.

Consumer-grade SSDs (MLC based) are not meant for high-write conditions, which scratch usage is (I've posted on this before, so if you're interested in the specifics, you may want to do a search on MR ;)).

You'd be better off using mechanical disks in a RAID configuration compared to SSDs. And as your post indicates you're earning a living at it, a stripe set for working data is not a good idea either, given the time spent fixing a problem, or worse, an insufficient backup system = lost data (not just having the disks for backup, but the frequency at which those backups are made; the longer the interval, the more work has to be re-done to get the missing data back).

SLC-based disks would be sufficient, but the capacities are low and they're still quite expensive (i.e. Intel X25-E models). For those funds, you have better options (redundancy, still better write-cycle conditions, and higher sustained throughputs for similar money).

Another thing to consider is that the ICH (SATA controller) in the system has a throughput limit of ~660MB/s. You'd throttle with 3x of those SSDs (~250MB/s each, so simultaneous access will throttle).
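(The throttling point in numbers, using the figures from this post: the ~660 MB/s ICH ceiling and ~250 MB/s per SSD.)

```python
# When does simultaneous SSD access saturate the ICH?
ich_limit = 660  # MB/s, approximate ICH ceiling cited above
ssd_rate = 250   # MB/s per SSD, cited above
for n in (2, 3, 4):
    demand = n * ssd_rate
    status = "throttled" if demand > ich_limit else "ok"
    print(f"{n} SSDs: {demand} MB/s demand vs. {ich_limit} MB/s bus -> {status}")
```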

Option 1:
  • Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
  • Substitute mechanical disks for the SSDs in the stripe set (better suited for scratch)
  • Use the other mechanical disks as the primary data location (working data, as it's safer than a stripe set)
  • External backups as configured

The advantage here is lower cost (it's cheaper than your original configuration) and better safety for your working data. But it's nowhere near where you should be IMO for earning a living with the system (for a hobbyist it would be acceptable, as the data's not critical).

Option 2:
  • Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
  • 4x mechanical disks in HDD bays 1-4 in a RAID 10 configuration (speed is ~ that of a 2-disk mechanical stripe set, but it can survive up to 2 disk failures, one per mirrored pair, before data is gone). This is used for scratch and primary data (working files)
  • External backups/archival locations (can use single disks or JBOD; an eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)

This is still inexpensive (especially for what you get), as you now have the minimum performance requirement and some redundancy as well. But the performance isn't as good as it could be.

Option 3:

  • Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
  • Use a proper RAID card, and use either RAID 5 or 6 (a proper card can handle the write hole issue associated with parity-based arrays; software implementations, such as the Highpoint 2314, cannot). Specifics can be gone over if you're interested in this configuration, as there's more information and options to consider (internal, external, mixed = hybrid, disk count, future expansion requirements, OS requirements, boot requirements, ...).
  • External backups/archival locations (can use single disks or JBOD; an eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)

This is the best way to go IMO, and what I was referring to when I mentioned "better options for the funds" in terms of using SLC-based SSDs.

You've more options and expansion capabilities with this route as well (i.e. use a hybrid = internal + external disks in the array). This is why the port count matters: disk count affects the arrays possible (5/6, or even nested parity 50/60, though I doubt you'd need to go that route). If you've sufficient ports, you can increase capacity and performance just by adding disks (really nice, and the redundancy is a necessity given what you're doing with the system IMO).

The ARC-1222 or ARC-1222x are good cards to take a look at, as is the ARC-1680 family (a 12+ port card may be needed, depending on your future capacity expansion). There's an internal adapter kit that will allow the HDD bays to work with internal cards, and a special cable that can take an internal port to an external enclosure. If you're more interested in an external-only solution, you need to be looking at a MiniSAS (SFF-8088) compliant enclosure (example = Stardom SohoTank ST8-U5), with one external cable per MiniSAS port (each handles 4x disks).

A few notes:
With mechanical disks, you want to stay at 50% or less full for performance reasons (inner tracks = slowest on the disk, and when you get into this area your performance can drop below the minimum requirements; particularly to be noted on a 2-disk stripe set, or even a 10 array, which only offers half the total capacity as a trade-off for the redundancy). In the case of a 10, you'd probably be best served by using 2TB disks.
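(The capacity math for that suggestion, as a quick sketch: 4x 2TB in RAID 10, then the stay-under-50%-full rule applied on top.)

```python
# RAID 10 usable capacity, and the "fast half" after the 50% rule.
disks, size_tb = 4, 2.0
usable_tb = disks * size_tb / 2     # mirroring halves raw capacity
fast_zone_tb = usable_tb * 0.5      # outer tracks only, per the 50% rule
print(f"{disks}x {size_tb:.0f}TB RAID 10 -> {usable_tb:.0f} TB usable, "
      f"~{fast_zone_tb:.0f} TB before inner-track slowdown")
```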

With a RAID card (i.e. Areca), you need to run enterprise-grade disks if you want it to work (consumer disks are unstable, so don't do it; their recovery timings are wrong for RAID cards). Given the potential problems, it's advisable to use the HDD Compatibility List to determine which drives to use (not all RAID card makers offer these, but Areca does, and it's one of the reasons I like their products - saves a lot of hassle and aggravation).

You've not mentioned the need for a Windows disk, but if you create an array under OS X, you won't be able to boot Windows off of the ICH (the SATA controller on the logic board). But this is fixable via a separate SATA controller and cable assembly (you'll have to make this by splicing together 2x cables that are available). Not expensive either (card + cables), and not hard to do.

The RAID wiki might be a good thing to give a good look, particularly the 10/5/6 levels.

Also, you'd need to run a good UPS system as well (an Online type is really what you should be using, though a Line Interactive can be substituted in a budget pinch, so long as it has a step transformer). BTW, Online types can be had refurbished to save on funds as well. A UPS isn't just an option with RAID, it's a necessity (you'll be burnt in terms of lost data if you try running without one).

I know this is a lot to read, and I hope it's not too confusing, but it should help. :)
 

JulianBoolean

macrumors regular
Original poster
Aug 14, 2010
142
5
Wow, thanks

nanofrog - I was hoping you'd find my post! Wow, lots for me to sift through and absorb. Big thanks man. I'll be back with questions. -JB
 


Honumaui

macrumors 6502a
Apr 18, 2008
769
54
glad he jumped in, as he kinda backs up what I said :)

and I'm not sure why the other guy wants to get into a wacky match?? OH well



I can say the system I make a living off of is:

Mac Pro 3,1 8-core 2.8 with 14 gigs (want more memory)

1222x card in RAID 6
OWC SSD as boot and another for cache/scratch
also, he backs up what I say about not using RAID 0

I use standalone RAID 5 boxes to back up to and to run Time Machine


one thought on the $99 40 gig OWC drives: for me, if they last a year they will pay for themselves. my cache is more for read and access speed coming off LR;
the scratch is more for when PS hits it

nanofrog, a question: how much do you really think it takes to destroy an OWC SSD-type drive used for scratch?

again, curious, as my thought was that if it lasts a year or two, that's all I need, because I am sure by then newer technology will be out to replace these

is it reasonable to think they might last up to 2 years, or do you think 6 months, or...?
 

Ryan P

macrumors 6502
Aug 6, 2010
362
235
Nano,

Thanks for that excellent writeup. I learned several things. I'm still a little murky on RAID and Boot Camp though. Do you think you could expand a bit?

For example, if you had a RAID setup on an Areca card, could the Boot Camp partition go on the Areca RAID, or do you still need a separate SATA controller card with an attached hard drive for the partition? Alternatively, if the only defined RAID array on your system was off the ICH, could the Boot Camp partition go on a partition of an ICH-attached hard drive?

Thanks!
 

Giuly

macrumors 68040
OP, btw: how many graphics cards do you intend to use? Because there is the OCZ RevoDrive, which is a RAID 0 of 2 SandForce-1200 drives combined on a PCIe card rather than attached via SATA, if you decide to go SSD.

Honumaui said:
WOW so you do understand what I wrote(...)
Well, I calculated it out.
Honumaui said:
and what SSD I have
Yes, but only since your last post.

Honumaui said:
and 28% enterprise-class over-provisioning vs. 7% over-provisioning
You and OWC call it "Enterprise-Class"; I call it a last-gen SF-1200 drive (vs. a latest-gen SF-1200 drive).
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
glad he jumped in, as he kinda backs up what I said :)
also, he backs up what I say about not using RAID 0
.....continued
RAID 0 definitely isn't meant for those that rely on their data, so it's basically not usable for pros, save for scratch space (temp data).

It's fine for enthusiasts that can deal with the time involved when a failure occurs (most of their data is likely off of installation disks, not content they generated themselves; think gamers needing to get data off the disk fast enough to load the game data needed for the graphics card).

I know there are pros that do use it, but that's either a result of insufficient budgets or a lack of understanding of RAID or their actual needs. :( I fear for such users, as they're basically playing Russian Roulette with their data.

Nano,

Thanks for that excellent writeup. I learned several things. I'm still a little murky on RAID and Boot Camp though. Do you think you could expand a bit?

For example, if you had a RAID on an Areca card, could Boot Camp go on the Areca RAID, or do you still need a separate SATA controller card? Could you get away with it if you didn't create a RAID on the ICH and only on the Areca?

Thanks!
It has to do with users that want to create a RAID via Disk Utility and run a Windows or Linux disk (all the disks on the same controller). Disk Utility mods the system's firmware, and Windows/Linux disks will no longer boot (assuming BC was used prior to the RAID creation). If you do the RAID first, the Windows installation won't ever boot up.

By separating them onto different controllers, a user can have both. This can be as simple as a SATA card for the Windows/Linux disk, or a proper RAID card for the array (though if there's another array created under Disk Utility, the separate SATA controller would also be necessary if the RAID card is set to boot EFI). If not, and there's an available port (RAID card still boots BIOS), then you can attach another disk as a single and use that.

Boot Camp actually won't work with a RAID card (it allows both OS's on the same disk/array), but in such an instance (separate disk) you don't need it anyway, as it's a partitioning tool. You can actually use it (allows clone applications that work under OS X to work), but I prefer to use Acronis myself (use it for Windows and Linux).

If for some odd reason you leave the card as BIOS and want to share the array between 2x OS's, that's possible too via partitions (you do use Disk Utility, but only to partition it, not to create the array). In this instance, you can have an OS X partition and a bootable Windows/Linux partition (each appears as its own array).

But I prefer to use separate disks (each gets its own array in such a case). Better for recovery: if one array goes down, the other is still operational. This is helpful for single-disk OS installs too (makes the recovery process faster if you need to hunt down firmware, online documentation, and use a browser to access the card itself, which is the most important part, particularly in a MP, as you can't get to the firmware settings directly, not even if it's running Windows/Linux).

In your case, all you need to do is install a separate disk on the ICH (the system's SATA controller, located on the backplane board in the 2009/10 systems) and install Windows (Disk Utility hasn't made any change to the system's firmware settings). :D
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
You and OWC call it "Enterprise-Class"; I call it a last-gen SF-1200 drive (vs. a latest-gen SF-1200 drive).

no, I just quoted a fact you forgot they mention.
since you seem to be hung up on facts you read off the net? since it's obvious you don't actually have any of these things we are talking about?


and 28% enterprise-class over-provisioning vs. 7% over-provisioning
in real words, 28% vs. 7%



get a life little boy !!!!
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
nanofrog, a question: how much do you really think it takes to destroy an OWC SSD-type drive used for scratch?

again, curious, as my thought was that if it lasts a year or two, that's all I need, because I am sure by then newer technology will be out to replace these

I keep hearing some say don't use them for scratch, but that seems to be related to the older drives and not the SandForce ones?

any thoughts?
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
nanofrog, a question: how much do you really think it takes to destroy an OWC SSD-type drive used for scratch?
.....continued
Hard to say really, as it depends on the specifics (available capacity for writes, configuration <single disk v. RAID>, file sizes written, write frequency <most important>,...).

Manufacturers base their data on empty drives (definitely not real world conditions) and manipulated statistics (toss out the worst 10% of all the cells). To complicate matters further, things like TRIM and garbage collection will matter as well.

Currently, I'd say consumer models are good for ~3yrs as a boot/applications disk (single-disk mode), and SLC-based disks for ~5yrs. RAID, however, will reduce this significantly (especially parity-based arrays). And I suspect most users' scratch usage will as well, judging from the information posted here on MR (particularly the lack of TRIM under OS X; garbage collection on some models will exacerbate this issue). BTW, RAID disables things like TRIM, which is just one of the reasons it's worse for SSDs.

Now consider the following:
  • SSDs are great for random access usage, but expensive for sequential access (ignoring available SATA ports, which can be added for less than the cost of an SSD; a simple SATA card, for example)
  • Scratch relies on sequential throughput more than random access, as I understand the applications used (application-based, not OS).

This means that you can get fast sequential access via mechanical disks for less money than SSD, and they're better suited for the job. :eek: Until this changes, I see SSD as unstable due to its immaturity (MLC specifically), and expensive for this particular purpose (particularly applicable to SLC-based disks).

The source of all the consternation:
Most MLC-based Flash is specified at 10,000 write cycles, and SLC at 100,000 cycles.

Micron has released a newer version (actually out), called eMLC, which is good for 30,000 writes, and their SLC is good for 300,000 writes. Not bad, but not commonly used just yet AFAIK (rather new, and it's going to cost more). This is then improved upon by wear leveling = a rotation scheme for executing writes (rotates through all available cells before the first is re-written again, assuming it's available).

This gets complicated by real-world usage (some capacity may be consumed by existing data, particularly in cases where the disk is used as a single disk, such as an OS/application boot location). That leaves less capacity for the data that changes more frequently (i.e. downloads, scratch). The higher the write frequency, the faster the available cells will die. Pure and simple, and it's why scratch is more dangerous to current MLC-based SSDs.

So the manufacturer data needs to be "taken with a grain of salt" so to speak. ;)
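(A crude endurance estimate in the spirit of those numbers; the daily write volume and write-amplification factor below are assumptions, and real wear depends heavily on workload, TRIM, and garbage collection, as noted above.)

```python
# Rough lifetime of a 40GB MLC drive used as scratch, under assumptions.
capacity_gb = 40          # one of the $99 OWC drives discussed above
pe_cycles = 10_000        # MLC write-cycle spec cited above
write_amp = 2.0           # assumed write amplification
daily_writes_gb = 200     # assumed heavy scratch traffic per work day

total_host_writes_gb = capacity_gb * pe_cycles / write_amp
days = total_host_writes_gb / daily_writes_gb
print(f"~{days:,.0f} work days, ~{days / 250:.1f} years at 250 days/yr")
```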

Future, from available data:
This will improve over time, but it's hard to say how long it will take, as NAND Flash has its limits (manufacturers will continue to push the specifications until that limit is reached in production parts). Then another Flash technology will take over, such as FeRAM (current FeRAM is capable of ~10^16 writes :eek: <MLC = 10^4, SLC = 10^5, so it's a significant increase>, which is better than current enterprise HDDs before any wear leveling or other techniques).
 