Does this enclosure reach the 250MB/s throughput? Would be a nice addition, if not replacement for my current box.
I've got a similar 4-bay enclosure, but it levels off at 115MB/s. Cost 160€ without the eSATA card. :mad:
The PM chips spec out at ~250MB/s.

The box itself can actually get close to that (4 disks, not 5; check here). If you take a closer look at the graph, you'll notice 2 models. Those are kits, and the only difference is the eSATA card in the kit (the SIL3132 is slower, while the other, based on a Highpoint 2314 <Marvell chip>, gives much higher throughputs). What do you expect from a super cheap card...? :eek: :p

Your card is the suspect, not the box. ;)

You may want to consider the Newertech card that supports PM chips, as it's cheaper than the Highpoint 2314, and uses a Marvell controller chip as well <6.0Gb/s btw> (OWC has it on sale).
 
If you really want to be the "fastest" with this setup, back down to 12GB of RAM. The 4th slot will knock you out of the triple-channel RAM config.

Not so fast. While a synthetic test will show triple-channel memory has a huge advantage, boosting your RAM by 33% will offset some, if not all, of that advantage in the right context.

Keeping as much as you can in real memory vs. virtual far outweighs dual vs. triple-channel speeds (IMHO). If you're pushing 12GB, I'd say it would be better to jump to 16.
 
You may want to consider the Newertech card that supports PM chips, as it's cheaper than the Highpoint 2314, and uses a Marvell controller chip as well <6.0Gb/s btw> (OWC has it on sale).

I was gonna ask that question, whether the NewerTechnology card with PM and 6Gb/s was better. Thanks, I'm getting that.
I'm actually thinking about getting the configuration you recommended.

Another solution to consider:
1. 1x 240GB OWC SSD (optical bay)
2. 2x 1TB 7200rpm disks for a RAID 0 scratch location (Blacks of your choice on capacity; 2x HDD bays)
3. 1x 2TB Backup (Green; HDD bay)
4. 1x 1TB Backup/Clone (Green model; HDD bay)
5. 4 disk PM enclosure (this one comes with a PCIe eSATA card using a SIL3132 chip, that will work with the OS X drivers from Silicon Image)
6. 4x 1TB Blacks in a level 10 for primary file storage (completed work files). As it's on a PM enclosure, the Max throughput will be 250MB/s (PM chip), but it's capable of running this (performance of the set is ~ that of 2x disks in a RAID 0, but redundant).
But I'll use the 1TB drive that came w/the computer as TM/CCC for the SSD.

Oh and Yes, when I said "Media" I meant finished outputs. not meant to be messed with and very important. :eek:

Wouldn't the enclosure be faster with the NewerTechnology 6Gb/s card, and as RAID 10 for scratch?

About the SSD in the optical bay;
The cables that come in there are normal sata-power cables like for HDs right?
And do I have to use a special mount for the SSD in the optical bay, or can I just set it with good quality velcro?

I'm checking the shipping status of the MacPro every half hour!
 
I was gonna ask that question, whether the NewerTechnology card with PM and 6Gb/s was better. Thanks, I'm getting that.
I'm actually thinking about getting the configuration you recommended.
The Newertech is faster than the SIL3132 based cards, and cheaper than the 2314 (~same performance), so it has the best cost/performance ratio of the three. :)

Wouldn't the enclosure be faster with the NewerTechnology 6Gb/s card, and as RAID 10 for scratch?
Performance-wise, the limiting factor is the PM chip and the drives. No mechanical HDD can saturate 3.0Gb/s, let alone 6.0Gb/s, and the specific disks have their own limitations too (say 100MB/s). The PM chip is good for 250MB/s max, and real-world performance will be less, as you're configuring a level 10.

You won't need redundancy for scratch space (a waste IMO), but it's a good thing to have for your primary data location. So use a stripe set for scratch (temp data). Worst case, if the array dies, you install a new disk and re-run the failed process.
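A rough sketch of the math behind those numbers. All figures here are illustrative (roughly 100MB/s per mechanical disk, a ~250MB/s port-multiplier chip cap), and `array_throughput`/`behind_pm` are made-up helper names, not anything from a real tool:

```python
# Idealized sustained throughput of a RAID set sitting behind a SATA
# port-multiplier (PM) chip. Illustrative numbers only.

def array_throughput(disks, per_disk_mb_s, level):
    """Best-case sustained MB/s for a stripe set (RAID 0) or a RAID 10."""
    if level == 0:
        return disks * per_disk_mb_s          # all spindles stripe together
    if level == 10:
        return (disks // 2) * per_disk_mb_s   # mirrored pairs; half the spindles add speed
    raise ValueError("only RAID 0 and 10 sketched here")

def behind_pm(raw_mb_s, pm_cap_mb_s=250):
    """The PM chip caps whatever the array could do on its own."""
    return min(raw_mb_s, pm_cap_mb_s)

print(behind_pm(array_throughput(4, 100, 0)))   # 4-disk stripe: 400 raw, capped at 250
print(behind_pm(array_throughput(4, 100, 10)))  # 4-disk level 10: ~200, under the cap
```

This is why a level 10 in the enclosure lands under the PM chip's ceiling, while a 4-disk stripe set would be throttled by it.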

About the SSD in the optical bay;
The cables that come in the ... are normal sata-power cables like for HDs right?
The cable used for the optical bays has both power and data, so it will plug right in and work. Nothing strange to contend with here.

And do I have to use a special mount for the SSD in the optical bay, or can I just set it with good quality velcro?
You don't have to use a mount, but you can buy one if you wish, or even make one if you're up for it. This is all up to you.
 
I'm not updating the sensor on my RED to the Mysterium-X just yet (you can do that for $5,750, and get the exact same sensor the EPIC has). Everyone has a hard time editing 4K as it is now.
FCP doesn't support it yet, only 2K in Color. AE does, and that's great for applying effects to 4K footage that will be scaled down to 2K; you just can't see the "seams". I'm already Apple certified in FCP, used to teach FC three years ago, and really don't want to start over learning Premiere for its awesome 4K REDCode handling abilities. I'll just wait another year for that to happen in FC, at least.

Actually, Color has supported RED 4K 2:1 at full debayer since version 1.5 (FCS 3) - but the newer 4.5KWS mode is only supported through a half-debayer downres (apparently, this is something Apple has to fix, not RED). Another annoying aspect about Color's R3D support is while the latest RED plugin does enable use of the new color science, FLUT control is completely missing from the Primary In Room. So, my dilemma has been lately - do I want better-looking grades (which means Redcine-X, for now) or do I want a smoother workflow? Needless to say, Apple has a lot of catching up to do. I just got the CS5 Production Suite and got to dabble with the new version of Premiere a bit. It's quite impressive. But the problem is that almost nobody in the industry cuts on Premiere.

On another note, if you upgraded to AE CS5 already, you'll find a lot of benefit in jumping to 16GB of RAM (buy it third-party), especially working with high-res codecs. I'm pretty convinced that another 8GB will be the next upgrade for my MP. 64-bit AE eats RAM for breakfast doing RAM previews (and even final rendering).
 
On another note, if you upgraded to AE CS5 already, you'll find a lot of benefit in jumping to 16GB of RAM (buy it third-party), especially working with high-res codecs. I'm pretty convinced that another 8GB will be the next upgrade for my MP. 64-bit AE eats RAM for breakfast doing RAM previews (and even final rendering).
Your avatar is awesome; I just saw Goonies on Blu-ray last week! :cool:

I'll upgrade the RAM when we're absolutely sure the 8GB sticks won't work, because Intel states that the 6-core takes 24GB, so 8x3=24, plus the triple-channel thing. It would be awesomer than RAID 10 with three drives, but it probably won't work either. I dunno.

The cable used for the optical bays has both power and data, so it will plug right in and work. Nothing strange to contend with here.
You don't have to use a mount, but you can buy one if you wish, or even make one if you're up for it. This is all up to you.
I thought I knew but I wasn't sure, Thanks. Got really good velcro already :D

OK, so here is what I came up with taking your advice.
I'm not sure if I should buy the first configuration, because of the speed of the 3-drive RAID 0, or the second, the one you proposed. The prices are almost the same.

FIRST______________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in optical bay for OS, Apps $630

3x 640GB = 1.90TB RAID 0 for Scratch $222
1x 2TB Green CCC of RAID 0 $119

External
1x 2TB Black Final Outputs, Archive $175
1x 2TB Green CCC Final outputs Archive $119
1x 1TB TM CCC of OWC SSD stock

Sans Digital TR4M 4 Bay $130


$1,475

I really don't need copies of the final outputs & archive drive immediately,
because they render to the RAID 0 and are then moved to
the 2TB Black in the bay, then CCC'd to the 2TB Green also in the bay, then deleted from the RAID 0.

SECOND_________________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in Optical bay for OS, Apps $630

2 x 1TB Blacks RAID 0 $170
1x 2TB Green CCC of RAID 0 $119
1x 1TB CCC & TimeMachine stock

External
4x 1TB Blacks final outputs & archive $340
Sans Digital TR4M 4 Bay $130


$1,460
_________________________________________________


Watcha Think?
 
Your avatar is awesome; I just saw Goonies on Blu-ray last week! :cool:

I'll upgrade the RAM when we're absolutely sure the 8GB sticks won't work, because Intel states that the 6-core takes 24GB, so 8x3=24, plus the triple-channel thing. It would be awesomer than RAID 10 with three drives, but it probably won't work either. I dunno.


I thought I knew but I wasn't sure, Thanks. Got really good velcro already :D

OK, so here is what I came up with taking your advice.
I'm not sure if I should buy the first configuration, because of the speed of the 3-drive RAID 0, or the second, the one you proposed. The prices are almost the same.

FIRST______________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in optical bay for OS, Apps $630

3x 640GB = 1.90TB RAID 0 for Scratch $222
1x 2TB Green CCC of RAID 0 $119

External
1x 2TB Black Final Outputs, Archive $175
1x 2TB Green CCC Final outputs Archive $119
1x 1TB TM CCC of OWC SSD stock

Sans Digital TR4M 4 Bay $130


$1,475

I really don't need copies of the final outputs & archive drive immediately,
because they render to the RAID 0 and are then moved to
the 2TB Black in the bay, then CCC'd to the 2TB Green also in the bay, then deleted from the RAID 0.

SECOND_________________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in Optical bay for OS, Apps $630

2 x 1TB Blacks RAID 0 $170
1x 2TB Green CCC of RAID 0 $119
1x 1TB CCC & TimeMachine stock

External
4x 1TB Blacks final outputs & archive $340
Sans Digital TR4M 4 Bay $130


$1,460
_________________________________________________


Watcha Think?


I'm not understanding: how are you going to use a single drive in RAID 0?
 
Hmm, I don't see the point in RAID 0 on SSD drives. You'll already get VERY fast performance from them.

RAID 0 with SSD drives can give you better wear leveling if you hold the high-water mark on usage to the capacity of just one of the drives. For example, two 50GB SSD drives will each get half of the write updates, so you've cut the erase cycles in half. If a single drive would have worn out in 1.5 years, you now have 3. That's typically long enough to make money with them and buy a new pair.

If you look at the difference between the OWC Mercury Pro and Pro RE drives, it's a change in over-provisioning. Instead of over-provisioning 37%, this is closer to 100%; just not strictly enforced by the hardware.

Most folks are doing the exact same thing with hard disk drives, only the over-provisioning ratio is orders of magnitude higher: to get a 50-100GB scratch space, they'll throw 3TB of space at the issue.

50:100 ratio or 50:3000 ratio ....

Which one seems more $/performance efficient? In many cases the first one. The other effect is that you've gotten another internal drive sled back. Disk spindle bloat (to avoid inner tracks and higher file system fragmentation) sucks up lots of physical space.


I would personally have a 400 or 500GB mid-range SSD as the system drive, and the fastest-performing 250GB SSD I can get as the scratch and render disk.

That doesn't seem to be a good combo. First, the huge system drive seems indicative of applications that have a huge library of large binary files. Typically that means this will get blended into the project data. Second, that often leads to large scratch usage. If you use a relatively high percentage of the SSD's storage for scratch (> 50%), then you are going to squat on cells while at the same time looking for lots of clean/erased ones. That is going to decrease the drive's lifecycle.
You want to keep the percentage low, not high.

If you need very large scratch spaces, then disks are better; over-provisioning large spaces requires lower $/GB costs.

If you're only going to use 25% of the 250GB for scratch on average, then it's a better fit.
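The wear-leveling point above can be sketched with some back-of-envelope math: striping scratch across two SSDs, while keeping usage under one drive's capacity, halves the erase cycles each drive sees and roughly doubles service life. The figures and the `drive_life_years` helper below are illustrative, not drive specs:

```python
# Very rough SSD lifetime estimate: total rated write volume / daily writes.
# All inputs are assumed example values, not measured figures.

def drive_life_years(writes_per_day_gb, capacity_gb, rated_cycles, drives=1):
    per_drive_writes = writes_per_day_gb / drives      # a stripe spreads the writes
    total_write_budget_gb = capacity_gb * rated_cycles # capacity * erase cycles
    return total_write_budget_gb / per_drive_writes / 365

single = drive_life_years(100, 50, 3000, drives=1)   # one 50GB drive
striped = drive_life_years(100, 50, 3000, drives=2)  # pair of 50GB drives in RAID 0
print(round(single, 1), round(striped, 1))  # the striped pair lasts ~2x as long
```

Same idea as the 1.5-years-becomes-3 example in the post: the ratio doubles, whatever the absolute numbers are.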
 
FIRST______________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in optical bay for OS, Apps $630

3x 640GB = 1.90TB RAID 0 for Scratch $222
1x 2TB Green CCC of RAID 0 $119

External
1x 2TB Black Final Outputs, Archive $175
1x 2TB Green CCC Final outputs Archive $119
1x 1TB TM CCC of OWC SSD stock

Sans Digital TR4M 4 Bay $130


$1,475

I really don't need copies of the final outputs & archive drive immediately,
because they render to the RAID 0 and are then moved to
the 2TB Black in the bay, then CCC'd to the 2TB Green also in the bay, then deleted from the RAID 0.
I'm a bit confused over the primary working file storage. Is it on the stripe set, or on another disk?

What I'm wondering, is if the RAID set is used for more than scratch space?

SECOND_________________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in Optical bay for OS, Apps $630

2 x 1TB Blacks RAID 0 $170
1x 2TB Green CCC of RAID 0 $119
1x 1TB CCC & TimeMachine stock

External
4x 1TB Blacks final outputs & archive $340
Sans Digital TR4M 4 Bay $130


$1,460
_________________________________________________
The point of the level 10 recommendation was for it to store the primary working files, with the stripe set used strictly for scratch (temp data).

Once the working-file storage location (not just archival once completed) is known, I'll go from there.
 
I'll upgrade the RAM when we're absolutely sure the 8GB sticks won't work, because Intel states that the 6-core takes 24GB, so 8x3=24, plus the triple-channel thing. It would be awesomer than RAID 10 with three drives,

An awesome price tag it will have. Crucial (Micron) has a 2-stick pair for the 6-core Mac Pro going for $1,100 all by itself; I think 24GB will put you back around $1,600. A year or two from now, that will have a more affordable price.
 
FIRST______________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in optical bay for OS, Apps $630

3x 640GB = 1.90TB RAID 0 for Scratch $222
1x 2TB Green CCC of RAID 0 $119

External
1x 2TB Black Final Outputs, Archive $175
1x 2TB Green CCC Final outputs Archive $119
1x 1TB TM CCC of OWC SSD stock

Sans Digital TR4M 4 Bay $130


$1,475

I really don't need copies of the final outputs & archive drive immediately,
because they render to the RAID 0 and are then moved to
the 2TB Black in the bay, then CCC'd to the 2TB Green also in the bay, then deleted from the RAID 0.

SECOND_________________________________________________

NewerTech MAXPower PCIe eSATA w/port multiplier $80
240GB OWC SSD in Optical bay for OS, Apps $630

2 x 1TB Blacks RAID 0 $170
1x 2TB Green CCC of RAID 0 $119
1x 1TB CCC & TimeMachine stock

External
4x 1TB Blacks final outputs & archive $340
Sans Digital TR4M 4 Bay $130


$1,460
_________________________________________________


Watcha Think?

I know you're not really asking me :)p), but another approach would be to keep all your media content within the Mac Pro, and just use external storage for backups.

You could do a 4 disk RAID0 array within the Mac Pro that could house your scratch, final output, and archive duties.

You could then run an external Drobo, NAS, or cheaper FW/USB enclosure for backup duty with as many 2TB drives as you feel you need. This way, the performance of the external solution matters less, and you can go with a cheaper enclosure that uses USB/FW... or a Drobo, which has its own RAID scheme to cover you against drive failures.
 
Awesome thread, dudes! I've been eavesdropping on this discussion quite a bit the last couple days and have learned many new things (props to all), yet I've also become painfully aware of how much I don't know!

Quick dumb question: Is there any advantage to having the OS on one HD and the apps on another? I know that Photoshop needs a scratch disk to go figure stuff out from time to time. Wouldn't the OS benefit from having its own place to figure stuff out?

Thanks!
 
Awesome thread, dudes! I've been eavesdropping on this discussion quite a bit the last couple days and have learned many new things (props to all), yet I've also become painfully aware of how much I don't know!

Quick dumb question: Is there any advantage to having the OS on one HD and the apps on another? I know that Photoshop needs a scratch disk to go figure stuff out from time to time. Wouldn't the OS benefit from having its own place to figure stuff out?

Thanks!
There are special cases (capacity and performance) where separation can be beneficial, but not for the usage you're describing.

But from what I can gather in your post (presume a single disk), you don't need to bother. The OS doesn't need scratch space (data is primarily read for an OS).

Now if you wanted to increase performance of the disk IO with a RAID set, then it may be a good idea (depending on the specifics, as there are cases where it's fine to put the OS on the array as well). As it happens, Photoshop can benefit from a stripe set (mainly for scratch space), and in such a case, separating the OS would be a good idea.

Now if you went with a redundant array that also boosts performance (say a level 10, or a parity-based array such as 5/6), then placing the OS on that array is fine. Separation still has an advantage (you'll still have a working OS if the array dies), but budget can preclude this.
 
Back from holiday!

sorry for the long delay, but I usually take a holiday even from emails when I go to the beach, plus had a ton of work when I got back!

First of all I want to thank you all for your invaluable advice and in pushing me into the best decision, I have learned a lot.

Taking all your advice, and looking at my needs, I finally pulled the trigger on this setup for my hexacore, which is finally arriving on Tue 31 (it's at Fort Worth right now, Whooo!). This stuff will start arriving Mon-Tue.


Internal
1x 240GB OWC SSD in optical $630
4x 1TB WD Blacks RAID 0 for FCP, AE scratch $340
(media has to live in the scratch disk)

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130
2x 2TB WD Blacks RAID 0 CCC of internal RAID 0 $340
1x 1TB WD Black, pictures, music, TM, CCC of SSD $85
1x 1TB stock, CCC of pictures, music
$1,605

Please tell me what y'all think!

I'll keep you all posted on the results and upload some pictures of the AJA System Test.



Rockanroll and be good.
 
sorry for the long delay, but I usually take a holiday even from emails when I go to the beach, plus had a ton of work when I got back!

First of all I want to thank you all for your invaluable advice and in pushing me into the best decision, I have learned a lot.

Taking all your advice, and looking at my needs, I finally pulled the trigger on this setup for my hexacore, which is finally arriving on Tue 31 (it's at Fort Worth right now, Whooo!). This stuff will start arriving Mon-Tue.


Internal
1x 240GB OWC SSD in optical $630
4x 1TB WD Blacks RAID 0 for FCP, AE scratch $340
(media has to live in the scratch disk)

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130
2x 2TB WD Blacks RAID 0 CCC of internal RAID 0 $340
1x 1TB WD Black, pictures, music, TM, CCC of SSD $85
1x 1TB stock, CCC of pictures, music
$1,605

Please tell me what y'all think!

I'll keep you all posted on the results and upload some pictures of the AJA System Test.



Rockanroll and be good.
I don't recommend using a stripe set as a backup location for another stripe set, as each has a greater risk of failure than a single disk (failure rate ≈ single-disk failure rate * n disks).

Instead, I'd create a RAID 10 from the internal disks (HDD bays), and use an external source/s for backups, Windows, and a dedicated stripe set.

As for the backup, you might want to consider a JBOD configuration (external), as it will be seen as a single volume, but the risk factor is still that of a single disk. You'd also want to consider either a second enclosure, or a single unit that holds additional disks (1x eSATA can run up to 5x disks, so 10x disks max is possible with the Newertech card).
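The "single-disk failure rate times n disks" rule above is the small-probability approximation; the exact figure comes from noting a stripe set is lost if any member fails. A quick sketch, using an assumed 3% annual per-disk failure probability (illustrative, not a measured figure):

```python
# Probability a RAID 0 stripe set is lost, given n independent disks each
# with per-period failure probability p: P(any fails) = 1 - (1 - p)^n.

def raid0_failure_prob(p_single, n_disks):
    return 1 - (1 - p_single) ** n_disks

p = 0.03  # assumed 3% annual failure rate per disk (hypothetical)
for n in (1, 2, 4):
    print(n, round(raid0_failure_prob(p, n), 4))
# A 4-disk stripe is roughly 4x riskier than one disk (~11.5% vs 3%),
# while a JBOD backup only risks the files on whichever disk fails.
```

For small p the approximation n*p (here 0.12 vs the exact 0.1147) is close enough for planning, which is why the rule of thumb works.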
 
I don't recommend using a stripe set as a backup location for another stripe set, as each has a greater risk of failure than a single disk (failure rate ≈ single-disk failure rate * n disks).

Instead, I'd create a RAID 10 from the internal disks (HDD bays), and use an external source/s for backups, Windows, and a dedicated stripe set.

As for the backup, you might want to consider a JBOD configuration (external), as it will be seen as a single volume, but the risk factor is still that of a single disk. You'd also want to consider either a second enclosure, or a single unit that holds additional disks (1x eSATA can run up to 5x disks, so 10x disks max is possible with the Newertech card).


Internal
1x 240GB OWC SSD in optical $630
4x 1TB WD Blacks RAID 0 for FCP, AE scratch $340
(media has to live in the scratch disk)

I want the fastest and cheapest RAID array possible, as I will be working with 5 HD streams from a live taped TV show.
Also, I like to get very creative and work with a lot of layers in AE (100+),
including comps and pre-comps inside the composition.
That's why I think RAID 0 is better for me than 10.
But what R/W speeds do you think I can get with the 4x 64MB-cache Caviar Blacks?

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130
2x 2TB WD Blacks JBOD CCC of internal RAID 0 $340

These 2 drives might work better in JBOD, as you said, in case of 1 disk failure. I will be upgrading to a second HD bay box when I have the budget,
which I predict will be real soon :D

1x 1TB WD Black, pictures, music, TM, CCC of SSD $85
1x 1TB stock, CCC of pictures, music
$1,605

Thanks Señor nanofrog!
 
Internal
1x 240GB OWC SSD in optical $630
This is fine (tad large = significant chunk of cash).

4x 1TB WD Blacks RAID 0 for FCP, AE scratch $340
(media has to live in the scratch disk)
This is where I get nervous. You really want to keep your primary data on a separate disk or array, which allows both to be used simultaneously (= better throughputs) and can offer less risk for the data (depending on what level gets used). For performance reasons, the array makes better sense.

I want the fastest and cheapest RAID array possible, as I will be working with 5 HD streams from a live taped TV show.
Also, I like to get very creative and work with a lot of layers in AE (100+),
including comps and pre-comps inside the composition.
That's why I think RAID 0 is better for me than 10.
Your performance needs can be handled with ~200MB/s (less, actually), which is attainable with a 2x disk stripe set or a 4x disk RAID 10.

  • Please understand, this is for the primary data, and you use a separate stripe set for scratch only (2x disks; externally in the PM enclosure). Use the same disks as the 10, and the performance will be the same for both arrays.

Now which of the following do you prefer?
1. Faster with no data security = more time necessary to fix issues (repair the array, restore data, and re-perform any necessary work to replace what got lost between the last backup and the failure)?

2. Or sufficient speed for the workload listed combined with redundancy?
Level 10 can take the failure of 2x disks without losing data (as long as they're not both in the same mirrored pair). In a degraded state (disk/s failed), performance will drop until you fix it, but the rebuild is much less work and faster than everything involved in fixing a busted stripe set.

This is the main point of what I've been recommending. Pick one, and go from there.

If you choose #1, then by all means go with what you've listed. :)

If you'd prefer some balance (and less work when a failure occurs = when, not if with any situation), then go with #2 = what I've been recommending. :eek: :p

Up to you.

But what R/W speeds do you think I can get with the 4x 64MB-cache Caviar Blacks?
Those disks are good for ~113MB/s sustained reads and writes each. So 4x = 452MB/s in a stripe set or 226MB/s in a level 10 array.

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130
2x 2TB WD Blacks JBOD CCC of internal RAID 0 $340

These 2 drives might work better in JBOD, as you said, in case of 1 disk failure. I will be upgrading to a second HD bay box when I have the budget,
which I predict will be real soon :D

1x 1TB WD Black, pictures, music, TM, CCC of SSD $85
1x 1TB stock, CCC of pictures, music
$1,605
This is OK (may need some tweaks, depending on what you actually chose, such as converting the 2x 1TB units to a stripe set for scratch if you go with a level 10 for your primary data).
 
Your performance needs can be handled with ~200MB/s (less, actually), which is attainable with a 2x disk stripe set or a 4x disk RAID 10.

  • Please understand, this is for the primary data, and you use a separate stripe set for scratch only (2x disks; externally in the PM enclosure). Use the same disks as the 10, and the performance will be the same for both arrays.

Now which of the following do you prefer?
1. Faster with no data security = more time necessary to fix issues (repair the array, restore data, and re-perform any necessary work to replace what got lost between the last backup and the failure)?

2. Or sufficient speed for the workload listed combined with redundancy?
Level 10 can take the failure of 2x disks without losing data (as long as they're not both in the same mirrored pair). In a degraded state (disk/s failed), performance will drop until you fix it, but the rebuild is much less work and faster than everything involved in fixing a busted stripe set.

This is the main point of what I've been recommending. Pick one, and go from there.

If you choose #1, then by all means go with what you've listed. :)

If you'd prefer some balance (and less work when a failure occurs = when, not if with any situation), then go with #2 = what I've been recommending. :eek: :p

Up to you.


Those disks are good for ~113MB/s sustained reads and writes each. So 4x = 452MB/s in a stripe set or 226MB/s in a level 10 array.


This is OK (may need some tweaks, depending on what you actually chose, such as converting the 2x 1TB units to a stripe set for scratch if you go with a level 10 for your primary data).

I want to go with yours but want to see how mine works! :confused: :D
I know yours is better because of the dreaded "when" not "if" something happens, and I know it does and it hurts. But the Dark Side calls me! :eek:
Nooo Dark Side!

Keep in mind that I already bought all this stuff because I'm on a deadline now that the Hex took this long, and I needed everything to arrive at the same time, since I'm driving from Mexico to Texas to spend the night, pick everything up, and drive back.

OK how about this setup?

Internal
1x 240GB OWC SSD in optical $630

4x 1TB WD Blacks RAID 10 for FCP, AE Media $340
Will this give me the same read speeds as RAID 0? In theory it should,
and if it does, I'm dandy.

These 1TB disks I got are 6Gb/s, and the NewerTech card is too, so will I see higher speeds using them in the enclosure than internally?

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130

2x 2TB WD Blacks RAID 0 Scratch $340
Maybe in RAID 1, if FCP and AE can't scratch faster than 113MB/s with the hexacore. Do you know?

Like you said before, I really don't need backup of scratch, because I can always output again.
These are 3Gb/s drives.

1x 1TB WD Black 64MB, pictures, music, TM, CCC of SSD $85
1x 1TB stock (I think it comes with a WD Black, but 34MB cache), CCC of pictures, music
It's OK to RAID these 2 together, right?

With the 4-disk RAID 0, won't the fact that the read/write speed is 452MB/s let me keep media in scratch and still have good R/W times? FCP likes to keep everything together by default, and I was once told, many years ago, to leave it like that. But maybe I've been wrong all along, because the throughput argument makes perfect sense. And I can always change the capture folder.
I need to be able to read like the wind because I also work with RED files.

What do you think of this now?

Thanks again
 
I know yours is better because of the dreaded "when" not "if" something happens, and I know it does and it hurts. But the Dark Side calls me! :eek:
Nooo Dark Side!
Using a stripe set as your primary array will come back and bite you in the butt. I can't stress this enough; don't do it if you value both your time and data.

Keep in mind that I already bought all this stuff because I'm on a deadline now that the Hex took this long, and I needed everything to arrive at the same time, since I'm driving from Mexico to Texas to spend the night, pick everything up, and drive back.
The situation is understandable, but finalize the setup before you order and have all the gear shipped. :eek: :p It's a lot less hassle that way (otherwise you have to deal with returns due to problems or an incorrect setup for your needs). ;)

OK how about this setup?

Internal
1x 240GB OWC SSD in optical $630

4x 1TB WD Blacks RAID 10 for FCP, AE Media $340
Will this give me the same read speeds as RAID 0? In theory it should,
and if it does, I'm dandy.
1. This is fine.
2. Yes, it has the same performance as a 2x disk stripe set (assuming comparisons are made with the same drive model numbers).

These 1TB disks I got are 6Gb/s, and the NewerTech card is too, so will I see higher speeds using them in the enclosure than internally?
No, as mechanical disks can't even saturate SATA 3.0Gb/s, let alone 6.0Gb/s. The newer SATA spec will allow SSDs to continue to increase their throughputs.

External
NewerTech MAXPower PCIe eSATA w/port multiplier $80
Sans Digital TR4M 4 Bay $130

2x 2TB WD Blacks RAID 0 Scratch $340
Maybe in RAID 1, if FCP and AE can't scratch faster than 113MB/s with the hexacore. Do you know?
You don't need to bother with RAID 1, as you don't need redundancy, just speed; and RAID 1 performance is that of a single disk in this case (software implementation). So make it a stripe set (a 2x stripe set does help vs. a single disk).

1x 1TB WD Black 64MB, pictures, music, TM, CCC of SSD $85
1x 1TB stock (I think it comes with a WD Black, but 34MB cache), CCC of pictures, music
It's OK to RAID these 2 together, right?
I would keep backups as either single disks or JBOD.

But what are you backing up the Primary Data with (level 10 array, or if you're insane, stripe set)?

With the 4-disk RAID 0, won't the fact that the read/write speed is 452MB/s let me keep media in scratch and still have good R/W times? FCP likes to keep everything together by default, and I was once told, many years ago, to leave it like that. But maybe I've been wrong all along, because the throughput argument makes perfect sense. And I can always change the capture folder.
As such a setup would be accessing the same array for both scratch and data simultaneously, divide by 2 to get an approximate (average) throughput for this usage.

Separating this keeps them from interfering with one another, and if the Primary Data is on a level 10, also provides redundancy the stripe set can't.
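The divide-by-2 point can be sketched quickly. Using the ~452MB/s 4-disk stripe figure quoted earlier in the thread (the `effective_per_stream` helper is a made-up name, and a real shared workload suffers extra seek contention on top of the even split):

```python
# Idealized throughput per workload when media and scratch share one array
# vs. living on dedicated arrays. Numbers are from earlier thread estimates.

def effective_per_stream(array_mb_s, streams):
    """Best-case even split of an array's bandwidth across concurrent streams."""
    return array_mb_s / streams

shared = effective_per_stream(452, 2)   # media + scratch on one 4-disk stripe
dedicated_primary = 226                 # 4-disk level 10, dedicated to media
dedicated_scratch = 226                 # 2-disk stripe set, dedicated to scratch
print(shared, dedicated_primary, dedicated_scratch)
```

The headline per-stream number comes out about the same either way, but the separated layout avoids head-thrash between the two workloads and the level 10 adds redundancy the shared stripe can't.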

I need to be able to read like the wind because I also work with RED files.
You'll be fine with the 10 for Primary Data, and a stripe set for scratch. This has been discussed in depth many times over.... :eek: Both arrays have sufficient throughput performance for the software you're using. ;)

Can better be accomplished?
Yes. But you're talking a proper RAID card (parity-based array/s) and more disks than the MP can handle (meaning external MiniSAS enclosure/s to accommodate all of it). Such a setup can hit $2k+ without a second thought. :eek: But if you need it, you need it (and what you're doing is the sort of usage where it makes sense, BTW).
 