Why is counting virtual cores at "full weighting" logical when they only deliver about 10-30% of the performance of a "real" core? In other words, 70-90% of the time a virtual core isn't pragmatically delivering any throughput. You are really just trying to rationalize the end goal of wailing about adding an additional full rank of slots.

If you count real cores, 6, then 6 * 2GB => 12GB. 3 x 4GB DIMMs put you right at the peak of the power curve the top-end single-processor package offers. Oops, that doesn't move the rant forward... so it can't be true. <cough> ....

If you add 1.8 "cores" (6 * .30 => 1.8), then 1.8 * 2GB => 3.6GB more... just stick another 4GB DIMM in the 4th slot. But again, 16GB can't possibly be the answer, since you can already do that.

The answer just has to be 24GB... the maximum that a 3680 can physically address. (Which could be done with 3 x 8GB, but folks are trying to wave off that configuration.)
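
For what it's worth, here is that sizing arithmetic spelled out; a minimal sketch, where the 2GB-per-core rule and the ~30% hyperthreading weighting are this thread's working assumptions, not anything from Adobe:

[CODE]
# Sketch of the RAM-sizing argument above. The 2GB-per-core rule and
# the ~30% hyperthreading weighting are this thread's assumptions.
REAL_CORES = 6
GB_PER_CORE = 2
HT_WEIGHT = 0.30  # a virtual core delivers roughly 10-30% of a real one

base = REAL_CORES * GB_PER_CORE                       # 12 GB
with_ht = REAL_CORES * (1 + HT_WEIGHT) * GB_PER_CORE  # 15.6 GB
print(base, round(with_ht, 1))  # 12GB fits 3x4GB; 15.6GB fits 4x4GB = 16GB
[/CODE]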

It is probably not absolutely necessary to have 6 slots to max out the 3680.

I would say 12 GB is a great start for most pro users with Photoshop and multimedia. My point was, if software WAS using all cores, like AE can be set to, you would want 2 GB of RAM per core.

Don't get mad at me, I was talking to actual senior programmers whose names appear when you launch CS5 Photoshop. I don't get paid enough to argue this with you :)

Another real-world photographer is diglloyd, who, for himself, needs a minimum of 24GB of RAM. He compares 8/16/32GB here.
http://macperformanceguide.com/OptimizingPhotoshop-TestResults.html

Once again, AE just gobbles up any RAM you can give it in direct proportion to how you set up each core to render.

The more cores you have, the more RAM you should have.

As far as virtual cores, they need something, but I have no idea as to the extent. But if another thread is running, it needs its own supply of RAM, pure and simple.

It is up to nVidia to offer cards. Why is it Apple's fault? Is Apple supposed to do all the graphics vendors' cards for them?

Even Adobe has said that OpenCL is a more natural fit with what they want to do long term (i.e., future versions of this software will tap into it as it is deployed and matures). Over a year ago was the wrong time to put OpenCL into the current Adobe product mix. Again, this doesn't mean Apple has permanently locked in something that isn't aligned with Adobe's long-term strategic direction. Short term there is a gap, but long term there is no big misalignment.

In fact, if Apple wants OpenCL to get off the ground, they won't push nVidia's proprietary CUDA solutions. In the longer term it is probably going to be an expensive hiccup for Adobe to unwind this CUDA-based stuff. I don't see that as particularly innovative. AMD/ATI is a better fit with the GPGPU goals of Apple and with the design criteria of the Mac Pro (higher performance/power ratio).

Um, nVidia makes the GTX 285 for Mac; Apple until now has always had an option for them. So yes, it is Apple's fault.

Now hopefully nVidia will make the GTX 295 for Mac and Apple will include that in the store. Doubtful, though.

I believe that the war between Apple and Adobe has crossed into GPU allegiance.

You may be right with Adobe's claim about OpenCL - but they are just 'talking the talk.'

Mercury engine has no support for ATI and Adobe has no timeline - if ever - for building support in.

From Apple's perspective, Premiere is a direct competitor to FCP, so why would they give their pro users an excuse to jump ship by providing MPs with highly accelerated cards that run circles around FCP?

I believe Apple will revise FCP by early next year and those ATI cards will work beautifully. It is just, well, I need a computer now to do video editing, and Premiere + nVidia has the performance.
 
Don't get mad at me, I was talking to actual senior programmers whose names appear when you launch CS5 Photoshop. I don't get paid enough to argue this with you :)

I'm not mad. I just simply stated that what you proposed is not based on logic, facts, or any pragmatic basis. If folks at Adobe told you to count cores, then count them. If they meant count fake cores, they probably would have explicitly said fake cores. Without an adjective, it means real ones.

Additionally, if you don't understand the underlying reasons for the multiplier (you only know because someone told you), then throwing in fake cores is just a willy-nilly act. There is no logic behind it, since you admittedly don't understand the underlying factors.



Another real-world photographer is diglloyd, who, for himself, needs a minimum of 24GB of RAM. He compares 8/16/32GB here.
http://macperformanceguide.com/OptimizingPhotoshop-TestResults.html

What? Can you read a graph?

In the medium-file graph there is very little difference between 16GB and 32GB. The lines basically overlap. The increases in performance primarily come from making the RAID stripe bigger (i.e., increasing disk bandwidth, not RAM bandwidth). Not sure how you deduce that you need (require) a minimum of 24GB from that. It is clear in the graph that 8GB is too small, but there is no indication that more than 16GB is buying you anything significant.


In the huge-file graph there is a gap, but again the primary performance driver is better disk bandwidth. Going from stripe 2 to 3 is a bigger step than the distance between the 16GB and 32GB lines. A RAM disk is nice, but "need" and nice are two different things.

At a later date, his entry for the Nehalem Pro states overtly:

"This is our new speed champ, and even though the test is still heavily dependent on disk speed, "

Again, the dominating factor is disk, not maximized RAM. Sinking double your budget into disks (perhaps going all SSD) versus doubling your budget on RAM is quite a clear choice. The disks get you a bigger bang for the buck. You can get a highly performant Mac Pro that "only" has 16GB of RAM if you put your money in the right place.

Oh, and I guess you missed this part in his analysis, where using only the real cores versus the fake ones results in a speed increase:

http://macperformanceguide.com/Reviews-MacProNehalem-MoreIsLess.html

I assume that Adobe CS5 isn't quite so dumb out of the box as to confuse the two.






Once again, AE just gobbles up any RAM you can give it in direct proportion to how you set up each core to render.

Overallocating the number of cores isn't going to get you better results. Blaming that on DIMM slots is pure misdirection.



The more cores you have, the more RAM you should have.

Real ones yes. Fake ones no.



As far as virtual cores, they need something, but I have no idea as to the extent.

Oh, now you have no idea, but previously logic dictated that they count 100%.


But if another thread is running, it needs its own supply of RAM, pure and simple.

The virtual threads can only run if there are "spare" cycles on the execution units and their data is available. If there are no spare cycles, you don't get any added benefit from the virtual cores. For example, if there are only two multiplier units and one thread is constantly using both of them, and the other "virtual" core wants to also use the multiplier units, then it is not going to run. You are not going to get any speed improvement. If the first thread stalls waiting for the floats to load, then perhaps the other can make progress. However, if you have two very tight loops doing exactly the same thing, and all the pipeline delays are filled with code... hyperthreading isn't going to buy you much.
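
If you want to see this effect for yourself, here is a minimal sketch (Python 3; the loop body and chunk sizes are made up for illustration, and it uses processes rather than threads so the interpreter lock doesn't get in the way). Time a tight multiply-bound loop at increasing worker counts; once you pass the number of real cores, the curve should mostly flatten:

[CODE]
# Rough scaling probe: fixed total work, split across N worker processes.
# If wall time stops improving past the physical core count, the
# "virtual" cores aren't adding throughput for this tight, identical loop.
import time
from multiprocessing import Pool

def spin(n):
    # Tight multiply loop that keeps the multiplier units busy.
    x = 1.0000001
    for _ in range(n):
        x *= 1.0000001
    return x

if __name__ == "__main__":
    work = [2_000_000] * 48           # 48 equal chunks of work
    for workers in (1, 2, 4, 6, 12):  # e.g., 6 real cores, 12 logical
        start = time.time()
        with Pool(workers) as pool:
            pool.map(spin, work)
        print(workers, "workers:", round(time.time() - start, 2), "sec")
[/CODE]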




You may be right with Adobe's claim about OpenCL - but they are just 'talking the talk.'

Mercury engine has no support for ATI

This blog entry clearly lays out why. It boils down to timing and the maturity of OpenCL.

"Question: "Why didn't you use OpenCL then?"

Answer: OpenCL wasn't finished or ratified when this decision was made. Given a choice between doing it with CUDA or not doing it for a while because of OpenCL, we chose the former. "
http://blogs.adobe.com/genesisproject/2009/11/technology_sneek_peek_adobe_me.html



and Adobe has no timeline - if ever - for building support in.

The "if ever" is editorial comment by you, not Adobe.
In the same blog.

"Question: "Will you support OpenCL in the future?"

Answer: Clearly this is an answer for someone higher up to answer, but my hunch is that we'll certainly look at it in the future as it aligns with our goals of being open and non-propietary...... "

In another blog:
"Obviously we want Adobe apps to run as well as possible regardless of your configuration. Just as they used to optimize for both PowerPC and Intel/AMD chips, Adobe engineers continue to work closely with multiple manufacturers (Intel, AMD, NVIDIA, and others) to wring the most out of their hardware. Again, this is where standardization will help, but it does take time."
http://blogs.adobe.com/jnack/2009/11/adobe_sneak_peek_major_gpu_acceleration_fo.html


Historically Adobe has made efforts to be configuration neutral and has used multi-platform frameworks to help ease porting across platforms. Do they have a publicly available timeline for OpenCL? No. However, would it be uncharacteristic for them to leverage the technology? Again, no.


From Apple's perspective, Premiere is a direct competitor to FCP, so why would they give their pro users an excuse to jump ship by providing MPs with highly accelerated cards that run circles around FCP?

Because it means a profitable MP sale versus the customer walking off and buying a Windows PC. Apple is out to make money. Sure, they could make slightly more money by selling an MP and FCP, but the MP profit is better than zero.

Besides, more than a few folks aren't going to move, because they are de facto locked into the software. The real solution to the problem is to ship a version of FCP that isn't a multicore dog in some situations. There is always going to be something competitive with FCP that is either faster or more expensive. That isn't the core market they are driving at with FCP.


It is just, well, I need a computer now to do video editing, and Premiere + nVidia has the performance.

So again, this is really a snapshot-in-time view, driven by the DIMM slot count / nVidia card ship dates being an issue right now, versus whether it really is one over the whole lifecycle of a 2010 box.
 
I'm not mad. I just simply stated that what you proposed is not based on logic, facts, or any pragmatic basis. If folks at Adobe told you to count cores, then count them. If they meant count fake cores, they probably would have explicitly said fake cores. Without an adjective, it means real ones.

I agree with some of what you have to say... especially when it comes to photo editing, where most often, each thread is running the same instructions on different parts of the image. In that case, hyperthreading doesn't buy you as much.

I also agree that the sweet spot on that chart looked to be towards the center with just 16GB. I would suggest just getting an inexpensive 40GB SSD and using it as your scratch disk for Photoshop.

If someone were doing video editing, I might argue that 24 cores can be a real benefit. In this day and age, you can spend most of your processing cycles just decoding - especially when you talk about multiple layers of video. Some of the newer codec types are very CPU intensive, taking advantage of the various processing units in the core. When you can interleave the instructions to the various units, that's when you see the biggest performance improvement with hyperthreading.

Other than video editing or 3D animation rendering, I am hard-pressed to find a need for more than the dual-quads or the single-hex. You might consider a dual-hex if you were doing scientific computing, but if you were that serious, you should look at OpenCL, instead.
 
I would suggest just getting an inexpensive 40GB SSD and using it as your scratch disk for Photoshop.

Don't do this.

Boot drive? Sure. Scratch disk? Bad idea. Especially not a tiny SSD. The scratch disk is constantly being written to, over and over. That will quickly degrade the performance of the drive and waste your money. :(
 
You guys just thank your lucky stars that they ditched FB-DIMMs!

My 2006 is still ticking along though, DEFINITELY not obsolete, though I am starting to get core envy! Contemplating upgrading to a couple of 5355s in the near future.

But I am a humble graphic designer, not an HDV editor, so by the time Illustrator or InDesign go multicore, please wake up my head, which will probably be in cold storage by then (hint: it's gonna be a while, in case it wasn't obvious).

Does anyone know if Dreamweaver or Coda will be multi-threaded in the near future? :D
 
Especially not a tiny SSD. The scratch disk is constantly being written to, over and over.

Tiny was not the problematic adjective; inexpensive was.

There are SSD drives now with decent garbage collection approaches that will work on a Mac Pro. (They do it internally, so Mac OS X doesn't have to do anything special.)

(Since there has been a string of links back to this site, might as well go again...) http://macperformanceguide.com/Reviews-SSD-OWC-Mercury_Extreme.html

Two 50GB RE versions here in a RAID 0 config would not be as subject to the problems you are outlining. Those versions have a significant amount (28%) of storage set aside for problems like wear reduction, and they have decent garbage collection. It is not the cheapest speed, but reliability is what you should be looking for. They are about $209 each, so two would be about $418. If you put an 'extra' 12-16GB of RAM into your Mac Pro, that would cost far more (at OWC, $500-700 with 4GB DIMMs, more if using 8s).

Replace them every 2-3 years, as you would high-usage hard drives, and you will probably be happy with the results.

Having two SSDs in RAID 0 is better than 4 large HDs where you are only really leveraging a small subset of the hard disk capacity (i.e., using the outer tracks on a higher number of disks). It saves lots of space and energy to just use the SSDs. That decreases the need to jam SSDs and drives into the optical drive slots, because you only have a 2-drive RAID for scratch. That leaves two drive sleds for boot and whatever else you want internal, without going unconventional.
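
Spelling out the dollar math above (a sketch using this post's circa-2010 OWC prices, which will obviously drift):

[CODE]
# Scratch-space budget: two RE-class 50GB SSDs vs. 'extra' RAM.
# Prices are the figures quoted above, not current ones.
ssd_pair = 2 * 209      # ~$418 for a 100GB RAID 0 scratch pair
extra_ram = (500, 700)  # 12-16GB more RAM via 4GB DIMMs at OWC
print("SSD scratch pair: $", ssd_pair)
print("Extra RAM range : $", extra_ram[0], "-", extra_ram[1])
[/CODE]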
 
Tiny was not the problematic adjective; inexpensive was.

There are SSD drives now with decent garbage collection approaches that will work on a Mac Pro. (They do it internally, so Mac OS X doesn't have to do anything special.)

(Since there has been a string of links back to this site, might as well go again...) http://macperformanceguide.com/Reviews-SSD-OWC-Mercury_Extreme.html

Two 50GB RE versions here in a RAID 0 config would not be as subject to the problems you are outlining. Those versions have a significant amount (28%) of storage set aside for problems like wear reduction, and they have decent garbage collection. It is not the cheapest speed, but reliability is what you should be looking for. They are about $209 each, so two would be about $418. If you put an 'extra' 12-16GB of RAM into your Mac Pro, that would cost far more (at OWC, $500-700 with 4GB DIMMs, more if using 8s).

Replace them every 2-3 years, as you would high-usage hard drives, and you will probably be happy with the results.

Having two SSDs in RAID 0 is better than 4 large HDs where you are only really leveraging a small subset of the hard disk capacity (i.e., using the outer tracks on a higher number of disks). It saves lots of space and energy to just use the SSDs. That decreases the need to jam SSDs and drives into the optical drive slots, because you only have a 2-drive RAID for scratch. That leaves two drive sleds for boot and whatever else you want internal, without going unconventional.

Great post. I completely agree. Even if the SSDs used for scratch become dead in a couple of years, viewing them as a disposable resource in exchange for orders-of-magnitude better random read performance (which is vital every time you hit undo or step back in history) may be worth it for some.
 
Even if the SSDs used for scratch become dead in a couple of years, viewing them as a disposable resource in exchange for orders-of-magnitude better random read performance (which is vital every time you hit undo or step back in history) may be worth it for some.

Disks die too if you constantly beat on them hard with seek requests. :) So it isn't really a new factor.

Forgot to mention that large photo (and compressed) data files will help shorten the SSDs' lifetime, though. So "tiny" can become a factor. Likewise if you have Photoshop or another program set up to keep large scratch files (so a longer history/undo leads to scratch growth). In the example, a 50GB pair would be good for a sub-50GB scratch space. If the scratch space you need to cover gets quite large, then disks are better, because $/GB is going to start to matter more. Folks who need > 100GB of scratch space are much closer to flipping over to using disks. At least until the next flash $/GB drop.


Compressed data (JPEG, etc.) presents a problem because many of the SSD controllers cheat the wear limit by compressing data before storing it. So if they can get a 50% compression rate, they can cut the number of writes in half.
You can compensate for that by allowing for more free space above your high-water mark on your SSD setup.

There is a variant of MLC flash now called "enterprise grade" MLC.

http://www.channelregister.co.uk/2009/10/19/microns_34nm_nand/

300,000 erase/write cycles allows you to do 300 erases of a single cell every day for about 2.7 years. Presuming you take weekends off from beating on the scratch disk, that stretches things out further. ;)
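
A worked check on that arithmetic (a sketch; the 300-erases-per-day rate is just the example above, and the compression effect is the SandForce-style behavior described earlier):

[CODE]
# Lifetime estimate for one cell at a steady erase rate.
cycles = 300_000  # rated erase/write cycles, enterprise-grade MLC
erases_per_day = 300
days = cycles / erases_per_day
print(days, "days ~", round(days / 365, 1), "years")  # 1000 days ~ 2.7 years

# Controller compression stretches this: if data shrinks to 50% before
# hitting flash, writes are halved. JPEGs barely compress, so they get
# little or no such benefit.
for ratio in (0.5, 1.0):
    print("compress to", ratio, "->", round(days / ratio / 365, 1), "years")
[/CODE]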
 
Compressed data (JPEG, etc.) presents a problem because many of the SSD controllers cheat the wear limit by compressing data before storing it. So if they can get a 50% compression rate, they can cut the number of writes in half.
You can compensate for that by allowing for more free space above your high-water mark on your SSD setup.

Right... I had read that recently... is it the OCZ drives that are using that technique? Is it a controller-based feature or a vendor-based implementation? Do we know which popular drives are using this technique?
 
What? Can you read a graph?

In the medium-file graph there is very little difference between 16GB and 32GB. The lines basically overlap. The increases in performance primarily come from making the RAID stripe bigger (i.e., increasing disk bandwidth, not RAM bandwidth). Not sure how you deduce that you need (require) a minimum of 24GB from that. It is clear in the graph that 8GB is too small, but there is no indication that more than 16GB is buying you anything significant.


In the huge-file graph there is a gap, but again the primary performance driver is better disk bandwidth. Going from stripe 2 to 3 is a bigger step than the distance between the 16GB and 32GB lines. A RAM disk is nice, but "need" and nice are two different things.

At a later date, his entry for the Nehalem Pro states overtly:

"This is our new speed champ, and even though the test is still heavily dependent on disk speed, "

Again, the dominating factor is disk, not maximized RAM. Sinking double your budget into disks (perhaps going all SSD) versus doubling your budget on RAM is quite a clear choice. The disks get you a bigger bang for the buck. You can get a highly performant Mac Pro that "only" has 16GB of RAM if you put your money in the right place.

Oh, and I guess you missed this part in his analysis, where using only the real cores versus the fake ones results in a speed increase:

http://macperformanceguide.com/Reviews-MacProNehalem-MoreIsLess.html

Dude, you must be single with the way you argue by demeaning other people's points. Yes, I can read graphs, thanks.

About the RAM: ya, of course there's not much difference between 16/32, BUT for AE there would be. And AE allocates RAM to virtual and real cores. And part of my entire argument was that Photoshop is NOT efficient with cores. AND that is what the programmers said too. It is simply not possible to split up tasks between cores as easily as it is for rendering.

Also, see what he said here (not sure if CS5 fixes this):

Photoshop CS4 blindly allocates 3 “threads” per CPU core. For a 16-core machine (dual CPU), this means that it’s allocating 48 threads, vs 24 threads for an 8-core machine (single CPU). Each of these threads requires memory of its own. That is our working theory at least.

Again, After Effects totally uses memory for virtual cores.

Sure, Adobe may use OpenCL; I shouldn't say never. True. However, how long did it take for them to use Cocoa?

One of Jobs's points when arguing against Adobe in the Flash issue was that Adobe was slow to adopt - sometimes never adopting - Apple's technology. We are simply seeing this occur again.

I am connected to Adobe for life probably because of graphic design but Apple wins my heart for video editing.

However, I'm jumping ship because Apple has dropped the ball with FCP for the time being. I will return when FCP gets revamped.
 
I would say 12 GB is a great start for most pro users with Photoshop and multimedia.

Yes. Most, if not practically everybody, outside the realms of some really, really, really, really niche segments. Like... uuh... can't think of many, if any.


Another real-world photographer is diglloyd, who, for himself, needs a minimum of 24GB of RAM. He compares 8/16/32GB here.


No.

He doesn't need anything. As cool (and many times extremely useful) as the tests he does are, they have absolutely nothing to do with real-world photography, if they are even remotely connected to the printing medium.

You can shove a 16-bit Hasselblad 50 file into your computer and start to work on it.
But the sad truth is that at present, the difference whether you are working with an '06 MP, an '09 iMac, or a '10 MP is marginal.

If you work for print (unless it is a 100+ layer sufferfest, after which we can all wonder if your workflow is OK...), let alone electronic media, you will finish your job in about the same time. There is very little difference between the platforms you are working with.

Thanks to limitations of the programs themselves.
 
Right... I had read that recently... is it the OCZ drives that are using that technique? Is it a controller-based feature or a vendor-based implementation? Do we know which popular drives are using this technique?

Minimally, the SandForce controllers are doing it (this page and the next):

http://www.anandtech.com/show/3690/...andforce-more-capacity-at-no-performance-loss

I'm not 100% sure about some of the others, because some of them are more vague about what they are doing. This article says SandForce has patented part of the process.

http://www.theregister.co.uk/2009/04/13/sandforce_launches/

Although, a bit of a chuckle about patenting compressing data on the way to storage in a generic sense, since mainframes were doing that 15-20 years ago. The patent office takes just about everything these days, or perhaps better put, the extremely narrowly differentiated. It cuts both ways, I guess, in that you only need another slight differentiation to get around it.

I suspect the others are not, because you have to throw silicon at hardware-based, real-time stream compression logic to make it work. The compression is going to have limitations, since it is fixed. The "clever" parts, where SandForce looks for all 1s or all 0s and does a no-op, won't happen much in JPEGs.

In short, if you see someone who says their write lifetime is hugely different from others using the same kind of flash... take that with a huge grain of salt if you are looking to use it as a scratch drive. Either the flash cells are way different/better or the over-provisioning needs to be radically different.
 
After Effects CS5 needs the right combo of Memory and Cores to go fast.

When you enable multiprocessing, you need to specify how much memory per core. Once you do, AE tells you how many cores it will use to render.

Check out these results for a 12-core Westmere with 24GB of RAM rendering a sample project created from Total Training:
0.75GB per core x 24 cores = 126 sec
1GB x 17 = 34 sec
1.5GB x 11 = 32 sec
2GB x 8 = 34 sec

So in this case 11 cores x 1.5GB each is the sweet spot. Can't wait to try this test on a 6-core Westmere -- though I will need three 8GB memory sticks to reach 24GB -- since it only has 4 memory slots.
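
Out of curiosity, those numbers fit a simple model; a sketch, where the ~7GB reserve for the OS/other apps is inferred from the 1GB/1.5GB/2GB rows (it doesn't quite explain the 0.75GB row) and is my guess, not anything from Adobe's docs:

[CODE]
# Hypothetical model of how AE CS5 picks its render-core count:
# keep a reserve for the OS/other apps, divide the rest by the
# per-core allocation, cap at the logical core count.
def ae_cores(total_gb, per_core_gb, logical_cores, reserve_gb=7):
    usable = total_gb - reserve_gb
    return min(logical_cores, int(usable // per_core_gb))

for per_core in (1.0, 1.5, 2.0):
    print(per_core, "GB/core ->", ae_cores(24, per_core, 24), "cores")
# 1.0 GB/core -> 17 cores
# 1.5 GB/core -> 11 cores
# 2.0 GB/core -> 8 cores
[/CODE]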

I'm really looking forward to you testing the 6-core with 24GB RAM now that the 8GB sticks have been confirmed to work. I'm having a hard time deciding between the 6-core and the 12-core. If the 6-core beats the 12-core when they both have 24GB RAM, I'll get the 6-core.

Does anyone know if C4D can see hyperthreaded cores like AE can? That's also something that I need to take into consideration since the two applications I use the most are AE and C4D.
 
Does anyone know if C4D can see hyperthreaded cores like AE can? That's also something that I need to take into consideration since the two applications I use the most are AE and C4D.

Applications are going to see the hyperthreaded cores as real cores, so you will be able to use them as such. I've used C4D and AE on Intel processors since they reintroduced hyperthreading.
 
No.

He doesn't need anything.

Ya, you are right, I suppose. I was using my dual 2.0 G5 with 3.5 GB RAM for doing:
3D animation (I'm not good or anything, but I was rendering stuff in Strata)
CS4 suite
Final Cut Pro

I was batching hundreds of 5D Mark II 21 MP RAW images and also converting the 5D's H.264 HD footage.

It did fine. My PC quad work computer running Windows 7 with 4 GB RAM fared much worse than my G5 for the footage. Slightly faster in RAW conversion.

A 7.5-year-old G5 compared to a 2-year-old PC quad.

The main thing is, I got a lot of work done and wasn't annoyed too much. FCP just always needed to render, but that's why I'm gonna try CS5.
 
Yes. Most, if not practically everybody, outside the realms of some really, really, really, really niche segments. Like... uuh... can't think of many, if any.





No.

He doesn't need anything. As cool (and many times extremely useful) as the tests he does are, they have absolutely nothing to do with real-world photography, if they are even remotely connected to the printing medium.

You can shove a 16-bit Hasselblad 50 file into your computer and start to work on it.
But the sad truth is that at present, the difference whether you are working with an '06 MP, an '09 iMac, or a '10 MP is marginal.

If you work for print (unless it is a 100+ layer sufferfest, after which we can all wonder if your workflow is OK...), let alone electronic media, you will finish your job in about the same time. There is very little difference between the platforms you are working with.

Thanks to limitations of the programs themselves.

Maybe I'm not 100% clear on the points you're making (confused, actually), but I'm a Nikon D90 shooter and I can't wait to get my Hex with 16 GB memory. At home, I've got a Core Duo iMac and that thing chokes in Lightroom and Aperture. Chokes. I don't even bother anymore, which pi$$3s me off b/c I love photography. Lightroom and Aperture (I was trying both at one point) have left my old iMac in the dust.

For Final Cut Pro and Color (which I use every day and have been for years), I'd recommend *no less* than 12 GB RAM if you're doing real work. Any less than that, start bouncing between Color, FCP, and MPEG Streamclip and watch the wheel spin as your RAM gets eaten up.

In other words, I'm getting 16GB RAM for my HEX and yes I'm going to need it. I might even go up to 24GB if I can scrape up a few more bucks.
 
Ya, you are right, I suppose. I was using my dual 2.0 G5 with 3.5 GB RAM for doing:
3D animation (I'm not good or anything, but I was rendering stuff in Strata)
CS4 suite
Final Cut Pro

I was batching hundreds of 5D Mark II 21 MP RAW images and also converting the 5D's H.264 HD footage.

It did fine. My PC quad work computer running Windows 7 with 4 GB RAM fared much worse than my G5 for the footage. Slightly faster in RAW conversion.

A 7.5-year-old G5 compared to a 2-year-old PC quad.

The main thing is, I got a lot of work done and wasn't annoyed too much. FCP just always needed to render, but that's why I'm gonna try CS5.

I don't think rendering and batch processing are anywhere comparable to real workflows in video or photography. For example, I'm doing complicated cutting, keying, and Color roundtripping in Final Cut Studio. The 2008 Mac Pro I work on really starts to bog down after a few hours of this work and I need to reboot frequently. I wouldn't want to try that on a G5.

I understand what you're saying, though. Just making a point about ways you can really push a system to its limits, despite what some will tell you about who needs what in order to use certain programs. They're not seeing the big picture. Kind of like photographers who take pictures of brick walls and proclaim themselves experts on cameras and photography.
 
Maybe I'm not 100% clear on the points you're making (confused, actually), but I'm a Nikon D90 shooter and I can't wait to get my Hex with 16 GB memory. At home, I've got a Core Duo iMac and that thing chokes in Lightroom and Aperture. Chokes. I don't even bother anymore, which pi$$3s me off b/c I love photography. Lightroom and Aperture (I was trying both at one point) have left my old iMac in the dust.

For Final Cut Pro and Color (which I use every day and have been for years), I'd recommend *no less* than 12 GB RAM if you're doing real work. Any less than that, start bouncing between Color, FCP, and MPEG Streamclip and watch the wheel spin as your RAM gets eaten up.

In other words, I'm getting 16GB RAM for my HEX and yes I'm going to need it. I might even go up to 24GB if I can scrape up a few more bucks.

Let's try again then.

At present, with the current iterations of FCS+LR+PS, speed differences between '06 MPs, '09 iMacs, and '10 MPs are marginal in everyday video & photo production. OK?
Small. Almost non-existent.

Why?

PS still sucks at multithreading. Most of the normal actions still utilize 1-2 cores; very few scale up to 4+ cores. Fast clock speeds count here, which is where the 4x3.00 MP and 4x3.33 iMac hold their ground against '10 MPs.

LR scales a bit better, but is very poorly threaded as well. It can utilize 4 cores (while peaking from time to time to 6), but in general, meh.
The program just doesn't scale beyond 4 cores properly.
So again, it doesn't matter if you use an older MP/iMac.

FCS. Hmm. How to put this?
It is even WORSE than the previous programs.
FCP uses at max 2 threads; Shake and Color the same; Motion a bit more(?).
That's it.
You can tune Compressor to allocate all available cores when exporting, but the program is so damn buggy that if you manage to export something out of it, pfffew.
AND FCS is memory-wise capped to 4GB, so the "minimum 12GB!!" is moot as well.



So in short: it is in the programs where you have your underlying problems, and no amount of throwing your cash at some new, fancy, blingbling 2010 Mac Pro will change that.
That, and you might have some constraints with your iMac's I/O bandwidth if you are solely operating your Lightroom library off your internal drive. Which would be... dumb.
Hell, LR's speed difference between a MacBook & an 8-core Nehalem MP is about 1.5-2.5x...
Get fast external disks (FW800, RAID box) for $300 and you would be sorted until the developers wake up and get us properly done programs.
Until then, save your money.
 
Changing my order from 12-core 2.66 to 6-core 3.33

[post moved to a new thread - apologies]
 
Although I presume that the 24 threads would then dominate again with 48GB of RAM, at this point I think I can honestly say I don't see myself dropping the $3000 it would cost for that much RAM.

Just have to pray that the 6-core Mac Pro can accept 8GB memory modules in the future.

A. OWC has already confirmed that 8GB modules work fine in the 4/6-core machines.

B. Why do you keep saying it would cost $3000?? The only package that even comes close to that at OWC is 64GB, while 48GB costs $2160, and you don't even need that much.

C. While you may be up and coming in terms of Motion, until you are SOOO busy that the seconds matter, or you're billing out for all your time or projects, you do not "need" any of these new machines and the cost associated with them. Spending over $4000 in your case would be silly.

Everyone on here endlessly discussing how many seconds faster this or that machine is has apparently got more time than sense, unless they are clearing over $100k/yr in work that requires the machine to begin with.

Either you are super busy really working, and then you can afford whatever you like, or you are not so busy and you have the time to deal with a slower machine. Personally, I am very busy, but I also don't like blowing money that could instead keep me in the latest top Canon model, other new gear, a new car, etc.

What's more important: a few seconds faster, even minutes, or no problem having the latest lenses, lighting, misc., and the budget to stay on site extra days, drive dependable vehicles, and have enough cards to not "need" to download any of them for a week or two?

Only last year did I opt to replace my 2003-era dual 2.0 G5. I did, however, several years ago get a lot more out of it by wiping it clean and starting over with a 15K OS/boot drive plus a 10K work drive, moving all archives outside the machine, and only running up to CS3 on it, because the newer versions aren't designed for the old PPC.

Only 18 months ago, running into 1GB+ PSD files became a big issue on the G5. This is due in part to the larger 1Ds Mark III and 5D Mark II files in 16-bit. However, I use the MBP for all RAW processing, further allowing the G5 to just do its CS3 thing. Our play camera is the fun little 7D, so I don't risk dropping the work camera into the ocean or off a cliff. The 5D Mark II is my own fun-stuff camera and a backup for work.

I have an 8-core on one coast that is the primary machine when there, and a 30" on each coast for each work environment. Now it looks like a 3.33 6-core makes the most sense to replace the dual 2.0 G5. If the 12 were also 3.33, then I might go for that just because, but I also know that in three years, something for $4000 will be better, and a few thousand now, earning interest, will be a smarter use of the $ long term.

I think it's more important to spread your budget than to dump it all into one machine. Instead, a powerful, up-to-date MacBook Pro is money well spent in case you need a backup or the ability to work on the road.

Anyway, good luck, but keep your desires in line with realistic needs. I know it's tough.
 
Does anyone know what's the better pick between the 2.93GHz 12-core and the 3.33GHz hexacore? If one would use Photoshop, which uses mostly just one core, would the 2.93 12-core still be faster?

According to cpubenchmark.net http://cpubenchmark.net/high_end_cpus.html
the 2.93 X5670 is about 1000 points faster than the 3.33 W3680; would this mean the 12-core is still faster than the 3.33 hexacore in single-threaded apps also?
 
No.

You need complicated, multi-threaded apps to really leverage the 12 cores.

By the sounds of it, 12 cores would be a waste of money for you (unless you really need the RAM slots).
 
No.

You need complicated, multi-threaded apps to really leverage the 12 cores.

No, you don't. You can also have 12 single-threaded apps running at the same time. Or you can have 1 app in the foreground and 1-2 batch jobs going at the same time on a workstation. This happens all the time on servers, because there are multiple users. Each one of those users is trying to do something different. Ta-da! Lots of concurrent workload.

Yes, there are big problems if the primary workstation usage is one application at a time, in serial.

What happens is that folks don't want to run Compressor in the background while they edit, or a batch encoding job while they compose something.
Usually because they run into either memory or disk bandwidth problems. That is all the more likely if you have spent a huge fraction of your budget buying max GHz and gone cheap on the disk I/O.
 
Does anyone know what's the better pick between the 2.93GHz 12-core and the 3.33GHz hexacore? If one would use Photoshop, which uses mostly just one core, would the 2.93 12-core still be faster?

There is a giant price gap between the two. Large enough to max out the RAM in the hex at Apple RAM prices and put in a much faster disk system, and still have money left over. (A hex with Apple 4x4GB and 4x1TB CTO is cheaper than the 2.93 with just the 6GB and 1x1TB standard config.)



According to cpubenchmark.net http://cpubenchmark.net/high_end_cpus.html
the 2.93 X5670 is about 1000 points faster than the 3.33 W3680

It's kind of hard to quickly find the configs that were used for benchmarking, but I suspect this is somewhat an apples-to-oranges comparison.
The memory and disk configs probably vary across the samples, and it isn't really illustrative of CPU-only differences.



, would this mean the 12-core is still faster than the 3.33 hexacore in single-threaded apps also?

It can be, if left at the standard out-of-the-box configs. There are lots of times where having 6GB of RAM is better than just 3GB. However, with the extra money you can fix that.

A more sensible comparison would be the 2.66 12-core vs. the 3.33 hex.
With the lower price gap, you are not going to be able to pack quite as much RAM into the hex, so the 2.66 will be able to overtake it once you get up over the 16-24GB range. At 32GB and over, it is over.


However, the real comparison, if you look at total overall system costs, is more so the 2.4 8-core versus the hex. Since the 2.4 starts out lower, you can throw more memory and disk (or GPU) at it for the same total system price. If your workload requires more than 3GB to work well, then the 2.4 works out better. If it is 12GB, then a 2.4 not hitting a slow disk is faster than a 3.33 that is hitting the disk much harder.


In short, both the 12 x 2.93 and 6 x 3.33 models are priced out of the $/performance market and squarely in the max-spend bucket.

If you seriously want to optimize the box for running single-threaded apps and get more value for the money, get the quad 3.2GHz model and drop the difference between it and the hex on more RAM and better disks. If you only want one core... buy fewer of them.
 
Thanks for the reply, but I think I might have been unclear. I am not interested in the price difference; I am wondering which CPU is fastest in single-threaded apps like Photoshop: the 2.93 12-core or the 3.33 6-core?

If the price were exactly the same (if both CPUs were one dollar each), which CPU would be fastest in Photoshop doing single-threaded stuff? Is it as simple as the GHz? I suspect not.


There is a giant price gap between the two. Large enough to max out the RAM in the hex at Apple RAM prices and put in a much faster disk system, and still have money left over. (A hex with Apple 4x4GB and 4x1TB CTO is cheaper than the 2.93 with just the 6GB and 1x1TB standard config.)





It's kind of hard to quickly find the configs that were used for benchmarking, but I suspect this is somewhat an apples-to-oranges comparison.
The memory and disk configs probably vary across the samples, and it isn't really illustrative of CPU-only differences.





It can be, if left at the standard out-of-the-box configs. There are lots of times where having 6GB of RAM is better than just 3GB. However, with the extra money you can fix that.

A more sensible comparison would be the 2.66 12-core vs. the 3.33 hex.
With the lower price gap, you are not going to be able to pack quite as much RAM into the hex, so the 2.66 will be able to overtake it once you get up over the 16-24GB range. At 32GB and over, it is over.


However, the real comparison, if you look at total overall system costs, is more so the 2.4 8-core versus the hex. Since the 2.4 starts out lower, you can throw more memory and disk (or GPU) at it for the same total system price. If your workload requires more than 3GB to work well, then the 2.4 works out better. If it is 12GB, then a 2.4 not hitting a slow disk is faster than a 3.33 that is hitting the disk much harder.


In short, both the 12 x 2.93 and 6 x 3.33 models are priced out of the $/performance market and squarely in the max-spend bucket.

If you seriously want to optimize the box for running single-threaded apps and get more value for the money, get the quad 3.2GHz model and drop the difference between it and the hex on more RAM and better disks. If you only want one core... buy fewer of them.
 