This picture seems to suggest otherwise, although I don't know much and I could be wrong.
edit: this is what I was referring to:
The bigger chip, nearest to the Radeon card, is the ESB2 (Enterprise SouthBridge), which is actually the old ICH7 chip.
The 2009 SP Mac Pro has 4 RAM slots instead of the 8 slots of previous Mac Pros and the 2009 DP Mac Pro.

Is this pretty much guaranteed? That we can put 4GB or 8GB DIMMs in these slots in the near future? That would certainly make this RAM restriction a non-issue. I don't foresee needing more than 8GB for the next two years or so.
Hi,
I am thinking about buying either the new 2.26GHz or 2.66GHz Mac Pro with 6GB of RAM (maybe 8). Would these be better for apps such as Digital Performer 6, East West, Vienna Symphonic Instruments, Native Instruments, Finale, Pro Tools and other intensive music apps than last year's Mac Pros? I will also use Adobe Creative Suite CS4.
My understanding is that the quad-core supports 24GB of RAM to keep the triple-channel memory working, and that taking it to 32GB would degrade performance, as it did when adding a third module to a dual-channel setup.
For the limit on the quad-core, change that to 6GB triple-channel with 2GB sticks or 12GB triple-channel with 4GB sticks. I see the only reason these 4GB sticks are not offered is that they're so expensive you'd be better off buying the octo-core Mac Pro and filling it with 12GB of RAM in 2GB modules.
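To put those numbers side by side, here's a quick sketch (the module sizes are just the ones being discussed in this thread, and the assumption that filling the fourth slot drops you out of triple-channel mode is the claim above, not something I've measured):

```python
# Quick sketch of the quad-core (4-slot) configurations discussed above.
# Assumes matched DDR3 sticks and that only three matched sticks run in
# triple-channel mode -- the fourth slot is the trade-off in question.
for stick_gb in (2, 4):
    triple = 3 * stick_gb   # three sticks, triple-channel
    full = 4 * stick_gb     # all four slots filled
    print(f"{stick_gb}GB sticks: {triple}GB triple-channel, {full}GB with all slots filled")
```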
Is it safe to assume that the 8-core 2.26GHz is actually SLOWER in both multi-threaded and single-threaded applications when compared with the previous-generation 2.8GHz 8-core?
I am trying to figure things out here, and based on the graph alone, that is my assumption.
Please correct me if I am wrong.
It is entirely possible if you trace it down to the root cause of the scalability problem. In fact it makes a lot of sense. The 2.26 is a more balanced machine; in other words, its core speed is a closer match to its memory speed than the 2.93's is.
Typically, in multiprocessor boxes you don't see linear improvements when you can't efficiently parallelize the workload and/or keep the workload fed. The older-tech Xeons get worse speedups because the cores take hits trying to get through the memory bottleneck of the bus shared by all 8 cores. The 2.93 has more of a memory bottleneck problem than a 2.26 processor will if they share exactly the same memory speeds. More bottleneck problems lead to a lower-than-ideal speedup.
The counterintuitive result would be for the CPU speed to go up (2.93, 3.0, 4.0, etc.) while keeping the memory system exactly the same, and for the speedup number not to go down at some point (or at least flatten). Likewise, keeping the memory system constant while adding 2x, 3x, 4x the number of cores sharing it, and the single-vs-parallel numbers not going down.
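To make that concrete, here's a toy model (my own simplification, not a benchmark; the 12.8 GB/s shared-bus figure and the one-byte-per-cycle demand are just assumed numbers) showing how the speedup over a single core shrinks as the clock rises while the memory system stays fixed:

```python
# Toy model: 8 cores sharing one memory system. Each core would consume
# roughly bytes_per_cycle * clock if never starved; the shared bus caps
# what they actually get. Speedup is multi-core throughput over one core.
def speedup(cores, core_ghz, mem_bw_gbs, bytes_per_cycle=1.0):
    per_core = core_ghz * 1e9 * bytes_per_cycle   # one core's demand, bytes/s
    supply = mem_bw_gbs * 1e9                     # shared memory bandwidth, bytes/s
    single = min(per_core, supply)
    multi = min(cores * per_core, supply)
    return multi / single

# Keep the memory system constant and raise the clock: the speedup drops.
for ghz in (2.26, 2.93, 4.0):
    print(f"{ghz} GHz x 8 cores: {speedup(8, ghz, mem_bw_gbs=12.8):.1f}x over one core")
```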
It depends on the exact application. For some, the single 2.26 *WILL* be faster. For most, it will be the dual 2.8. (For example, with a properly-compiled H.264 encoder, the single 2.26 will be faster, because the new processor includes new instructions that make video encoding faster.)
Then we will see Snow Leopard, which will focus on using multiple processors (aka Grand Central). And gaming is slowly going toward multiple processors (the PS3 and Xbox 360 are the best examples).
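Grand Central Dispatch itself is a C-level API, so just to illustrate the general idea of fanning independent work out across whatever cores you have, here's a rough Python sketch (the crunch function is a made-up stand-in, and this is not the GCD API):

```python
# Illustration only -- not the Grand Central Dispatch API, just the general
# pattern of spreading independent chunks of work across all available cores.
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # made-up stand-in for some CPU-heavy per-chunk work
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    chunks = [range(n, n + 1_000_000) for n in range(0, 8_000_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        results = list(pool.map(crunch, chunks))
    print(sum(results))
```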
Multithreading helps games more and more, but Hyper-Threading only hurts game performance.
The PS3 and Xbox are not great examples, and ignoring the technicalities, who wants 'specialist' consoles as the development platform for games that are then ported to PCs?
There's a big list of games with 'multiple CPU support' splashed across the advertising blurb that has little or even a negative effect on framerate and performance.
Or, like most software, they use any extra power as an excuse to write bad, bloated (and ported) code as easily and quickly as possible.
It's not at all simple to have multiple cores make your tasks hugely quick. To simplify it quite horrifically, with games as an example: once any data has been crunched, it still has to be synced up and dumped onto your display, which is a pretty bad bottleneck.
We've had multiple CPUs and GPUs for a long time (although the manufacturers seem incompetent at writing drivers), and little to show for it.
My opinion, for whoever is tempted to get an old octo-core at 3GHz instead of a pumped-up 2.93GHz quad or a 2.26 octo Nehalem, is to go for the Nehalem.
Here's a graph of all the unique systems that have been posted so far.
How can lower-clocked 8 virtual cores beat higher-clocked 8 real cores?

Microarchitecture, but even that isn't usually enough in this case.