
highdefw

macrumors 6502
Apr 19, 2009
259
0
So if I have a 2009 8-core MP... I should use only 6 out of the 8 slots?
So 4GB x 6 = 24GB RAM max?
 

Inconsequential

macrumors 68000
Sep 12, 2007
1,978
1
So if I have a 2009 8-core MP... I should use only 6 out of the 8 slots?
So 4GB x 6 = 24GB RAM max?

In theory, yes.

As for the chipset, it's still X58, and ALL that has changed is that they are slotting different CPUs into the same board design.

I'd put money on it.
 

xgman

macrumors 603
Aug 6, 2007
5,671
1,378
This thread has gotten slightly off topic. There is a performance advantage to triple channel. The single-CPU 6-core version of the Mac Pro can, by spec, only take 12GB in triple channel mode. I had Intel double-check in their specs whether populating the 4th slot drops the memory down to double or single channel mode, and two different Intel techs independently said single after verifying. This was on a board comparable to the one Apple is using in the 2010. Even if it were double mode, that would be a deal-killer for me.

What we need to know now is whether 8GB 1333 ECC sticks will work in these boards or not. Vendors are starting to stock them for this, so I am guessing they will be fine. If they do, and that's a big if, and if they don't cause instability, then there is the solution: I'll return my 12GB for 24GB and stick with triple channel.

The techs at Intel told me that the reason they have the 4th slot in some of these server boards, and even some of the desktop versions, is to accommodate people who populate with smaller sizes of RAM and may need to run another stick simply to have enough to do the job, while accepting that performance is sacrificed. In any case, I would never consider a board like this if I were building any sort of PC, server or not. I strongly suggest that if you are after performance (and who isn't, really) you don't go beyond 3 slots on this board. Also, 12GB with a fast SSD to act as swap might not be so bad if need be.
 

mattmower

macrumors regular
Original poster
Aug 12, 2010
116
18
Berkshire, UK
This thread has gotten slightly off topic. There is a performance advantage to triple channel. The single-CPU 6-core version of the Mac Pro can, by spec, only take 12GB in triple channel mode. I had Intel double-check in their specs whether populating the 4th slot drops the memory down to double or single channel mode, and two different Intel techs said single.

Ouch. Thank you for the confirmation -- I think :)

Okay, here's what may be a stupid question: do you require exactly 3 modules for triple channel mode?

What I mean to say is: is it using the 4th slot that drops it to single-channel mode, or not having exactly 3 in use?

I'm waiting to hear whether 8GB modules will work before I make a purchase because while 12GB would be fine for now I'm not sure I will feel the same way 2 years down the line.

M.
 

Demigod Mac

macrumors 6502a
Apr 25, 2008
836
280
I purchased 3 x 2GB sticks in anticipation of my Pro.

So,

if I arranged 7GB like this:

2 GB
2 GB
2 GB
1 GB (factory RAM)

it would drop down to single channel mode? :(
 

skiffx

macrumors 6502a
Feb 5, 2008
681
10
So can somebody answer this: will 16GB (4 x 4GB) perform faster or slower than 12GB (3 x 4GB)?
 

Ryan P

macrumors 6502
Aug 6, 2010
362
235
So can somebody answer this: will 16GB (4 x 4GB) perform faster or slower than 12GB (3 x 4GB)?

What will happen depends on the workload. If the machine doesn't need more than 12GB of RAM, it is going to slow down as it drops out of triple channel mode. This may or may not be a noticeable slowdown.

If, however, the machine does need more than 12GB of RAM and you don't have it, it is going to use your hard disk or SSD as additional RAM. In either case this is going to cause a major slowdown, but especially in the case of a hard disk.

I'm personally considering doing 3 SSDs in a RAID 0 array with an external PCIe x8 controller card for my boot/swap disk if I end up purchasing a Hex and it only supports 12GB in triple channel. I'm thinking this will minimize the slowdown when I push things too far.
 

skiffx

macrumors 6502a
Feb 5, 2008
681
10
What will happen depends on the workload. If the machine doesn't need more than 12GB of RAM, it is going to slow down as it drops out of triple channel mode. This may or may not be a noticeable slowdown.

If, however, the machine does need more than 12GB of RAM and you don't have it, it is going to use your hard disk or SSD as additional RAM. In either case this is going to cause a major slowdown, but especially in the case of a hard disk.

I'm personally considering doing 3 SSDs in a RAID 0 array with an external PCIe x8 controller card for my boot/swap disk if I end up purchasing a Hex and it only supports 12GB in triple channel. I'm thinking this will minimize the slowdown when I push things too far.

Shame it can't work in triple channel mode on the first 3 modules and drop to single mode only once it has used up the first 3 (12GB) and spills over to the 4th (16GB).
 

Ryan P

macrumors 6502
Aug 6, 2010
362
235
I just took a look at the DDR3 bandwidth specs: DDR3-1333 gives a memory bandwidth of 10667 MB/sec per channel. So if you compare that to an OWC SSD at 270 MB/sec, you would need 39 of those in RAID 0, plus a PCI Express card with 21 PCI Express 2.0 lanes, to match it in RAM bandwidth.

That would be hard to pull off today, but you could go with something like 16 OWC 50GB SSDs + an Areca ARC-1261ML-4G PCI-Express x8 SATA II controller card in RAID 0 and end up with 4000 MB/sec access to 800GB of storage for $4500.
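
If anyone wants to sanity-check those numbers, here's a quick back-of-envelope sketch in Python (my arithmetic; the ~500 MB/s usable per PCIe 2.0 lane figure is an assumption on my part):

```python
# Rough check of the bandwidth figures above.
ddr3_1333 = 1333 * 8       # one DDR3-1333 channel: 1333 MT/s x 8 bytes ~= 10667 MB/s
owc_ssd = 270              # one OWC SSD, MB/s (figure from the post)
pcie2_lane = 500           # assumed usable MB/s per PCIe 2.0 lane (after 8b/10b)

print(ddr3_1333 / owc_ssd)     # ~39.5 -> about 39 SSDs in RAID 0 to match one channel
print(ddr3_1333 / pcie2_lane)  # ~21.3 -> about 21 PCIe 2.0 lanes
print(min(16 * owc_ssd, 8 * pcie2_lane))  # 16 SSDs behind an x8 card: capped at 4000 MB/s
```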

Eventually the lines between storage and memory are going to get blurred....
 

barefeats

macrumors 65816
Jul 6, 2000
1,058
19
Though triple channel blows away double channel when you run memory stress tests like stream64 and DLT64, real-world apps don't saturate the memory bus. So if you use 4 sticks in the 6-core or 8 sticks in the 12-core, you won't lose performance.
 

xgman

macrumors 603
Aug 6, 2007
5,671
1,378
To answer some of the above: YES, adding any memory to the 4th slot will slow you down, period. Likely to single channel, according to Intel. You paid top dollar for this thing NOT to be slowed down, right? Therefore stick with 3 slots and go 3 x 2GB, 3 x 4GB, or (hopefully supported) 3 x 8GB. Compromising on this is silly for a $3500+ computer. The downside is that 3 x 8GB costs almost double what 4 x 4GB does, but that's the way Apple decided to leave things, most likely so as not to have to provide a more fully upgraded motherboard at this time.
 

xgman

macrumors 603
Aug 6, 2007
5,671
1,378
I just took a look at the DDR3 bandwidth specs: DDR3-1333 gives a memory bandwidth of 10667 MB/sec per channel. So if you compare that to an OWC SSD at 270 MB/sec, you would need 39 of those in RAID 0, plus a PCI Express card with 21 PCI Express 2.0 lanes, to match it in RAM bandwidth.

That would be hard to pull off today, but you could go with something like 16 OWC 50GB SSDs + an Areca ARC-1261ML-4G PCI-Express x8 SATA II controller card in RAID 0 and end up with 4000 MB/sec access to 800GB of storage for $4500.

Eventually the lines between storage and memory are going to get blurred....

But for now, an SSD swap file will simply be faster than a normal hard drive swap file when you run out of whatever amount of memory is your max at any given time. Still nowhere near RAM, but faster nonetheless.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
Ouch. Thank you for the confirmation -- I think :)

Okay, here's what may be a stupid question: do you require exactly 3 modules for triple channel mode?

Where did this "triple channel mode" come from. Most of this thread is a bit whacked because the terminology is way off.

There is not a "triple channel mode memory controller" on the Nehalem and Westmere (5500 and 5600 series ) Xeon. There are three memory controllers. Yes plural; as in more than one. Each one of these controllers can be attached to one, two , or three banks of DIMMs slots.

So on the Mac Pro there are two controllers attached to one bank/slot each and one controller that is attached to two banks/slots.

----- memory controller 1 --- [ 1 ] --- [ 4 ]

----- memory controller 2 --- [ 2 ]

----- memory controller 3 --- [ 3 ]


So if you do not put memory into slots attached to all three controllers, then you will not get all three controllers involved. If you put just one DIMM into the first slot, you will have just one controller active. Likewise, if you only use the first two slots, you will only activate two of the controllers; the third will be dormant because there is nothing attached to it.

What Intel does is change the interleave (how the memory is laid out) and the memory clock speed depending upon how you fill the slots. Fill slots 1 through 3 and you get a 3-way interleave and the higher clock speeds. As you fill more of the banks, the clock speed drops. And if you don't fill slots in groups of 3, the interleave drops below 3-way.

Since programs often ask for addresses in sequence, if "words" at addresses 4, 8, and 12 are located behind three different controllers, the processor can start all of those requests in parallel (or at the very least in a pipelined fashion, since they take many cycles), because the work is delegated out to three different controllers. In a 2-way interleave (say one where slots <1>/<2> and <3>/<4> are paired), you cannot get as many memory requests going in parallel. You could get 4 and 8, but if 12 is also assigned to memory controller 1, it will have to wait until 4 is dispatched before 12 can be dispatched.
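
To make that concrete, here's a toy model of the word-address interleave (my own illustration, not Intel's actual mapping, which is more involved):

```python
# Toy model: consecutive word addresses rotate across the active controllers.
def controller_for(addr, word_size=4, n_controllers=3):
    """Which controller serves the word at this byte address."""
    return (addr // word_size) % n_controllers

# 3-way interleave: words at 4, 8, 12 land on three different controllers,
# so all three requests can be dispatched in parallel.
print([controller_for(a, n_controllers=3) for a in (4, 8, 12)])  # [1, 2, 0]

# 2-way interleave: 4 and 12 hit the same controller, so the request for
# 12 queues up behind 4 -- exactly the serialization described above.
print([controller_for(a, n_controllers=2) for a in (4, 8, 12)])  # [1, 0, 1]
```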

It appears the "single channel mode" and "triple channel mode" being talked about here is really single/triple/etc interleave. If so that misses an important point of "single mode". There are two types of interleave. One is at the 'micro' level (you can assign word addresses ) so that a single core interacts with multiple controllers. The other "interleave" is that there are multiple threads/processes that interact with different much larger regions of memory (each of which is behind a different controller). You can still get three memory controllers running in parallel if have three different thread/processes accessing three different regions of memory. A smaller effect, but it is still present in most normal situations.

Also no "new" optimizations required for either interleave. Both of those will be leaveraged by apps that "think" they are interacting with a single controller. There is no visible difference to them other than some memory requests coming back faster .... which with dynamic execution on the Xeons doesn't really hurt anything.


You also lose interleave if you mismatch sizes. With two 1GB DIMMs and one 2GB DIMM you cannot split things up 3 ways because the sizes don't match. Likewise, one 8GB plus one 4GB DIMM is worse than three 4GB DIMMs. While it theoretically looks like you lose 30% by switching from 3-way to 2-way, in reality for real apps (as opposed to synthetic benchmarks, which are too synthetic and small) the real loss will be in the 2-10% range in memory bandwidth (not overall throughput).


For the 5500 (Nehalem) series Xeons, once you add any memory to the second bank on any of the controllers, the speed drops to 1066.

"As soon as you add a second DIMM to any memory channel the speed drops to 1066 MHz for all DIMMs "
http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon



So all the folks ranting about how you need to fill all possible slots, and how Apple was lame for not supporting 1333 in the 2009 models, were blowing lots of smoke. All the vendors drop down to 1066 if you fill more than three DIMM slots. The speed drop-off is even worse if you go to 3 banks of slots. 800 memory is better than no memory at all, so if you need > 32GB of RAM you simply take the hit. It is still roughly 10x faster than hitting anything on a SATA or SAS bus.


For the 5600 (Westmere), one of the incremental improvements is that in certain configs you can now fill bank 2 and still retain 1333 memory speeds.

See figure at bottom of page 4.
https://globalsp.ts.fujitsu.com/dmsp/docs/wp-westmere-ep-memory-performance-ww-en.pdf
Note that it is still the case that if a third bank were present, filling any of its slots would drop all the speeds back down to 800.





I'm waiting to hear whether 8GB modules will work before I make a purchase because while 12GB would be fine for now I'm not sure I will feel the same way 2 years down the line.

8GB modules will likely work, since they worked on the 2009 models. Apple didn't provide an 8GB option because, with their 30+% markup on memory prices, they know it would put that upgrade into stratospheric pricing where extremely few folks would buy it. OWC has had 8GB modules for the 2009 for a long time. Even if you had to go to 1066 8GB modules, they'd be worth it versus the alternative of hitting a SAS/SATA channel. And since this is a couple of years in the future, there is a decent likelihood you'll be able to get 1333 modules at reasonable prices by then.

The "use 4GB now and then 8GB a couple of years from now" is why the four slot configuration makes sense for a very wide spectrum of users. Sure there are a subset of folks who need to pack the machine to the gills with DIMMs but they are not the majority.


P.S. What is more lame is not that the $3,600 Mac Pro doesn't have more DIMM slots, but that there are no docs in Apple's knowledge base that cover this stuff. You get all kinds of funky quasi-disinformation floating around the internet when Apple should be writing this stuff up (like Dell, Fujitsu, and others do) so that you don't get voodoo filling in the blanks.

P.P.S. When the next round of Sandy Bridge era Xeons comes along and 6 cores are more mainstream throughout the series' lineup, 4 memory controllers will be in place (just as in the current 6500/7500 series), and then the 4 slots will make even more sense and Apple will not have to change the design. Four will be the natural "bank" size and the Mac Pro will already have that.
 

seek3r

macrumors 68020
Aug 16, 2010
2,200
3,129
Would there be too much of an increase in power draw if they used 6 or 12 slots instead of 4 or 8?

I have heard that RAM accounts for a large part of the power a computer uses these days. I know when I configure a Dell server, once I fill a certain number of memory slots, they require me to upgrade the power supply.

Nothing like having a huge bank of memory slots, even if you don't fill them.

[Image: 2q2095e.jpg, a server board with a huge bank of memory slots]

(not my picture, from http://www.prgmr.com)

On the other hand, the last Dell blades I saturated with RAM were quad-socket AMD 6-cores (that's 24 cores/blade), so the RAM/core ratio with fewer DIMMs would have been painful! That's why you need those huge banks of slots :) (This, btw, is actually a problem I'm running into now with a Cray CX1 system I'm provisioning.)

Back on topic: I think Apple did 4 slots because 3 would have been perceived as too few, and they wanted to upsell anyone needing more than 4 to the DP machine. 4 gives the option, for those who really need another slot, to add more RAM with a reasonably small performance drop. At the moment on the quad you can easily get 8GB/core at a decent price with 4 slots.
 

xgman

macrumors 603
Aug 6, 2007
5,671
1,378
The short answer to your well-thought-out description, regardless of the correct terminology (the Reader's Digest version), is that on the 3.33 board at least, filling the first 3 slots with the same RAM = good performance. Filling the 4th one in addition = less good performance.

By the way, the notion of filling all slots was more applicable on the dual channel memory based 2008 Mac Pro, where it was proven that filling all slots gave better benches than filling fewer than all of them, even when the total RAM was the same.

As far as "triple Channel" terminology:

When operating in triple-channel mode, memory latency is reduced due to interleaving, meaning that each module is accessed sequentially for smaller bits of data rather than completely filling up one module before accessing the next one. Data is spread amongst the modules in an alternating pattern, potentially tripling available memory bandwidth for the same amount of data over storing it all on one module.

A 2009 Mac Pro test:
MEMORY RIDDLE: WHEN IS SIX MORE THAN EIGHT?
We were able to clearly illustrate the bandwidth advantage of three memory modules per memory bank in the Nehalem Mac Pro using the DigLloydTools (DLT) stress test, which does a memmove() across all unused physical memory. We put 12GB (6 x 2GB) in first. Ran the test. Then installed 16GB (8 x 2GB) and ran the test. See chart (attached below):



The architecture can only be used when all three, or a multiple of three, memory modules are identical in capacity and speed, and are placed in three-channel slots.
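
If you want to reproduce something in the spirit of that test without buying DigLloydTools, a minimal memmove()-style probe is just a timed bulk copy of a large buffer (a rough sketch of mine, not DLT's actual code):

```python
# Minimal memory-bandwidth probe: time a bulk copy of a large buffer.
import time

N = 512 * 1024 * 1024        # 512 MB working set; shrink if RAM is tight
src = bytearray(N)
dst = bytearray(N)

t0 = time.perf_counter()
dst[:] = src                 # bulk copy, roughly what memmove() does
dt = time.perf_counter() - t0
print(f"{(2 * N) / dt / 1e6:.0f} MB/s effective (read + write)")
```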
 

Attachments

  • neh04_mem.gif (memory bandwidth chart referenced above, 7.8 KB)

VirtualRain

macrumors 603
Aug 1, 2008
6,304
118
Vancouver, BC
Anandtech has some great articles on RAM (more than you will ever want to know)...

Inner workings of RAM... http://www.anandtech.com/show/3851/

Performance benchmarks... http://www.anandtech.com/show/2792

The fact is that dual/tri-channel or 1066/1333 is all fairly irrelevant... they all perform within a couple of percentage points of each other in most real-world applications, due to Intel's large L3 cache, which ensures that the cores rarely experience a cache miss. You'll only ever see significant differences in synthetic benchmarks that defeat the cache.
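
You can see the cache effect for yourself with a quick sketch like this (mine, purely illustrative; exact numbers depend on the machine):

```python
# Apparent bandwidth collapses once the working set no longer fits in cache,
# which is why channel count barely shows up in cache-friendly workloads.
import time
import numpy as np

def apparent_gb_per_s(nbytes, reps=20):
    a = np.zeros(nbytes // 8)            # nbytes worth of float64
    a.sum()                              # warm-up: fault the pages in
    t0 = time.perf_counter()
    for _ in range(reps):
        a.sum()                          # streams the whole array
    return reps * nbytes / (time.perf_counter() - t0) / 1e9

print(apparent_gb_per_s(4 * 1024**2))    # ~4 MB: lives in L3, looks very fast
print(apparent_gb_per_s(512 * 1024**2))  # ~512 MB: spills to DRAM, much slower
```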
 

skiffx

macrumors 6502a
Feb 5, 2008
681
10
Where did this "triple channel mode" come from. Most of this thread is a bit whacked because the terminology is way off.

There is not a "triple channel mode memory controller" on the Nehalem and Westmere (5500 and 5600 series ) Xeon. There are three memory controllers. Yes plural; as in more than one. Each one of these controllers can be attached to one, two , or three banks of DIMMs slots.

So on the Mac Pro there are two controllers attached to one bank/slot each and one controller that is attached to two banks/slots.

----- memory controller 1 --- [ 1 ] --- [ 4 ]

----- memory controller 2 --- [ 2 ]

----- memory controller 3 --- [ 3 ]


So if you do not put memory slots attached to all three controllers then you will not get all three controllers involved. If you just put one DIMMs into the first slot they you will get just one controller active. Likewise if only use the first two slots then will only activate two of the controllers; one will be dormant because there is nothing attached to it.

What Intel does is change the interleave (how the memory is layed out) and memory clock speed depending upon how you fill up the slots. Fill slots 1 through 3 and you get a 3 way interleave and the higher clock speeds. As you fill in more of the banks the clock speed drops. As you don't fill in groups of 3 the interleave drops below a way weave.

Since programs often ask for addresses in sequence if "words" at addresses 4 8 12 are located behind three different controllers the processor can start all of those requests in parallel (or at very least a pipeline fashion since they take many cycles ) because the work is delegated out to three different controllers. In a two way interleave ( say a two way interleave where <1> and <2> and <3> and <4> are paired) then cannot get as much memory requests going in parallel. Could get 4 and 8 , but if 12 is also assigned to memory controller 1 it will have to wait until get 4 dispatched before can dispatch 12.

It appears the "single channel mode" and "triple channel mode" being talked about here is really single/triple/etc interleave. If so that misses an important point of "single mode". There are two types of interleave. One is at the 'micro' level (you can assign word addresses ) so that a single core interacts with multiple controllers. The other "interleave" is that there are multiple threads/processes that interact with different much larger regions of memory (each of which is behind a different controller). You can still get three memory controllers running in parallel if have three different thread/processes accessing three different regions of memory. A smaller effect, but it is still present in most normal situations.

Also no "new" optimizations required for either interleave. Both of those will be leaveraged by apps that "think" they are interacting with a single controller. There is no visible difference to them other than some memory requests coming back faster .... which with dynamic execution on the Xeons doesn't really hurt anything.


You also loose interleave if mismatch sizes. So if 2 1GB DIMMs and 1 2GB DIMMs you cannot split things up 3 ways because the sizes don't match. Likewise one 8GB and one 4GB is worse than 3 4GB DIMMs. While theoretically looks like loosing 30% by switching from three way to two way in reality for real apps ( as opposed to synthetic benchmarks which are too synthetic and small ) the real loss will be in the 2-10% range in memory bandwidth (not overall throughput).


For the 5500 (Nehalem) series Xeons once you add any memory to the second bank on any of the controllers the speed drops to 1066.

"As soon as you add a second DIMM to any memory channel the speed drops to 1066 MHz for all DIMMs "
http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon



So all the folks ranting about how need to fill all possible slots and Apple was lame for not supporting 1333 in the 2009 models were blowing lots of smoke. All the vendors drop down to 1066 if fill more than three DIMMs slots. The speed drop off is even worse if go to 3 banks of slots. 800 memory is better than no memory at all so if needed > 32GB of RAM you simply take hit. It is still approx 10x faster than hitting anything on a SATA or SAS bus.


For the 5600 (Westmere) one of the incremental improvements is that can now in certain configs fill bank 2 and still retain 1333 memory speeds.

See figure at bottom of page 4.
https://globalsp.ts.fujitsu.com/dmsp/docs/wp-westmere-ep-memory-performance-ww-en.pdf
Note that it is still the case if there were 3 banks present that filling any of those slots will drop all the speeds back down to 800.







8GB will likely work since they worked on the 2009 models. Apple didn't provide an 8GB option because with their 30+% markup on memory prices they know that would put that upgrade in the stratosphere pricing where extremely few folks would buy it. OWC has had 8GB modules for the 2009 for long time. Even if you had to go to 1066 8GB modules they'd be worth it versus the alternative of hitting a SAS/SATA channel. However since a couple of years into future, there is decent likelihood can get 1333 models at reasonable prices then.

The "use 4GB now and then 8GB a couple of years from now" is why the four slot configuration makes sense for a very wide spectrum of users. Sure there are a subset of folks who need to pack the machine to the gills with DIMMs but they are not the majority.


P.S. What is more lame is not that the $3,600 Mac Pro doesn't have more DIMM slots, but that there they is no docs in Apple's knowledge base that covers this stuff. Get all kinds of funky quasi disinformation floating around the internet when Apple should be writing this stuff up (like Dell, Fujistu, and others do) so that do get voodoo filling in the blanks information wise.

P.P.S. When the next round of Sandy Bridge era Xeons come along and 6 cores are more mainstream distributed throughout the core of the series' line up 4 memory controllers will be in place ( just like present in the current 6500/7500 series ) and then the 4 slots will be make even more sense and Apple would not have to change the design. Four will the the natural "bank" size and the Mac Pro will already have that.

So in real-world application terms, what are the numbers we're talking about here as a % loss in performance if you use 4 x 4GB as opposed to 3 x 4GB?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
The short answer to your well-thought-out description, regardless of the correct terminology (the Reader's Digest version), is that on the 3.33 board at least, filling the first 3 slots with the same RAM = good performance. Filling the 4th one in addition = less good performance.

That is only true if your applications' normal working-set memory footprint stays at or below the amount of memory you can fit in 3 slots. As soon as your apps require more (not "nice to have" memory, but memory used on a day-to-day production basis), then the 4th slot will be better.

Tests where the amount of working-set memory isn't held constant change the problem.

By the way, the notion of filling all slots was more applicable on the dual channel memory

Dual (and, to me, also triple) channel memory has to do with a single memory controller dealing with multiple banks of memory. You still
have that in the Nehalem/Westmere case. Each single controller can have multiple banks, across which it can pipeline read/write requests in sequence. Those requests can still be interleaved in a pipelined fashion.

However, there is also an interleave you can do across controllers. Giving them both the same label is a mistake, IMHO. You are only going to confuse folks because the underlying properties are not the same. One is primarily pipelining (filling the delays with the next request in the series), and the other can be more pure concurrency (making concurrent retrievals when there are multiple paths to load the L3 cache, or at worst pipelining across single/multiple issue channels).

There is a limit to how much you can pipeline requests, and at some point you just have to go parallel/concurrent. With more than 4 cores, concurrency is increasingly essential because 4-8 request streams will saturate a single pipelined delivery stream.

It is a decidedly different architecture when cores are matched more 1-to-1 with controller(s) assigned specifically to the working set of memory they are primarily addressing.



based 2008 Mac Pro, where it was proven that filling all slots gave better benches than filling fewer than all of them, even when the total RAM was the same.

Again, if you run a test whose working set only spans the first of a dual pairing, the subsequent configurations with larger memory will show minimal improvement. The problem changes size.




As far as "triple Channel" terminology:

When operating in triple-channel mode, memory latency is reduced due to interleaving,

Lifted from Wikipedia. Again, a rather dubious description, since it is presented as an alternative/evolution of "dual channel" when the interleave implementation and the number of interleave dimensions are different.


A 2009 Mac Pro test:
MEMORY RIDDLE: WHEN IS SIX MORE THAN EIGHT?
We were able to clearly illustrate the bandwidth advantage of three memory modules per memory bank in the Nehalem Mac Pro using the DigLloydTools (DLT) stress test, which does a memmove() across all unused physical memory. We put 12GB (6 x 2GB) in first. Ran the test.

If you ran that 16GB working set on the 12GB configuration, the 12GB config would come out slower than the 16GB one, and substantially slower still than its performance on the 12GB-sized problem.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
deconstruct60 said:
for real apps (as opposed to synthetic benchmarks, which are too synthetic and small) the real loss will be in the 2-10% range in memory bandwidth (not overall throughput).

So in real-world application terms, what are the numbers we're talking about here as a % loss in performance if you use 4 x 4GB as opposed to 3 x 4GB?

The overall system throughput gain or loss depends upon the apps. If there is significant disk activity that can be averted by the OS caching more of the disk in memory, you could see an increase in overall system performance even though the raw memory bandwidth decreased. RAM bandwidth, even at the reduced level, is still 10-100x better than disk access, so you only need a modest reduction in disk accesses to completely make up for a 10% or even 30% slowdown in memory. [There are some threads here where folks did exactly that: constructed benchmarks that increase disk activity to swamp out the differences in memory speed to "prove" more memory is better. Whether that is "real world" or not depends upon whether that is really the normal day-to-day usage of the application.]
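
A quick worked example of that trade-off (the numbers are mine and purely illustrative): average access cost is a weighted blend of RAM and disk costs, so a small drop in the disk-miss rate buys back a large RAM slowdown.

```python
# Illustrative cost model: avg access = hits at RAM speed + misses at disk speed.
def avg_access_ns(ram_ns, disk_ns, disk_miss_rate):
    return (1 - disk_miss_rate) * ram_ns + disk_miss_rate * disk_ns

RAM, SLOW_RAM, DISK = 100, 130, 10_000   # ns; "slow" RAM is 30% worse

# 3 slots: full-speed RAM, but 1% of accesses spill out to disk.
print(avg_access_ns(RAM, DISK, 0.01))      # 199 ns average
# 4 slots: 30% slower RAM, but everything fits, so nothing spills.
print(avg_access_ns(SLOW_RAM, DISK, 0.0))  # 130 ns -- capacity wins here
```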


In reality, a not-so-small amount of your cores' time is spent doing NOTHING, just waiting for memory or disk to provide results so they can continue doing something. That is exactly why stuff like Hyper-Threading (core SMT) works: while one thread is stalled waiting for something to arrive, another thread can possibly get work done because what it was waiting on has finally arrived.

It is the correct system balance (processor speed, memory bandwidth, disk bandwidth) as a whole that you need to find to max out performance. Blindly adding more in any single dimension (core GHz, memory interleave, etc.) doesn't necessarily give optimal results.

What the issue boils down to is that if you over-configure memory, you can get a drop-off in performance. If you don't have enough memory in your Mac Pro, then putting more in is better.
 

xgman

macrumors 603
Aug 6, 2007
5,671
1,378
That is only true if your applications' normal working-set memory footprint stays at or below the amount of memory you can fit in 3 slots. As soon as your apps require more (not "nice to have" memory, but memory used on a day-to-day production basis), then the 4th slot will be better.

Never mind, see below.
 

trankdart

macrumors member
Jul 28, 2010
60
0
Los Angeles, CA, USA
...Personally, I think Apple has artificially crippled the 6 core. I hope I'm wrong.

I think 8GB x 3 will work fine, at least eventually. There have always been issues with these DRAM modules around the actual number of chips on the stick. For example, if the 8GB sticks are just 4GB sticks with twice as many DRAM chips covering both sides of the module, the electrical load might increase enough to overwhelm some buffers or line drivers or other "glue" chips somewhere. But if memory density increases and they can get 8GB on a stick using the same number of physical chips, and therefore roughly the same electrical load, I don't think it will fail just because Apple deliberately crippled something.

Isn't Apple's motto "Don't Be Evil"? No wait, that's Google. Apple's is "Don't Violate Jobsian Ethics." :D
 