Gulftown does use the same socket (LGA 1366), but it likely won't work as a drop-in replacement, as Apple won't release updated firmware for the current models to allow it (the firmware would need an update to its CPU microcode).
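
If you want to see what your current machine reports, something along these lines (Python, on an Intel Mac) prints the CPU signature bits and the microcode revision the firmware loaded. The sysctl keys are the ones OS X exposes on Intel hardware, though availability may vary by OS version, so treat it as a rough sketch only.

[CODE]
# Rough sketch: print the CPU identification and loaded microcode revision
# reported by OS X on Intel hardware. Keys may not exist on every OS version.
import subprocess

def sysctl(key):
    """Return the value of a sysctl key as a string."""
    return subprocess.check_output(["sysctl", "-n", key]).decode().strip()

for key in ("machdep.cpu.brand_string",
            "machdep.cpu.family",
            "machdep.cpu.model",
            "machdep.cpu.stepping",
            "machdep.cpu.microcode_version"):
    try:
        print(key, "=", sysctl(key))
    except subprocess.CalledProcessError:
        print(key, "not available on this system")
[/CODE]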

Going by past performance, the chances are small that a drop-in replacement will work.

The Woodcrest dual-core 2006 Mac Pro already had the microcode to run the Clovertown quad-core CPUs of 2007; both CPUs use the same 65nm manufacturing process.

In 2008, Intel introduced the die shrink to 45nm on the same LGA 771 socket and the microcode changed. There was no firmware update for the 2006 and 2007 Macs, so using the newer CPUs wasn't an option for the older Mac Pros.

The Gulftowns will use the same LGA 1366 socket as the 2009 Nehalems, but with a die shrink from 45nm to 32nm. So nanofrog is right that they will definitely need new microcode, and as shown above, Apple has never made such updates available retroactively.
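
For anyone who wants to poke at this themselves, here's a rough sketch (Python) of how you could read the header of an Intel microcode update blob and see which CPU signature it covers. The header layout follows the documented Intel microcode update format; the file name and the specific CPUID values are only illustrative assumptions, not actual firmware contents.

[CODE]
# Minimal sketch: parse the 48-byte Intel microcode update header and check
# whether the update covers a given CPU signature.
import struct

def read_microcode_header(path):
    """Return the parsed Intel microcode update header as a dict."""
    with open(path, "rb") as f:
        hdr = f.read(48)
    fields = struct.unpack("<9I12x", hdr)   # 9 dwords + 12 reserved bytes
    keys = ("header_ver", "update_rev", "date_bcd", "cpu_signature",
            "checksum", "loader_rev", "platform_flags", "data_size", "total_size")
    return dict(zip(keys, fields))

# Example CPUID signatures (family/model/stepping), assumed steppings only:
NEHALEM_EP  = 0x000106A5   # 2009 Mac Pro (Gainestown)
WESTMERE_EP = 0x000206C2   # Gulftown / Westmere-EP

hdr = read_microcode_header("microcode_from_firmware.bin")  # hypothetical dump
print("update covers CPUID 0x%08X, revision 0x%X" % (hdr["cpu_signature"], hdr["update_rev"]))
print("matches Gulftown:", hdr["cpu_signature"] == WESTMERE_EP)
[/CODE]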
 

Is the firmware stored on a socketed NVRAM chip? If so, could one buy a 2010 NVRAM firmware chip to replace their 2009 version?

Alternatively, is there any utility to extract and flash the firmware on a Mac Pro, thus allowing someone with a 2010 machine to share their firmware with 2009 owners?
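
If it ever becomes possible to dump the boot flash (e.g. with an external programmer or a tool like flashrom), something like the sketch below could scan the image for embedded Intel microcode updates and list which CPU signatures they cover. The file name is hypothetical and the checks are heuristics, so it's a thought experiment rather than a working extractor.

[CODE]
# Rough sketch: scan a raw firmware dump for Intel microcode update headers
# (16-byte aligned, header version 1, loader revision 1) and report the CPU
# signatures found. Heuristic only; not a complete firmware parser.
import struct

def find_microcode(image):
    found = []
    for off in range(0, len(image) - 48, 16):
        hdr_ver, _rev, _date, sig, _csum, ldr_ver = struct.unpack_from("<6I", image, off)
        if hdr_ver == 1 and ldr_ver == 1 and sig != 0:
            # total size of 0 means the default 2048-byte update
            total = struct.unpack_from("<I", image, off + 32)[0] or 2048
            if 48 < total <= 0x40000:                  # sanity check on size
                found.append((off, sig))
    return found

with open("MacPro2009_dump.bin", "rb") as f:           # hypothetical dump file
    data = f.read()

for offset, signature in find_microcode(data):
    print("microcode at 0x%X for CPUID 0x%08X" % (offset, signature))
[/CODE]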
 
If we go by the latest rumors https://www.macrumors.com/2009/10/1...ary-exclusive-of-six-core-gulftown-processor/ we should also expect the refresh in the first quarter, perhaps in February or March. The report on RAM support suggests that the number of RAM slots will not change, but the chipset may. It is rumored to include Intel's 10 Gigabit Ethernet technology, and as I understand it, you cannot run the Ethernet bridge in an existing chipset at ten times the speed unless it is an unreleased feature. The takeaways are:

The Mac Pro hardware will not change except for the CPUs, chipset and Ethernet ports.

RAM support will be improved, which may be part of the microcode, considering that memory is controlled by the CPU.


I have wondered in the past whether you can actually copy the firmware on a Mac, but with the chipset change imminent, I believe it may not be feasible.
 
It's different with the Gulftowns though, as they actually use the same chipsets (X58/Tylersburg family) as the current models. Intel produces 24 & 36 PCIe lane versions for both SP and DP systems (i.e. 24S/36S & 24D/36D respectively).

They can also be run in tandem, that is, two chipsets, even on an SP model, to obtain additional PCIe lanes (master + slave config). So far I've not seen an SP board that does it though, as other board makers opted for the nF200 instead to create an enthusiast/workstation model (it appeals to graphics users on both ends, for SLI and CrossFire).

But with the addition of 10G Ethernet (peaks at 1250MB/s), it may occur in the next model, as it's going to need PCIe lanes. PCI, located in the chipset, is only good for 133MB/s, so it just doesn't have adequate bandwidth to run it.
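
To put rough numbers on that, a quick back-of-the-envelope (peak figures only; real-world throughput is lower, and the lane rate is the commonly cited PCIe 2.0 figure):

[CODE]
# Back-of-the-envelope bandwidth check for the 10G Ethernet point above.
import math

ten_gbe_mb_s   = 10_000 / 8    # 10 Gbit/s Ethernet ~= 1250 MB/s peak
pci_mb_s       = 133           # legacy 33MHz/32-bit PCI bus, shared
pcie2_lane_mb_s = 500          # PCIe 2.0, per lane, per direction (peak)

lanes = math.ceil(ten_gbe_mb_s / pcie2_lane_mb_s)
print(f"10GbE peak: {ten_gbe_mb_s:.0f} MB/s vs PCI: {pci_mb_s} MB/s")
print(f"minimum PCIe 2.0 lanes: {lanes} (so an x4 link in practice)")
[/CODE]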

The microcode in the existing systems is written for 45nm, so the VID setting is higher. It still falls within the 32nm spec (near the upper limit), so that aspect may actually work. It's the core count that may cough up the hairball here, as the coding is different this time around (anything that used the 5000 series chipsets could use dual or quad cores); I'm not so sure that will hold here.

Though I do think it's technically possible for a drop-in swap, I'm not expecting Gulftowns to work without additional microcode. Maybe we'll get a nice surprise and they will. The platform does have enough going for it, but it would require a test subject. Not an inexpensive proposition, and as you mentioned, Apple doesn't update firmware to allow the use of the newest CPUs, even if the hardware is compatible, in order to push sales of the current lineup.

Those on the PC side will be in much better shape, as the microcode will likely end up available in an update. One advantage to the PC side. ;) Better support from other board makers. :D
 
Another potential issue requiring new microcode would be the increase in RAM capacity that will be supported on the 2010 model. The memory controller is now located in the CPU, but the CPU architecture will not get any changes; they will just shrink the die. So the large increase in memory support must be a software issue, which would also make it a microcode issue. Or is my thinking flawed there?
 
It's already capable of 8 & 16GB DIMMs, as it was planned from the beginning; it's just that, due to cost, memory makers wait to offer them. So the Nehalem architecture is capable of 144/288GB (SP & DP respectively). Of course this is achieved in servers with the max # of DIMM slots (9 per CPU via 3 DIMMs per channel interleaved, for a total of 18 when using 16GB sticks in DP systems).

Apple bases their specification on what's available and what they're willing to use (multiplied by the DIMM count). Even the 4GB sticks were pricey, so Apple chose to use 1 or 2GB sticks for cost reasons, hence the 8/16GB configs. They can choose to base the specification either way (expected parts or currently available), but in the end, it's still able to run the largest sticks per DIMM slot. So the max is 64 & 128GB in SP and DP MPs respectively, and that will hold for the Gulftown systems as well.
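
The DIMM-count math is simple enough to sketch out. Slot counts below are the shipping configurations; the per-stick sizes above 4GB are the assumed future parts discussed above:

[CODE]
# Simple sketch of the slots x stick-size math: 9/18 slots for the reference
# Nehalem-EP server boards, 4/8 slots for the 2009 Mac Pro SP/DP models.
configs = {
    "Nehalem-EP max (SP, 9 slots)": 9,
    "Nehalem-EP max (DP, 18 slots)": 18,
    "Mac Pro 2009 (SP, 4 slots)": 4,
    "Mac Pro 2009 (DP, 8 slots)": 8,
}
for name, slots in configs.items():
    line = ", ".join(f"{slots * gb}GB w/ {gb}GB sticks" for gb in (4, 8, 16))
    print(f"{name}: {line}")
[/CODE]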

EDIT: Fixed the number of DIMMs possible per CPU with the IMC on the die.
 
So the 4GB memory support restriction in the 2009 Nehalem is just one more bit of bullsh*tting by Apple?
Sort of. They based it on the fact that 4GB sticks are the largest currently available. Then they'll revise it again each time a larger capacity DIMM arrives. So when the 8GB sticks hit, the max capacity will be updated, and again when the 16GB parts finally show up. :rolleyes:

It's a simple answer for those who can't do simple math. :eek: :p

A rather conceited POV IMO. :rolleyes: :( :mad:
 