machinehead said:
Good point. Even if the performance of the 90nm 2.0 GHz G5 chips is the same, the wattage is said to be about two-thirds that of the older 130nm chips. That would be a big plus for reducing the room heating effect, which is quite noticeable in the summer.
Just wanted to expand on this because it's likely to lead to questions.
You are correct that the total wattage consumed by a 90 nm chip will be less than that of a 130 nm chip if everything else is the same (processor design, clock speed, etc.). That's the total power being drawn - which means it's also the total heat that must eventually be dissipated into the room.
The reason you need more sophisticated cooling on the 2.5 GHz G5 is twofold:
1. Power consumption increases faster than linearly with clock speed (dynamic power scales roughly with voltage squared times frequency, and reaching a higher clock usually means raising the voltage as well). So the power drawn by the 2.5 is more than 25% greater than the power drawn by the 2.0, even when both are built on the same process. You can look up the exact figures on the IBM site.
2. Power DENSITY becomes the limiting factor on 90 nm chips, not total power consumption. Let's use your ratio above (90 nm uses 2/3 of the power of 130 nm). I'll make up figures because I'm too lazy to look them up, but they'll give an idea. We'll stick to a single chip speed for comparison.
Let's say the 90 nm chip uses 60 W compared to 90 W for the 130 nm chip. And let's say that chip area is proportional to the square of the line width (it isn't, but it will give an idea of what I'm talking about). So the 90 nm chip has an area of 8.1 units while the 130 nm chip has an area of 16.9 units - or just over twice the area.
So you have 60/8.1, or about 7.4 watts per unit of area, for the 90 nm chip vs. 90/16.9, or only about 5.3 watts per unit of area, for the 130 nm chip.
In cases where heat transfer from the chip to the heat sink is limiting, this number (watts per unit area) is what really matters. So, it's actually easier to cool the 130 nm chip.
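If it helps, here is the same back-of-the-envelope arithmetic as a quick Python sketch. The 60 W / 90 W figures are the made-up ones from above, and the area scaling is the same deliberately crude "area goes as line width squared" assumption, not how real dies actually shrink:

```python
# Back-of-the-envelope version of the comparison above.
# Assumption (same as in the post): die area scales with the square of the
# line width, which is only a rough illustration, not reality.

chips = {
    "130 nm": {"watts": 90.0, "line_width_nm": 130.0},
    "90 nm": {"watts": 60.0, "line_width_nm": 90.0},
}

for name, chip in chips.items():
    # "Area units" = (line width in nm)^2 / 1000, giving the 16.9 and 8.1 above.
    area_units = chip["line_width_nm"] ** 2 / 1000.0
    density = chip["watts"] / area_units  # watts per unit of area
    print(f"{name}: {chip['watts']:.0f} W total, "
          f"{area_units:.1f} area units, "
          f"{density:.1f} W per area unit")

# Prints:
# 130 nm: 90 W total, 16.9 area units, 5.3 W per area unit
# 90 nm: 60 W total, 8.1 area units, 7.4 W per area unit
```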
But two factors enter the equation to override that simple analysis.
1. Eventually you exceed your ability to remove the total heat (the heat sink becomes too big, etc.). Once you reach the point where total heat is the limit, going to the smaller-process chip is beneficial because it draws less total power.
2. As you ramp up clock speed, you eventually generate such a high power density that you can't remove the heat well enough with conventional cooling - which is the situation with the G5/2.5. Even if the G5/2.5/90 doesn't generate more heat overall than the G5/2.0/130, that heat is much more concentrated - and therefore harder to remove.
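And here is a tiny, purely illustrative sketch of how those two limits interact. The TOTAL_HEAT_LIMIT_W and DENSITY_LIMIT_W_PER_UNIT numbers are invented placeholders, not real heat sink specs; the point is only that one chip can trip the total-heat limit while the other trips the density limit:

```python
# Purely illustrative: both limits below are invented placeholders,
# not specifications for any real heat sink or chip.
TOTAL_HEAT_LIMIT_W = 80.0        # hypothetical max total heat a conventional cooler can dump
DENSITY_LIMIT_W_PER_UNIT = 6.0   # hypothetical max watts per area unit it can pull off the die

def cooling_check(name, total_watts, area_units):
    density = total_watts / area_units
    total_ok = total_watts <= TOTAL_HEAT_LIMIT_W
    density_ok = density <= DENSITY_LIMIT_W_PER_UNIT
    print(f"{name}: total heat {'OK' if total_ok else 'exceeded'}, "
          f"density {'OK' if density_ok else 'exceeded'} ({density:.1f} W per area unit)")

# Using the made-up figures from the example above:
cooling_check("130 nm chip", 90.0, 16.9)  # trips the total-heat limit, fine on density
cooling_check("90 nm chip", 60.0, 8.1)    # fine on total heat, trips the density limit
```

With these invented limits, the bigger chip is the one that outgrows the heat sink, while the smaller chip is the one that packs too many watts into too little area - which is exactly the trade-off being described.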
I doubt I've done a very good job of explaining this, but perhaps readers will get it. The bottom line is that it's not a simple matter of 'which chip generates less heat'. It's a matter of considering both total heat generation (watts) and power density (watts per unit area).