The key issue at hand here is whether there's much in the last couple of CPU offerings, or the next couple coming down the pike, that will compel people to upgrade their systems.

I am very curious to see Apple's PR strategy for the Gulftown Mac Pro.
I doubt that it will sell very well.
 
Maybe it'll introduce features many people have been asking for. There's a list of them they could pick and choose from. We've done iterations of that list a dozen times on this forum.

Alternatively, maybe it'll be aimed at people upgrading from the '06 Mac Pro, surely the most frustrated group of Mac Pro owners.
 
I am very curious to see Apple's PR strategy for the Gulftown Mac Pro.
I doubt that it will sell very well.

The point will be 50% more cores and more multithreaded rendering power. Perhaps their decision not to introduce the W5580 and W5590 is connected to that plan.
 
I'm going to buy a Mac Pro sooner rather than later, but is a Hexa-core Mac Pro worth waiting for, compared to a Quad-core?
It's not that I really NEED it right now; I've been waiting for a little over a year already, so a couple of months more or less...
I'd primarily be using it for HD video editing, video encoding, and a bit of gaming.
I'm just so much in doubt, as I have been saving up for almost two years now (I'm a student) :confused:
 
MacApple21, I'm in the same situation as you are. I've been saving up money too, but the use I'd make of the MP would be kinda Photoshop stuff (huge photos, btw, which is the reason I need an MP).

As for the matter, I'm waiting, even though I need it for what I'm studying and also for my current job as a photographer. It's been said Apple might upgrade the Mac Pro around March next year, pretty close to the Q2 date Intel gave for the hexa-core. Just as MacRumors posted, Apple was the first company to announce Nehalem-powered computers, even before Intel had officially announced them.

I was quite reluctant to accept multicore CPUs as the future, just as some here seem to be, but we must accept that it is the future, not only we as customers, but also the software companies. Taking advantage of multicore computing as well as OpenCL is the future of software, not only professional software, but gaming as well.

Hexa-core processors don't seem to be a really big step forward, but in fact they are. Two more cores give us quite a bit more power for HD video processing and so on; the problem, as it always has been, is whether software will or will not take advantage of those two extra cores. As always, programs such as Final Cut will definitely take advantage of them.

We can be sure hexa-core will give us a little advantage over Nehalem, just not that much, due to software companies being reluctant to use all the cores' power. Anyway, I'm waiting, not just because of hexa-cores, but for USB 3.0, SATA 6 Gbps, and, let's hope, new GPUs.

Things are about to change in software... whether it takes 1 or 4 more years just depends on the software companies. Intel will go on releasing 8-, 12-, 16-, 32-core CPUs and beyond, whether any program uses them or not.

BTW, sorry for my English, just too many years without writing it. Spain, you know...
 
It's not power bills that are the problem; it's the cooling. You just can't cool those chips enough to keep them at 4 GHz in any sort of supported configuration. It's just not possible.

Of course it's possible. What are you talking about? Chip manufacturers have just barely started paying attention to this problem. The first-round solution looks pretty OK, too. Just wait till they actually get serious - I guess you'll be surprised.

Even kids at home have already solved this issue, in their own way. So it's very odd that you would say that "it's just not possible".

Power bills are honestly the smallest issue with going 4 GHz.

So then why did you say it wasn't possible at all?
 
MacApple21, I'm in the same situation as you are. I've been saving up money too, but the use I'd make of the MP would be kinda Photoshop stuff (huge photos, btw, which is the reason I need an MP).
-snip-

I agree, there might not be as big a difference between Quads and "Hexas" now as there will be in the future. I have sort of decided to wait too, if my needs don't change in the near future.
As you hinted at, the extra cores might pay off in the future, may it come sooner rather than later.
Also, as you pointed out, the GPU options could very well get better (ATI 5870) soon, and it would be sad to have spent a lot of money on, e.g., a GTX 285.
I just hope software will catch up with hardware soon, and that the Gulftown variant :apple: might use will run at higher clock speeds than 2.4 GHz.

P.S. Your English is not that bad :)
 
Of course it's possible. What are you talking about? Chip manufacturers have just barely started paying attention to this problem. The first-round solution looks pretty OK, too. Just wait till they actually get serious - I guess you'll be surprised.
My guess is goMac is thinking in terms of a single system, not the enterprise market. A simple search will reveal that they do indeed look at power, and even break it down into things like IOPS/Watt.

If you look at data centers full of racks, it adds up FAST. Then take into consideration the HVAC power needed to remove the heat at 24/365 operation, and the monthly power bill is by no means "chump change." It hits thousands of USD per month. In my experience, the business side takes a hard look at that. ;)
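As a rough, back-of-the-envelope sketch of how that adds up (the server count, wattage, PUE, and electricity rate below are made-up illustrative numbers, not measurements from any real facility):

```python
# Back-of-the-envelope data-center power cost (illustrative, assumed numbers only).
servers = 200            # servers in the racks (assumed)
watts_per_server = 350   # average draw per server, watts (assumed)
pue = 1.8                # power usage effectiveness: facility power / IT power (assumed)
rate_usd_per_kwh = 0.12  # electricity rate (assumed)

hours_per_month = 24 * 365 / 12
it_kw = servers * watts_per_server / 1000.0
facility_kw = it_kw * pue  # includes the HVAC and other overhead

monthly_cost = facility_kw * hours_per_month * rate_usd_per_kwh
print(f"IT load: {it_kw:.1f} kW, facility load: {facility_kw:.1f} kW")
print(f"Estimated monthly power bill: ${monthly_cost:,.0f}")
```

With those assumed numbers the bill lands around $11,000 a month, which is the "thousands of USD per month" ballpark described above.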
 
Of course it's possible. What are you talking about? Chip manufacturers have just barely started paying attention to this problem. The first round solution looks pretty OKish too. Just wait till they actually get serious - I guess you'll be surprised.

Even kids at home have solved this issue already - secondarily. So it's very odd that you would say that "it's just not possible".

You're thinking about it in terms of "kids at home".

Intel could barely put out enough chips rated at 3.6 GHz... What do you think the yields would be on 4 GHz chips? Even getting a 3.6 GHz CPU that could overclock stably to 4 GHz without some bizarre cooling system was hard.

The P4 was already an inefficient chip. Clock for clock, the Core series beats the pants off it.

You're saying kids at home in their garage could put extreme cooling systems in their machines and overclock to 4 GHz. That's great and all, but that doesn't translate to a chip Intel can put on the shelf at Fry's that's rated at 4 GHz. It's simply not a path that can be sustained...
 
I was a long-time CPU designer at AMD, and one of the guys who designed the first Opteron/Athlon 64.

The problem is you can't keep upping clock speed - while it's technically possible, it makes no engineering sense. First, dynamic power consumption increases linearly with clock speed. Second, to increase clock speed you end up reducing voltage swings and scaling down device lengths. This causes static power dissipation to increase as well, due to leakage. Given X watts of power budget, if I have to choose between doubling the number of cores or increasing clock speed by 33%, I'll probably double the number of cores.

You also end up fighting secondary and tertiary effects - even though your smaller devices can theoretically switch faster, they have a harder time driving increasingly large interconnect impedances. You start getting much more crosstalk between switching wires. As you start driving much higher current you start getting much more voltage drop on the power grid, and you start having to dedicate more and more of your metal to the grid. You also start having to spend much more of your power budget just to distribute the clock.

So in order to increase the clock speed "efficiently" you have to simplify the architecture a lot to avoid long wires, high fanout, etc. The trade-off stopped making sense probably around the time Opteron reared its head - it's one of the reasons we did what we did.
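To make that cores-versus-clock trade-off concrete, here's a toy calculation using the standard dynamic-power relation P ≈ C·V²·f (the capacitance, voltage, and frequency values are illustrative assumptions, not figures from any real chip, and leakage is ignored even though it makes the high-clock case worse):

```python
# Toy comparison: double the cores vs. raise the clock ~33%,
# using dynamic power P ~ C * V^2 * f (activity factor folded into C).
# All numbers are illustrative assumptions.

def dynamic_power(c_eff, voltage, freq_ghz):
    return c_eff * voltage ** 2 * freq_ghz

baseline = dynamic_power(c_eff=1.0, voltage=1.0, freq_ghz=3.0)

# Option A: two cores at the same clock -> roughly twice the switching capacitance.
two_cores = dynamic_power(c_eff=2.0, voltage=1.0, freq_ghz=3.0)

# Option B: one core at 4 GHz; assume the higher clock also needs ~10% more voltage.
faster_clock = dynamic_power(c_eff=1.0, voltage=1.1, freq_ghz=4.0)

print(f"2x cores:   power x{two_cores / baseline:.2f}, ~2.00x throughput on parallel work")
print(f"+33% clock: power x{faster_clock / baseline:.2f}, ~1.33x throughput")
```

Even in this crude model, the extra cores buy twice the parallel throughput for twice the power, while the faster clock buys only a third more throughput for roughly 1.6x the power, before the worsening leakage is even counted.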
 
I was a long-time CPU designer at AMD, and one of the guys who designed the first Opteron/Athlon 64.
-snip-

Thanks for the insights! No doubt this is true... but how do you propose that AMD or Intel are going to sell more and more cores to people who can't seem to fully leverage what they've already got?
 
I was a long-time CPU designer at AMD, and one of the guys who designed the first Opteron/Athlon 64.
-snip-

Exactly.
 
Thanks for the insights! No doubt this is true... but how do you propose that AMD or Intel are going to sell more and more cores to people who can't seem to fully leverage what they've already got?

That's the problem. Ideally, software will improve to permit higher utilization of additional units (cores, execution units, etc.). Asymmetric cores (where some cores perform specific functions in order to increase their utilization) may become common, along with more power-efficient circuits and technologies.

That's why engineering is so hard :) The ideal would be that every transistor is performing useful work all the time, but it's just not possible.
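One standard way to quantify that "can't fully leverage" problem is Amdahl's law, speedup = 1 / ((1 - p) + p/n) for a fraction p of parallelizable work on n cores; a quick sketch (the parallel fractions and core counts are just example values):

```python
# Amdahl's law: speedup on n cores when only a fraction p of the work is parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95):       # example parallel fractions (assumed)
    for n in (4, 6, 12):           # quad, hex, and a hypothetical 12-core box
        print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):.2f}x speedup")
```

At p = 0.5 a hex-core only manages about a 1.7x speedup over a single core, which is why more cores are a hard sell until the software side improves.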
 
Don't forget Turbo Boost when you see the 2.4 GHz. It should go anywhere from 1 to 4 steps higher (i.e. from 2.53 to 3.06 depending on thermal load).
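For reference, Nehalem-era Turbo Boost raises the clock in bins of the ~133 MHz base clock; a tiny sketch of how those steps work out from a 2.4 GHz base (the bin range simply extends the 1-4 steps mentioned above by one):

```python
# Nehalem-era Turbo Boost: each step (bin) adds one ~133 MHz base-clock multiple.
BCLK_GHZ = 0.1333  # ~133 MHz base clock

base_ghz = 2.40
for bins in range(1, 6):  # the 1-4 steps mentioned above, plus one extra bin
    print(f"+{bins} bin(s): {base_ghz + bins * BCLK_GHZ:.2f} GHz")
```

That prints roughly 2.53, 2.67, 2.80, 2.93, and 3.07 GHz, so the quoted 2.53 figure is one bin up and a figure near 3.06 GHz would correspond to about five bins above a 2.4 GHz base.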
 
You're saying kids at home in their garage could put extreme cooling systems in their machines and overclock to 4 GHz. That's great and all, but that doesn't translate to a chip Intel can put on the shelf at Fry's that's rated at 4 GHz. It's simply not a path that can be sustained...

Yeah, that's what I'm saying. And I understood what cmaier pointed out when I said it. IMO, the cooling solutions that have been employed so far are juvenile and antiquated. Corrugated substrates, different material compounds, etc. are just now starting to be considered and implemented. These will be developed further, I guess. As they are, we will likely see 8- and 16-core chips operating at 6 GHz or above within just the next few years. The first-round implementation has already delivered 5 GHz successfully.

https://forums.macrumors.com/posts/8512307/
http://www.youtube.com/watch?v=ioCZojN4A0g

Also in the opposite direction:
http://www.youtube.com/watch?v=l8FUmS1h-5U
http://www.youtube.com/watch?v=UlZ_IoY4XTg


.
 
I'll be ordering 2 when they come out. :p

That's good. I'm sure it will be a heck of a nice machine.
I always say that if Apple is happy, then I'm happy. Looking at the record profits for the last quarter, I'd say that Apple is very happy.
.
 
I guess what I am implying is that for the market these machines are intended for, there will never be enough cores or enough computing power.

In the DCC market a faster machine does not mean the projects get completed faster; it usually ends up meaning the quality continually improves, due to more iterations being enabled or more complex things being done in the render.
 
I guess what I am implying is that for the market these machines are intended for, there will never be enough cores or enough computing power.

In the DCC market a faster machine does not mean the projects get completed faster; it usually ends up meaning the quality continually improves, due to more iterations being enabled or more complex things being done in the render.

You can't prove it by me.
My video editing times have not improved greatly.
My only hope is that applications will be written for Snow Leopard that will make my computer faster... I hope this happens in my lifetime.
.
 
Yeah, that's what I'm saying. And I understood what cmaier pointed out when I said it. IMO, the cooling solutions that have been employed so far are juvenile and antiquated. Corrugated substrates, different material compounds, etc. are just now starting to be considered and implemented. These will be developed further, I guess. As they are, we will likely see 8- and 16-core chips operating at 6 GHz or above within just the next few years. The first-round implementation has already delivered 5 GHz successfully.
.

5GHz x86? Or just some RISC thing? If it ain't x86, it don't count. (Yes, I know you can hit 5GHz easily on x86 with suitable cryo cooling - I just don't know what you are specifically referring to).

There are many existing cooling technologies that could allow an existing x86 to operate at 5GHz (at least for a while - more on that in a second). There are things developed by DEC in the early 1990s that never made it to the mainstream, various active cooling techs, heatpipes, diamond sheets, etc. None of them are practical for consumer/commercial desktops at typical price points, though.

Of course, one of the problems with running existing chips at such high speeds is that the current is proportional to speed (because with each clock cycle, all of the capacitance of the wires and transistor gates gets charged or discharged). In a typical x86 processor, some of that current tends to be unidirectional (flows in one direction substantially more often than the other). This exacerbates electromigration, which results in eventual faults due to voids or high-resistance portions of the metal. There are also transistor failure modes that are exacerbated (the hot carrier effect, etc.). This significantly reduces the device lifetime.
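As a rough illustration of the "current is proportional to speed" point, average switching current scales roughly as I ≈ C·V·f, since a charge of C·V moves each cycle (the capacitance and voltage below are made-up ballpark values, not figures for any particular chip):

```python
# Average switching current I ~ C * V * f: a charge of C*V moves every cycle.
# Ballpark illustrative values only.
c_switched = 25e-9  # effective switched capacitance per cycle, farads (assumed)
vdd = 1.1           # supply voltage, volts (assumed)

for f_ghz in (2.0, 3.0, 4.0, 5.0):
    amps = c_switched * vdd * f_ghz * 1e9
    print(f"{f_ghz:.1f} GHz -> ~{amps:.0f} A average switching current")
```

Going from 3 GHz to 5 GHz pushes this assumed chip from roughly 80 A to nearly 140 A, and it's that extra current density that accelerates electromigration.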

In one of those sad facts of life, the things engineers do to increase lifetime of the devices in light of these effects tend to result in having to slow the devices down.

One other annoyance is that at high clock speed the clock becomes much more of a nuisance. Ideally the clock would arrive at each state element simultaneously (this isn't quite true, but it simplifies the discussion). If logic proceeds from A to B to C, and the clock arrives later at B than at A and C, then you have less than a cycle to get your B->C work done, and more than a cycle to get your A->B work done. Presumably your cycle time is set by the time it takes your work to get done, so A->B has extra wasted time, and B->C won't get done in time. So you slow your clock down so B->C can get done in time, and A->B has even more wasted time. But, of course, you might also have an A->C path, a C->B path, etc. It becomes a huge hassle to manage all that. So more and more effort goes into getting the clock to arrive exactly when you want, but you can only get so close before you've spent half your power budget and chip area on distributing clocks.
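A tiny sketch of that skew arithmetic (the cycle time, logic delays, and skew are invented purely for illustration):

```python
# Clock skew eats into the cycle: if the clock reaches B later than A and C,
# the A->B path gains time while the B->C path loses it. Illustrative numbers only.
cycle_ps = 250         # target cycle time at 4 GHz, picoseconds
skew_at_b_ps = 30      # clock arrives 30 ps late at B (assumed)
logic_a_to_b_ps = 240  # combinational delay A->B (assumed)
logic_b_to_c_ps = 240  # combinational delay B->C (assumed)

slack_a_to_b = (cycle_ps + skew_at_b_ps) - logic_a_to_b_ps  # positive = wasted time
slack_b_to_c = (cycle_ps - skew_at_b_ps) - logic_b_to_c_ps  # negative = path fails

print(f"A->B slack: {slack_a_to_b} ps (wasted)")
print(f"B->C slack: {slack_b_to_c} ps (fails; the clock must slow by at least {-slack_b_to_c} ps)")
```

With these made-up numbers the B->C path misses timing by 20 ps, so the whole chip's clock has to slow down even though A->B finishes with 40 ps to spare.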

It's a nightmare, I tells ya!
 
It's the IBM Power6 fitted to the Hydro-Cluster spec. It's an IBM 801-derived RISC, as developed under the DARPA VLSI Program. Although most people no longer consider the RISC (Really Invented by Seymour Cray :D) or CISC distinction meaningful when discussing modern CPU architectures. The two once-distinct architectures have merged and overlapped on so many levels.
http://www-03.ibm.com/press/us/en/pressrelease/21580.wss


Good overview of ancient tech BTW, serious kudos! Well, maybe not quite ancient, but certainly considered and engineered for long ago. I'm not saying it no longer applies, just that solutions have already been developed. Your thinking about the power coefficient, for example, doesn't seem to consider the lower temperatures that can be achieved by alternate chip-level cooling technologies, which are indeed "practical for consumer/commercial desktops at typical price points," relatively speaking. By 1990 we were already past some of the troubles you're citing here with OoOE paradigms. The very terms "RISC" and "CISC" are antiquated when considering any common DT or enterprise CPUs, to be sure.

"The POWER6 processor is built using IBM’s state-of-the-art 65 nanometer process technology. Coming at a time when some experts have predicted an end to Moore’s Law, which holds that processor speed doubles every 18 months, the IBM breakthrough is driven by a host of technical achievements scored during the five-year research and development effort to develop the POWER6 chip. These include:
  • A dramatic improvement in the way instructions are executed inside the chip. IBM scientists increased chip performance by keeping static the number of pipeline stages – the chunks of operations that must be completed in a single cycle of clock time -- but making each stage faster, removing unnecessary work and doing more in parallel. As a result, execution time is cut in half or energy consumption is reduced.
  • Separating circuits that can’t support low voltage operation onto their own power supply “rails,” allowing IBM to dramatically reduce power for the rest of the chip.
  • Voltage/frequency “slewing,” enabling the chip to lower electricity consumption by up to 50 percent, with minimal performance impact.
  • A new method of chip design that enables POWER6 to operate at low voltages, allowing the same chip to be used in low power blade environments as well as large, high-performance symmetric multiprocessing machines. The chip has configurable bandwidth, enabling customers to choose maximum performance or minimal cost.
The POWER6 chip includes additional techniques to conserve power and reduce heat generated by POWER6 processor-based servers. Processor clocks can be dynamically turned off when there is no useful work to be done and turned back on when there are instructions to be executed.
Power saving is also realized when the memory is not fully utilized, as power to parts of the memory not being utilized is dynamically turned off and then turned back on when needed. In cases where an over-temperature condition is detected, the POWER6 chip can reduce the rate of instruction execution to remain within an acceptable, user-defined temperature envelope."

And the Power6 is the fruit of engineering efforts which began in 2004/5. Shortly after its official release on June 8, 2007 (at 4.7 GHz, bumped to 5.0 GHz in May 2008) it was slated to operate at 6 GHz without any troubles to speak of. The Power7 chips IBM is to release in 2010 will ship at 4.0 GHz, 8 cores, 4 threads per core, on a 45nm process as a first introduction. This isn't the single-thread-focused Power6. It's a true multi-core chip, which should stack up very, very well against Sun's 16-core Rock and what will likely be an eight-core version of Itanium around 2010. No doubt 6 GHz systems will follow shortly after these initial releases, as per historical product maps and rollouts. In another 5 years or so we will likely be having this same discussion regarding 64-core units operating at 8 to 10 GHz. ;)



.
 
Tess -

A few things. I am aware of what's "practical" and not for desktops. Remember that the price point on desktops has decreased substantially in the last 20 years, as has tolerance for noise, etc. In the 1990's, dissipating 10W per square cm was considered quite a challenge. In 1996 I developed a 100W processor that was considered impossibly hot. In 1998 I developed a 65W processor for Apple (the Exponential x704) that was considered very very very hot for the time, and was difficult to air cool. Obviously cooling technology has greatly improved since then, but we're still at the point where it's difficult to achieve any kind of practical cooling solution (i.e.: one that you can sell a lot of due to price, noise, reliability, environmental factors, etc.) for more than 100W.

Also, it's not SISC, it's CISC. And while CISC has adopted many of the ideas originally pioneered by RISC, I assure you that at a physical level (and thus a heat level), RISC is quite different. There's a reason Intel hasn't been able to get design wins with x86 in the iPhone. A 5GHz PowerPC is quite a different thing from a 5GHz x86 processor. And if I had to choose between a 5GHz PPC and a 3GHz x86, I'd take the latter.

The bullet points from the POWER6 stuff are marketing gibberish. Certainly stuff that's been common for a very long time. All of these tricks have also been used by x86 since at least the late 1990's.

Also, you'll note that systems running Power chips (and Itanium, etc.) cost a heck of a lot more than we'd want to pay for Mac Pros. A nice fraction of that is the systems engineering to deal with the cooling.
 
Tess -

-snip-
Obviously cooling technology has greatly improved since then, but we're still at the point where it's difficult to achieve any kind of practical cooling solution (i.e.: one that you can sell a lot of due to price, noise, reliability, environmental factors, etc.) for more than 100W.

Again, http://www.youtube.com/watch?v=ioCZojN4A0g


-snip-... while CISC has adopted many of the ideas originally pioneered by RISC, I assure you that at a physical level (and thus a heat level), RISC is quite different. There's a reason Intel hasn't been able to get design wins with x86 in the iPhone. A 5GHz PowerPC is quite a different thing from a 5GHz x86 processor. And if I had to choose between a 5GHz PPC and a 3GHz x86, I'd take the latter.

True, agree. Though while early RISC designs were significantly different from contemporary CISC designs, by 2000 the highest-performing CPUs in the RISC line were almost indistinguishable from the highest-performing CPUs in the CISC line.

The bullet points from the POWER6 stuff are -snip- certainly stuff that's been common for a very long time. All of these tricks have also been used by x86 since at least the late 1990's.

Which was exactly my point. ;)


Also, you'll note that systems running Power chips (and Itanium, etc.) cost a heck of a lot more than we'd want to pay for Mac Pros. A nice fraction of that is the systems engineering to deal with the cooling.

Maybe only due to marketing and segment targeting though? As it is with most things. :) Surely you saw what IBM did with their Power architecture already. Wasn't that just a result of catering to the DT market? Just a case in point.
 
It's already been said: multicore is the future. A very tough future for software developers, by the way. But what's obvious is that the cost of CPUs must somehow be controlled; you cannot pay $1,000 for a CPU plus cooling when talking about home computers. It's just insane.

Hexa-core doesn't seem that much of an upgrade. But the same happened when we went from the 3.0 GHz P4 to the 3.2 GHz P4. The advancements in multicore computing are, as is normal, slowing down. C2D was a huge step forward. Nehalem was a big step forward. Hexa-core Gulftown... well, for sure, it won't be a step backwards, but we depend on the software, which is the new situation we have now, different from what we saw in "The GHz Era." Probably, multicore CPUs will mean more expensive programs in the near future, since in order to get all the power that 4 cores, 6 cores, 8 cores, etc. can deliver, developers will have to spend more time on each CPU architecture.

That's only my guess, of course. I'd rather spend more money on buying a fully optimized Photoshop CS "X" which uses all the power Gulftown can deliver than just keep on with these little steps software developers are making.

Oh, and for those who say the Mac Pro with Gulftown won't be a big step forward over Nehalem... Have you seen the new iMacs? Apple always grabs our attention one way or another. Whether it's with powerful new GPUs, the possibility of, for example, putting in two AMD 5870s, or USB 3.0, or SATA 6 Gbps, or whatever Steve Jobs' factory can come up with as a surprise, be sure that if they consider Gulftown not to be enough, they will add those, and even more things, to make the upgrade much more appealing to us. Trust me, it's always been that way.

And another thing: it's been too long since the Mac Pro's last upgrade. Remember, Apple always makes the difference.
 