It's already been said: multicore is the future. A very tough future for software developers, by the way. But what's obvious is that the cost of CPUs must somehow be controlled; you can't pay $1,000 for a CPU plus cooling when talking about home computers, it's just insane.

I at least was talking about sub-$10,000 units, which is where the Mac Pro currently sits. What's the bare-bones 2.93GHz octad currently going for? Almost $7,000, right?


Hexa-core doesn't seem like that much of an upgrade. But the same thing happened when we went from the 3.0GHz P4 to the 3.2GHz P4. The advancements in multicore computing are, as is normal, slowing down.

Depends how you look at it. The Nvidia Tesla CUDA systems are sporting hundreds of cores already. Could that not be the future?



 
Tess -

Nah, it's not just positioning. Lots of things work great in the lab, or in $25,000 workstations where people buy $2,000 annual service contracts, but don't work in mass-produced desktops. Heck, we could increase x86 clock speeds by 30% or more right now (ignoring the reliability and damage issues I already mentioned) with water cooling. Water cooling has been available forever. No one does it (other than a few gamers) on the desktop because it's expensive, unreliable, loud, etc. Peltier junctions are another thing people could use. Expensive, etc. Aerosol cooling is another one. DEC and Sun did it in the 1990s. It's gone nowhere.

And, as for on-chip optimizations, I again point out that RISC is easy to do low power, but you pay for it in performance (that's why everything uses ARM, etc.). I don't care what clock speed PowerPC runs at, because they have a far easier problem to solve. They assume the compiler is doing a lot of the hard work. In CISC, even though there's a RISC-like thing in there doing the computations, there's a whole chunk of logic that has to behave the way a compiler does for RISC. It also means you end up having to deal with all sorts of weird addressing modes, self-modifying code, etc., which means all sorts of additional hardware critical paths. Saying that RISC and CISC are the same is like saying that a Lotus is like a Porsche. Yeah, they both go fast. They both use some of the same fancy engine technologies. But there's a lot more going on in the Porsche that has to be dealt with, including about 500 lbs of weight.

x86 clock rates will slowly climb, particularly "phony" clock rates ("turbo," etc.). The actual, average, per-core clock rate will also climb slowly, but it won't hit 5GHz any time soon (in several years, of course, it will).

AMD will probably go a different route: 2-4 x86 cores running around 3-4GHz, and a bunch of special-purpose RISC-like cores running faster as needed.
 
That's only my guess, of course. I'd rather spend more money buying a fully optimized Photoshop CS "X" that uses all the power Gulftown can deliver than keep on with these little steps software developers are making.

For once, I'd like to see a new version of a software application consume less disk space and run faster than its predecessor. Credit to Apple for appearing to pull this off with SL... now let's see some apps follow suit. However, I fear that if our hopes are pinned on ever-more-bloated apps, then we are doomed!
 
Adobe and Microsoft have pretty much already said they're rewriting their apps in Cocoa for Snow Leopard and full 64-bit goodness. Apple has not said they are, but essentially implied as much by rewriting nearly all the shipping apps in OS X in Cocoa, to be followed by the iLife apps and then the pro apps. It's only a matter of time. Unfortunately, Final Cut Studio just got upgraded and it's on a two-year cycle, so it looks like we're in for a long wait.

Independently of Snow Leopard (though it offers nice technologies to build into graphics and video apps, and therefore more incentive), software developers have been getting rid of legacy code to do things in the Apple-approved fashion. It's just that right now we're in an awkward transition phase.

A quick note on technology transition phases: these have become increasingly tolerable, fast, and less problematic. Witness the shift from OS 9 to OS X, then the shift from PPC to x86, and now the shift from 32-bit to 64-bit. They do suck, but they've been sucking less each time. That's partly because of emulation and compatibility efforts, but Apple seems to have learned how to pull these off really well.
 
It's only a matter of time. Unfortunately, Final Cut Studio just got upgraded and it's on a two-year cycle, so it looks like we're in for a long wait.

Someone please just shoot me!
They are also saying that there may not be a QTX Pro until 10.8.
 
What do you need QTX Pro for, anyway? Most of the functionality is duplicated in QTX or otherwise available to you in QuickTime 7 for free if you upgraded to SL. Whatever you're using QT Pro for, there are better or free tools available...

My understanding is that QT Pro as a product is discontinued.

Yeah, it's lame about FCS... it was a really disappointing upgrade, except for the people who needed exactly the features they added. I'm kind of hoping the next upgrade comes faster since that one was so minimal... but it's not very likely.

In order to upgrade FCS, not only do they need a Cocoa rewrite, but they also have to rip out everything referencing the classic QT framework and replace it with hooks for the new QT X framework, adding functionality to QT X as needed.
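For a rough sense of what the "new framework" side of that looks like, here's a tiny sketch using today's AVFoundation as a stand-in (the file path is made up; the point is just that asset handling becomes a few object calls instead of classic QuickTime C APIs):

```swift
import AVFoundation

// Illustrative only: AVFoundation standing in for the modern media framework.
// The path below is hypothetical.
let url = URL(fileURLWithPath: "/Users/editor/Movies/clip.mov")
let asset = AVURLAsset(url: url)

// Older-style synchronous properties; enough to show the shape of the API.
let seconds = CMTimeGetSeconds(asset.duration)
print("Clip runs \(String(format: "%.1f", seconds)) s across \(asset.tracks.count) track(s)")
```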

On the plus side, FCS 8 will probably be 2-5 times faster than what we're used to, as well as less bloated, better able to use multiple cores, OpenCL-accelerated, and may include native support for more formats.
 
It's already been said: multicore is the future. A very tough future for software developers, by the way. But what's obvious is that the cost of CPUs must somehow be controlled; you can't pay $1,000 for a CPU plus cooling when talking about home computers, it's just insane.

Honestly, as a software developer, I think the whining about software support has been overblown.

Open Activity Monitor and look at how many threads the software you're using today has. It may not be maxing out your CPU on all the cores, but there is definitely work being spread around.

This isn't 1999, people. Computer scientists have been threading for a long while now. We may not be getting 6GHz out of your 3GHz Core 2 Duo, but your second core is not just sitting there taking a nap. Not to mention, if you run more than one program at a time (which I'm assuming most people do), you are likely taking great advantage of multiple cores.
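Just to make that concrete, here's a minimal sketch (Swift with Grand Central Dispatch; the workload is invented) of how a single app spreads work across cores, which is exactly what shows up as extra busy threads in Activity Monitor:

```swift
import Foundation

// Made-up workload: sum a large array, one chunk per core.
let data = (0..<8_000_000).map { Double($0) }
let cores = ProcessInfo.processInfo.activeProcessorCount
let chunkSize = (data.count + cores - 1) / cores

// concurrentPerform blocks until every iteration finishes, and spreads the
// iterations across the available cores -- so even a "single" program keeps
// more than one core busy.
DispatchQueue.concurrentPerform(iterations: cores) { i in
    let start = i * chunkSize
    let end = min(start + chunkSize, data.count)
    guard start < end else { return }
    let partial = data[start..<end].reduce(0, +)
    print("chunk \(i): partial sum = \(partial)")
}
```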


I'm working with a Tesla this quarter on the academic side of my life. Yes, this is considered the future. That's why CS students are working with them these days. :)
 
And if I had to choose between a 5GHz PPC and a 3GHz x86, I'd take the latter.
Why? Most compute loads aren't bound by the number of cores; I'd expect the PPC to be much faster than the x86 in ordinary operation. Besides, we don't have to limit ourselves to PPC and x86. I'd bet a 5GHz MIPS R10k would destroy both.
 
Why? Most compute loads aren't bound by the number of cores; I'd expect the PPC to be much faster than the x86 in ordinary operation. Besides, we don't have to limit ourselves to PPC and x86. I'd bet a 5GHz MIPS R10k would destroy both.

Because single-thread speed on two architectures cannot be compared simply by comparing clock speeds. Each instruction on x86 does more than each instruction on PPC, so at identical clock speeds an x86 will perform faster (though it will likely burn more power) than a PPC.
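To put rough (completely made-up) numbers on that, the usual back-of-the-envelope formula is time = instructions x cycles-per-instruction / clock; because a CISC chip typically needs fewer instructions for the same task, equal clocks don't mean equal speed:

```swift
import Foundation

// Back-of-the-envelope CPU time: time = instructions * CPI / clock.
// Every number here is invented purely to illustrate why clock speed
// alone doesn't determine single-thread performance.
func runtime(instructions: Double, cpi: Double, clockHz: Double) -> Double {
    instructions * cpi / clockHz
}

// Same hypothetical task, same 3GHz clock, same CPI -- but the denser
// CISC encoding needs fewer instructions, so it finishes sooner.
let x86 = runtime(instructions: 1.0e9, cpi: 1.0, clockHz: 3.0e9)   // ~0.33 s
let ppc = runtime(instructions: 1.5e9, cpi: 1.0, clockHz: 3.0e9)   // ~0.50 s

print(String(format: "x86: %.2f s, PPC: %.2f s at the same clock", x86, ppc))
```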

At Exponential, when I was designing a 500MHz PPC, a 333MHz Pentium blew our chip away in terms of actual performance.

And the MIPS R10000 is even weaker per instruction than PPC.
 