Tesselator,
Very interesting reading your information on MT/HT technology. I remember getting my Pentium PC with HT almost 7 years ago... they were pretty proud of it; I think I paid a 10-15% premium at the time.
I am curious though, with Snow Leopard and Windows 7 on the way, will software developers be able to code their games/applications to take advantage of BOTH the main CPUs and the GPUs?
Beats me. It looks like some will; I assume most won't. A lot depends on the frameworks and tools available, I guess.
I have read a bit on this recently and it seems like a great idea, especially when the graphics system isn't being taxed and the CPUs are. Just curious: to use your locomotion analogy, it seems irrelevant where the extra cores or processors are; the software architecture needs to change to take advantage of them.
Yeah, I was speaking in general truths. Another thing to consider is that as NEW apps are created, their internal architecture can MUCH more easily be fitted to the new concept of multiple computing resources. For example, if PS were being created today, would the developers treat the API in the same way, or would they choose a different internal structure where the processing pipeline was designed with multi-resource computing in mind? Currently it's not, and if they change it, what are the pros and cons? How many 3rd-party developers would have to rewrite their products from the ground up? How much increased speed would actually be achieved? Etc. As it stands right this minute, the answers to those questions don't warrant a rewrite of PS, and the same is probably true for many applications. With OpenCL and "better schedulers" (whatever that might mean) this may change, but I remain a skeptic for the most part.
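Just to give a rough idea of what I mean by a pipeline built with multi-resource computing in mind, here's a toy sketch of my own (nothing to do with PS's actual code; the image size, worker count, and brighten_strip filter are all made up). Because the filter works on independent strips of the image, each strip can simply be handed to its own worker thread:

/* Toy sketch: an image filter structured so independent strips of the
 * image can be processed by separate worker threads. All names and
 * sizes here are invented for illustration. Build with: cc -pthread */
#include <pthread.h>
#include <stdio.h>

#define WIDTH   4096
#define HEIGHT  4096
#define WORKERS 8          /* e.g. one worker per core on an 8-core box */

static unsigned char image[WIDTH * HEIGHT];

struct strip { int first_row, last_row; };

/* Each worker brightens its own rows; no worker touches another's data,
 * so there is no locking and the strips can run fully in parallel. */
static void *brighten_strip(void *arg)
{
    struct strip *s = arg;
    for (int y = s->first_row; y < s->last_row; y++)
        for (int x = 0; x < WIDTH; x++) {
            int v = image[y * WIDTH + x] + 16;
            image[y * WIDTH + x] = v > 255 ? 255 : v;
        }
    return NULL;
}

int main(void)
{
    pthread_t threads[WORKERS];
    struct strip strips[WORKERS];
    int rows_per_worker = HEIGHT / WORKERS;

    for (int i = 0; i < WORKERS; i++) {
        strips[i].first_row = i * rows_per_worker;
        strips[i].last_row  = (i == WORKERS - 1) ? HEIGHT
                                                 : (i + 1) * rows_per_worker;
        pthread_create(&threads[i], NULL, brighten_strip, &strips[i]);
    }
    for (int i = 0; i < WORKERS; i++)
        pthread_join(threads[i], NULL);

    printf("processed %d rows with %d workers\n", HEIGHT, WORKERS);
    return 0;
}

The point is just that when the pieces of work are independent from the start, the threading falls out naturally; retrofitting that onto a pipeline that was never designed for it is where the rewrite cost comes in.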
I must admit, I was (I guess I still am, given my experience with HT 7 years ago) optimistic as well about Snow Leopard on the horizon taking advantage of these multi-core machines. It makes me less concerned about missing anything with the last-generation 3.0 GHz 8-core. All I do is video/audio/photo editing and graphic layout with our Mac (business). It's always a write-off, but spending more money still has to make sense. Faster rendering, less waiting, and quicker, more efficient software always mean more time to make more money.
Is there a way to take advantage of these technologies or is it all HYPE?
Wait and see is the only reasonable answer to that. It's certainly not all hype, but usually the hype is much greater than the reality. This is the computing industry. They need the hype to generate expectations and excitement in order to sell their goods.
I know the integration of caches into the CPUs, and better/faster, more efficient memory (RAM and quicker seek times on the HDs), will improve as time moves forward, but are we capped on CPU (horse)power for the time being? Are these speed improvements only attributable to the memory systems, motherboards, faster hard drives, etc.?
No, I don't think we're "capped". And when one technology reaches its limits, another will be introduced. There are already several ready and waiting, but they will wait a bit longer.

Remember, these are companies we're talking about, and their goal is maximum yield from the least amount of effort - as it is with any publicly held company or corporation. Also to consider is that there are powers in the status quo that do NOT want a level playing field. For example, we (the US and UK) have long limited what technologies we will allow to be exported. The same powers do indeed limit what we're "allowed" to have and have access to. Some of the quote-unquote nutty speculations you hear about what the government has in secret are true. I saw the work that was done on the "StarWars" project in the early '60s and late '50s. This wasn't introduced for public awareness until the mid-'80s, some 25 years later. And top NASA and university scientists across the western world all said there was no such thing and we wouldn't be able to achieve it for many years to come. LOL. They were saying these things all the while it already existed - I know first hand. But I digress.
It does seem like a snake-oil pitch, the multi-core, Hyper-Threading, virtual-core spiel from the chip manufacturers, if what you say is true (and I have no reason to doubt you).
I dunno. Is it snake oil if it's 5% true? What about 10% or 20%? There's some truth in it for sure; how much is the question. Since this is exactly the same technology already introduced some (as you say) 7 years ago, we should already know pretty much what to expect. I don't believe it will be wildly different. Right? First we had single processors, then multiple processors, then HT in both single- and multi-processor systems, then multi-core processors, and now multi-core with HT. At each point along the way the OSes and apps had to be retuned for the new architecture. We're on the edge of seeing what this round of retuning will be like. I say it won't be much different than the retuning we saw for the first round of HT - which was "better" but nothing astounding or ground-breaking.
You didn't really address my points (which I maintain are accurate), but I'll take a stab at the straw men you replaced them with...
Huh? Your whole "point" was based on pure fiction unless you're beta-testing 10.6 and know something we don't - and even then it would still be partial fiction as the products and developments you're making preemptive claims about don't exist or haven't been released yet. So you're coming from pure speculation in the first place. That's cool - I like to speculate - but we can't claim our speculations are absolutely accurate. Call a spade a spade.
Actually, it's older than that. Age isn't the issue. Hardware has changed drastically over the years, and the scheduler algorithms have to catch up. Ideally, you'd schedule Nehalem differently than a P4. Multiple cores/CPUs on a desktop machine are still a relatively new concept, and the software has to catch up; it's just life.
Age is the issue in that maturity comes with age - code and architecture maturity. Multi-processors have been around and very common for 25 years that I know of. Windows NT 3 had affinity settings which would allow up to 16 processors. I got my first dual in the EARLY '80s, and multi-resource computing is MUCH older than that. It's not relatively new at all unless you wish to compare mechanical computers from the 16th and 17th centuries. Multi-processor desktops have existed pretty much since there were desktops, and their popularity increased right alongside mainstream electronic computing.
I said that 8 real cores would beat 4 real + 4 virtual. That will always be true, all other things being equal. The HT cores share pieces with the "real" cores, so there will *always* be contention that doesn't happen with distinct cores. Otherwise, I agree.
Yeah, I know what you said. But comparing the same number of virtual cores with that number of physical cores isn't much fun and isn't really fair. HT is a feature of a processor, not a replacement for other processors. Sometimes it's an advantage, sometimes not, and sometimes it's a disadvantage - when we consider stability issues.
The kernel didn't get thread load balancing until Leopard; it's still a 'new thing' for our Macs. It's not really the couple of percent they use while idling, it's the context swaps that happen when your app has to share with 400-ish system threads. Carving them up in a more thought-out manner will help tremendously. Building your system from the start with an eye toward multi-core operation is going to be more efficient than bolting it on later.
I'm not sure what you're talking about, but Mac OS has had scheduling and migration with dynamic task assignment since Mac OS had multitasking. You really can't have one without the other. And those are "thread load balancing", so you've lost me.
Some applications work well multi-threaded, others do not; no argument there. The point was that for appropriate apps, writing multi-threaded code is not orders of magnitude more difficult. There are plenty of single-threaded apps still out there that could benefit greatly from a rewrite with a multi-threaded approach. (And yes, some others never will.)
No one said it was always too difficult or even always very difficult. I brought up the fact that it's often not worth it, as the code will either suffer poorer execution speeds or not enough speed increase will be realized to justify the effort. It's a simple matter of cost-to-profit analysis for the company. And of course the fact remains that most of the applications we use today simply cannot be multi-threaded in any significant way. They need the result from operation one in order to calculate operation two. Simple as that.
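To show what I mean by one operation needing the result of another, here's a made-up pair of loops of my own (not from any real app). The first can't be split across cores because each step feeds the next; the second can, because every element stands alone:

/* Toy illustration of why some work can't be split across cores.
 * Both loops are invented examples, not from any real application. */
#include <stdio.h>

#define N 8

int main(void)
{
    double serial[N], parallel[N];

    /* Serial dependency: step i needs the result of step i-1,
     * so the iterations must run one after another. */
    serial[0] = 1.0;
    for (int i = 1; i < N; i++)
        serial[i] = serial[i - 1] * 0.5 + 1.0;

    /* Independent work: each element depends only on i, so the
     * iterations could be handed to different cores safely. */
    for (int i = 0; i < N; i++)
        parallel[i] = (double)i * (double)i;

    printf("last serial value: %f, last parallel value: %f\n",
           serial[N - 1], parallel[N - 1]);
    return 0;
}

No compiler or scheduler can do anything about the first kind; the dependency is in the math itself.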
I'd much rather have a 20 GHz CPU than 8 x 2.5, but that's not likely to happen anytime soon, so I'll be happy with what I have.
Yes, until we can do it for ourselves, we have to be satisfied with (or at least accept) what we're handed - or do without.
Isn't that graph a bit off though?
It can correctly be assumed to be "off" in either direction depending on the application base you're testing. For example, if you're only testing rendering engines, then it scales almost linearly: eight processors or physical cores end up at roughly 750%~800% of a single core/processor's speed. In the opposite direction, if you test apps that cannot be, or still are not, multi-threaded at all, then the graph looks almost completely flat, with 8 cores running at about the same speed as a single core. How they came by their numbers I dunno, but I can see some truth in it.
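For what it's worth, the arithmetic behind that is just Amdahl's law: speedup on n cores is 1 / ((1 - p) + p/n), where p is the fraction of the work that can actually run in parallel. A quick throwaway program of mine (not whatever they used to make the graph) shows both ends of the curve on 8 cores:

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n)
 * where p is the parallel fraction of the work and n the core count.
 * A quick sketch of my own to show why renderers scale almost
 * linearly while single-threaded apps stay flat on an 8-core box. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.0, 0.5, 0.9, 0.95, 1.0 };
    const int cores = 8;

    for (int i = 0; i < 5; i++)
        printf("parallel fraction %.2f -> %.2fx speedup on %d cores\n",
               fractions[i], amdahl(fractions[i], cores), cores);
    return 0;
}

With p at 1.0 (a renderer) you get the full 8x, and with p at 0 (an app that can't be threaded at all) the line stays flat at 1x - which is pretty much the spread that graph is trying to show.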