That is not true at all. The OS is what controls things such as thread priority and scheduling. If the OS is more efficient at this, it will automatically improve the performance of multithreaded applications.
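To make the division of labour concrete, here's a minimal POSIX sketch (illustrative only; SCHED_RR and the priority value 10 are arbitrary choices for the example): the application can only *request* a scheduling policy and priority, while the OS scheduler has the final say over what actually runs and when.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Worker thread; the actual work is irrelevant here. */
    static void *worker(void *arg) {
        (void)arg;
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_attr_t attr;
        struct sched_param param;

        pthread_attr_init(&attr);
        /* Ask for an explicit (non-inherited) scheduling setup... */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_RR); /* a request, not a guarantee */
        param.sched_priority = 10;                    /* illustrative value */
        pthread_attr_setschedparam(&attr, &param);

        /* ...but the kernel decides: this can fail outright (e.g. without
           the right privileges), and even on success the OS scheduler
           still controls when the thread actually gets CPU time. */
        if (pthread_create(&t, &attr, worker, NULL) != 0)
            fprintf(stderr, "pthread_create: request refused by the OS\n");
        else
            pthread_join(t, NULL);

        pthread_attr_destroy(&attr);
        return 0;
    }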
The coders need to improve their applications first; if that isn't done, OS-level optimization is much less useful.
Marketing trick? Would you prefer them going back to the stupid idea of pushing clock frequency farther and farther? I seriously prefer multi-core architectures... after all, I like either having many small programs open (which means many threads) or having big programs open that use many threads.
Not only is it better in the second case but also in the first, because of the reduced latency: an actual software thread can be running on actual hardware without a context switch (and context switches eat an awful lot of CPU cycles on normal single-threaded CPUs).
It's also an awful lot better to have smaller cores than big beasts like the G5, with its 5-instruction-wide ILP, for which it is notoriously difficult to write compilers.
The marketing trick I have in mind is that chip makers present their products in a way that makes people think multicore processors will give them double the performance for double the cores, which is not true. I don't see what's so stupid about higher clock frequencies: an increase in frequency translates directly into better performance and is also easier for customers to understand. Chip makers started using the multicore approach (not for that reason alone, of course) because they ran into material problems that prevented them from easily reaching higher frequencies.
Reduced latency is a valid point, although it's questionable how much (if at all) it affects the end user. I agree with you that multitasking is a good use for multiple cores, but that's now, using 2, 4, 8 cores. As someone mentioned, there are plans for 32, 64 and more cores on a single processor, which gets increasingly pointless.
You know why? Because many developers have been lazy and have taken the easy road whenever they could, a.k.a. "let the chip maker make up for it with MHz".
Also, another thing about Amdahl's law:
The important thing is "time spent in code", not the code itself. Normally 90% of the time is spent in 10% of the code, which most likely consists of loops, which can then most likely be made parallel.
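As a back-of-the-envelope illustration, here is a minimal sketch in C of the textbook form of Amdahl's law, using the 90% rule of thumb above (the numbers are illustrative, not measurements):

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
       fraction of the runtime that can run in parallel and n is the
       number of cores. */
    int main(void) {
        const double p = 0.90; /* 90% of time in parallelizable loops (rule of thumb) */
        const int cores[] = {2, 4, 8, 16, 32, 64};

        for (int i = 0; i < (int)(sizeof cores / sizeof cores[0]); i++) {
            double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
            printf("%2d cores -> %5.2fx speedup\n", cores[i], speedup);
        }
        return 0;
    }

Even with 90% of the time parallelizable, the speedup is capped at 10x no matter how many cores you add: 8 cores already give about 4.7x, while going from 32 to 64 cores only moves you from roughly 7.8x to 8.8x.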
That said, there are plenty of features coming in the next generations of Intel CPUs which can easily increase performance a lot if they are used, like the crossbar interconnect between the cores and, AFAIK, a shared level-3 cache on die.
Don't forget the cache either: with more cores you very likely get more on-die level-1 and level-2 cache as well, which can improve performance considerably, especially with lazy code that puts multidimensional arrays in contiguous blocks.
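A minimal sketch of why that matters (the 1024x1024 size is arbitrary): a 2-D array stored as one contiguous block rewards a traversal order that matches the layout, and the bigger the on-die cache, the more of the array survives between passes.

    #include <stdio.h>

    #define N 1024 /* arbitrary size, chosen for illustration */

    /* One contiguous block, row-major: a[i][j] and a[i][j+1] are adjacent. */
    static double a[N][N];

    /* Cache-friendly: walks the block sequentially, so each loaded
       cache line is fully used before moving on. */
    static double sum_row_order(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Cache-hostile: each access jumps N * sizeof(double) bytes ahead,
       so unless the cache can hold a large chunk of the array, nearly
       every access is a miss. */
    static double sum_column_order(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_row_order(), sum_column_order());
        return 0;
    }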
Heh, lazy coders are a fact of life, of course. Everyone knows the good old "It doesn't matter how good the hardware is, because the software boys will piss it away", but I think that's the cold, hard truth we have to accept.
You're right about Amdahl's law too, although, again, the more cores we add, the less each additional one contributes. We need to stop at a reasonable limit.
Can't argue about the cache either, but what you describe is merely a consequence: more cache could easily be added to processors with fewer cores too; right now it's just not (financially) worth it.
All in all, I have probably gone a bit overboard, yes. My comments here were aimed mostly at the mainstream market. Some people described very specific professional uses where multicore CPUs will certainly come in handier (although a lot can be done, we need better memory performance to utilize 16 and more cores, and the monolithic design takao mentioned wouldn't hurt either), while home users will still profit much more from actual architecture improvements and higher frequencies than from a large number of cores.
OFFTOPIC: Pleasantly surprised at the number of people with great knowledge on this topic.
