I've already explained that this analogy is invalid and is based on a flawed simplification of how processors work. If a two-core processor achieves a score of 1000, you simply cannot assume that one core would achieve half of that score - it does not work that way. You don't double the processing power by adding another core. This score is given to the whole architecture, and it depends on many factors. If you want to learn more, read this white paper.
You're both half right. And what both of you are talking about is also covered in that white paper you mentioned.
You're right that you can't simply divide the benchmark score by the number of cores. Nor multiply it. Because the overhead of managing N cores grows far faster than linearly (see the quick sketch below).
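To make that non-linearity concrete, here's a toy calculation using Amdahl's law. This is purely my own illustration, not anything out of the white paper; the 1000 baseline score and the 70% parallel fraction are numbers I picked out of thin air.

```python
# Toy model only: Amdahl's law with made-up numbers, to show why a
# multi-core score belongs to the whole chip, not "score per core".
def amdahl_speedup(n_cores: int, parallel_fraction: float) -> float:
    """Ideal speedup on n_cores when only parallel_fraction of the work scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

single_core_score = 1000          # hypothetical baseline, not a real chip
for n in (1, 2, 4):
    # 70% parallel is an arbitrary assumption for illustration
    print(n, "core(s):", round(single_core_score * amdahl_speedup(n, 0.70)))
# 1 core(s): 1000
# 2 core(s): 1538   (not 2000)
# 4 core(s): 2105   (not 4000 -- and you can't work backwards to 1000
#                    without already knowing the parallel fraction)
```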
But knowing why this overhead exists means there are a number of things you can deduce... one of which is what holmesf and tbrinkma were trying to say. (Although how tbrinkma tries to reason it through is completely wrong; only his conclusion is correct.)
What holmesf and tbrinkma were trying to say:
: for the average application, which is primarily single-threaded
: a dual core CPU that scores 1600 in an arbitrary multi-core benchmark versus
: a quad core CPU that scores 1600 in an arbitrary multi-core benchmark,
: the average application will actually run significantly faster on the dual core CPU than on the quad.
To which you said:
Wha-? "Larger performance advantage than indicated by the benchmark"? Keep dreaming...
Let me try to explain it in simpler terms for other readers.
Suppose that you have 1 core. It scores 1000 on its own.
Make it a dual core. Now it scores 1300. (300 improvement with 1 additional core)
Make it a quad. Now it scores 1500. (200 improvement with 2 additional cores)
Suppose you have a different core. It scores 1200.
Make that core a dual. Now it scores 1500.
You post that to a forum. Mass mayhem, cheering, and booing ensues.
Somebody inevitably shouts "but they get practically the same score!"
Had you run the average application on both devices, you'd be using just one or two cores, unlike the benchmark. And given that breakdown, you'd see that for a single-threaded app, the dual core CPU has a 20% advantage, and for a two-threaded app, a 15% advantage, despite the two chips having roughly equal benchmark scores.
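For anyone who wants to check that arithmetic, here it is spelled out, using nothing but the made-up scores from the example above:

```python
# The made-up scores from the example above
scores_a = {1: 1000, 2: 1300, 4: 1500}   # first chip: single, dual, quad config
scores_b = {1: 1200, 2: 1500}            # second chip: single, dual config

def advantage(b: float, a: float) -> str:
    """Percentage advantage of score b over score a."""
    return f"{(b / a - 1) * 100:.0f}%"

# The multi-core benchmark makes them look "practically the same":
print("benchmark :", scores_a[4], "vs", scores_b[2])        # 1500 vs 1500

# But the average app only lights up one or two cores:
print("1 thread  :", advantage(scores_b[1], scores_a[1]))   # 20%
print("2 threads :", advantage(scores_b[2], scores_a[2]))   # 15%
```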
Inevitably, somebody will also chime in and say "20% and 15% is insignificant!" At which point I'd have to point to how big a deal the world made when Intel went from Core to Core 2 for the same 15-20% gain at the same clock, and then again from Core 2 to Nehalem, also at the same clock. And likewise, it's the average difference between a Cortex A8 and an A9 at the same clock as well.
The example I gave above uses the 1GHz data from section 5.2.3 of your white paper, at a constant clock rate. If you then add in the fact that the A6 accomplishes scores like this at 60% of the clock speed, it becomes genuinely impressive in its own right: not because it performs fast, but because the significantly slower clock speed is a win for battery life.
With that said, I expect somebody to finally make good on their word and ship an A15 soon, and I expect it to be impressively fast as well. Because realistically, no matter what the fanbois on either side of the debate say, nobody actually makes good use of that much processing power in our pockets; there are too many programmers still accustomed to the excess of resources they had on the desktop. I'm annoyed at how many times I've tried an app from a friend, watched it go all sluggish and crash, and then had to email back "hey, somebody on your team's got a memory leak in their crappy threading calls in class xyz, and your app is a waste of my battery. Fix it."