I think it confirms what many unbiased folks here have been saying: the best choice of machine depends on the applications you use.
Maybe I can provide some insight as to why.
Taking advantage of multi-core systems requires overhead to manage the processes. The same is true of multi-CPU systems. This is why doubling the core count will not double the speed across the board. Some applications are written to take advantage of multiple cores, or are written so that the OS can easily farm their tasks out to multiple cores.
To make an analogy, think of it in terms of the management of a small office. In this office, let's say you have a manager (the OS) and 8 employees (the cores).
Say I have a project that requires 8 tasks (A through H), where each task relies on the prior task being completed. So A must be completed before B, B before C, and so on. You could have each person do one task, but they would be waiting on each other to finish the prior task. So in essence one employee could complete the 8 tasks as quickly as 8 employees could. In reality, the one person could probably do it quicker, since they would not need to wait on the prior person to pass them the information. A manager inexperienced in this type of situation would probably just assign it all to one person and wait for that one person to be done.
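If you want to see what that chain looks like in code, here's a minimal Python sketch. The task functions and what they return are just made-up stand-ins for the office analogy:

```python
# Each task needs the result of the one before it, so the work
# forms a chain: no second worker can start until the first finishes.
def task_a():
    return 1

def task_b(a_result):
    return a_result + 1

def task_c(b_result):
    return b_result + 1
# ... tasks D through H would follow the same pattern

result = task_c(task_b(task_a()))
# Handing these to 8 cores buys nothing: 7 of them would just sit
# idle, waiting for the previous step's output.
```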
Now take the example above, but with a manager who has more experience with this type of tasking. He knows that within each task there are subtasks that other people in the office could help with, saving the main employee some time and effort. This manager would still assign the project to one employee, but would also have other employees assist in completing all 8 tasks. The more experienced the manager, the more he can involve the other employees.
In the examples above, the more experienced and effective the manager, the faster the 8 sequential tasks can be completed. However, no matter how experienced the manager is, he will not be able to drop the time down to 1/8 of the total. Through his experience he may be able to shave off some more, but he will never hit that level, because the tasks themselves are not conducive to being farmed out to the 8 employees.
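If you want to put rough numbers on that, the usual rule of thumb is Amdahl's law: the speedup you can get is limited by the fraction of the work that has to stay sequential. The fractions below are made-up values, just to show the shape of the curve:

```python
def speedup(serial_fraction, cores):
    """Amdahl's law: overall speedup given the fraction of work
    that cannot be parallelized and the number of cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical workloads on 8 cores:
print(speedup(0.90, 8))   # ~1.1x -- mostly sequential, extra cores barely help
print(speedup(0.10, 8))   # ~4.7x -- mostly parallel, still well short of 8x
print(speedup(0.00, 8))   # 8.0x  -- the ideal case real projects rarely hit
```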
At the other end of the spectrum is a project that requires 8 tasks that can all be done at the same time. None of them depends on the others, but at the end the 8 results must be put together into a single package.
In this case, the manager can easily divide the tasks among the 8 employees. Each employee completes their task and returns it to the manager, who then compiles the results into the final package (or tasks that out to one individual). Either way, the tasks are completed in roughly 1/8 the time, plus the time to compile the results.
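That "farm it out and compile the results" project maps naturally onto a worker pool. Here's a minimal Python sketch of the same shape; the work each task does is just a placeholder:

```python
from multiprocessing import Pool

def do_task(task_id):
    # Stand-in for one of the 8 independent tasks; in a real
    # application this would be actual CPU-bound work.
    return task_id * task_id

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        # The manager (the pool) hands one task to each worker...
        results = pool.map(do_task, range(8))
    # ...and then compiles the returned results into the final package.
    print(sum(results))
```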
In the real world, most projects are a combination of the two types. The OS can only go so far; the way the application is written can greatly aid the OS in using the multiple cores that are available. Regardless, there is always overhead in both the app and the OS to take advantage of multiple cores. So while it sounds neat on paper, the actual throughput of multi-core systems can end up being somewhat disappointing at times.
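You can even see that overhead directly: if the individual pieces of work are small enough, the cost of spinning up workers and shuttling data between processes swallows the gains. The snippet below is just an illustration of that point (actual timings will vary from machine to machine):

```python
import time
from multiprocessing import Pool

def tiny_task(x):
    return x + 1   # far too little work to be worth sending to another process

if __name__ == "__main__":
    data = range(10_000)

    start = time.perf_counter()
    serial = [tiny_task(x) for x in data]
    print("serial:  ", time.perf_counter() - start)

    start = time.perf_counter()
    with Pool(processes=8) as pool:
        parallel = pool.map(tiny_task, data)
    print("parallel:", time.perf_counter() - start)
    # On many machines the "parallel" run is actually slower here, because
    # process startup and inter-process communication dominate the tiny tasks.
```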
Anyhow, that's my attempt at a layman's explanation. Granted, it glosses over some issues, but I think it gives a decent picture of why you won't see a doubling of speed when you double the number of cores available.