One question I have - and I'm sure many other non-CS/CE types do too - is how feasible parallelization is for everyday computing tasks, so that we can take better advantage of multicore CPUs. It seems nVidia's claim relies heavily on that possibility, and without it the power benefit seems moot.
There is lots of academic research on this. The general answer (formalized as Amdahl's law) is that beyond a certain point, adding more cores does no good, because communication overhead takes over and because the problem cannot be divided any further.
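To put a rough number on that "certain point", here's a minimal Python sketch of Amdahl's law; the 90%-parallel figure is just an illustrative assumption:

```python
# Back-of-envelope Amdahl's law: if a fraction p of the work can run
# in parallel and the rest is serial, n cores give at most this speedup.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:>5} cores: {speedup(0.90, n):.2f}x")
# Even with 90% of the work parallel, the speedup flattens out near 10x,
# because the serial 10% eventually dominates.
```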
Many types of problems are easily made to run in parallel: finite element analysis (e.g. weather forecasting, material fatigue analysis, circuit simulation), certain types of decoding and encoding, 3-D graphics rendering, and so on.
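What these have in common is that the work splits into independent chunks. A toy sketch in Python, where the hypothetical simulate_cell stands in for one independent unit of work:

```python
# Toy "embarrassingly parallel" workload: every chunk is independent,
# so the cores never have to wait on one another.
from multiprocessing import Pool

def simulate_cell(cell_id):
    # hypothetical stand-in for one independent unit of work,
    # e.g. one finite element or one tile of a rendered frame
    return sum(i * i for i in range(100_000)) + cell_id

if __name__ == "__main__":
    with Pool() as pool:          # defaults to one worker per CPU core
        results = pool.map(simulate_cell, range(64))
    print(len(results), "cells done")
```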
But other types of problems are not easily broken into small bits that can operate in parallel: any time you can't perform the next step of a calculation without the answer to the previous step, you have trouble breaking it into parallel parts.
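For a concrete picture of that dependency chain, here's a toy example (the logistic map is an arbitrary choice; any recurrence where each value feeds into the next behaves the same way):

```python
# A chain where step i+1 needs step i's answer: extra cores are useless,
# because there is nothing independent to hand out.
def iterate(x0, steps):
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map: each value feeds the next
    return x

print(iterate(0.5, 1_000_000))
```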
Even bits that can run in parallel often benefit from high "single-thread" speed to overcome a bottleneck. If you are Pixar and want to render a 3-D movie, you want lots and lots of cores, but you also want each core to run at full voltage and frequency if it can.
So the question is complicated.
Some researchers postulate that you want "heterogeneous" cores, with some cores running slow for parallel operation and at least one core running very fast to handle bottlenecks (I've heard a former CTO at a major CPU design firm say this, as well as a university professor).
To get a feel for this, if you have two cores on your laptop or desktop, load up Activity Monitor and watch the CPUs. Much of the time they are each running at a low percentage. Even if you had one core instead, you would still be running at a low percentage, and the CPU would throttle down its voltage and frequency. So no power savings there (though heat is easier to dissipate because the surface area is bigger - so that's a major advantage of two cores even though the battery drains just as fast).
If one core is at 100%, I usually find that the other is at a pretty high percentage too (e.g. HandBrake). In that case, again, I am not benefiting on battery savings vs. only having one core, since both cores are operating at full frequency. The arguable savings would be that to get identical performance from a single core, I'd have to increase its frequency. That is undoubtedly true for workloads like HandBrake, but such workloads are not common on things like smartphones or tablets.
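The usual way to quantify that trade-off is the dynamic-power rule of thumb P ≈ C·V²·f, where voltage has to rise roughly with frequency. A sketch of the arithmetic (the linear V-f scaling is a simplifying assumption; real chips differ):

```python
# Classic dynamic-power model: P ~ C * V^2 * f. Since voltage must rise
# roughly with frequency, power grows close to f^3, which is why two
# slow cores can beat one fast core on power for parallel work.
def relative_power(f, v=None):
    v = f if v is None else v       # assume V scales linearly with f
    return v * v * f                # in units of the baseline core's power

one_fast = relative_power(2.0)      # one core at 2x frequency: ~8x power
two_slow = 2 * relative_power(1.0)  # two cores at 1x frequency: ~2x power
print(one_fast, two_slow)
```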