You're serious? The iPhone is garnering 50% of all the smartphone profits, and this is achieved with one phone.

There is no denying Android has gained a wave of momentum and popularity. It doesn't matter how they did it; the point is they did it. I see a lot more Android phones than I used to a year ago. There are a lot of iPhones out there, but to dismiss or deny that Android has grown (by 1000%? Don't quote me, but they grew big time) is naive.
 
They do. And they voted with their wallets, hence Android's popularity. More cores means higher performance and better battery life. It's just better in every way.

No it doesn't. It means worse battery life. When you double the number of cores you double the number of transistors and wires that switch in each cycle. This doubles "C" in the power consumption equation: P=CV^2f.
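(For anyone who wants to plug numbers into that equation, here's a minimal sketch of dynamic CMOS power with made-up values, not figures from any real chip:)

```python
# Dynamic CMOS power: P = C * V^2 * f
# C = switched capacitance, V = supply voltage, f = clock frequency
def dynamic_power(c, v, f):
    return c * v ** 2 * f

# Hypothetical numbers, for illustration only
single_core = dynamic_power(1e-9, 1.0, 1e9)  # 1 nF switched at 1.0 V and 1 GHz -> 1.0 W
dual_core   = dynamic_power(2e-9, 1.0, 1e9)  # doubled C, same V and f          -> 2.0 W
print(single_core, dual_core)
```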
 
There is no denying Android has gained a wave of momentum and popularity. It doesn't matter how they did it; the point is they did it. I see a lot more Android phones than I used to a year ago. There are a lot of iPhones out there, but to dismiss or deny that Android has grown (by 1000%? Don't quote me, but they grew big time) is naive.

Actually, it does matter. A good many Android phones are free, not to mention there are a ton of Android handsets on the market; the iPhone has two models, the 3GS and the 4.
 
No it doesn't. It means worse battery life. When you double the number of cores you double the number of transistors and wires that switch in each cycle. This doubles "C" in the power consumption equation: P=CV^2f.

Normally I'd be surprised at such ignorance, but then again this is MacRumors, so...


[Attached screenshot: Nvidia chart comparing single-core and dual-core power consumption]


http://www.infoworld.com/d/hardware/nvidia-wants-pack-more-cores-in-tablets-smartphones-592
Nvidia is looking to pack more CPU cores into mobile devices like smartphones and tablets as a way to improve performance while preserving battery life.
Here you go.

And here's the full PDF from nVidia themselves if infoworld is "too unreliable."

http://www.nvidia.com/content/PDF/t...-Multi-core-CPUs-in-Mobile-Devices_Ver1.2.pdf

Also, that little equation of yours has absolutely nothing to do with this. And it's wrong.
 
Uh huh. Believe that propaganda. And how, exactly, is my equation wrong? I used it for 10 years designing microprocessors at AMD, and it seemed to work there. It also worked when I worked at Sun designing SPARCs and at Exponential designing PowerPCs. I'm pretty sure the laws of physics also apply to ARM cores.

Hmm. Even Wikipedia seems to agree with me (entry "CMOS," under dynamic dissipation). And Google seems to think the equation's right: http://www.google.com/search?q=p+cv2f&ie=UTF-8&oe=UTF-8&hl=en&client=safari

Instead of ignorantly arguing against my correct physics, you should have argued that multicore allows f to be reduced, counteracting the increase in C. Of course, I was ready for that argument, but at least it would have been a technically valid theory.
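(A sketch of that counter-argument with made-up, normalized numbers: if going dual-core really lets you halve f and drop V, the doubled C can be more than offset.)

```python
# Same dynamic-power relation as above: P = C * V^2 * f
def dynamic_power(c, v, f):
    return c * v ** 2 * f

p_single = dynamic_power(1.0, 1.0, 1.0)  # one core flat out (normalized units)
# Two cores: C doubles, but each core runs at half frequency, and the lower
# frequency permits a lower supply voltage (values are illustrative only)
p_dual = dynamic_power(2.0, 0.8, 0.5)    # 2 * 0.64 * 0.5 = 0.64
print(p_single, p_dual)  # only a win if the work actually splits across both cores
```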





 
You posted P=CV^2f

Which is read as P equals CV to the power of 2f. Not the same as what you just typed into Google there.

Maybe if you had worked for ten years at AMD you wouldn't have come up with that incredibly thick post. I don't care what f and CV and whatnot stand for; it's pretty much common sense that two cores decrease the overall workload on the processor. "Propaganda" my ass. nVidia knows a ton more about this than you do.
 
Uh huh. Believe that propaganda. And how, exactly, is my equation wrong? ...

You're arguing with the wrong person. Once he argued that the Samsung phone's SAMOLED screen has great colors, better than the iPhone's display, and someone showed him a webpage with measured results showing the Samsung colors to be inaccurate. This was a report with specific measurement results, written by a Princeton physics PhD who runs his own display tech company, and yet mKTank decided to attack the guy's credentials and just brushed it off as a biased test.

Afterward I just put him on my ignore list. Some people just aren't worth arguing with.
 
You're arguing with the wrong person. ...
Baww, he has a different opinion, I'm gonna tell everyone about it even though I'm wrong.

But ignoring me was smart on your part, I suppose. Saves you from defeat.
 
You posted P=CV^2f ...

Lol. It's not like I can use equation editor here.

And the laws of physics don't care about your "common sense." Unless you can reduce f or V to compensate, adding a core increases C and hence increases dissipated and consumed power. Most workloads are linear and can't be parallelized to the degree needed to compensate for the increased C, so in most cases two cores increase, not decrease, power consumption.
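(To put rough numbers on the "can't be parallelized to the degree needed" point, here's an Amdahl's-law sketch with made-up parallel fractions:)

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel, n = number of cores
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.2, 0.5, 0.9):
    s = amdahl_speedup(p, 2)
    # To finish in the same wall-clock time on two cores, f only needs to drop
    # by a factor of s; unless p is near 1, that is far short of the 2x needed
    # to offset the doubled C.
    print(p, round(s, 2))
```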

And if you don't think I worked at AMD, google me. Or, to save you time, here's a paper I wrote for the IEEE Journal of Solid State Circuits:
http://ieeexplore.ieee.org/iel3/4/13972/00641683.pdf?arnumber=641683

You'll see my AMD affiliation in the bio.
 
Lol. It's not like I can use equation editor here. ...
Doesn't change the fact that you're wrong. That you're calling nVidia's statements 'propaganda' probably means you didn't spend enough time doing whatever you thought you were doing at AMD.
 
No it doesn't. It means worse battery life. When you double the number of cores you double the number of transistors and wires that switch in each cycle. This doubles "C" in the power consumption equation: P=CV^2f.
No. Think of cores as having finer-grained control over the same number of transistors. So if you have 100 transistors (extremely simple example), having one core means you power-manage and execute threads with 100 or 0 (all or nothing).

Two cores give you the flexibility to use 0, 50, or 100, so you can imagine that over time your power consumption is not only better, your thread-execution capability is improved too.

@mKTank, you don't have to pull up a Tegra sheet to prove this. Intel has been doing multi-core architectures for power management and thread efficiency for a while now. Intel's power consumption for processors has been generally trending lower, not higher, even though we see more cores and faster core clock speeds.

Edit: cmaier, I'm not sure how you've missed such a fundamental advancement in processors while having worked at AMD. I hope you've been joking in your posts...
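(A toy model of the "0, 50, or 100" idea above, assuming a power-gated core contributes essentially no dynamic power; leakage and shared logic are ignored.)

```python
# Per-core dynamic power, P = C * V^2 * f, in normalized made-up units
def dynamic_power(c, v, f):
    return c * v ** 2 * f

CORE_C, V, F = 1.0, 1.0, 1.0

def chip_power(active_cores):
    # Only cores that are powered up and clocked burn dynamic power
    return active_cores * dynamic_power(CORE_C, V, F)

print(chip_power(1), chip_power(2))  # 1.0 with one core gated off, 2.0 with both running
```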
 
Doesn't change the fact that you're wrong. That you're calling nVidia's statements 'propaganda' probably means you didn't spend enough time doing whatever you thought you were doing at AMD.

Why don't you stick to rebutting my point instead of personal attacks? Deciding that you know more than someone with a PhD and years of experience, while avoiding my point, is not going to convince anyone.

The Nvidia image you showed doesn't dispute my point. It starts with the assumption that, when going to dual core, they could lower the voltage and the frequency. This is shown on the figure. What they don't address is that most workloads are not easy to split between two cores and that single threads usually need to run at full speed.
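(Rough numbers for that objection, illustrative only: the best case in the figure assumes the work splits so both cores can run slower at a lower voltage, while a single-threaded workload still needs one core at full frequency and voltage.)

```python
# P = C * V^2 * f in normalized units; one unit of C per core
def dynamic_power(c, v, f):
    return c * v ** 2 * f

baseline_single_core  = dynamic_power(1.0, 1.0, 1.0)       # 1.0
best_case_dual        = 2 * dynamic_power(1.0, 0.8, 0.5)   # both cores at half f, lower V: 0.64
single_thread_on_dual = dynamic_power(1.0, 1.0, 1.0)       # one core flat out, the other gated: 1.0 (plus leakage)
print(baseline_single_core, best_case_dual, single_thread_on_dual)
```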

No. Think of cores as having finer-grained control over the same number of transistors. ...


Except that's not what happens. Two cores doubles the number of transistors, because each core is a full core. Superscalar, on the other hand, works the way you are saying, where individual execution pipelines may be shut off if not needed.
 
Why don't you stick to rebutting my point instead of personal attacks? ...

I'm sorry if I offended you, but I want you to be clear on my stance. I can't offer a direct argument of my own against your point because I'm not a physics major. Like I said, I don't even know what P=CV^2f is to begin with. But it is more sound, statistically, to take what nVidia says as true over what you are saying. Not to mention, like I said, it's just common sense when you think about it.

Also, it's not difficult to split the workload between two cores. Programs have been doing it for a decade now; I'm sure Apple could figure out how to make iOS behave properly with two cores. After all, it is running a mobile version of OS X, which is already dual-core ready, is it not?
 
Except that's not what happens. Two cores doubles the number of transistors, because each core is a full core. Superscalar, on the other hand, works the way you are saying, where individual execution pipelines may be shut off if not needed.
You do understand that:

1. transistor power consumption is a function of its consumed area on die?
2. Dual core processors generally aren't twice as large as their single core predecessors.

Also, superscalar is how we have hyperthreading within a single physical core. It's got nothing to do with multiple physical cores.
 
You do understand that:

1. transistor power consumption is a function of its consumed area on die?
2. Dual core processors generally aren't twice as large as their single core predecessors.

Also, superscalar is how we have hyperthreading within a single physical core. It's got nothing to do with multiple physical cores.

1. Sort of. A weak function. The correct function takes into account how often the transistors switch, what voltage they switch at, and how much capacitance they switch. Die size thus gives you a feel for power, because a larger die is likely to have larger capacitance being driven (wires are longer). But that's not relevant to your point.

2. No, but the core area is doubled. Pads, cache, etc. don't necessarily double. But none of that means anything. Most power is consumed in the core, at least in chips with small, ARM-like caches.

Two chips that are the same size may, and usually do, have completely different power dissipation. It's an indicator but it doesn't tell you the answer.
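(A small sketch of why die size alone is a weak predictor: add an activity factor for how often the transistors actually switch, and two same-size chips dissipate very different power. Numbers are made up.)

```python
# P = alpha * C * V^2 * f, where alpha is the activity factor
# (the fraction of the switchable capacitance that actually toggles each cycle)
def dynamic_power(alpha, c, v, f):
    return alpha * c * v ** 2 * f

# Two hypothetical chips with identical die area / total capacitance
chip_a = dynamic_power(0.10, 1.0, 1.0, 1.0)  # mostly idle logic        -> 0.10
chip_b = dynamic_power(0.30, 1.0, 1.0, 1.0)  # busier design, same size -> 0.30
print(chip_a, chip_b)
```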


I raised superscalar only in the context of your prior post: only superscalar gives you fine-grained control over transistors within a core (i.e., it divides the 100 up into small bits), while multiple cores double the 100.

Also, superscalar does not imply hyperthreading. Most superscalar processors do not have hyperthreading.
 
1. Sort of. A weak function. ...

Hey. Do you hear that? That's the sound of my brain exploding.
 
Apple will need to innovate further … there is only so much Kool-Aid to go around. The competition came out strong this year with dual-core CPUs.

Yeah, because Apple NEVER innovates. -_-;

The strength of the Apple product is reliability, function, price. People like to bitch about all of it, except the price. "WELL IT DOESN'T PLAY HIGH END SUPER GAMES WHILE RENDERING CGI ANIMATION PROJECTS!" So what? Dual core, single core, no core, a trillion cores: what does it ACTUALLY do, and how does it relate to average consumers?

People don't want to pay $800 for something that does 90% of what they don't ever do. They'd rather pay $499 for something that does 95% of what they need it for.
 
Yeah, because Apple NEVER innovates. -_-; ...

They could pay $299 for something that does 100%. Also, the strength of the Apple product... reliability, sure. Function? No. Price? Hell no.
 
Only worthwhile statement made in this entire post. IGNORED!

Glad to help out :D Ignoring him allowed me to read the discussion between cmaier and neko girl with less annoyance.

Hey. Do you hear that? That's the sound of my brain exploding.

From my limited understanding, the gist of the discussion is:

neko girl: if two chips are the same size, they'll consume roughly the same amount of electricity, and you can control the amount of power used by turning off a core at a time

cmaier: not necessarily; there are many other factors you have to consider, power consumption still ends up higher on dual-core chips, and it's impossible to precisely control the amount of power a chip uses in fine units, so dual core still uses more power
 
cmaier: not necessarily; there are many other factors you have to consider, power consumption still ends up higher on dual-core chips, and it's impossible to precisely control the amount of power a chip uses in fine units, so dual core still uses more power

Just as an aside, at http://www.ecse.rpi.edu/frisc/theses/MaierThesis/Chapter1.html, if you look at Table 1.3, you can see that way back in 1996 I was looking at power consumption vs. die area. These are obviously old chips, but if you did a modern comparison you'd find the same thing - die area is a first guess at power consumption, but the correlation is fairly weak. As shown in that Table, the number of "devices" (transistors) is probably a better guess (but also not close to perfect). Obviously the number of transistors close to doubles when you go to dual core (see previous caveats about shared caches, etc.). The difference is that for each transistor that switches in a cycle, you create heat/use power. The more transistors you have, the more that are likely going to switch.
 
... The difference is that for each transistor that switches in a cycle, you create heat/use power. The more transistors you have, the more that are likely going to switch.

One question I (and I'm sure many other non-CS/CE types) have is how feasible parallelization is for everyday computing tasks, so that we can take better advantage of multicore CPUs. It seems nVidia's claim relies heavily on that possibility, and without it the benefit on power seems moot.
 