you keep thinking that.

"When Apple announced iPhone 5, it had claimed that the A6 chip featured in the new iPhone will furnish up to 2x power and performance. New Geekbench results have now surfaced which show iPhone 5 scoring 1601 in that particular benchmark. Since iPhone 5 hasn’t yet hit the shelves, it is hard to discern whether or not these results are fake or real, but they are plausible. However, what they do reveal is that iPhone 5 beats Samsung Galaxy S3 by a narrow margin. The latter scored 1560 in the test. The Galaxy S3 with Jelly Bean (Android 4.1) is also said to score above 1700, which is weird for a CPU test…

It is rather interesting that the OS running on a given handset significantly affects Geekbench tests. For instance, while a Galaxy S3 running Ice Cream Sandwich scores lower than iPhone 5, one running Jelly Bean beats the iPhone 5. Most of the Snapdragon S4 handset we tried score in the 15xx range."

http://www.ubergizmo.com/2012/09/geekbench-results-iphone-5-beats-galaxy-s3/

Yep, weird.
 
0.001% of iPhone owners read MR. That's roughly how many people care about this issue. On the Android side, those who care either buy Nexus devices or install custom ROMs. In short: not an issue.

Ironic. You have to buy a Nexus device to ensure that you'll get timely updates for a device you forked out a bit of money for. :confused:

What if I don't want a custom ROM, and instead want the reliability and stability of the actual update, like my neighbor has on his Nexus 7? And what happens if you're running a custom ROM and need to send the phone in for warranty service? Does this void your warranty?

Another reason why Apple users enjoy the fact that their devices "just work": they can be assured they won't be stuck on 6.0 for nearly two years.
 
The Geekbench results for the iPhone 5 have prompted many highly optimized, overclocked SGS3s to submit their scores to improve the average. It's working :p

It's a shame that the Geekbench results show the exact clock frequency used then.

Some of the over-the-top numbers from the SGS3 (2300+) are shown to have been run on overclocked phones, and there is no hiding it, since the frequency is stated in the results.

The numbers around 1800, though, are not, and were run on the standard configuration.
 
Ironic. You have to buy a Nexus device to ensure that you'll get timely updates for a device you forked out a bit of money for. :confused:

What if I don't want a custom ROM, and instead want the reliability and stability of the actual update, like my neighbor has on his Nexus 7? And what happens if you're running a custom ROM and need to send the phone in for warranty service? Does this void your warranty?

Another reason why Apple users enjoy the fact that their devices "just work": they can be assured they won't be stuck on 6.0 for nearly two years.

The really ironic part is that both iOS and Android offer exactly one phone model that gets timely updates. So, there is absolutely no advantage for iOS there. Lately more Android vendors (the smaller ones) have actually gone with stock Android.
 
The really ironic part is that both iOS and Android offer exactly one phone model that gets timely updates. So, there is absolutely no advantage for iOS there. Lately more Android vendors (the smaller ones) have actually gone with stock Android.

What causes this slow update process is that they have to develop the update for a particular phone, test it out, work out most of the kinks, and then get it approved by the carrier. Now multiply that by how many phones a manufacturer makes, ranked from most popular to least. Which is a bummer for people who don't have a popular phone and get left out of the update altogether. This is a major problem with Android. I think having a huge number of devices is both an advantage and a disadvantage.

But then again, there are people who don't pay attention to things like OS updates, whether it's an iPhone, an Android, or whatever else they use.
 
So wait a second. The iPhone 5 is the thinnest, lightest, and fastest smartphone on the market, with the largest App Store?

Srsly guys this phone was a disappointment like everyone said it was.

Sorry to disappoint you. The iPhone 5 is not the world's thinnest smartphone; the Huawei Ascend P1s is (6.7 mm vs. the iPhone 5's 7.6 mm - www.gsmarena.com/compare.php3?idPhone1=4409&idPhone2=4910). It is also not the fastest smartphone. The Galaxy S III has scored well above 1800 points in the same test shown in the article (Geekbench) - link: http://db.tt/aj2IGJqq
The screenshot is from my own Galaxy S III, so I can personally guarantee that it is not a fake.
 
Look ma first post!

Lots of people registering accounts just to troll MR in this thread, lol.
 
Sorry to disappoint you. The iPhone 5 is not the world's thinnest smartphone; the Huawei Ascend P1s is (6.7 mm vs. the iPhone 5's 7.6 mm - www.gsmarena.com/compare.php3?idPhone1=4409&idPhone2=4910). It is also not the fastest smartphone. The Galaxy S III has scored well above 1800 points in the same test shown in the article (Geekbench) - link: http://db.tt/aj2IGJqq
The screenshot is from my own Galaxy S III, so I can personally guarantee that it is not a fake.

This thin-phone stuff is BS anyway... but can you slide the Huawei through a 7.6mm-thick slot? No? Because of a damn camera bump. None of this matters anyway, because the Oppo Finder actually measures 6.65mm at its thinnest point and, more importantly, 7.1mm at its thickest. Then again, it's not 4G, it's China-only, etc., but the thinness stuff doesn't really matter anyway. Most of the measurements use the thinnest part of the phone and don't count a big bump, like the RAZR etc. which advertise their thinnest point... which is absurd.

As if Apple would end up building one of these and calling it the thinnest phone, by measuring all the parts but the camera.

[image: iPhone with a telephoto lens attachment]


Oh wait... that's what android does.
 
Funny.

No benchmark on the Galaxy SIII US and JP versions, which also use a custom-designed dual-core Cortex A15 SoC.

And no benchmark on LG Optimus G, which uses a custom designed quad-core Cortex A15 SoC.

LG Optimus G is using Krait cores just like the JP Galaxy S3. Just because Krait has A15-like elements does NOT mean it was a custom designed A15. Just like how the Apple A6 has A15 similarities does not mean it's a custom designed A15 either.

Krait, Cortex-A15, and the Apple A6 are different cores, and there does not seem to be any definitive non-NDA documentation describing their differences.

nit-picky, I know. But for some of us who've actually done CPU logic design, it matters.
 
*Sigh* Have you even read that paper? In the example given in the paper, two cores of the same frequency have roughly a 25% performance increase, which is way lower than your 1.9:1 ratio.

I'm going to have to go back and read that paper, because I'm fairly sure you're either misunderstanding something, or over-applying a specific situation to the entire category of multi-core processor designs.

There is, indeed, a point at which adding cores will only buy you a 25% improvement in performance at the same clock speed, but with modern processor designs that point is *well* after dual- or quad-core designs. Heck, it was past quad-core designs back in the days of the Pentium Pro. (Diminishing returns hit when you went past 4 cores back then.)

Indeed, having gone back and read the paper, I was correct with my assumption. You're misinterpreting the results of the paper, and I'm pretty sure you're reading conclusions where the paper's authors *don't* state any.

Many results were expected, like a large L2 cache size certainly helped performance a lot and the L1 cache size, optimal system frequency was typical of many of the Intel Core 2 Duos, and that after already having 2-4 core any more cores do not help performance in FFT nearly as greatly.

This says that, in their simulations, multi-core speedups tapered off notably after the 2-4 core range. Note: they were *simulating* a different CPU and system architecture, so there's limits to the conclusions which can be drawn *period*.

Moving on to the chart where I think you're getting your numbers (since "25%", or even "%" doesn't actually appear anywhere in the article text), the paper states:

Two to four cores seems to produce the best relative speedup. This can be explained by the extra overhead needed with more cores.
This is what we've *both* been saying all along.

It goes on to say:
However, this speedup does not seem as high as expected. This could be due to an imperfect simulation system or maybe the program was not run long enough to extract the full parallel aspects of the program.

and also:
that after already having 2-4 core any more cores do not help performance in FFT nearly as greatly

FFT is Fast Fourier Transform. In other words, this paper is written about their analysis results for a very specific scenario, FFT calculations. This is *not* the general case, because different algorithms (even within the same application) will parallelize differently with different constraining factors. Indeed, there are algorithms which parallelize so poorly that you hit diminishing returns as soon as you add a core to the *first*, but there are also algorithms which can be parallelized across dozens (and even hundreds) of processor cores. (This is what allows supercomputers, with hundreds of cores, to bring their power to bear on those calculations.)

In other words, you've taken the results of a very specific study and generalized it (incorrectly) across the entire domain of multi-core computing.
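That last point is just Amdahl's law: the speedup you can get from n cores is capped by the fraction of the work that can actually run in parallel. A quick sketch (the parallel fractions here are illustrative picks, not numbers from the paper):

```python
def amdahl_speedup(n_cores, parallel_fraction):
    """Amdahl's law: best-case speedup on n_cores when only
    parallel_fraction of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A 95%-parallel algorithm keeps scaling well past 4 cores...
for n in (1, 2, 4, 8, 64):
    print(n, round(amdahl_speedup(n, 0.95), 2))

# ...while a 50%-parallel one hits diminishing returns immediately
# and can never even reach 2x, no matter how many cores you add.
for n in (1, 2, 4, 8, 64):
    print(n, round(amdahl_speedup(n, 0.50), 2))
```

This is why FFT results from one simulation say nothing about, say, a supercomputer workload: the two sit at very different points on this curve.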
 
This benchmark makes me really question the validity of benchmarks. I mean, come on. How can a dual-core 1 GHz processor score comparably to a quad-core 1.4 GHz processor? I'd understand if the quad-cores were made by nobodies who had no idea how to make processors. But we're talking about people like Samsung and Nvidia, who've been in the game a long time. What criteria is Geekbench using? Are all four cores of the quad-cores being used, compared to both cores of the A6? No wonder so many say not to look at benchmarks for accurate performance evaluations.
 
Color accuracy is a matter of calibration.

IPS is still old tech, a glorified backlit LCD with all the baggage: power-hungry, limited saturation, bad black levels, poor viewing angles, thick, not bendable. It's on the way out. Currently Apple cannot afford OLED, so excuses are made once again, but when the time comes Apple will probably invent some buzzy marketing name for it.

Apple cannot afford OLED? That is the funniest thing I've heard in a while... are you kidding me?
 
I've already explained that this analogy is invalid and is based on a flawed simplification of how processors work. If a two-core processor achieves a score of 1000, you simply cannot assume that one core would achieve half of that score - it does not work that way. You don't double the processing power by adding another core. This score is given to the whole architecture, and it depends on many factors. If you want to learn more, read this white paper.

You're both half right. And what both of you are talking about is also covered in that white paper you mentioned.

You're right that you can't simply divide the benchmark score by the number of cores. Nor multiply them. Because the overhead of managing x cores grows faster than linearly.

But, knowing why this overhead exists means there are a number of things you can deduce... one of which is what holmesf and tbrinkma were trying to say (although how tbrinkma tries to reason it through is completely wrong; only his conclusion is correct).

What holmesf and tbrinkma were trying to say:
: for the average application, which is primarily single-threaded
: a dual core CPU that scores 1600 in an arbitrary multi-core benchmark versus
: a quad core CPU that scores 1600 in an arbitrary multi-core benchmark,
the actual difference in runtime of the average application will be significantly better on the dual core CPU than the quad.

To which you said:
Wha-? "Larger performance advantage than indicated by the benchmark"? Keep dreaming...

Let me try to explain it in simpler terms for other readers.

Suppose that you have 1 core. It scores 1000 on its own.
Make it a dual core. Now it scores 1300. (300 improvement with 1 additional core)
Make it a quad. Now it scores 1500. (200 improvement with 2 additional cores)

Suppose you have a different core. It scores 1200.
Make that core a dual. Now it scores 1500.

You post that to a forum. Mass mayhem, cheering, and booing ensues.

Somebody inevitably shouts "but they get practically the same score!"

Had you run the average application on both devices, you'd be using just one or two cores, unlike the benchmark. Given that breakdown, you'd see that for a single-threaded app, the dual-core CPU has a 20% advantage, and for a two-threaded app, a 15% advantage, despite the two chips having roughly equal benchmark scores.
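To spell out the arithmetic behind those percentages (using the hypothetical scores from the example above, not real benchmark data):

```python
# Hypothetical per-configuration scores from the example above.
quad_family = {1: 1000, 2: 1300, 4: 1500}   # slower core, grown to a quad
dual_family = {1: 1200, 2: 1500}            # faster core, dual only

# Single-threaded app: only one core does the work on either chip.
single_adv = dual_family[1] / quad_family[1] - 1   # 1200/1000 -> 20%

# Two-threaded app: two cores engaged on either chip.
two_adv = dual_family[2] / quad_family[2] - 1      # 1500/1300 -> ~15%

print(f"single-threaded advantage: {single_adv:.0%}")
print(f"two-threaded advantage: {two_adv:.0%}")
```

Note that the benchmark totals (1500 vs. 1500) hide both advantages entirely, which is the whole point.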

Inevitably, somebody will also then chime in and say "20% and 15% is insignificant!" At which point I'd have to point to how big a deal the world made when Intel went from Core to Core 2 for the same 15-20% gain at the same clock. And then once again from Core 2 to Nehalem, also at the same clock. And likewise, it's the average difference between a Cortex A8 and A9 at the same clock as well.

The example I gave above uses the 1 GHz range of data from section 5.2.3 of your white paper, at a constant clock rate. If you then add in the fact that the A6 accomplishes scores like this at 60% of the clock speed, it becomes genuinely impressive in its own right: not because it performs fast, but because the significantly slower clock speed is a win for battery life.

With that said, I expect somebody to finally make good on their word to ship an A15 soon, and I expect it to be impressively fast too. Because realistically, no matter what the fanbois on each side of the debate say, nobody actually makes good use of that much processing power in our pockets; there are too many programmers still accustomed to the excess of resources they had on the desktop. I'm annoyed at how many times I've tried an app from a friend, watched it go all sluggish and crash, and then had to email back: "hey, somebody on your team's got a memory leak in their crappy threading calls in class xyz, and therefore your app is a waste of my battery. Fix it."
 
This benchmark makes me really question the validity of benchmarks. I mean, come on. How can a dual-core 1 GHz processor score comparably to a quad-core 1.4 GHz processor? I'd understand if the quad-cores were made by nobodies who had no idea how to make processors. But we're talking about people like Samsung and Nvidia, who've been in the game a long time. What criteria is Geekbench using? Are all four cores of the quad-cores being used, compared to both cores of the A6? No wonder so many say not to look at benchmarks for accurate performance evaluations.

It's just a benchmark. Again, real-world usage will matter more.
And it's not like these tests favor one device over the other at any point. It's supposed to be a static test (same metrics used for each device). So, at the end of the day, it shows what it shows.

It's not that others don't know how to make processors; it's just that Apple has made advances here. They bought companies that design processors, and they put them to good use. Apple never was and never will be about "specs". They don't need to be. That's "PC". Mac is about "It works!". They don't look at it from the tech side; they look at it from the consumer side. Just because it's only 1 GHz doesn't mean it can't do what it needs to do very fast.
 
OK, after looking at this chart a few times, I noticed something interesting. The jump from a dual-core ARM CPU to a quad-core ARM CPU does not get anything like a 50% increase in processing power.

Comparing the S2 to the S3, which use the same family of processors, upgrading from a dual-core 1.2 GHz CPU to a quad-core 1.4 GHz CPU only gained Samsung about 33% more processing power.

What constraints would have caused this? Is that why Apple went with its own custom designed ARM cpu instead of a standard quad core?

You're absolutely right!

It is rare that adding more cores gets a linear increase in processing power.

The reason is that there is overhead in having more cores.
Examples of this include:
1) Lower memory bandwidth per core. You're feeding more cores, so each core gets less out of the same memory controller.
2) Smaller cache per core. If there isn't enough room for enough cache, you take a significant penalty from the smaller effective cache.
3) Cache coherency. When several CPUs, each with its own cache, need to access data, they have to talk to all the others to make sure everyone is aware of the access; otherwise a cache will end up with stale data. Every core has to talk with every other core, so more cores means more talking.

Apple went off and designed their own core for many possible reasons, one of which could be the above - but not necessarily. We probably won't know what actually motivated them, but I'd guess it had mostly to do with battery life and secondarily with performance.

To get a better understanding of processor design, go read the definitive book on computer architecture:
Computer Architecture: A Quantitative Approach
by Hennessy and Patterson
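A toy model of those overheads makes the sublinear scaling concrete. The overhead coefficient below is picked purely so the numbers land near the ~33% dual-to-quad gain observed above; it is not fitted to any real chip, and real overheads are lower and workload-dependent:

```python
def throughput(n_cores, clock_ghz, overhead=0.6):
    """Toy sublinear-scaling model: every extra core adds a fixed
    coordination cost (memory contention, coherency traffic), so
    effective core scaling is n / (1 + overhead * (n - 1))."""
    return clock_ghz * n_cores / (1.0 + overhead * (n_cores - 1))

dual = throughput(2, 1.2)   # S2-like: dual core @ 1.2 GHz
quad = throughput(4, 1.4)   # S3-like: quad core @ 1.4 GHz
print(f"dual->quad gain: {quad / dual - 1:.0%}")   # ~33%, not the naive +133%
```

With linear scaling the quad at 1.4 GHz would deliver 2.33x the dual at 1.2 GHz; the per-core coordination cost is what eats the difference.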

----------

This benchmark makes me really question the validity of benchmarks. I mean, come on. How can a dual-core 1 GHz processor score comparably to a quad-core 1.4 GHz processor? I'd understand if the quad-cores were made by nobodies who had no idea how to make processors. But we're talking about people like Samsung and Nvidia, who've been in the game a long time. What criteria is Geekbench using? Are all four cores of the quad-cores being used, compared to both cores of the A6? No wonder so many say not to look at benchmarks for accurate performance evaluations.

From what I recall, Geekbench utilizes all cores. So yes, it's a dual core at a lower clock beating out a quad core at a higher clock.
I don't know the details of what Geekbench tests, but like all benchmarks, the only one that matters is the one tested against what you're actually running.

Dramatic benchmark score comparisons like that aren't all that rare in the industry, to be honest, if you've been watching since the 90s. But I just find it amusing that anybody would think Apple's a nobody, even setting aside that they recently bought two CPU design firms.

Samsung and nVidia are big players, indeed.
But I think you might want to know that Apple's been in digital design longer than nVidia.

----------

Most Qualcomm CPUs, if not all, are asynchronous, meaning the second core is offline most of the time. The question is: does the A6 operate this way? The Tegra processor is synchronous, meaning both cores are online all the time. That said, synchronous generally produces higher benchmarks. On Android we would run a script to force the second core online at all times. You would lose some battery life but produce better benchmark scores.

Again is the A6 synchronous or asynchronous?

.......
......
...
..

Tegra, A6, OMAP, Exynos, etc are all synchronous.

Perhaps you were looking for a different word?

Symmetric, maybe?
 
In any case, this is amazing: two faster cores are better than four slower cores. You only need a dual-threaded process to maximize performance, rather than fine-tuning 4 threads. Even if the S3 had the same or better performance, Android would have to spread a single process across all 4 cores to get the same performance as a much simpler dual-threaded process.

lol...CPU clock speed is not the end all. You should read up on computer architecture. Your statement isn't correct even when only accounting for Mac products.
 
Samsung and nVidia are big players, indeed.
But I think you might want to know that Apple's been in digital design longer than nVidia.


We can probably agree the big surprise is that Apple went ahead and created their own microarchitecture instead of settling for the standard ARM design. Samsung and nVidia might be great, but they are still limited by the Cortex-A9 architecture.

I'd love to read what went on behind all this. I'd guess we finally know what all those chip designers at Apple have been doing, as the A5 was largely a very "vanilla" ARM chip at its core.
 
Why did the GS III benchmarks magically become higher?

The Geekbench results for the iPhone 5 have prompted many highly optimized, overclocked SGS3s to submit their scores to improve the average. It's working :p

Overclocked CPUs and turned-off features.

Sorry, but my S3 is not overclocked. The reason is that it's running Jelly Bean, the new Android: http://browser.primatelabs.com/geekbench2/1040992. And it's coming to all international S3s in October.

you keep thinking that.

"When Apple announced iPhone 5, it had claimed that the A6 chip featured in the new iPhone will furnish up to 2x power and performance. New Geekbench results have now surfaced which show iPhone 5 scoring 1601 in that particular benchmark. Since iPhone 5 hasn’t yet hit the shelves, it is hard to discern whether or not these results are fake or real, but they are plausible. However, what they do reveal is that iPhone 5 beats Samsung Galaxy S3 by a narrow margin. The latter scored 1560 in the test. The Galaxy S3 with Jelly Bean (Android 4.1) is also said to score above 1700, which is weird for a CPU test…

It is rather interesting that the OS running on a given handset significantly affects Geekbench tests. For instance, while a Galaxy S3 running Ice Cream Sandwich scores lower than iPhone 5, one running Jelly Bean beats the iPhone 5. Most of the Snapdragon S4 handset we tried score in the 15xx range."

http://www.ubergizmo.com/2012/09/geekbench-results-iphone-5-beats-galaxy-s3/


:)

I think it's more like some Android owners thought: hmm, wonder what my Geekbench score is... I ran it earlier today on my S3 and got over 2080. Not overclocked or "highly optimised" (whatever that means), though I am running Jelly Bean, which isn't officially released yet - Samsung has been leaking it like a colander. It looks to me like ICS isn't much different, though.

I suspect the reports (mostly on Apple-oriented sites, it seems) about the iPhone 5 beating the average Android score prompted more than just me to see what scores we get. But it's not a realistic comparison until a similar number of iPhone 5s run the benchmark, so that we get a true average across both platforms.

But anyway, undoubtedly the iPhone has gained significantly. The iPhone 4 and 4S are way behind in these benchmarks, so it's nice to see that it's caught up. Not that the previous ones were that slow, I guess. Lies, damned lies, and statistics: it makes for great link bait and forum fodder :).


Wild conjecture followed by cold, hard facts...as usual
 
What Saturn88 is saying is that Android had so many features from the beginning that they didn't really need to upgrade their hardware, going back to versions 1.0-1.6.

I'd love to believe him if I hadn't once been stuck with a Donut (1.6) Android phone. I mean... man... I'd rather be stuck with the first-generation iPod touch than have to use that again.

Then again, we're replying to a person who's obviously trolling like no tomorrow. Android 1.0-1.6 was "future proof"? :|
I was saying that Android has had the most important OS features since its early versions:
 

[Attachment: android.jpg]
We can probably agree the big surprise is that Apple went ahead and created their own microarchitecture instead of settling for the standard ARM design. Samsung and nVidia might be great, but they are still limited by the Cortex-A9 architecture.

I'd love to read what went on behind all this. I'd guess we finally know what all those chip designers at Apple have been doing, as the A5 was largely a very "vanilla" ARM chip at its core.

As if Apple were the only ones to tweak and customise. Maybe it's a first for Apple, but not a first for the industry.
 