I think you need to learn simple maths.

Here are some simple maths.

My iPhone 5 gets a 1577 Geekbench score. At 31% faster, the score would be 2065.

My Galaxy S4 gets a 3200 Geekbench score and my LG Optimus G Pro gets 3000. Both use the same Snapdragon 600 CPU, but the S4 is clocked 200MHz faster. My old Galaxy Note II gets a 2000 Geekbench score, and it's almost a year old.

Couple that sad fact with the other sad fact that Apple is keeping only 1GB of RAM in the 5s, and you've got a pretty dated phone right from the get-go.

Why do you need more RAM? If you have to ask that, then you don't deserve a response.
 
Here are some simple maths.

My iPhone 5 gets a 1577 Geekbench score. At 31% faster, the score would be 2065.

My Galaxy S4 gets a 3200 Geekbench score and my LG Optimus G Pro gets 3000. Both use the same Snapdragon 600 CPU, but the S4 is clocked 200MHz faster. My old Galaxy Note II gets a 2000 Geekbench score, and it's almost a year old.

Couple that sad fact with the other sad fact that Apple is keeping only 1GB of RAM in the 5s, and you've got a pretty dated phone right from the get-go.

Why do you need more RAM? If you have to ask that, then you don't deserve a response.

Yet, iOS is more fluid than Android ;)
 
Here are some simple maths.

My iPhone 5 gets a 1577 Geekbench score. At 31% faster, the score would be 2065.

My Galaxy S4 gets a 3200 Geekbench score and my LG Optimus G Pro gets 3000. Both use the same Snapdragon 600 CPU, but the S4 is clocked 200MHz faster. My old Galaxy Note II gets a 2000 Geekbench score, and it's almost a year old.

Couple that sad fact with the other sad fact that Apple is keeping only 1GB of RAM in the 5s, and you've got a pretty dated phone right from the get-go.

Why do you need more RAM? If you have to ask that, then you don't deserve a response.

Just checked Geekbench, the latest version:

http://browser.primatelabs.com/geekbench3/29934
http://browser.primatelabs.com/geekbench3/30157

iPhone 5: 714/1295
S4: 679/1803

i.e. the iPhone 5 is already faster in single-threaded and slower in multi-threaded performance.

Up it by 31% and you get:
935/1696

i.e. a lot faster in single-threaded and a little bit slower in multi-threaded performance.
 
Oh, please explain, smarty pants.

MHz scaling does not translate directly into performance scaling. It's not like Apple clocking the A6 10% faster gives exactly 10% faster performance; it can scale a lot higher, especially given the number of instructions per clock the A6 executes.

You claim the iPhone 5 is running at 1.2GHz (which is wrong; it runs at 1.3GHz) and that a 31% increase is 150-200MHz.

1.2 * 1.31 = 1.57GHz (a jump of about 370MHz), if we assume linear scaling. If not, then it would have to clock in at more than 1.57GHz.

So... where did you get 150-200MHz from?
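
And with the corrected 1.3GHz base clock: 1.3 * 1.31 = 1.70GHz, assuming linear scaling, i.e. a jump of roughly 400MHz (more if scaling is sub-linear). Either way, nowhere near 150-200MHz.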
 
Did you seriously just post that ridiculous link?
What, pray tell, is so ridiculous about a link showing Android screen-size statistics?
You can use a 5" screen if you are a grown man without girly hands, which is why Apple will cater to you people with a 4" device.

However, for us real men who have huge hands, a 4" device is simply too small.
Real men? WTF?! Did you seriously just post that? Shall we compare the size of our genitalia while we're at it? I've heard that among you "real men" that counts as the right metric.
I'm 5'10" and I have regular-size hands; I don't have Michael Jordan-size hands, and neither do most people. I have an iPod touch 4 at the moment and I consider its size to be perfect. I'm planning to buy the iPhone 5s once it's out. I played with the iPhone 5 and I find it slightly too big for easy one-handed operation, but still acceptable. I also played with a number of larger-screen Android phones, and those are way too big for my taste. As I said, I want easy one-handed operation in a phone. If I want something where you need both hands just to scroll a web page, I'll buy a tablet.
 
You claim the iPhone 5 is running at 1.2GHz (which is wrong; it runs at 1.3GHz) and that a 31% increase is 150-200MHz.

1.2 * 1.31 = 1.57GHz (a jump of about 370MHz), if we assume linear scaling. If not, then it would have to clock in at more than 1.57GHz.

So... where did you get 150-200MHz from?

Performance does not scale linearly with clock speed. Sorry, I meant 1.3GHz, and Apple does not need to raise its clock speed by 31% to gain 31% performance.

A simple speed bump of 150-200MHz would give Apple about a 30-35% performance boost across the board.

When I benchmark my phone, a simple 150-200MHz overclock produces a huge gain in benchmark scores.
 
Did you seriously just post that ridiculous link?

You can use a 5" screen if you are a grown man without girly hands, which is why Apple will cater to you people with a 4" device.

However, for us real men who have huge hands, a 4" device is simply too small.

----------

The S4 fits easily into any pocket. I'm not telling Apple to make a phone with a 6" screen.

Seems you have an identity crisis. You feel the need to use masculine objects just to establish your masculinity.
 
11 up votes for a clueless post.

There is very little to be rewritten in the kernel. First, you don't need a 64-bit kernel; you just need the ability to run 64-bit apps on top of the kernel, and Apple has had that technology for many years. Second, when moving a kernel to 64-bit, the big problem is compatibility with 32-bit drivers. Since all the iPhone drivers are Apple drivers, that's no problem; the 64-bit kernel will have no need to run any 32-bit drivers. The last one, that 32-bit apps would have to be run in emulation, is just nonsense. Look at Mac OS X: 32- and 64-bit apps run together just fine.

If your 64-bit CPU can read 32-bit instructions, and just about all of them can these days, then all you need to run 32-bit programs is an application layer, which is roughly nothing more than a set of 32-bit libraries the 32-bit applications can call upon. It emulates an API environment, but it isn't actually an emulator by the strictest definition of the term. Think of WINE, and you're about three-quarters of the way there.
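
As a rough illustration of that point (a minimal sketch of the general 32/64-bit coexistence idea, not Apple's actual toolchain; the -m32/-m64 flags shown are the usual GCC/Clang spellings on x86, and the 32-bit build needs 32-bit libraries installed):

/* pointer_width.c: same source, two binaries; the OS runs either
 * directly, no emulation involved.
 *   cc -m32 pointer_width.c -o pw32   (32-bit build)
 *   cc -m64 pointer_width.c -o pw64   (64-bit build)
 */
#include <stdio.h>

int main(void) {
    /* prints 4 in the 32-bit build and 8 in the 64-bit build */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}

Each binary links against libraries of the matching width, which is exactly the "application layer" described above.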

...though I can't think of a single reason why any ARM device needs 64-bit at the moment. Apple and the rest might be doing it to prep for the future, but no one will notice any huge differences in performance solely because of the bit jump.
 
Performance does not scale linearly with clock speed. Sorry, I meant 1.3GHz, and Apple does not need to raise its clock speed by 31% to gain 31% performance.

A simple speed bump of 150-200MHz would give Apple about a 30-35% performance boost across the board.

When I benchmark my phone, a simple 150-200MHz overclock produces a huge gain in benchmark scores.

That is just so wrong that it is laughable.
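
To put numbers on it, using your own figures and assuming linear scaling: 1.3GHz + 200MHz = 1.5GHz, and 1.5 / 1.3 ≈ 1.15, i.e. about a 15% boost, not 30-35%. And in practice a pure clock bump usually yields less than linear gains, because the memory isn't getting any faster.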
 
64-bit doubles the memory bandwidth, and therefore the transfer from and to the GPU. That's a major improvement. It's not about address space, but bandwidth.

NO. Absolutely wrong.

64-bit computing (the use of 64-bit addresses) does NOT change memory bandwidth. Bandwidth is a function of bus width and clock speed. Bus width is not dependent on the address space, and can be varied on its own.

If the bus width and clock speed are held constant, it's very probable that a move from 32-bit addresses to 64-bit addresses actually decreases the effective throughput. Most of those 64-bit addresses would be filled with excess zeros, which still take up bandwidth, because most apps don't use 4+ GB of memory.
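
For concreteness (illustrative numbers, not any particular phone's actual memory spec): bandwidth = (bus width in bits / 8) * transfer rate. A 64-bit-wide memory interface at 1066 MT/s moves 8 bytes * 1066 * 10^6 ≈ 8.5GB/s, and that figure is identical whether the CPU is using 32-bit or 64-bit addresses.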
 
A Core i7 can run at a slower clock than a Pentium 4, yet perform much faster per core. Apple could well do some of the same (better ILP and branch prediction, better caches, etc.) and not boost clock speed much, if at all.

That is possible as well, but I was responding to this quote:

"For all we know its the same exact a6 just clocked 150-200 MHz faster to give it that magic 31% boost number."
 
If the bus width and clock speed are held constant, it's very probable that a move from 32-bit addresses to 64-bit addresses actually decreases the effective throughput. Most of those 64-bit addresses would be filled with excess zeros, which still take up bandwidth, because most apps don't use 4+ GB of memory.

The ARM 64-bit ISA also includes wider registers and more of them, which more than makes up the difference for typical workloads.
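
A minimal sketch of what the wider registers buy (plain C; the instruction counts are what compilers typically emit, worth verifying with a disassembler): 64-bit integer math fits in a single native register under AArch64, while on 32-bit ARM each uint64_t spans two registers.

/* u64_mul.c: illustrative only.
 * On AArch64 the multiply below compiles to a single MUL;
 * on 32-bit ARM it typically becomes a umull-plus-mla sequence,
 * because each 64-bit operand occupies two 32-bit registers. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t a = 2862933555777941757ULL;  /* arbitrary 64-bit operands */
    uint64_t b = 3037000499ULL;
    printf("a * b = %llu\n", (unsigned long long)(a * b));
    return 0;
}

And the "more of them" part: AArch64 exposes 31 general-purpose 64-bit registers, versus roughly 13 usable 32-bit registers on ARMv7, so more intermediate values stay out of memory.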
 
Here are some simple maths.

My iPhone 5 gets a 1577 Geekbench score. At 31% faster, the score would be 2065.

My Galaxy S4 gets a 3200 Geekbench score and my LG Optimus G Pro gets 3000. Both use the same Snapdragon 600 CPU, but the S4 is clocked 200MHz faster. My old Galaxy Note II gets a 2000 Geekbench score, and it's almost a year old.

Couple that sad fact with the other sad fact that Apple is keeping only 1GB of RAM in the 5s, and you've got a pretty dated phone right from the get-go.

Why do you need more RAM? If you have to ask that, then you don't deserve a response.

The problem with your "maths" is that they don't mean anything. Sure, the iPhone 5 scores lower than those phones, but consider the fact that the iPhone 5 performs the same as, and sometimes better than, those phones with nearly 2x higher Geekbench scores.

It's not all about the numbers. ;)
 
The problem with your "maths" is that they don't mean anything. Sure, the iPhone 5 scores lower than those phones, but consider the fact that the iPhone 5 performs the same as, and sometimes better than, those phones with nearly 2x higher Geekbench scores.

It's not all about the numbers. ;)

This all the way.
 
From the article:

That sounds a bit strange... Why would 64-bit be better graphically? The advantage of 64-bit is just more memory address space and larger integer operations. 32-bit is already enough to cover the required pixel space and colour space.

Also, animations usually aren't integer math and most of these animations are run by the GPU anyway.

It sounds like Apple is also including a special vector engine similar to AltiVec or SSE in this chip to increase animation performance, but that wouldn't necessarily be related to it being 64-bit.

You're absolutely right that it sounds strange.
Because the article's claims are completely ludicrous.

Adding special instructions to the CPU doesn't really help much.
You know what helps increase animation performance when your animations are layers composited on a GPU? Making a better GPU and giving it more memory bandwidth.
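
For what it's worth, ARM already ships exactly that kind of vector engine: NEON, the ARM analogue of AltiVec/SSE, and the iPhone has had it since the 3GS, so it predates any 64-bit chip. A minimal sketch of what it does, assuming an ARM target with NEON enabled:

/* neon_add.c: illustrative only; build on ARMv7 with -mfpu=neon,
 * or with any AArch64 compiler (NEON is mandatory there). */
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, r[4];
    float32x4_t va = vld1q_f32(a);      /* load four lanes at once */
    float32x4_t vb = vld1q_f32(b);
    vst1q_f32(r, vaddq_f32(va, vb));    /* one instruction adds all four */
    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}

So a "special vector engine" wouldn't be news, and it has nothing to do with the move to 64-bit.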

----------

Isn't refreshing RAM one of the biggest battery costs on an idle system?

Would it be dumb to suggest they could use the 64-bit address space to directly access the storage chips?

FusionIO could license them code to do this. The LLVM compiler chain knows the lifetime of an object, and could flag what only needs to be read, so it could be read directly from storage instead of being loaded into memory. That could also speed up all of the above, so it would seem like the machine has more RAM without the battery cost of more RAM.

Yes, it would be dumb specifically because the storage chips are NAND Flash.

Allowing direct addressing of the NAND flash by the CPU (and of course encouraging people to use it) decreases overall performance for at least the following reasons:
1) Computation of ECC and remapping is now done by your application instead of a dedicated controller. Obviously, more work done by your CPU that it didn't need to do before means you have less computing power available. Even if you added special instructions and logic to accelerate this, you'd still lose compared to having a dedicated controller, for a whole other set of reasons stemming from just this point.
2) Your CPU has to wait for the NAND; the extra time spent context switching while you wait, if you're not hung, means you've got less computing power available than before, because you're spending time doing stuff you didn't have to do before.
3) NAND is slow. Really slow. In fact, it's possible to get into a situation where writing to NAND is slower than writing to a crappy 5400rpm spinning disk. Inadvertently letting developers who don't understand this use NAND as working memory would be unbelievably bad. If you want a taste of this, get a JMF601-based SSD, write random bits across the entire thing, and then install your OS on it. It will be abysmally slow, because it'll hit all the pain points of NAND.

Basically:
1) NAND is too slow (rough numbers below).
2) Developers wouldn't know how to properly use it.
3) A dedicated NAND controller saves the CPU a lot of work and does it in parallel; getting rid of it means the CPU has to do that work.
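
Very rough orders of magnitude behind point 1 (illustrative figures, not measurements): DRAM random access is around 50ns, a NAND page read is around 25-100µs, and a NAND page program runs to hundreds of microseconds. Even the best case is roughly 50µs / 50ns = 1000x slower than RAM, before any ECC and remapping overhead is added.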
 
I have no doubt that on paper the iPhone isn't the spec powerhouse some Android phones are, but is the iPhone 5 slow or something? It seems like iOS is optimized efficiently for the iPhone, so it doesn't need a quad/octa-core CPU and 2-4GB of RAM. I like Android as well, but it obviously is not as software-optimized as iOS.
 
I have no doubt that on paper the iPhone isn't the spec powerhouse some Android phones are, but is the iPhone 5 slow or something? It seems like iOS is optimized efficiently for the iPhone, so it doesn't need a quad/octa-core CPU and 2-4GB of RAM. I like Android as well, but it obviously is not as software-optimized as iOS.

The biggest problem I have with some of these anti-spec arguments is the assumption that specs somehow directly equate to the OS, rather than to the apps.

Because Android phones have quad-core CPUs with 2GB of RAM, that must mean Android isn't well optimized.

Because the iDevices don't rely on specs as much, that must mean iOS is much better optimized.

That isn't true. The latest version of Android can run on two-to-three-year-old phones with barely a hitch. The reason you want higher specs is the apps. So you can work with higher-resolution photos faster and better in a photo-editing app. So you can have higher-quality games with larger worlds and deeper gameplay that don't load as often. So you can have more tabs open in Safari. This is why you all should be more supportive of a hefty spec boost.

Right now it doesn't matter much. There isn't a single phone app out there that uses a quad-core CPU and over a GB of RAM. But there eventually will be. The day is coming when tablet apps will be just as robust and capable as what you'd get on a desktop. And when that day comes, you'll want some specs.
 
Lol, try using a Nexus 4 or the HTC/S4 Google edition phones.

Ya, makes your iPhone feel like a 3GS, lol.

Nah, iOS is smoother. It may not have anything to do with hardware, either. It may be as simple as Apple having matched the speed at which transitions happen to the speed at which most people swipe.

I don't think iOS is actually faster, though. I've watched the swipes on a lot of Android phones, and the screen is closer to just changing very quickly as they move, rather than transitioning.
 