Seems to be a lot of comments on here from people who have no clue about the benefits of ARM's 64-bit architecture over its 32-bit architecture. There's a huge amount more to it than access to more than 4GB of RAM.
The JVM includes a compiler which translates Java bytecode into native code for the processor it is running on. That's not just some "optimisation", it's a lot more work. And Java apps find it a lot harder to run efficiently on 64-bit. That's because the languages Apple uses (C, Objective-C, C++, Objective-C++) are quite flexible about what sizes numbers are, while Java is much more fixed: every variable has a fixed size and the compiler has no right to change that size. In C etc. the same variable may be 32-bit on a 32-bit implementation but 64-bit on a 64-bit implementation (see the sketch below).

Why would they have to wait for Google? You mean to optimize the JVM itself for 64-bit? Couldn't they do that themselves?
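A minimal sketch of that C-vs-Java point, assuming the common ILP32 and LP64 data models (the exact sizes depend on the compiler and ABI, so treat the comments as typical values, not guarantees):

```c
/* In C the sizes of "long" and of pointers are implementation-defined, so the
 * same source produces different layouts on 32-bit (ILP32) and 64-bit (LP64)
 * targets. Java, by contrast, pins int to 32 bits and long to 64 bits
 * everywhere, so the VM has no such freedom. */
#include <stdio.h>

int main(void)
{
    printf("sizeof(int)   = %zu\n", sizeof(int));    /* typically 4 on both ILP32 and LP64 */
    printf("sizeof(long)  = %zu\n", sizeof(long));   /* typically 4 on ILP32, 8 on LP64 */
    printf("sizeof(void*) = %zu\n", sizeof(void *)); /* 4 on ILP32, 8 on LP64 */
    return 0;
}
```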
Interestingly, others have pointed out that C code recompiled for 64-bit will probably grow in size, since pointers probably have to be 64-bit as well. Unless there's a paging mode or something. Anyone know? I haven't kept up with compilers and ARMs.
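On the "will it grow" question: under a typical LP64 ABI (which is what Apple's arm64 environment uses), pointers do double in width, so pointer-heavy data gets bigger when recompiled. A rough, hypothetical illustration:

```c
/* Any pointer-heavy structure roughly doubles its pointer fields under LP64,
 * and alignment padding can add a little more on top. */
#include <stdio.h>

struct list_node {
    struct list_node *next;  /* 4 bytes on ILP32, 8 bytes on LP64 */
    struct list_node *prev;  /* 4 bytes on ILP32, 8 bytes on LP64 */
    int value;               /* 4 bytes on both */
};                           /* typically 12 bytes vs 24 bytes (with padding) */

int main(void)
{
    printf("sizeof(struct list_node) = %zu\n", sizeof(struct list_node));
    return 0;
}
```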
We already know the A7 is a good deal faster than the A6, which you're proving here. The real test would be how much faster a 64-bit app vs. a 32-bit one is on the 5S.
So is there no advantage in 64-bit desktop CPUs either? And what about the marketing behind dual-core and quad-core chips in smartphones? Android users FAP FAP FAP FAP all day long over the fact that they have a quad-core smartphone.
(edited to reflect quad-core CPUs)
Read his post. He specifically says that when compiled in 32-bit mode it is between 12% and 30% slower on the iPhone 5S than when compiled for 64-bit.
People, READ what was said and try to UNDERSTAND what was said.
He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL. And he is RIGHT. The 64-Bit design itself doesn't make anything faster, and the phone does not have more than 4GB of RAM, so the real advantage of using a 64-Bit architecture does not even come into play. If anything, the disadvantages come into play: The A7 shovels a lot of unnecessary zeros around, wasting a lot of precious system memory along the way. And this has already been measured: Memory consumption on the 5S is much higher than on previous models.
On the desktop: Show me applications beyond video and photo editing software that actually benefit from 64-Bit implementations. There are NONE, that's a simple fact. When you don't need more than 4GB of address space in an application, you don't need to go 64-Bit. It doesn't improve anything.
A multi-tasking, multi-user operating system is a different story: It will handle a lot of concurrent processes and can significantly benefit from a 64-Bit software design. But smartphone and tablet operating systems are currently a couple of years away from that.
Today, the 64-Bit CPU in the iPhone 5S --IS-- a marketing gimmick with zero technical benefit. The A7 performs faster than its predecessors, sure, but that has nothing to do with it being a 64-Bit processor. They could have delivered all those improvements with a 32-Bit A7 as well. But "64-Bit" sounds better on the spec sheet. Which only proves that Apple customers also care very much about specs, even if they pretend otherwise.
So how many media transcoding and compression tasks are you doing on your iPhone?
Wow, this whole thread is a pretty hard core display of a collective failure to grasp even basic computer science principles. What's even worse is that you all feel the need to come here and publicly display this ignorance like it was something to be proud of.
Given all the tasks that are offloaded to the GPU these days, the single biggest factor in the adoption of 64-bit CPUs, by a long way, is addressable memory. Android phones will hit the 4GB limit very soon; Apple will not for a number of years if they continue at their current rate of RAM increases.
Besides all this, when Android goes 64-bit, the only applications that will need to be ported are those using the NDK. Apps that exclusively use Java (Dalvik) will just run without change as soon as Dalvik is ported. On iOS, each of those million apps needs to be updated - a major benefit of Android.
The same hardware abstraction that means most Android apps will "just work" once Dalvik is ported to 64-bit is also the reason they tend to have higher resource requirements - a characteristic Apple fans have been harping on about for years. It would seem the shoe is on the other foot now, no?
Name one way the iPhone 5S experience is improved by being 64-bit instead of 32.
That's easy. As a fanboi I like to tell my friends about my 64-bit phone. That makes me feel large and therefore improves the experience. Any other questions?
So once again, why do desktops have 64-bit CPUs NOW? And what benefit does a quad-core Snapdragon give? Answer me these riddles, please...
He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL.
A quad-core Snapdragon? It depends on whether the application is multithreaded or not. Linux, which is the OS underneath Android, does support threads - see the sketch below.
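A minimal sketch of what "multithreaded" buys on a multi-core part, using plain POSIX threads (the thread count of 4 is only illustrative, nothing Snapdragon-specific): extra cores only help once the work is actually split across threads like this.

```c
/* Split a summation across four threads; each thread handles one chunk of the
 * array and writes only its own partial result, so no locking is needed. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N (1 << 20)

static int data[N];
static long long partial[NTHREADS];

static void *sum_chunk(void *arg)
{
    long t = (long)arg;
    long long s = 0;
    for (int i = t * (N / NTHREADS); i < (t + 1) * (N / NTHREADS); i++)
        s += data[i];
    partial[t] = s;          /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < N; i++) data[i] = 1;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

    long long total = 0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("total = %lld\n", total);  /* expect 1048576 */
    return 0;
}
```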
The chip itself is not a gimmick, the "64-bit" marketing hype is, and people here are drooling for more...
Name one way the iPhone 5S experience is improved by being 64-bit instead of 32.
We already know the A7 is a good deal faster than the A6, which you're proving here. The real test would be how much faster a 64-bit app vs. a 32-bit one is on the 5S.
Considering iOS is based on OS X, now a 64-bit-only operating system, it makes sense from a development perspective, as it levels the playing field for all applications built around the Apple ecosystem.
Sure, it's not necessarily going to 'make a difference' to the average consumer, but it should hopefully pave the way for a better, more integrated future.
He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL.
...
The A7 shovels a lot of unnecessary zeros around, wasting a lot of precious system memory along the way.
No, we simply know that 32-bit processors have been using 256-bit data pathways for many years.
You really don't have a clue here....
Interesting; I have never programmed any 32-bit processor using a 256-bit data pathway, and I have done loads of embedded systems programming. LPDDR2 is 32 bits per channel and operates quite a bit slower than the CPU can process the data. I am obviously referring to transfers within internal cache initiated by the CPU, not the external interface.
Interesting; I have never programmed any 32-bit processor using a 256-bit data pathway
Never used an Intel P4 ("NetBurst") based PC 10 to 12 years ago? It had a 256-bit wide internal L2 cache bus, as well as SSE2 (which includes 128-bit wide register loads as part of the ISA).
Then you haven't used any x86/x64 chip newer than a Coppermine Pentium III (October 1999).
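For what it's worth, a small sketch of the SSE2 point above: even code built for 32-bit x86 can move and process data 128 bits at a time through the XMM registers, so register/data-path width is a separate question from pointer width. (Illustrative only; compile with SSE2 enabled, e.g. -msse2.)

```c
/* Add four 32-bit integers per instruction using SSE2 intrinsics. */
#include <emmintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128i a = _mm_set_epi32(4, 3, 2, 1);    /* lanes: 1, 2, 3, 4 (low to high) */
    __m128i b = _mm_set_epi32(40, 30, 20, 10);
    __m128i c = _mm_add_epi32(a, b);          /* one instruction adds all four lanes */

    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, c);      /* single 128-bit store */
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}
```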
And, as a systems programmer of many years, I can say that the widths of the internal data paths are quite opaque to the documented ISA. I program to the ISA-defined data types, and if the micro-architects decide to increase the number of bits transferred per cycle I might see better performance - but I won't be "programming to data pathways". There are some architectural hints - like use naturally aligned data, but those are pretty obvious and universal.
In fact, I'm much happier to program to high-level OO frameworks and not even care about what the "bit movers" are doing underneath.
Most of us stopped worrying about bus widths and cycles per instruction in the mid-70's. When micro-architectures diverged from the ISA with µops in the P6 days, the whole concept of "programming to the architecture" simply collapsed.
You're either talking through your posterior, or using programming paradigms that were discarded in the latter part of the last century.