Seems to be a lot of comments on here from people who have no clue about the benefits of ARM's 64-bit architecture over their 32-bit architecture. It's a huge amount more than access to more than 4GB of RAM.:rolleyes:
 
Why would they have to wait for Google? You mean to optimize the JVM itself for 64 bits? Couldn't they do that themselves?
The JVM is a virtual machine that executes Java bytecode, JIT-compiling it into code for the processor it is running on. It's not just some "optimisation", it's a lot more work. And Java apps find it harder to run efficiently on 64-bit. That's because the languages Apple uses (C, Objective-C, C++, Objective-C++) are quite flexible about what sizes numbers are, while Java is much more fixed: every variable has a fixed size, and the compiler has no right to change that size. In C etc. the same variable may be 32 bits on a 32-bit implementation, but 64 bits on a 64-bit implementation.

Interestingly, others have pointed out that C code recompiled for 64 bit will probably grow in size, since pointers probably have to be 64 bit as well. Unless there's a paging mode or something. Anyone know? Haven't kept up with compilers and ARMs.

The instruction size stays 32 bit, and instructions handle 64 bit pointers just fine, so that won't change code size. There will be some changes in code size, both up and down: Since there are more registers, there will be fewer instructions to save/restore registers when the compiler doesn't have enough registers. There will be fewer instructions for existing code using 64 bit arithmetic which had to be built from multiple instructions and now is just one instruction. There will sometimes be more instructions because the processor has been simplified.

Pointers are bigger. But in reality, large amounts of data are usually images, sound, and video, and those are unchanged. And Apple added some tricks in 64-bit Objective-C; one is "tagged pointers": NSDate and NSNumber objects in 64-bit code are often not real objects (with a pointer pointing to data allocated somewhere); instead, the 64-bit pointer itself contains the whole value, so no data is actually allocated. There are similar tricks in C++, where no heap data is allocated for small strings or small vectors. So you win some, you lose some.
 
We already know the A7 is a good deal faster than the A6, which you're proving here. The real test would be how much faster a 64-bit app vs. a 32-bit one is on the 5S.

Read his post. He specifically says that when compiled in 32-bit mode it is between 12-30% slower on the iPhone 5S than when compiled for 64-bit.
 
So actual advancements in chip technology are just a marketing gimmick?? I can understand criticizing things like maybe the bright coloring and such, but a chip with better performance? That's not a gimmick... and who is he to be calling the A7 a gimmick when his chip's name is "Snapdragon"? The only ppl impressed with that are ppl who don't know anything about computers, and/or who like dragons.
 
So is there no advantage in 64-bit desktop CPUs either? And what about the marketing behind dual-core and quad-core chips in smartphones? Android users FAP FAP FAP FAP all day long over the fact they have a quad-core smartphone.

(edited to reflect quad-core CPUs)

People, READ what was said and try to UNDERSTAND what was said.

He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL. And he is RIGHT. The 64-Bit design itself doesn't make anything faster, and the phone does not have more than 4GB of RAM, so the real advantage of using a 64-Bit architecture does not even come into play. If anything, the disadvantages come into play: The A7 shovels a lot of unnecessary zeros around, wasting a lot of precious system memory along the way. And this has already been measured: Memory consumption on the 5S is much higher than on previous models.

On the desktop: Show me applications beyond video and photo editing software that actually benefit from 64-Bit implementations. There are NONE, that's a simple fact. When you don't need more than 4GB of address space in an application, you don't need to go 64-Bit. It doesn't improve anything.

A multi-tasking, multi-user operating system is a different story: It will handle a lot of concurrent processes and can significantly benefit from a 64-Bit software design. But smartphone and tablet operating systems are currently a couple of years away from that.

Today, the 64-Bit CPU in the iPhone 5S --IS-- a marketing gimmick with zero technical benefit. The A7 performs faster than its predecessors, sure, but that has nothing to do with it being a 64-Bit processor. They could have presented all those improvements with a 32-Bit A7 as well. But "64-Bits" sounds better in the spec sheet. Which only proves that Apple customers also care very much about specs, even if they pretend otherwise.
 
People, READ what was said and try to UNDERSTAND what was said.
...
Today, the 64-Bit CPU in the iPhone 5S --IS-- a marketing gimmick with zero technical benefit.

So once again, why do desktops have 64 bit CPUs NOW? And what benefit does a quad core Snapdragon give? Answer me these riddles please...
 
more registers in 64-bit mode

ARMv8's 64-bit mode can use more general purpose registers than 32-bit mode.
This could explain why just recompiling with 64-bit makes some apps faster.
This is the same situation as AMD64 (x86-64) vs IA32 (x86).
 
People, READ what was said and try to UNDERSTAND what was said.
...
Today, the 64-Bit CPU in the iPhone 5S --IS-- a marketing gimmick with zero technical benefit.

The problem is that the word "benefit" is subjective. What exactly does benefit mean? It may benefit me but not you. I think saying it won't benefit consumers is inaccurate. What if the minor gains in speed provide a more fluid experience in the OS? Less lag while moving from one screen to the next? Quicker redraw of maps... All these things would definitely benefit consumers. What if the apps YOU use are rewritten for 64-bit, giving you that slight improvement... wouldn't THAT be a benefit? Aren't there AnandTech benchmarks (posted earlier in this thread too) showing iOS 7 on the 5S in 32-bit and 64-bit, with better results at 64?

Smoother OS experience alone is a benefit in my book.
 
So how much media transcoding and compression are you doing on your iPhone? :rolleyes:

Wow, this whole thread is a pretty hard core display of a collective failure to grasp even basic computer science principles. What's even worse is that you all feel the need to come here and publicly display this ignorance like it was something to be proud of.

Given all the tasks that are offloaded to the GPU these days, the single biggest factor in the adoption of 64-bit CPUs, by a long way, is addressable memory. Android phones will hit the 4GB limit very soon; Apple will not for a number of years if they continue at the current rate of RAM increases.

Besides all this, when Android goes 64bit, the only applications which will need to be ported are those using the NDK. Apps which exclusively use Java (Dalvik) will just run without change as soon as Dalvik is ported. On iOS, each of those million apps needs to be updated - a major benefit of Android.

This same hardware abstraction that means most Android apps will "just work" once Dalvik is ported to 64-bit is also the reason they tend to have higher resource requirements - a characteristic Apple fans have been harping on about for years. It would seem the shoe is on the other foot now, no?

Hardware abstraction doesn't really matter for iOS though... Apple's frameworks and compiler do most of the work. The most important thing is that iOS is ready for 64-bit apps from devs RIGHT NOW, so they can start selling them NOW.

The REAL purpose for 64bit apps is to bring OSX and iOS frameworks closer together. Now Apple can share more optimized framework code between the two OSes, and devs can more easily write apps that play on both iOS and OSX.
 
Benefit of 64 vs 32 bit - not address space related.

Name one way the iPhone 5S experience is improved by being 64-bit instead of 32.

Sure: not only are the 64-bit registers twice the size, there are twice as many of them. A register doesn't have to be used just as computational scratch space or for tracking memory addresses - it's also a near-ZERO-latency storage area. Those double-sized, doubled-in-number registers provide immediate non-cached storage, increasing the availability of data without memory fetches and avoiding the latency of R/W operations to cache memory.

Short version - use these EXTRA registers as cacheable memory; programs run faster and burn less power.
 
So once again, why do desktops have 64 bit CPUs NOW? And what benefit does a quad core Snapdragon give? Answer me these riddles please...

A desktop is not a cellphone.
On my MacPro I run video encoding applications, LightRoom, etc.
They need more than 4GB of physical memory to hold the application and data set. They need larger pointers.

I can go on.

A QuadCore Snapdragon? It depends on whether the application is multithreaded or not. Linux, which is the OS underneath Android, does support threads.

Remember, iOS is based on BSD; Android on Linux (with SELinux).
 
He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL.

While there is some merit to this, it isn't really quite true, and, as well, you have to add your own words (architecture) to make it at least partially-true.

So, for example, the fact that the A7 has twice the number of registers is not directly related to the fact that the architecture is 64-bit. It's just that the extra registers were added into the new instruction set, which is supported only on 64-bit architecture.

As someone else stated here, it wouldn't make sense to put the effort into updating the 32-bit architecture. It would be wasted work, unlikely to provide a good payback for the development cost.

As well, I keep seeing the same simplistic argument that a 64-bit address space is not useful until you have > 4GB RAM. That's just not true, and when you make such a statement you are displaying your ignorance of the benefits of virtual memory.

I realize that as a consumer these simplistic arguments seem to make some logical sense. But they are wrong.

There are benefits to the A7 that don't directly relate to having a 64-bit architecture. There are benefits that do. It's difficult to separate them out and attribute better performance to one or the other.

Can we at least agree that the A7 chip is faster, the competition doesn't have it, and wishes they did?

A QuadCore Snapdragon? It depends on whether the application is multithreaded or not. Linux, which is the OS underneath Android, does support threads.

I can't imagine a modern application that would not be multithreaded. (And certainly not an OS.) That is an ancient, 10-15-year-old argument. At one time, most Windows applications were single-threaded. I've long since *stopped* writing Windows apps, but I'm pretty sure that back in the day when I was writing them, I never wrote one that was single-threaded. Let's drop this old chestnut.

So, yes, more cores are good. One for the competition.
 
The chip itself is not a gimmick, the "64-bit" marketing hype is, and people here are drooling for more...

Name one way the iPhone 5S experience is improved by being 64-bit instead of 32.

Bingo. I'd bet there are a few people on here who can't even define what 64-bit processing actually is.
 
can't do an experiment in seven variables


We already know the A7 is a good deal faster than the A6, which you're proving here. The real test would be how much faster a 64-bit app vs. a 32-bit one is on the 5S.

The "real test" would be to compare a 32-bit ARM with the added registers and other tweaks with a 64-bit ARM. (Or with a 64-bit ARM without the added registers and tweaks.)

Otherwise, there's no proof as to whether any A7 speed increase is due to 64-bit addressing, or due to the ISA and microarchitecture changes.
 
just for the record...

I'd like to state that every competitor of my business that has a claim or feature i do not is simply using a "marketing gimmick" Every advantage they have it not really an advantage and every weakness i have is really a strength.

Thanks for listening.
 
Considering iOS is based on OS X, a now 64-bit-only operating system, it makes sense from a development perspective, as it levels the playing field for all applications built around the Apple ecosystem.

Sure it's not necessarily going to 'make a difference' to the average consumer but it should hopefully pave the way for a better, more integrated future.
 
Considering iOS is based on OS X, a now 64-bit-only operating system, it makes sense from a development perspective, as it levels the playing field for all applications built around the Apple ecosystem.

Sure it's not necessarily going to 'make a difference' to the average consumer but it should hopefully pave the way for a better, more integrated future.

I take it that you're not a developer working with a higher-level OO language?

If your code is really that different on 64-bit vs 32-bit, you're probably not correctly using the abstractions.

The exception would be code that can really exploit in-memory operations with 8 to 32 to 128 GiB of RAM or more.

If a system has 1 GiB or 2 GiB of RAM, those algorithms will fail anyway - whether the CPU is 32-bit or 64-bit. When the iPhone entry level is 8 GiB of RAM, let's revisit this question.

"64-bit" is simply a marketing buzzword based on the "bigger is better" school of thought. Apple did it with the first G5 cheese graters, and now they're doing it with phones. Yawn.
 
He said that CONSUMERS won't benefit from that 64-Bit CPU (architecture) in the iPhone 5S AT ALL.
...
The A7 shovels a lot of unnecessary zeros around, wasting a lot of precious system memory along the way.

Sorry, but Instruments in Xcode allows developers to actually measure the benefits and amounts of memory used. Code compiled for arm64 runs measurably faster, even in non-video apps. Code that runs faster allows the CPU to sleep sooner, extending the users battery life. Most of the memory that the CPU "shoves around" in typical apps doesn't change between armv7 and arm64, as it's common data (icons, views, etc.), not code.

And no one is going to design a new faster instruction set that will become obsolete in less than a decade, which means 64 bits or bust.
 
No, we simply know that 32-bit processors have been using 256-bit data pathways for many years.

You really don't have a clue here....

Interesting, I have never programmed any 32-bit processor using a 256-bit data pathway, and I have done loads of embedded-systems programming. LPDDR2 is 32 bits per channel and operates quite a bit slower than the CPU can process the data. I am obviously referring to transfers within internal cache initiated by the CPU, not the external interface.
 
Interesting, I have never programmed any 32-bit processor using a 256-bit data pathway, and I have done loads of embedded-systems programming. LPDDR2 is 32 bits per channel and operates quite a bit slower than the CPU can process the data. I am obviously referring to transfers within internal cache initiated by the CPU, not the external interface.

Then you haven't used any x86/x64 chip newer than a Coppermine Pentium III (October 1999).

And, as a systems programmer of many years, I can say that the widths of the internal data paths are quite opaque to the documented ISA. I program to the ISA-defined data types, and if the micro-architects decide to increase the number of bits transferred per cycle I might see better performance - but I won't be "programming to data pathways". There are some architectural hints - like use naturally aligned data, but those are pretty obvious and universal.

In fact, I'm much happier to program to high-level OO frameworks and not even care about what the "bit movers" are doing underneath.

Most of us stopped worrying about bus widths and cycles per instruction in the mid-70's. When micro-architectures diverged from the ISA with µops in the P6 days - the whole concept of "programming to the architecture" simply collapsed.

You're either talking through your posterior, or using programming paradigms that were discarded in the latter part of the last century.
 
interesting, i have never programmed any 32-bit processor using a 256-bit data pathway

Never used an Intel P4 ("NetBurst") based PC 10 to 12 years ago? It had a 256-bit wide internal L2 cache bus, as well as SSE2 (which includes 128-bit wide register loads as part of the ISA).
 
Never used an Intel P4 ("NetBurst") based PC 10 to 12 years ago? It had a 256-bit wide internal L2 cache bus, as well as SSE2 (which includes 128-bit wide register loads as part of the ISA).

We are talking about mobile device processors here, in case you didn't realize that.

----------

Then you haven't used any x86/x64 chip newer than a Coppermine Pentium III (October 1999).
...
You're either talking through your posterior, or using programming paradigms that were discarded in the latter part of the last century.

Funny, I thought we were talking about mobile devices/processors here, lol... By the way, I have done microprocessor chip design.
 