I'm about to head back to the house (visited my mom, about 2hrs on bike). If you could explain what that means, I'd be thankful. Why would they say it's 32-bit if it's 64-bit? What's the actual difference between 32-bit and 64-bit processors? I'm a bit slow when it comes to these things.

The chunks of data a 64-bit CPU works with are 64 bits wide; in a 32-bit CPU they are 32 bits wide. A 32-bit CPU cannot benefit from 64-bit chunks, but a 64-bit CPU can still use 32-bit ones.

That is the main difference. In theory it should be twice as fast as a 32-bit CPU; in practice it's about 5-10% faster.
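
To make the "chunks of data" idea concrete, here's a minimal C sketch (my own illustration, not from anyone's post): the same 64-bit addition compiles to one register-wide add on a 64-bit core, but on a 32-bit core the compiler has to split it into an add of the low halves plus an add-with-carry of the high halves. The C source is identical either way; only the machine code differs.

Code:
#include <stdint.h>
#include <stdio.h>

/* Adding two 64-bit values: a 64-bit CPU handles this in a single
 * register-wide add, while a 32-bit CPU has to split it into an add of
 * the low 32 bits plus an add-with-carry of the high 32 bits. */
uint64_t add64(uint64_t a, uint64_t b) {
    return a + b;
}

int main(void) {
    uint64_t x = 0x00000001FFFFFFFFULL;   /* needs more than 32 bits */
    uint64_t y = 1;
    printf("%llu\n", (unsigned long long)add64(x, y));
    return 0;
}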
 
The opposite.
That is false. ARM is RISC and x86 is CISC. RISC literally means reduced instruction set computing...

1. An A7 benchmarks faster than some big-iron supercomputers of less than 2 decades ago ("heavy-lift", as in some weighed more than a ton). The A7/8 are "mobile" because they have been detuned to not burn your hand and/or require water cooling. Otherwise, they can "lift" just as much.
So would any modern chip? Not sure how that relates to ARM vs x86.

2. The new arm64 instruction set allows both the LLVM compiler and the Ax CPU dispatch/retire unit to optimize performance much more than the very old x86 ISA allows. You can actually do less with an Intel CPU, and it eats more power trying.
Exactly what can an ARM chip do that a x86 chip can't? And by that I mean tasks, not what power it takes to do them. I can think of several tasks an x86 chip can perform considerably faster than ARM.
 
It's actually never been a problem for me at all. What are the other ram related issues that you are having?

Safari reloading tabs never happens to you? Or you just don't care?
Well, at least I and I assume a good number of other people find it extremely annoying as well.
(Google "ipad or iphone safari tab reloading".. yeah they all just want to brag about how much RAM they have)

It's also annoying that you generalize those who want more RAM as people who just want to brag to Android users. The same goes for the other person saying we're jailbreakers and widget lovers.

I often jump between many webpages. I might search for something related to an article that I was reading, and that process alone could involve opening another set of tabs. Or I will have a bunch of tabs with related information and jump back and forth. The tabs frequently reloading really becomes annoying: not only does Safari often forget where in the page I was, but having to wait for it to load each time is just as annoying.
It's the same issue when you close Safari and try to get back to an article you were reading earlier.
And it's the worst when you're trying to get back to a movie, YouTube video, etc.

That's actually my biggest complaint regarding the lack of RAM as I use safari the most on my iPhone and iPad.
 
There's a reason chip makers never went over 4GHz, and it's not because they can't.

Both IBM with POWER and Fujitsu with SPARC went well over 4 GHz, but both of those are multi-million-dollar systems which require significant (large and noisy) custom cooling. Cooling is the reason most mobile and desktop systems stay away from 4GHz.

It's difficult to compare a mobile and a desktop processor without bias, as they are both designed for specific use cases. While ARM is catching up for multimedia, x86 blows it out of the water for complex floating point operations. Try benchmarking a CAD application and tell me how well any ARM chip performs.

I have. The floating point units on the A7 are quite competitive. The low number of cores and the power-limited memory size, bandwidth and latency of mobile memory chips (plus down-clocking so that an iPhone does not require a fan and water cooling, etc.) seem to be the main things holding the iOS A7 processor design back from CAD applications.

Yes, the current known chip implementations are for specific use cases. But it's not too hard to imagine the performance increase if Apple's A7 (and newer) arm64 micro-architecture were ported to a high-temperature, high-performance 20nm (or sub-20nm) semiconductor process and put in a system with lots of cooling capability plus a high-bandwidth memory system; it would be quite competitive with an Intel desktop system.

Basically, the only reason iPhones of the future will be slower than a desktop or server system is so that they won't burn your hand and run down the battery before you get home.
 
Both IBM with POWER and Fujitsu with SPARC went well over 4 GHz, but both of those are multi-million-dollar systems which require significant (large and noisy) custom cooling. Cooling is the reason most mobile and desktop systems stay away from 4GHz.
So did Intel. It has nothing to do with heat, and everything to do with performance. At a certain point, having more low-clock-speed cores yields faster performance than fewer high-speed cores. That's why all modern chips are still in the 2-3GHz range with more and more cores, and why parallel programming is such a big deal.

I have. The floating point units on the A7 are quite competitive. The low number of cores and the power-limited memory size, bandwidth and latency of mobile memory chips (plus down-clocking so that an iPhone does not require a fan and water cooling, etc.) seem to be the main things holding the iOS A7 processor design back from CAD applications.
Care to share your results? I'm skeptical of how you measured this. Where did you read that the A7 is down-clocked, or is that just your opinion? I've never heard of any chip doing this on purpose. Do you know why we have chips in various ranges (2.2, 2.4, 2.6GHz)? It's not on purpose.

Yes, the current known chip implementations are for specific use cases. But it's not too hard to imagine the performance increase if Apple's A7 (and newer) arm64 micro-architecture were ported to a high-temperature, high-performance 20nm (or sub-20nm) semiconductor process and put in a system with lots of cooling capability plus a high-bandwidth memory system; it would be quite competitive with an Intel desktop system.
Well yes, but obviously Intel isn't going to sit on its hands and stop improving their designs while ARM takes the lead. ARM will always be better on power because that's what it's optimized for; x86 will always be better for performance as it's optimized for speed.

Basically, the only reason iPhones of the future will be slower than a desktop or server system is so that they won't burn your hand and run down the battery before you get home.
Heat is not the issue; battery life is. They could stick a Haswell chip in there if they wanted to, but it would have a 2h battery life.
 
Exactly what can an ARM chip do that a x86 chip can't?

The most obvious one is any subroutine that fits in 31 registers but not 16. The arm64 chip has more named CPU registers, and thus won't waste instructions and time spilling data to/from first level data cache.

However, they are both Turing-complete processors, and thus theoretically each can do anything the other can do, given enough memory, time and/or power.
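
As a hypothetical illustration of the register-count point (my own sketch, not the poster's, and the actual allocation is entirely up to the compiler): a loop with many simultaneously live values is the kind of code that can keep everything in 31 general-purpose registers on arm64 but may have to spill with only 16.

Code:
#include <stdint.h>

/* Hypothetical sketch: a small filter with many values live at once.
 * With ~31 general-purpose registers (arm64) the coefficients and the
 * sliding window can all stay in registers; with ~16 (several of them
 * reserved) the compiler is more likely to spill some of them to the
 * stack / L1 cache inside the loop. Actual behavior depends on the
 * compiler and optimization level. */
int64_t filter(const int64_t *in, int64_t *out, int n) {
    int64_t c0 = 3, c1 = -1, c2 = 7, c3 = 5, c4 = -2, c5 = 9, c6 = 4, c7 = -6;
    int64_t w0 = 0, w1 = 0, w2 = 0, w3 = 0, w4 = 0, w5 = 0, w6 = 0, w7 = 0;
    int64_t acc = 0;
    for (int i = 0; i < n; i++) {
        /* shift the window and bring in the next sample */
        w7 = w6; w6 = w5; w5 = w4; w4 = w3;
        w3 = w2; w2 = w1; w1 = w0; w0 = in[i];
        /* all 8 coefficients and all 8 window values are live here */
        out[i] = c0*w0 + c1*w1 + c2*w2 + c3*w3 + c4*w4 + c5*w5 + c6*w6 + c7*w7;
        acc += out[i];
    }
    return acc;
}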
 
I confirm this rumor.

It is confirmed then! :) This made my day!

Arn could you please update the article like so -

[Update: Confirmed] According to a MacRumors reader the iPhone 6 will feature 4GB RAM


I can't wait!

Does Geekbench for iOS and Mac even measure the same things? I agree it's ludicrous to think the A8 is even in the same ballpark as an i5 Haswell CPU. Those are desktop CPUs designed for heavy lifting, not mobile CPUs. Also, isn't the ARM instruction set much smaller than x86's? The stuff you can do with Intel's is probably much greater.

What does "Desktop Class CPU" even mean? I highly doubt it means that it can replace a standard desktop CPU. Probably Apple is referencing a single metric that is close to a desktop CPU's and then calling the whole thing a "Desktop CPU".

I believe Apple will release the best mobile CPU at the end of the year, but to think it can power a desktop is far-fetched.

My understanding is that Geekbench for Mac does test the same things as Geekbench for iOS. I think it is perfectly reasonable to think that the A8 has the power to run a MacBook Air, or even a MacBook Pro or iMac for that matter. However, there are far more factors involved than just having raw power.

So even though ARM chips are catching up with x86 chips in terms of raw power, it doesn't mean that they will be replacing them in the desktop space any time soon.

The next MacBook Air, coming this holiday, will use a Broadwell Core M, and I am really excited.
 
So did Intel. It has nothing to do with heat, and everything to do with performance. At a certain point, having more low-clock-speed cores yields faster performance than fewer high-speed cores. That's why all modern chips are still in the 2-3GHz range with more and more cores, and why parallel programming is such a big deal.

When did Intel ever go over 4 GHz? The Pentium 4s never hit that.

(Unless you're counting Turbo Boost. Which I suppose in a way you should... The 2.7 GHz quad-core i7 in my MBP isn't really a 2.7 GHz CPU... it's running well over 3 GHz most of the time.)

Also, it absolutely has to do with heat. The higher you ramp up the frequency, the higher you need to crank the voltage. This results in a steep, superlinear increase in power draw (and thus heat).

If heat and power weren't a problem, they'd just do 6 GHz and still use 4 cores per processor.
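
The scaling behind that point is the usual dynamic-power rule of thumb, P ≈ C·V²·f. The little C sketch below uses made-up round numbers (they are not measurements of any real chip) just to show how quickly power grows when frequency and voltage rise together.

Code:
#include <stdio.h>

/* Dynamic-power rule of thumb: P ~ C * V^2 * f. The capacitance,
 * voltages and frequencies below are invented round numbers, chosen
 * only to show the trend, not to model any actual processor. */
int main(void) {
    double C = 1e-9;                  /* effective switched capacitance in farads (assumed) */
    double points[][2] = {            /* { frequency in Hz, core voltage in V } (assumed) */
        { 3.0e9, 1.00 },
        { 4.0e9, 1.15 },
        { 6.0e9, 1.40 },
    };
    for (int i = 0; i < 3; i++) {
        double f = points[i][0], v = points[i][1];
        printf("%.1f GHz @ %.2f V -> ~%.1f W\n", f / 1e9, v, C * v * v * f);
    }
    return 0;
}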
 
The problem with the future A8 is that it's a complete SoC with even the RAM on-board. 2GB on a laptop these days is pitiful. I guess there are some very light users of the MBA here, but I returned my 4GB 2013 MBA and went with an 8GB machine when I was getting page outs and serious slowdowns with just multiple tabs in Safari or Chrome (so much for Mavericks memory compression).

The A8 cheerleaders are also forgetting that Geekbench is entirely a CPU benchmark, and a very limited one at that. The HD 5000 is still quite a bit faster than the latest PowerVR GPUs, and Broadwell is supposed to have a 2x faster GPU. Many MBA owners also use their laptops for work and not just surfing the web as I do on my iPad Air. For photo editing I need fast RAM access and fast SSD access, and I don't think the A8 will have good enough I/O and multithread performance to even come close to Intel chips for the next couple of years.
 
When did Intel ever go over 4 GHz? The Pentium 4s never hit that.
They were never commercially available, but they demo'd them. Once we hit the 4GHz mark, chip makers quickly realized pushing clock speed was not the way forward.

If heat and power weren't a problem, they'd just do 6 GHz and still use 4 cores per processor.
No, they wouldn't. Programmers understand this better than most. Breaking a task into pieces and having multiple cores complete each piece is almost always faster than using a single core to complete the whole task.

If higher clock rates were best, why would supercomputers use 100,000+ cores and not a few cores sitting in a vat of liquid nitrogen? At that level heat, size, and cost are not an issue.
 
Heat is not the issue; battery life is.

Have you forgotten your freshman physics? Conservation of energy?

Running down the battery fast enough will burn your hand (or require a large heat sink, fan, or water cooling to dissipate the heat through something other than your poor hand).
 
My understanding is that Geekbench for Mac does test the same things as Geekbench for iOS. I think it is perfectly reasonable to think that the A8 has the power to run a MacBook Air, or even a MacBook Pro or iMac for that matter. However, there are far more factors involved than just having raw power.

Ahuh >_>

My MacBook Pro scores over 13,000 on Geekbench 3. What's the iPad Air score again?

(Less than 2,500)
 
The most obvious one is any subroutine that fits in 31 registers but not 16. The arm64 chip has more named CPU registers, and thus won't waste instructions and time spilling data to/from first level data cache.
And a real-world example of that is?

However, they are both Turing-complete processors, and thus theoretically each can do anything the other can do, given enough memory, time and/or power.
Well yes, but on a desktop time is rather important and power is not. On mobile it's the opposite.
 
They were never commercially available, but they demo'd them. Once we hit the 4GHz mark, chip makers quickly realized pushing clock speed was not the way forward.

*facepalm*

And overclockers had hit 6 GHz on P4s years earlier. A demo doesn't qualify as Intel "hitting 4 GHz".


No, they wouldn't. Programmers understand this better than most. Breaking a task into pieces and having multiple cores complete each piece is almost always faster than using a single core to complete the whole task.

What?

If higher clock rates were best, why would supercomputers use 100,000+ cores and not a few cores sitting in a vat of liquid nitrogen? At that level heat, size, and cost are not an issue.

What part of "if heat weren't an issue, they would both crank up the frequency and still use multiple cores" did you not get? :eyeroll:
 
And overclockers had hit 6 GHz on P4s years earlier. A demo doesn't qualify as Intel "hitting 4 GHz".
They demo'd them to show their progress with clock rates. And were the overclockers' chips stable? No, they were not, as they were using cooling methods the chips weren't designed for. Most of them only ran for a few minutes before they crashed from memory corruption.

What part didn't you understand? Here's an analogy:

You have a set of 100 numbers, and you want to check which numbers are prime. You have two computers: A with a single core at 100GHz and B with 100 cores at 1GHz. Computer A uses one core to run through the list and do its calculations sequentially; computer B gives each core one number and computes them all in parallel. Which is faster? Computer B. This has been proven many times and is the exact reason all modern chips have 8-12 (and growing) cores and the clock speed hasn't changed very much.
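
For what it's worth, here's a minimal C/pthreads sketch of that analogy (my own toy example, not anyone's benchmark): splitting the 100 numbers across a few worker threads stands in for computer B, and setting NTHREADS to 1 gives you computer A's sequential behavior. The numbers are tiny, so this only shows the structure of the approach, not a real speedup.

Code:
#include <pthread.h>
#include <stdio.h>

/* Check which of the numbers 1..100 are prime, splitting the list
 * across several worker threads. Each thread takes every NTHREADS-th
 * number, so the work is spread evenly across cores. */
#define N        100
#define NTHREADS 4

static int is_prime[N + 1];

static int check_prime(int n) {
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int n = 1 + id; n <= N; n += NTHREADS)
        is_prime[n] = check_prime(n);
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    for (int n = 1; n <= N; n++)
        if (is_prime[n]) printf("%d ", n);
    printf("\n");
    return 0;
}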

What part of "if heat weren't an issue, they would both crank up the frequency and still use multiple cores" did you not get? :eyeroll:
You said 4 cores, not multiple, which is obviously not correct, as that's not how any modern supercomputer is designed.
 
Ahuh >_>

My MacBook Pro scores over 13,000 on Geekbench 3. What's the iPad Air score again?

(Less than 2,500)

Then after this generation the iPad Air may score ~5,000.

But my point was not that ARM chips have caught up to MacBook Pro performance; I was claiming that MacBook Pros do not need as much performance as they have.

----------

Skylake is on schedule

Yeah, right. We probably won't see Skylake until the end of 2016, and some parts won't even be out until 2017.
 
Well yes, but on a desktop time is rather important and power is not. On mobile it's the opposite.

Exactly. Which is why a desktop should be faster than a mobile, no matter which CPU architecture is used. (e.g. a new ARM desktop chip should be faster than an Intel mobile device chip, and vice-versa, when using the same semiconductor technology generation).

And power is quite important on a desktop. Ever seen the power and cooling subsystem that a Cray or IBM Z-series requires? Nobody wants that on their desk, even if it ran their apps tons faster.

Lots of PC users just hate the fan noise (one good reason to buy Mac Pros).
 
It's significantly harder to make a legacy app compile for 64-bit than the armv6 -> armv7 switch was, unless you mean, for example, OpenGL ES 1.1 vs. 2.0. (All armv6-based devices supported 1.1 only, while all armv7-based ones supported 2.0. This, however, has nothing to do with the pure armv6 vs. armv7 difference.)

Actually, I'd say 99.9999% of armv6-compatible legacy apps were armv7-compatible, without any need to change anything in the source (except for, maybe, the non-obligatory deprecation-related changes). With the 32 -> 64-bit switch, you need to revisit almost all of the old code, for example int declarations, shifts, etc. A LOT more change...
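
To illustrate the kind of declarations that bite (a small C sketch of my own, not from the post): on the 32-bit iOS ABI int, long and pointers are all 32 bits, while on arm64 long and pointers grow to 64 bits, so old assumptions about widths have to be audited.

Code:
#include <inttypes.h>
#include <stdio.h>

/* Illustrative 32- vs 64-bit pitfalls. Under the 32-bit iOS ABI (ILP32)
 * int, long and pointers are all 32 bits; under arm64 (LP64) long and
 * pointers grow to 64 bits while int stays at 32. Code written against
 * the old assumptions needs auditing. */
int main(void) {
    int x = 0;

    /* 1. Storing a pointer in an int truncates on LP64:
     *      int addr = (int)&x;   <- worked on armv7, loses the top bits on arm64 */
    intptr_t addr = (intptr_t)&x;  /* portable replacement */

    /* 2. Width assumptions: anything that relied on long being 32 bits
     *    (wrapping, serialization, hashing) changes meaning on arm64. */
    printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
           sizeof(long), sizeof(void *));

    /* 3. Fixed-width types and explicit format macros sidestep the problem. */
    uint32_t mask = UINT32_C(1) << 31;
    printf("%p %" PRIu32 "\n", (void *)addr, mask);
    return 0;
}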

Yes, OpenGL and ARM changed at the same time, and that time is a cut-off point.

The point isn't that armv6 apps were armv7-compatible, but rather that armv7 apps don't run on armv6 hardware. The A7 was the first 64-bit iOS processor. At some point those devices are likely to be the oldest supported machines, and then they will be pushed to their limit.
 
Have you forgotten your freshman physics? Conservation of energy?

Running down the battery fast enough will burn your hand (or require a large heat sink, fan, or water cooling to dissipate the heat through something other than your poor hand).
No I haven't, but I don't think you know as much about the software side of things as about the hardware. See my post above with an analogy on single vs. parallel completion of a task.
 