Personally, I think it is very naive to think that it is happening here--if not plain old wishful thinking. What, did Samsung set up sleeper cells half a decade ago only to activate them now? Or a decade? I have seen registered users here accused of that who joined that long ago.

QFT.

And almost without fail, the ones who accuse others of being trolls or astroturfers are themselves relative newcomers.

However, length of membership should not matter. Everyone is a newbie sometime.

The funny thing is, I bet most people here can predict everyone else's response by now. :)
 
"Online forums" is as far as you got. Come back to me when you dig deeper and find out there are no "examples" of that and that the original allegation was online reviews.

But if you want to believe MR is littered with "paid trolls" go right ahead. I find it humorous.

You're right, I didn't "dig deeper". Whatever. The fact is, Samsung has been caught doing this type of nefarious activity. I see a possibility. Nowhere did I say that MR is "littered with paid trolls". I don't believe that at all. What's humorous is you calling me out for not digging deeper, yet you read so much more into my comments than what I actually said.
 

You're just the latest person to join in on a conversation that's been dragging on for way too damn long now. No, you probably didn't say anything about MR being "littered with paid trolls" exactly, but you brought up the subject, which will inevitably cause someone else to bring up things that were already discussed to death well over a year ago.

Also, have you seen the new Nokia 1520? I think it's better than the iPhone 5S.
 

No, I didn't bring up the subject.
 
One place where data could take up a lot more space on 64-bit vs 32-bit would be structures with lots of pointers. I haven't looked personally, but it would not surprise me at all if the DOM tree of a web browser has a lot of pointers in it. I don't know if it would reach the more-than-1MB range - that would be *a lot* of pointers (~250k of them) to get that high.

I would suspect that there are some other bugs lurking in Safari/WebKit on 64-bit iOS that would be a better explanation for the issues being seen. It already sounds like 7.1 is a huge improvement on A7 devices.

Yes, that's exactly what I was referring to. And it's basically a rounding error compared to what some people here think is going on with Safari on the new iPads.
 
EDIT (12/19/2013 15:51 GMT): this post is based on wrong results; please ignore it. I only keep it intact for historical purposes. See https://forums.macrumors.com/posts/18522600/ for the update.

Original post:

the good news is that the apps using WebView are no longer randomly crashing as they were before in iOS 7.0.x.

Unfortunately, this isn't true. See my full post & elaboration at https://forums.macrumors.com/posts/18519697/ (I can cross-post the entire comment here if needed).
 
You are of course wrong. For example, AArch64 has thrown out most of the predication, which limits clock speed, costs power, and achieves less than people thought 20 years ago (while leaving some special cases that gain 99% of the advantages at minimal cost in the processor). There are plenty of other changes. There are changes that can't be made in 32 bit because of compatibility reasons, but they can be made in AArch64. String processing is faster (and therefore uses less power) due to 64 bit instructions. Public key encryption / decryption is massively faster due to 64 bit instructions. And some encryption / decryption now has hardware support, so you get massive power savings there.
You're of course still wrong because you're still not reading. And you're being extremely pedantic.

Let me give you a hint. Two identical processors, one 64-bit, one 32-bit, but otherwise identical. The 64-bit version will already use more power by the simple fact that it requires more transistors.

Sigh.

If you're still not reading and want to reply, don't bother. I'll agree with you in advance so you can say you won an argument on the internet.

Yeah, now I am being pedantic. I just get tired of people arguing while showing they haven't read what they are arguing against.

Can you name an operation which is less efficient (cycles per operation) on a 64-bit processor? The A7 in particular? :rolleyes:

Cache pollution is more likely to happen with 64b processors compared to 32b processors, particularly if a program uses a lot of pointers, because you have to write twice as much data. If cache pollution causes your processor to hit ram, that will have a rather dramatic effect on performance.
 
EDIT (12/19/2013 15:51 GMT): some of this post is based on wrong results; please ignore it. I only keep it intact for historical purposes. See https://forums.macrumors.com/posts/18522600/ for the update.

Original post:

I ran iOS 7 on my iPad 3 and ran into fewer crashes. Those were beta releases too.

Regrettably, it crashes just as much. See my measurement results linked some hours ago.

----------

Are you sure that is an iPad Air you are using? Mine keeps 6 going quite nicely at the same time sans any problems. Daily use since early November without trouble.

...With small or mobile-specific web pages. With normal (non-mobile) / large pages, you would run into the same problem.

I've published a lot of benchmarks on this - feel free to read my stuff in the linked thread above.

----------

What do you think is causing the tab reloading on the iPad Air? Is it iOS 7 or lack of RAM? I see a low memory log almost every day in diagnostics. Honest question.

Both lack of RAM and very memory-hungry UIWebView-design.

----------

I think you're just seeing the ios 7 effect. I have an iPad 4, and since the upgrade it crashes to the black screen with white Apple logo several times a week. I can barely remember a crash at all in ios 6.

Nope, it's the same UIWebView problem plaguing earlier OS versions too.

----------

"Low memory" conditions are part of normal operation of iOS. It's intentional. iOS keeps background apps in RAM for as long as possible and throws them out when it needs their memory.

I wish it worked that way... Today, there's simply no protection - if, for example, you exhaust the RAM during user scrolling, you'll immediately crash. See my earlier benchmarks and posts here at MR.

iOS has just no protection mechanisms against these things.

----------

That's not the chip. That's ios 7.

Not iOS 7 per se, but all iOS versions.

----------

Except that iDevices are already in the "too little RAM" category even before you add any extra required for a 64-bit app. :\

Exactly. Planned obsolescence? Making way for the 2+ GB iPad Pro?
 
You are outing yourself here. AArch64 (the 64 bit ARM architecture) uses less power. The instruction set has been simplified. Twenty years of experience of what works well and what doesn't in the 16 and 32 bit instruction set lead to changes that improve the operation of the processor.

No, I meant exactly what I said.

Given engineers of the highest caliber, doing their best at designing a 64-bit and 32-bit CPU: the 64-bit design will consume more power compared to the 32-bit design. (except in the very narrow band of applications that consistently deal with numbers larger than 2^32)

I didn't say AArch64 versus AArch32.
I stated it as a general statement that applies to all CPU designs and ISAs.

And yes, it even applies to AArch64 versus "a 32-bit ISA similar to AArch64."
If one were to apply those 20 years of experience you mention to a revised 32-bit ISA, you'd see a CPU which will, in general, consume less power than an AArch64 CPU.
In fact, given that AArch64 uses 32-bit instructions, this hypothetical AArch32v2 would likely differ from AArch64 only in register width. Simply powering the extra flip-flops needed for the wider registers already makes the 32-bit design the lower bound on the 64-bit design's power consumption.

Of course, CPU power consumption is just a part of a system's power consumption, given that it ignores nuances of application code (such as that 2^32 caveat I mentioned above). So, while the statement holds true, said hypothetical AArch32v2 would not have been as beneficial to most iOS apps compared to AArch64 given what Apple did to the Obj-C runtime as mentioned before.

Added comment: I just saw that throttlemeister basically said the same thing above; I should have read through the rest of the thread before replying.
----------

Can you name an operation which is less efficient (cycles per operation) on a 64-bit processor? The A7 in particular? :rolleyes:

While I can't be sure as I have no idea how they implemented it, I'd venture a guess that trying to multiply two 32-bit integers is likely to take more cycles on a 64-bit processor than a 32-bit one. It's also blurred even more since we didn't really discuss how long a cycle is.

Does it really matter? Probably not, but it's an interesting academic exercise :)
 
You're right, I didn't "dig deeper". Whatever. The fact is, Samsung has been caught doing this type of nefarious activity. I see a possibility. Nowhere did I say that MR is "littered with paid trolls". I don't believe that at all. What's humorous is you calling me out for not digging deeper, yet you read so much more into my comments than what I actually said.

I wouldn't think that paying a troll makes any sense. If I paid people for posts that are damaging to Apple, I wouldn't pay trolls. I would pay people posting about genuine-looking but fake bad user experiences. I would pay people posting about Macs falling apart after 14 months, asking for help getting their computers repaired, and so on. I would pay people who post Samsung adverts (which has happened here). If you can guess that someone is trolling, I wouldn't pay for their posts.

Samsung didn't pay for trolling. They paid for genuine-looking but fake negative reviews of HTC phones.
 
Why would the APIs change when all the data types are fixed sizes?

OK, I can see how that could have been unclear. Allow me to clarify. When discussing an 'API', there are two things you might mean:
  1. The published, text description of the interface, or
  2. The actual code that implements the interface

I was referring to the second option. If the code that implements the interface is still 32-bit, you're not going to get any of the benefits of moving it to the new 64-bit processor. You may still pass the same data types in as arguments, and get the same data type out as a result, but the 64-bit version of the implementation will be able to take advantage of the new capabilities of the 64-bit processor, where the 32-bit version will be restricted to the capabilities of the old 32-bit ARM instruction set.
 
Let me give you a hint. Two identical processors, one 64-bit, one 32-bit, but otherwise identical. The 64-bit version will already use more power by the simple fact that it requires more transistors.

Sigh.

That's a really simplistic view. By that logic, Intel's Haswell chips would use more power than previous generations. You are ignoring that newer processors have the ability to shut off large sections of the processor based on what it's doing. You are also ignoring things like process changes where the new feature size allows for a given transistor to be switched with even less power. You are also ignoring that new instructions can be more efficient at a task than previous instructions, allowing the processor to go into an idle state sooner.

The point is that there are many things in play in a given system. Things are never identical except for one thing. There are many many changes making it very tough to pull out the impact of having more transistors or a 64 bit processor vs a 32 bit.

As for cache pollution, just because the CPU virtual address space is 64 bit doesn't mean that the cache is going to also use 64 bit addresses. The cache only has to worry about real memory space and in a system like the iPhone or iPad, Apple can customize the cache implementation to be optimized for the RAM in the system. This isn't a general purpose CPU that can go into a system with memory slots that can be populated many ways by the user. This is a SoC that will be paired with a fixed amount of RAM for the lifetime of the product.

64 bits is just not going to materially impact the cache, especially not doubling the usage.
 
Well, apparently you do, so maybe elaborate a little? Oh wait, you didn't elaborate because you don't have a clue what you're talking about. You don't even work in the IT industry, do ya?

I do not work in the IT industry. Why would I associate IT folks with tech savvy? Who does? Car dealerships? Burger flippers?

The 64-bitness of the A7 is about more than extra memory spent on wider pointers. Apple owns the entire experience:
The design of the chip
The computer language (Objective-C)
The API (Cocoa)
The compiler (Xcode/ARC)

To my knowledge they are the ONLY folks in the mobile industry with this sweeping control. Apple uses this to their advantage. We already know the core is backwards-compatible with 32-bit, but new applications and frameworks are encouraged to be 64-bit. Why is that?

With Cocoa, the full 64 bits are not used for memory addressing. Instead, upper bits are used for high-speed inline retain/release counters. This gives native apps an impressive 30% boost for doing nothing but recompiling. And this is before the ARM architecture advantages. Retain/release calculations only rarely have to dip into a semaphore-protected area to do their thing. Apple had been patiently waiting for the move to 64-bit to carry this feature over to the mobile space.

Every bit is utilized to some degree for something and I detect little to no hit from the different memory usage looking at an app as a whole. I have iPhone 4, 4S, 5, and 5S to play with. However if you are running 32-bit apps, you get double-banged: the 32-bit frameworks are all loaded as well.
 
However if you are running 32-bit apps, you get double-banged: the 32-bit frameworks are all loaded as well.

This is probably the biggest impact of the transition to 64-bit -- the 64-bit frameworks are always going to be loaded. And if you launch just one 32-bit app, the 32-bit ones are going to be loaded as well. This alone was the biggest factor for me in deciding to make a 64-bit version of the app I work on. Memory is limited enough as it is -- I don't want to make it worse for our users if I can avoid it.
 
Let me give you a hint. Two identical processors, one 64-bit, one 32-bit, but otherwise identical. The 64-bit version will already use more power by the simple fact that it requires more transistors.

Let me give you a hint. There is no reason why an engineering team would spend time on two such identical processors. One of them would become obsolete well before the tools, OS and apps became fully mature to take advantage of the new architecture (which takes years).

SGI/MIPS evaluated this around 2 decades ago with their 64-bit MIPS CPU (one was used in the Nintendo 64), and determined that the increased transistors, area and power of going 64-bit were less than 10%.
 
Well they need that RAM to offset the bloatware factory that is Touchwiz. I've never seen a phone use 1.3GB idle until I got my Note 3. What a tragic step back in technology: throwing more hardware at software instead of optimizing.

Right... just like:

  • Apple adding more RAM for voice control.
  • Apple adding more RAM for video recording.
  • Apple claiming Siri suddenly couldn't run on an iPhone without special audio hardware, even though pre-Apple Siri could.
  • etc.

Apple could've optimized, but they chose requiring more hardware.

Mind you, I believe in optimization as well. However, if hardware falls in price quicker than it would take to redo the code and/or a feature would work better/smoother with more RAM, sometimes it does make sense.
 
Uh, they added RAM in order to add core functionality. Samsung added RAM to accommodate BS nobody needs, like Knox and MyMagazine. The iPhone 4 running Siri was terrible (jailbroken, but real Siri). Did it work? Sure it did. But did it work reliably? I had an iPhone 4 with that hack installed, and Siri had a hard time understanding speech. The 4S indeed had better audio handling, whether through an audio chip or better microphones.

http://www.theregister.co.uk/2012/02/06/why_iphone_4_wont_get_siri/

The iPhone has a third of the RAM, half the cores, about half the clock speed, and less than half the battery capacity of my Note 3, yet it is faster, smoother, and lasts about as long if not longer, according to AnandTech. Apple knows optimization better than anybody else, especially compared to a company like Samsung, famous for packing in a ridiculous amount of bloatware (400MB of it on the Note 3, to be specific).

Sure, adding fancier hardware might do the trick, but making the most of available resources is the way forward, not cramming in an octa-core and a laptop battery to cope with the 400MB of unneeded apps piled on top of that already tragic JIT Dalvik.
 

So, what you're saying is that Apple, despite charging the same for their hardware, is actually giving you less than the competition and masking it as "but you don't NEED that much, so we're not giving you that much".

I don't buy it.

There is no reason Apple couldn't have upped the memory options in the new devices without serious cost overhead, except to keep their margins higher and plan for obsolescence.

There is no technological reason they aren't providing 2GB of RAM now. The excuse "but it doesn't need it" is irrelevant when they have already proven that each iOS iteration requires more.

Don't get me wrong, I'm well aware that the Android manufacturers' solution of throwing more hardware at a problem isn't the best answer either. (I have removed TouchWiz from my device completely and the phone is now amazingly fast and smooth.) But purposely limiting hardware for no significant reason just feels like cheapness and nickel-and-diming.
 
Well they need that RAM to offset the bloatware factory that is Touchwiz. I've never seen a phone use 1.3GB idle until I got my Note 3. What a tragic step back in technology: throwing more hardware at software instead of optimizing.

Who's to say they won't optimize the software as well? If a Note 4 or S5 is released in conjunction with this new chipset and RAM, your software "issues" may be solved, or you can always root. My dad has the Note 3 and I notice no problems at all. I have the HTC One myself and it's very stable.
 

iOS doesn't take nearly as much RAM as TouchWiz, so yeah, it doesn't need it.

Car analogy warning: I think of the iPhone as a Lotus. Small engine, yet the car still outperforms many V8 behemoths because it's so light. You don't get "less for the money" and it's hardly "cheap." It's just a product that was carefully put together so that resources weren't squandered. The iPhone 5/5S/whatever doesn't have any issues with the RAM it has because, instead of upping the RAM, they just made the apps more RAM-friendly. Efficiency is the way forward, not compensation.

----------

Who's to say they won't optimize the software as well? If a Note 4 or S5 is released in conjunction with this new chipset and RAM, your software "issues" may be solved, or you can always root. My dad has the Note 3 and I notice no problems at all. I have the HTC One myself and it's very stable.
That's what ART is for, but it won't arrive for a while. It'll certainly improve things, but Android will always be hampered by the fact that everything runs in a Java VM. But yeah, I look forward to seeing what ART will do.
 