Hackers normally release a proof-of-concept of an exploit.

I guess Apple should give him a job.
 
Something like that or a class-action lawsuit is bound to happen. Miller could get into really nasty legal trouble with his stunt, and that would be deserved, considering the level of douchebaggery he displayed.

I always find it amusing to watch people call others names far nastier than the behavior they're complaining about, then claim some kind of moral high ground for doing so. It's all the more ironic when it breaks this forum's rules against personal attacks and flames, showing disdain for the very sort of rules you claim to be upset about being broken elsewhere. So if you get in trouble over this post, just know it's well deserved. ;)
 
If it is an early version of ICS, what exactly did they test it on? I'd love to know how they got access to an unreleased build, what it was running on when tested, and why we only have 3.2.1 out now.

All ICS news leaks I've seen have pointed to a 4.0 release. The only references to Android 3.5 I can find were from 2010, when people were assuming Gingerbread was going to be 3.0 after a Samsung Galaxy Tab Q&A.

http://www.slashgear.com/samsung-confirm-android-3-5-honeycomb-for-tablets-02100317/

Probability says on a Nexus S, Xoom, or the SDK's emulator. Why? Because during the end of Honeycomb/early-ICS, those were the most popular dev devices within Google.

How it was tested, I'm less certain of.

I'd guess it was probably a dev unit handed out from Google if Jon or a firm was hired to do an audit. Or maybe one of his friends who worked at Google broke his NDA and let him play with it.

It's kinda like how several companies had iPads to do development on prior to the iPad announcement.

As for version numbers....

The news isn't a trustworthy source for version info. And realistically, we shouldn't care what it's called that much because it's mostly marketing.

As you mentioned, people assumed Gingerbread was 3.0 because...well... 3 is 2 + 1. Maybe it started intending to be 3.0, but then was "downgraded" to 2.3. Maybe it was always going to be 2.3, but somebody made the assumption, posted it as fact, and that piece of "news" took off in the echo chamber.... kinda like the iPhone 5 rumors.

Version numbers are arbitrary. They're simply a guess by somebody in management about how significant they feel the next release is coupled with how many features they can cram in and test based on the current plan and schedule, and how loudly they'd like marketing to sell the new product.
Somebody will have to check in a code change or branch so that when a build comes out internally to test, it's distinguishable from the previous one.

To me (and I must say this is a guess based on what I've seen), it looks like the plan at the time of the Honeycomb announcement was for Honeycomb to be 3.0 and ICS to be 3.1. But code quality didn't make it there, so 3.1 became an update, and ICS became maybe a 3.2 or 3.5. If it slipped again and/or the team decided to pack in more features, that would be the time somebody would have promoted it to be called a 4.0 release, because it simply felt like more of a new product than just a code merge.

Since most of this discussion wouldn't be out in the open, the news would have little accurate information.

I must clarify once again that this is speculation on my part as well, but is heavily based on my experiences of how software development timelines tend to work. But it's realistic. I've seen it happen. And while I have friends at Google who have shown me stuff cuz they think it's cool, I know it's wrong for me to inquire about the Honeycomb dev history. Then again, they don't work on Android directly, so it was probably hidden from them too.
 
This is complete nonsense.

All modern browsers (IE9+, Firefox, Google Chrome, Opera etc.) have JavaScript interpreters that work at a very low level yet they don't have these security flaws.

I really don't see why Safari in iOS has to be the exception and why Apple has got their panties in a twist over implementing the feature so slowly.

Um, no.
It appears you're making connections between the wrong pieces of data.

All modern browsers do use JIT compilation techniques. Correct.
Said modern browsers don't have the security flaw Charlie demoed. Incorrect, but irrelevant, because Charlie's exploit doesn't target browsers.

Charlie demonstrated the privilege of being able to execute code from memory.
After reading that statement, somebody here's going to ask, "so what? doesn't all code have to execute from memory?"
Yes. But there's a memory management system in place that says "this piece of memory is okay to execute from" and "this piece of memory is data and should never be executed." This is a security feature. Read http://en.wikipedia.org/wiki/NX_bit

Anyhow, it looks like he got around it.

Normal apps can't do this.

MobileSafari's Nitro needs to be able to compile code into its memory and run it, so the sandbox is told during signing that this app should be allowed to execute from memory.

And this is why MobileSafari's JavaScript engine is way faster than the one App Store apps get:
Apple doesn't want to give you the privilege of executing code from memory like Charlie did, because you could make un-auditable backdoors like Charlie did. But not allowing that privilege means not allowing JIT compilers.

Now... how Charlie got the same exception as MobileSafari is unknown to us. Maybe he found a debug flag to toggle that the App Store reviewer didn't know about. Maybe he found some other hole. Maybe it was as dumb as he just had to insert a special string into the entitlements file and submit it.

Whatever the case, I'm sure Apple's working on doing something about it. If it turns out to just be an app store review change, then maybe it's not a concern for the general customer but is a fun thing that jailbreakers can do to their own phone without harming anybody. We'll see.
 
Nothing to really worry about.

That's right, Apple crushed the guy.

How dare he try to help Apple by uncovering a flaw then telling them about it instead of exploiting it.

Moral of the story ? Never be honest, or try and help Apple, they will castrate you.

Nothing like revenge. Apple thrives on it.
 
That's right, Apple crushed the guy.

How dare he try to help Apple by uncovering a flaw then telling them about it instead of exploiting it.

Moral of the story ? Never be honest, or try and help Apple, they will castrate you.

Nothing like revenge. Apple thrives on it.

Where did you get that they "crushed the guy"?

This was all carefully orchestrated by Charlie Miller for maximum exposure.

This is probably the best week of his life, what with his big upcoming conference and getting his big headline on Forbes which was picked up by the normal press in addition to all the tech blogs.

"Crushed the guy", my arse.
 
You're missing a big point here: in the cybercriminal economy, quality is as interesting as quantity. Targeted attacks have been in evidence since 2005; it was only in 2009 that they were sexed up with the name of spear phishing. If I can submit an app with a payload, that means I can send an email to someone who will then download it from the App Store on their own iOS device.

There appears to be an inordinate focus on privilege escalation; the information gathered from a device can be used for extortion, IP theft, or simple corporate espionage. The Chinese FIS have been doing this for years, so why do you assume that the range of jailbroken iOS devices won't have been scrutinised to see what vulnerabilities exist?

Extortion could potentially occur due to photos, but how many people do you know who keep photos that could get them extorted on their iOS device?

Thinking that an attacker is going to create an iOS trojan on the off chance of extorting someone with photos is a Hollywood-esque stretch of the imagination.

Intellectual property would be associated with an app on the iOS device so this data would only be compromised if the storage of the app wasn't integrated into the protected storage of the iOS device.

Most apps, especially the popular apps, that could potentially be associated with IP data use protected storage.

Corporate espionage would require access to more than the data that is exposed by this vulnerability.

For example, emails aren't exposed by this vulnerability.
 
That's right, Apple crushed the guy.

How dare he try to help Apple by uncovering a flaw then telling them about it instead of exploiting it.

Moral of the story ? Never be honest, or try and help Apple, they will castrate you.

Nothing like revenge. Apple thrives on it.
Actually, he sent in the exploit a couple of months ago, informed them of it 3 weeks ago, and went public in the press this week when they finally found the exploit and followed their own rules about it.
 
Smartphone/Tablet OSs only represent a small fraction of the total computing device marketshare of which Windows XP still has almost 50%.

The marketshare of iOS is nowhere near that of Windows XP let alone all versions of Windows, which all represent much easier targets than iOS.

You're not looking at the critical parts of what that % really means. You could have five times more activated XP licenses than the total number of mobile device users. People's computing practices have changed; they're no longer bound to relying on a personal computer for needs like email and SMS messages. Most people have more contact info on their phones than on their computers; it's simply more convenient. Not everyone syncs their phone and computer, so the phone carried daily will often be more up to date than the personal computer.

But why would a cyber-criminal bother going through all that extra effort when easier targets exist, including easier targets in the mobile marketplace, such as Android?

That's beside the point. There are a lot of iOS users, and even if it proves more challenging than exploiting Android, there's still much to gain, since there is a generous number of iOS users. I'd agree with you somewhat if iOS users were like OS X users, whose net marketshare is under 5% of the whole, but it's much closer to 45-50%, so there's something worthwhile if cybercriminals are able to take advantage of any vulnerability successfully. The best targets are those in denial who think they're invulnerable; they also tend to be the most lax in terms of safe practices.

You said Miller couldn't submit his app anonymously; well, I think you're absolutely wrong. It's not that he couldn't; he simply chose not to, since his intent was not to be malicious but rather to demonstrate and prove a point. ID theft is a serious offense that would land him in jail; however, a cybercriminal may not be above using someone else's identity to submit a rigged app.

Every update from Apple includes a detailed set of release notes that are presented when the update is downloaded and is accessible via the Apple support website.

They only show you what they want you to see. Apple's not above sliding in hidden features/fixes they don't want the public to be aware of. It's no different from many other companies.
 
People's computing practices have changed; they're no longer bound to relying on a personal computer for needs like email and SMS messages.

Neither of these data sources are compromised by this vulnerability.

Most people have more contact info on their phones than on their computers, it's simply more convenient.

Given that no other interesting data is exposed, the contacts alone are not a lot of motivation to use this vulnerability in the wild.

That's beside the point. There are a lot of iOS users, and even if it proves more challenging than exploiting Android, there's still much to gain, since there is a generous number of iOS users.

This vulnerability doesn't provide much to gain. See above.

I'd agree with you somewhat if iOS users were like OS X users, whose net marketshare is under 5% of the whole, but it's much closer to 45-50%, so there's something worthwhile if cybercriminals are able to take advantage of any vulnerability successfully.

iOS has nowhere near 45-50% marketshare. It has around 60% of the mobile/tablet marketshare, but the mobile/tablet segment is a small portion of the overall computing device market, albeit one growing at a rapid pace.

The best targets are those in denial who think they're invulnerable; they also tend to be the most lax in terms of safe practices.

No, the best targets are the OSs with the weakest security paradigm.

If your argument were true, then iOS would have more malware than Android, given that iOS has a greater marketshare than Android across mobile/tablet devices.

Another problem with your argument is the assumption that iOS users believe themselves to be invulnerable. It seems more pragmatic to think that security conscious users would pick the more secure OS when deciding which product to purchase.

You said Miller couldn't submit his app anonymously; well, I think you're absolutely wrong. It's not that he couldn't; he simply chose not to, since his intent was not to be malicious but rather to demonstrate and prove a point. ID theft is a serious offense that would land him in jail; however, a cybercriminal may not be above using someone else's identity to submit a rigged app.

At least with the iOS App Store, malware developers have to commit to some established falsification of identity to submit a nefarious app, and that app is only able to partake in relatively unprofitable malicious action.

With Android, malware developers can easily and anonymously submit a malicious app that can do much more damage than this vulnerability has the potential to do to iOS. This is shown by various examples of malicious apps that have been distributed via the Android Market. Typically, Android malware includes privilege escalation.

They only show you what they want you to see. Apple's not above sliding in hidden features/fixes they don't want the public to be aware of. It's no different from many other companies.

If you look at the Apple Security Update release notes, Apple discloses the vulnerabilities that are found and fixed by individuals at Apple. This includes the disclosure of more serious vulnerabilities.
 
Extortion could potentially occur due to photos, but how many people do you know who keep photos that could get them extorted on their iOS device?

Thinking that an attacker is going to create an iOS trojan on the off chance of extorting someone with photos is a Hollywood-esque stretch of the imagination.

Really? Then there has been no theft of celebrities' phone pictures recently, and honey traps don't occur? Not Hollywood, but reality.

Intellectual property would be associated with an app on the iOS device so this data would only be compromised if the storage of the app wasn't integrated into the protected storage of the iOS device.

Most apps, especially the popular apps, that could potentially be associated with IP data use protected storage.

Corporate espionage would require access to more than the data that is exposed by this vulnerability.

For example, emails aren't exposed by this vulnerability.

Divulging the phone numbers from an exec's phone wouldn't be useful? Look at the animal rights groups that go after pharma execs, for example.

Unfortunately, not all current cyber attacks affect what you deem to be important, but they are still important.
 
Really? Then there has been no theft of celebrities' phone pictures recently, and honey traps don't occur? Not Hollywood, but reality.

All of these incidents are due to account hacking. That means weak passwords, not malware.

Nothing about a honey trap requires a phone to be compromised by malware.

Divulging the phone numbers from an exec's phone wouldn't be useful? Look at the animal rights groups that go after pharma execs, for example.

Unfortunately, not all current cyber attacks affect what you deem to be important, but they are still important.

Almost always, those numbers are available via much more cost-effective and less illegal methods than identity theft or falsification to facilitate submitting an app to the App Store.

http://consumerist.com/2007/10/how-to-find-an-executives-phone-number-or-email-address.html
 
Update 4: CNET notes that iOS 5.0.1 addresses the security vulnerability disclosed by prominent security researcher Charlie Miller earlier this week. Miller demonstrated the vulnerability by slipping an app into Apple's App Store, a move which resulted in Apple banning him from the iOS developer program.

https://www.macrumors.com/2011/11/10/apple-releases-ios-5-0-1-to-address-battery-life-issues/

And lo and behold, like last time, it is shown to the public and Apple magically fixes it in a matter of days. They had nearly a month beforehand.
The OS X one was reported to Apple by the same guy 9 months prior to him publicly telling everyone, and less than a week after it went public, Apple fixed it.

This guy just proves why the researchers go public with the holes. It gets things fixed.
 
And lo and behold, like last time, it is shown to the public and Apple magically fixes it in a matter of days. They had nearly a month beforehand.
The OS X one was reported to Apple by the same guy 9 months prior to him publicly telling everyone, and less than a week after it went public, Apple fixed it.

This guy just proves why the researchers go public with the holes. It gets things fixed.
Yep, SOP for bugs, unfortunately.

The issue is him breaking the rules of the AppStore to prove his exploit, and then whining about the consequences of that action as if he should be allowed some higher moral ground in Apple's eyes. Separate item.
 
And lo and behold, like last time, it is shown to the public and Apple magically fixes it in a matter of days. They had nearly a month beforehand.
The OS X one was reported to Apple by the same guy 9 months prior to him publicly telling everyone, and less than a week after it went public, Apple fixed it.

This guy just proves why the researchers go public with the holes. It gets things fixed.

*facepalm*

Correlation != causation.

That it hit the news and a fix got released a few days later does not necessarily mean the fix was done between the two events.
 
*facepalm*

Correlation != causation.

That it hit the news and a fix got released a few days later does not necessarily mean the fix was done between the two events.

If it was the first time I might agree with you, but this is yet another one in a fairly long list of times it has lined up this way. Chances are Apple would have done nothing about pushing out an update without the public embarrassment.
 
All of these incidents are due to account hacking. That means weak passwords, not malware.

Nothing about a honey trap requires a phone to be compromised by malware.



Almost always, those numbers are available via much more cost-effective and less illegal methods than identity theft or falsification to facilitate submitting an app to the App Store.

http://consumerist.com/2007/10/how-to-find-an-executives-phone-number-or-email-address.html

All immaterial now, but you seem to forget that this vulnerability would have made it easier without hacking an account.
 
And lo and behold, like last time, it is shown to the public and Apple magically fixes it in a matter of days. They had nearly a month beforehand.
The OS X one was reported to Apple by the same guy 9 months prior to him publicly telling everyone, and less than a week after it went public, Apple fixed it.

This guy just proves why the researchers go public with the holes. It gets things fixed.

Is it not possible Miller became aware of when the vulnerability was going to be patched and then went public shortly before?

Beyond that I don't see the point you are trying to make given that Apple is doing a lot better in this regard in comparison to the most analogous alternative to iOS.

So, lo and behold, Google has not yet fixed a more serious issue in Android, despite the vulnerability being public even before this iOS vulnerability was.

Google Android has a similar bug but the bug in Android does allow for privilege escalation.

The two Android vulnerabilities, which have been reported to Google but not yet patched, shown in this video are:

- A permission escalation allowing the installation of applications with arbitrary permissions without user approval.

- A privilege escalation targeting Android’s Linux kernel that allows an unprivileged application to gain root access.​

http://blog.duosecurity.com/2011/09/android-vulnerabilities-and-source-barcelona/

The kernel vulnerability in Android presented in the article above is patched but the other issue is still unpatched. These threats were publicly disclosed on Sept. 20, 2011 and were most likely reported to Google prior to being publicly disclosed.

More information about these Android issues is found in the following link:

http://www.securityfocus.com/bid/49709

Even worse Android malware developers tend to use public and unpatched privilege escalation vulnerabilities in their malicious apps.

http://threatpost.com/en_us/blogs/g...using-root-exploit-android-gingerbread-081811

http://www.theregister.co.uk/2011/03/04/google_android_market_peril/

----------

All immaterial now, but you seem to forget that this vulnerability would have made it easier without hacking an account.

Hacking an individual's online accounts because the target uses weak passwords is not even close to as much effort as 1) falsifying or stealing an identity, 2) developing a malicious app, and then 3) social engineering someone you don't know into downloading the app, or hoping someone interesting randomly downloads it.
 
Is it not possible Miller became aware of when the vulnerability was going to be patched and then went public shortly before?

Beyond that I don't see the point you are trying to make given that Apple is doing a lot better in this regard in comparison to the most analogous alternative to iOS.

So, lo and behold, Google has not yet fixed a more serious issue in Android, despite the vulnerability being public even before this iOS vulnerability was.



Even worse Android malware developers tend to use public and unpatched privilege escalation vulnerabilities in their malicious apps.

http://threatpost.com/en_us/blogs/g...using-root-exploit-android-gingerbread-081811

http://www.theregister.co.uk/2011/03/04/google_android_market_peril/

From other threads I get the feeling that you hate Google.
In both cases these are Trojan horses used to get in and do the damage in the first place.
Apple, for the most part, I worry about more, because people get a false sense of security that nothing can go wrong and no trojans can get past Apple. We already know that can happen.
 
From other threads I get the feeling that you hate Google.

Nope, don't hate Google.

I just think that any criticism of Apple's performance in regards to patching vulnerabilities should be looked at in a context relative to how competitors perform in the same domain.

I think doing so further eliminates any confounds in making any judgements about the quality of the response to fix the vulnerability.
 
Wow!

What a douche! Why did he need to submit an app exploiting the flaw?

This guy probably tortures insects and considers everyone else mouth-breathers.
 
If it was the first time I might agree with you, but this is yet another one in a fairly long list of times it has lined up this way. Chances are Apple would have done nothing about pushing out an update without the public embarrassment.

I don't know of such a long list, but given that he notified Apple on October 14th and we customers received a tested and QA'ed fix on Nov 10th, I'd say that's an actually reasonable deployment timeframe, not one spurred by public embarrassment.

http://www.techdirt.com/blog/wirele...re-lose-your-license-as-apple-developer.shtml

I'm aware there are companies that would rather hide embarrassing situations. I'm not saying Apple would or wouldn't do that.

But for this particular case, for the type of change required and implied by the tech note, 4 weeks is a completely reasonable timeframe to diagnose, analyze, and deploy a fix.

If it were 9 months, I'd agree. But 4 weeks? Um, no.
 
Now... how Charlie got the same exception as MobileSafari is unknown to us. Maybe he found a debug flag to toggle that the App Store reviewer didn't know about. Maybe he found some other hole. Maybe it was as dumb as he just had to insert a special string into the entitlements file and submit it.

My understanding was that as of iOS 5.0+, all ways of accessing a web page (Safari, web apps pinned to the home screen, and the embedded WebView) have the same privileges, hence how the exploit works.

It now seems that UIWebView doesn't support the Nitro engine (rather disappointingly).

This is also why I mentioned browsers working at a "low level" (which I consider this kind of memory access to be; the browsers I mentioned ARE doing this).
 
My understanding was that as of iOS 5.0+, all ways of accessing a web page (Safari, web apps pinned to the home screen, and the embedded WebView) have the same privileges, hence how the exploit works.

It now seems that UIWebView doesn't support the Nitro engine (rather disappointingly).

This is also why I mentioned browsers working at a "low level" (which I consider this kind of memory access to be; the browsers I mentioned ARE doing this).

It's true that most of the browsers you mentioned do require the ability to execute from memory. However, Charlie isn't exploiting the browser; he's exploiting the kernel/sandbox memory protection. That the browsers use the same feature has no bearing on how vulnerable you are, which is why your argument makes no sense.

In other words, whether or not said browsers are even installed has no effect on how vulnerable your system is.

UIWebView never supported Nitro, because granting that support would mean granting the effects of this security bug to any App Store app.
 