Some like you might be that way, but most of this hate is because this guy exposed a major hole in Apple's iOS.

You know as well as I do that if he had done this to any other company (Microsoft's WP7 or Google's Android), most of these people would be cheering for him and saying how those products suck for that reason.
But god forbid Apple get nailed for the same thing.

To me it is wrong no matter what the platform or operating system.
 
Some random thoughts:

The same thing was done to Android back in the early days: researchers would upload apps with intentionally "malicious" code just to get proof and publicity.

--

Android has several hacker groups who are constantly checking new apps for malicious code. It helps that an Android app must declare many security options before it can be installed, whereas an iOS app does not. If an Android tip calculator wants internet and contacts access, you can be sure that someone is checking it out.
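
For example, a hypothetical Android tip calculator that wanted those permissions would have to declare them in its manifest, where they're shown to the user at install time (the permission names are real Android permissions; the app itself is invented):

```xml
<!-- Excerpt from the AndroidManifest.xml of a hypothetical tip-calculator app. -->
<!-- A tip calculator has no business requesting either of these, which is -->
<!-- exactly the kind of red flag those hacker groups look for: -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
```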

--

The very first iOS update ever released came about because a group was threatening to publish a giant hole that Apple hadn't fixed even after being privately told about it.

Every iOS update since has included security fixes, some pretty major. There's no way to guarantee that the current version doesn't have holes. In fact, it's almost certain that it does.

--

Apple has been living in a house of cards and they know it. At any time it could come tumbling down if it turns out that truly malicious code secretly got in and stole information.

There is no way for Apple to detect all unapproved code, especially with the tiny amount of time they have to vet incoming apps. Heck, a kid was able to sneak a hotspot feature into his flashlight app. Can't get easier to find than that.

For a hacker, it would be smart to have code that wakes up in a few months or even a year, so it would not be detected early on.
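
For illustration, a delayed trigger like that can be as simple as a date check (everything below is hypothetical; the date and function names are invented, not anything from a real app):

```python
from datetime import date

# Hypothetical wake-up date, set months after the review window.
ACTIVATION_DATE = date(2012, 6, 1)

def fetch_payload():
    # Hypothetical stand-in for whatever code would phone home.
    print("contacting command server...")

def maybe_activate():
    # Sit dormant and look harmless until well after approval.
    if date.today() >= ACTIVATION_DATE:
        fetch_payload()

maybe_activate()
```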

With all the apps in the Apple App Store, I think there's a good chance there are a few right now with code that no one has detected.
 
All this spin has got me dizzy.

Man finds security flaw
Man exploits security flaw in a public-facing manner
Apple drops the hammer.

ZOMG Apple is anti-security.
 
FTFY

That'd be right up there with hitting the jackpot in next week's lottery. I doubt stock app #85442 had enough public attention to get that chance. :D:apple:

So I re-watched that video he posted and after the two-minute mark he does say that it only checks for the payload the very first time you launch the app, which explains why he keeps uninstalling and reinstalling the app.

Wouldn't he have been shocked if while making the video, he found himself logging into somebody else's phone and downloading their address book?

So he narrowed the odds of a connection here, but the users of the app still don't know for sure whether he did anything, so it leaves them in an uneasy state. He probably kept solid logs of all "phone home" connections, or kept his "phone home" server down except while he was making his video. I still think that one more layer of checks, like verifying the device's current DHCP server, would have been smart, to avoid an unsuspecting user inadvertently getting their phone logged into.

Being able to prove that the code could not have hacked anybody's phone would go a long, long way for some folks' ease of mind.

I'm also curious about the download statistics on this Instastock app. Did it become popular in the time since September?

FYI, on Miller's Twitter account he claims that no device ever received a payload from his "phone home" server but his own.

UPDATE: Also on Miller's Twitter account he claims when he exposed a bug in Android that Google tried to get him fired from his employer.
 


I am willing to bet he kept the server down until he made the video to test it. After he did the testing he promptly took the server back down, and he might have gone even further and told the server to accept incoming connections only from a single IP address.

Another way to do it: when the app does its check for the server, have it look for an address like 192.168.1.xxx. Anyone who has done even a small amount of home networking will notice that is a home-network IP address, and it makes it impossible for an outsider to trigger it unless they were on his home network.

The list of ways to do it can be pretty long. Hell, he could have it go to www.somerandomwebsitename.com and then have the router's host table redirect that to his home server. That limits the amount of damage that can be done and puts it on a closed system.
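
For illustration, a rough sketch of that kind of network gate (the address and function names here are made up for the sake of the example, not anything from Miller's actual app):

```python
import ipaddress
import socket

PHONE_HOME_HOST = "192.168.1.50"  # hypothetical researcher-controlled box

def safe_to_phone_home(host: str) -> bool:
    # Resolve the name, then refuse to proceed unless the address is
    # private (192.168/16, 10/8, 172.16/12). On any other network the
    # check fails, so an ordinary user's device never contacts anything.
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return addr.is_private

if safe_to_phone_home(PHONE_HOME_HOST):
    print("on the test network; fetching payload")
else:
    print("not on the researcher's network; doing nothing")
```

The same trick covers the DHCP-server idea above: any fingerprint that only matches the researcher's own network works as a kill switch.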
 
When did they get this 8 months? The app was approved and the video done in September. That's less than 2 months in my book. And what makes you think it's trivial?

Apple has had a month to fix this now, high time for a public reveal to get the "wheels" in motion.

I haven't been following the rest of this conversation, so excuse my interjection. But from what I understand, the app was approved in September and he told Apple about the flaw on October 14th -- though according to Miller on Twitter, he did not tell Apple about the app's existence in October.

So Apple has had three weeks to respond to Miller, and he also tells the Macalope on Twitter that Apple acknowledged the bug. Perhaps this fix is included in iOS 5.0.1 (and it should be, given that by the time he explains the exploit it will have been a full month).

My only beef with Miller is that he subjected unsuspecting users to the uncertainty of not knowing whether or not he downloaded stuff from their phones. The odds of any problem were slim, but really, everybody just has his word that no other device downloaded a payload but his own -- assuming anybody actually installed the apps besides Miller.

It would have been cool if the app had an additional check -- or, if a payload was found, perhaps even prompted the user to ask whether they wanted to install it and test the iOS exploit -- just to at least notify them that they would be giving potential access to their device. If the app had such code, then a review of the code could provide users with ease of mind. If he did use such a prompt, he would have had to obfuscate the strings/text for the prompt message; otherwise it would have been a tip-off to the reviewer (something a malicious hacker would not have provided).
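
For what it's worth, the kind of string obfuscation being described is usually trivial: ship the prompt text XOR-ed with a key so it never appears as plain text in the binary, and decode it at runtime. A hypothetical sketch (the key and message are invented):

```python
KEY = 0x5A  # hypothetical single-byte key

def obfuscate(text: str) -> bytes:
    # Run once at build time; only the scrambled bytes ship in the app.
    return bytes(b ^ KEY for b in text.encode())

def deobfuscate(blob: bytes) -> str:
    # Run at runtime, just before showing the prompt.
    return bytes(b ^ KEY for b in blob).decode()

hidden = obfuscate("A test payload is available. Install it?")
assert "payload" not in str(hidden)  # nothing for a reviewer to grep
print(deobfuscate(hidden))
```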


I am willing to bet he kept the server down until he made the video to test it. After he did the testing he promptly took the server back down, and he might have gone even further and told the server to accept incoming connections only from a single IP address.

Another way to do it: when the app does its check for the server, have it look for an address like 192.168.1.xxx. Anyone who has done even a small amount of home networking will notice that is a home-network IP address, and it makes it impossible for an outsider to trigger it unless they were on his home network.

The list of ways to do it can be pretty long. Hell, he could have it go to www.somerandomwebsitename.com and then have the router's host table redirect that to his home server. That limits the amount of damage that can be done and puts it on a closed system.

Yeah, he could have used a 192.168 or a 10.0 address -- that would have certainly shielded any other users who might have downloaded it. All sorts of techniques are available, of which Miller is keenly aware. I'm betting that nobody downloaded this app -- or at least nobody is aware that it is tied to today's news in computer security. :)
 
To put it simply, if he had simply reported the bug and announced it without doing a full proof of concept, Apple would have said "yes, but we would have NEVER approved an app with this exploit, so it was never an issue." By testing the full path, Apple can't fall back on their review team as a savior. The real question is has anyone else discovered this, and are there other rogue apps in the app store using this exploit? That's what people should REALLY be worried about.
 

To put it simply, you don't make an app available for non-researchers if your intent is to notify the people who can fix the security bug.

You report the bug, you give those people the information they need to resolve it. You wait a reasonable amount of time, and if the bug still isn't fixed, you publish the information to apply pressure and so that other researchers can verify it.

If the bug is verified, but the appropriate security team blows it off with the "it'll never happen" you describe above, *then* you make some arrangements to create a special developer account and add the app, setting its for-sale date for some time significantly in the future so that, even after approval, no one can actually download it. Then you contact the appropriate developers and inform them that "yes, someone *can* get it through, here's the proof".

There are good ways to go about this sort of thing, and there are bad ways. This guy picked a particularly bad way, and really shouldn't be surprised that his dev account got burned as a result.
 
I spoke with a company representative today from a large integration house who informed me that they have independently filed a report with cybercrime.gov on this event. Apparently they have been contacted by customers who had installed the Trojan app and have serious legal concerns about compromised information.

Charlie may have been really stupid here, not just egocentric.
 
He says he told Apple. But there is never any proof these researchers said anything, or that they said more than "hey, you've got a security bug in X," leaving it up to the company to find it for itself for months. Then, when it can't, rather than telling the company exactly what and where the bug is, they reveal it publicly and claim the company had 'weeks to fix this and hasn't bothered' (because it was still trying to find it).

It's a fame game with these folks, so they play it to make sure they get the fame. Making the original company look like idiots just ensures they will get lots of hits from all the articles that are posted about it, and thus the blogs etc. will cover it.


Finally found the news post I was looking for, and why it stuck in my head about CM. This was linked as part of a separate story relating to him.

http://www.dailytech.com/Apple+Patc...Year+After+Initial+Discovery/article15427.htm


And wouldn't you think Apple would send in the wolves if someone claimed to have told them about a hole, but didn't?
 
I do, however, disagree with some of your assertions:

1 - Miller was able to get this app past the validation checks, how many others have?

Miller was able to get this app past the validation checks because the app was unable to do anything significant in relation to what is required to make actually profitable malware.

Has anybody other than researchers looking for headlines successfully submitted a trojan into the iOS App Store?

No, because the requirements for acceptable apps via Apple's vetting process virtually eliminate the likelihood of getting profitable enough malware into the App Store to warrant making the effort.

Google allows anonymous signup and self-signed certificates with much less vetting. Coincidentally, the Android market has a much higher incidence of malware.

2 - There is a lot of money being targeted at mobile devices to try and exploit them, iOS is a target.

If the cost to exploit a new vector exceeds the cost to further exploit an old vector given the same profit returns, the new vector will be largely ignored.

This has always been the case in relation to malware in regards to any platform.

This is why malware, in the form of trojans, targeting OS X has only recently started to accelerate in growth. This is due to the increases in security of other platforms, such as Windows.

This is why malware is not yet a significant issue for iOS. Other much easier platforms are available to target.

3 - What validation is required to get a developer account, and what is the time to detection in the event of information harvesting going on? Unless you see that the app is doing something bad, the front end could be very useful and get great ratings, and therefore pose a decent return.

This bug provides no good vectors to achieve profitable returns. Combine that with the fact that the process to get a developer account anonymously is much more difficult than for Android and you get what is manifesting in the actual malware ecosystem.

That being malware targeting Android over iOS. This is an exact replica of that which occurred between Windows and OS X. Only now has Mac OS X seen increased instances of malware due to the improvements in security in Windows.

This iOS issue isn't as serious as the article makes it out to be.

This bug doesn't include privilege escalation so it doesn't allow apps to be installed. It also doesn't have access to protected data storage and protected data entry.

This bug has no value in relation to mass automated malware. Computer criminals don't care about your photos, and access to contacts is only meaningful for spreading automated mass malware if a vector to make that malware profitable is present, which isn't the case with this bug.

It's not so much that XP is more exploitable,…

It's because Windows is much more easily exploited.

…more people use mobile phones in place of things that used to be reserved for computers like email. I'd always argue that more people keep updated contact info (and simply a lot more contacts stored within their phones and tablets) than in their address books on their machines.

Sure it does. There are more iOS users than there are Mac OS X users. According to phone marketshare, Apple has almost half, so it wouldn't be too far off to think that it's nearly as many as Windows 7 or XP users.

Smartphone/tablet OSs represent only a small fraction of the total computing-device marketshare, of which Windows XP still has almost 50%.

The marketshare of iOS is nowhere near that of Windows XP let alone all versions of Windows, which all represent much easier targets than iOS.

Charlie Miller didn't exploit the App Store anonymously, but that doesn't mean he couldn't have. If he had also incorporated ID theft into the mix (which would have put him in a very serious situation), there's zero doubt that it would've worked, but that wasn't his intent, nor was it important to do. What he proved was more than sufficient: it debunked the myth that iOS is completely secure, and it disproved the idea that the App Store can't accept exploited apps. This isn't like many of Vupen's security bulletins, where they claim something might be used to take advantage of a vulnerability; Miller actually did it.

But why would a cyber-criminal bother going through all that extra effort when easier targets exist, including easier targets in the mobile marketplace, such as Android?

See my replies to bydandie earlier in this post for more details.

Neither is Apple, so what's your point? Many companies don't always reveal all the details within each update.

Every update from Apple includes a detailed set of release notes that are presented when the update is downloaded and are accessible via the Apple support website.

http://support.apple.com/kb/HT1222

I can't seem to find Google's support page that provides access to the content of the security patches released for Android.
 
I spoke with a company representative today from a large integration house who informed me that they have independently filed a report with cybercrime.gov on this event. Apparently they have been contacted by customers who had installed the Trojan app and have serious legal concerns about compromised information.

Charlie may have been really stupid here, not just egocentric.

Something like that or a class-action lawsuit is bound to happen. Miller could get in really nasty legal trouble with his stunt, and that would be deserved, considering the level of douchebaggery he displayed.
 
I spoke with a company representative today from a large integration house who informed me that they have independently filed a report with cybercrime.gov on this event. Apparently they have been contacted by customers who had installed the Trojan app and have serious legal concerns about compromised information.

Charlie may have been really stupid here, not just egocentric.

I have to agree with that .. I don't get why he didn't just write a dull app no one would install anyway. Then again .. he is a smart guy and knew what would happen .. so he probably made sure that the app didn't "accidentally" submit data.

Has anyone here actually installed that app .. could they maybe take a look at the traffic it generates?

Well .. we'll know more in about a week from now.

T.
 
This bug provides no good vectors to achieve profitable returns. Combine that with the fact that the process to get a developer account anonymously is much more difficult than for Android and you get what is manifesting in the actual malware ecosystem.

That being malware targeting Android over iOS. This is an exact replica of that which occurred between Windows and OS X. Only now has Mac OS X seen increased instances of malware due to the improvements in security in Windows.

You're missing a big point here in the cybercriminal economy: quality is as interesting as quantity. Targeted attacks have been in evidence since 2005; it was only in 2009 that they were sexed up with the name of spear phishing. If I can submit an app with a payload, that means I can send an email to someone who will download it from the App Store on their own iOS device.

There appears to be an inordinate focus on privilege escalation; the information gathered from a device can be used for extortion, IP theft or simple corporate espionage. The Chinese FIS have been doing this for years, so why do you assume that the range of jailbroken iOS devices won't have been scrutinised to see what vulnerabilities exist?

We do, however, appear to be in agreement that iOS is a target, albeit a small one; a small threat that requires no more than diligence to counter until we obtain evidence otherwise.
 
Except that this could be used to steal your data and then wipe your phone.

----------



Removing the app makes sense. Removing him as a developer not so much.

He is a security researcher who is basically helping Apple by giving them time to fix it before he exposes it next week. Expect them to include a fix in 5.0.1 before it is released.

So he is a security researcher. But he was not hired by Apple to try to break in and find vulnerabilities. It is as if somebody stole money from a bank and later claimed that he wanted to help them find security holes. He can tell them what he found, but not exploit it to make a point.

I am glad that Apple cares about malicious code in apps and "removes" their developers.
 
You're missing a big point here in the cybercriminal economy: quality is as interesting as quantity. Targeted attacks have been in evidence since 2005; it was only in 2009 that they were sexed up with the name of spear phishing. If I can submit an app with a payload, that means I can send an email to someone who will download it from the App Store on their own iOS device.

There appears to be an inordinate focus on privilege escalation; the information gathered from a device can be used for extortion, IP theft or simple corporate espionage. The Chinese FIS have been doing this for years, so why do you assume that the range of jailbroken iOS devices won't have been scrutinised to see what vulnerabilities exist?

We do, however, appear to be in agreement that iOS is a target, albeit a small one; a small threat that requires no more than diligence to counter until we obtain evidence otherwise.

I'm not sure what you're trying to say, but based on what has been reported, the nastiest thing the bug allows is access to the contacts list (which is indeed bad, but not quite a disaster). If the bug allowed reading your mailbox or stealing your passwords, that would be a different story; but that doesn't appear to be the case here.
I'm no expert, but it looks like the sandboxing of iOS apps, as well as the limited multitasking abilities of iOS, make the 'security hole' basically useless for seriously nefarious endeavors.

From Google Cache:
http://i.imgur.com/URHx6.png

Either he updated the app to remove the juicy bits, or Apple approved the app again, even after he notified them of the bug.
Interesting times. :D:apple:

He notified Apple of the bug but didn't tell them he had planted the hack inside the app he submitted/updated. Nothing to see here.

This is a huge bug; when exploited it is an unbelievably huge security leak. Apple cannot tolerate having left this for more than a week.

This is FUD and unbelievably huge hyperbole.
The bug, to turn into a real-life threat, requires someone with criminal intentions to sign up for a developer account, providing comprehensive identity information, including a tax-registration identifier, pay $99 with a credit card, and link that card to their developer account. Of course, that is not impossible for some big organized-crime ring to forge, but the likelihood of that happening (with a high risk of being tracked down anyway) is pretty low. And all that notwithstanding the caveats I mention above about what level of 'nasty' the vulnerability can effectively allow.
 
I'm not sure what you're trying to say, but based on what has been reported, the nastiest thing the bug allows is access to the contacts list (which is indeed bad, but not quite a disaster). If the bug allowed reading your mailbox or stealing your passwords, that would be a different story; but that doesn't appear to be the case here.
I'm no expert, but it looks like the sandboxing of iOS apps, as well as the limited multitasking abilities of iOS, make the 'security hole' basically useless for seriously nefarious endeavors.

What I'm saying is that cybercriminals would easily see the contact list of a high-profile or high-value person as valuable. Criminal and corporate organisations alike would pay for that information. It happens, and access to photographs has the same value.

As I've continually said though, whilst the potential for this vulnerability could be there we have no evidence either way that it's prevalent. One thing is sure though, if CM found it, then others will have done too. Does this mean that the sky's falling on our heads? Nope, but it does show that a certain amount of diligence is required even in the iOS world.
 
Imagine the compromising photographs that could be available on high profile iPhones from guys who just might want a stock tracking app, and you can see why this type of exploit could be extremely high value.

Any politicians on that list? CEOs of major corporations? High profile celebs?

Those crying that there's no potential for financial harm here, or asking us to take Charlie's word that he didn't steal any data, need to rethink their position.
 
Pretty thin reason, and I doubt the cyber crime division of the FBI will care...

The cybercrime division of the FBI will be the exact same as the PCeU over here; they like to seem busy when they have things handed to them on a plate, but aren't aware of everything else that happens. ;)
 