How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?
 
I find it strange ...

... that it was Georgia Tech that found this out, and nobody can verify that it is what they say, except them ...

hmm !!
 
The only way this will ever change is if the compilation of the apps is done on Apple servers.
With the source, Apple could do some automatic checks, but ultimately a code review done by an expert would be the only way to be sure. I doubt these kinds of reviews can realistically be implemented; the cost would be huge.
 
Is this really new news? This was always possible, but it has a few problems.

[...]

2. If the app is good, why try and steal from users? The second problem is related to the first. If a developer spent a lot of time creating a good, functional app, then why would they hamper its performance by embedding malicious code?
[...]

Face...palm...

I rarely defer to internet memes, but congratulations, this post wins a 1000 interwebz.

Now to get on topic - I'm surprised that a proof-of-concept has not been released sooner. The concept is pretty straightforward, and even if Apple reviewed the source code, I doubt it would be hard to obfuscate the code in such a way as to permit undesired actions to take place. Even with heuristics it would be hard to detect.
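
To make the obfuscation point concrete, here's a minimal sketch (the selector name is invented for the example) of how a call can be hidden from any string-matching scan:

```swift
import Foundation

// Toy sketch: the dangerous selector never appears as one literal string
// in the binary; it is assembled at runtime, so only dynamic analysis
// would catch the dispatch. "readAddressBook" is invented for the example.
let fragments = ["read", "Address", "Book"]
let hidden = NSSelectorFromString(fragments.joined())

// A reviewer grepping the binary for "readAddressBook" finds nothing,
// yet at runtime the app could still dispatch it:
// someObject.perform(hidden)
```

Nothing in the shipped binary contains the assembled name, so any heuristic has to reason about runtime behavior rather than just scan for strings.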
 
How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?

There is no information available except their report claiming that they can do all kinds of mischief. They very carefully avoid saying anything about any limitations.

As you know as a developer, you tell Apple what access your app needs, they check in the review process whether you indeed need that access to make your app work (they will reject it if you ask for permission to do things that your app shouldn't need to do), and your app can't do anything that Apple doesn't know about.

I think they carefully used their reputation as a university to get some range of permissions and used that to the fullest, and their press release doesn't mention all the limitations. The biggest limitation is of course that a university which could be sued for money cannot do anything serious, or they would get their ass sued off.


The issue is that, if the walled garden cannot protect end users, then it has no value, or negative value. It certainly doesn't prevent outright copying or non-functional garbage apps, and it allows threats to the security of a user's private information that frankly are easy to sneak by the "genius" approval staff they have.

But it does protect. They only got information from themselves because nobody downloaded the app except themselves. And if _you_ had downloaded that app and they had stolen information from you, you would now be able to sue Georgia Tech and two researchers at the university for huge amounts of money. The walled garden isn't just technical protection. It protects you because Apple knows the identity of the people putting an app on the store, so crooks can put up malware, but they can't get away with it, and that stops them. Plus Apple has the ability to effectively kill an app.
 
This isn't a big deal. IIRC, Apple pops up messages at the OS level when core functionality is accessed, such as contact information, phone functionality, and so forth. They can't get out of the sandbox.
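
As an illustrative sketch of what that prompt flow looks like from the app's side (this uses the current Contacts framework rather than the AddressBook API of that era; the framework choice is my assumption):

```swift
import Contacts

// Minimal sketch: the first call to requestAccess(for:) is what triggers
// the OS-level permission alert described above; the app never draws that
// alert itself and gets no contact data until the user allows it.
let store = CNContactStore()

switch CNContactStore.authorizationStatus(for: .contacts) {
case .notDetermined:
    store.requestAccess(for: .contacts) { granted, _ in
        print(granted ? "user allowed contact access" : "user denied it")
    }
case .authorized:
    print("already authorized")
default:
    print("denied or restricted; the app cannot re-prompt on its own")
}
```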
 
But it does protect. They only got information from themselves because nobody downloaded the app except themselves. And if _you_ had downloaded that app and they had stolen information from you, you would now be able to sue Georgia Tech and two researchers at the university for huge amounts of money. The walled garden isn't just technical protection. It protects you because Apple knows the identity of the people putting an app on the store, so crooks can put up malware, but they can't get away with it, and that stops them. Plus Apple has the ability to effectively kill an app.

Because the criminal would not be able to pay $500 to some guy in a third-world country or a meth addict and put that guy's name on all the papers.
That is, as we all know, impossible...
 
I do love the troglodytes of the Apple fan base.

P.S. More than 98% of all Android malware is located in Russian and Chinese app stores, or sideloaded... Much like many of those jailbroken iPhones (which don't get included in these malware studies, BTW). If you stick to official apps in either app store, you will likely never get malware. Common sense reigns supreme.

Really? 98% of all the Android malware is located in the Russian and Chinese app stores? Do you have a credible source for this claim, or should we all just take your word for it?
 
That is obvious, thanks.

The issue is that, if the walled garden cannot protect end users, then it has no value, or negative value. It certainly doesn't prevent outright copying or non-functional garbage apps, and it allows threats to the security of a user's private information that frankly are easy to sneak by the "genius" approval staff they have.

:apple:

The walled garden has an amazing value to Apple, it makes sure that Apple gets its 30% cut from every sale.
 
With the source, Apple could do some automatic checks, but ultimately a code review done by an expert would be the only way to be sure. I doubt these kinds of reviews can realistically be implemented; the cost would be huge.

It's possible to some degree with a binary as well: you can do a static analysis on the binary and look for calls that should not be there.
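
As a toy illustration of that kind of check (not Apple's actual tooling, which isn't public; the forbidden-selector list and the path here are made up):

```swift
import Foundation

// Toy first-pass scan: search the raw bytes of a compiled binary for
// selector names tied to private or forbidden APIs. The selector list
// and the path are hypothetical; Apple's real checks are not public.
let forbiddenSelectors = ["dialNumber:", "sendSMS:", "readAddressBook"]

func scanBinary(at path: String) throws -> [String] {
    let binary = try Data(contentsOf: URL(fileURLWithPath: path))
    return forbiddenSelectors.filter { selector in
        // Objective-C selector names sit in the binary as plain C strings,
        // so a byte search finds direct, non-obfuscated uses.
        binary.range(of: Data(selector.utf8)) != nil
    }
}

let hits = try scanBinary(at: "/tmp/SomeApp")  // hypothetical path
print(hits.isEmpty ? "no flagged calls" : "flagged: \(hits)")
```

Which is exactly why the obfuscation mentioned earlier in the thread defeats it: a selector assembled at runtime never appears verbatim in the binary.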
 
2. If the app is good, why try and steal from users? The second problem is related to the first. If a developer spent a lot of time creating a good, functional app, then why would they hamper its performance by embedding malicious code?

It depends on the value of the information that you'd be able to steal.
 
How much malicious code is on the Android platform?

I suspect a lot more gets through, with slower response.

Add to that all the different languages.

It is a huge challenge, and Apple does need to ramp up the testing process.
 
I like Apple's screening process for keeping out SOME of the bad guys.

I like it better for removing them after approval, once caught, and for having a paper trail pointing back to the criminal. Thus making it less worth their while.

I also like the OS design (better with each version) that limits what even CAN be done. Which needs to continue getting tighter, clearly.

I like these researchers, too! As long as they reported the issue to Apple privately long before dangling a treat in front of criminals. (That's good security practice, and if they didn't do it, and opted for attention instead, I don't like them so much... but I'll still take the benefit of their findings.)

Apple can never screen out and block everything anyone might try, but they've succeeded in making a safe, trusted platform that Android users can only dream of. AND it needs to improve--which can't happen without catching the loopholes.

Totally agree. The process has enough checks and balances that actual criminals haven't bothered to try.

The effort required to make a spoofed developer account to be able to anonymously submit a malicious app is enough greater than on Android or other platforms that the criminals decide to take the easier path, which isn't iOS.
 
How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?

Probably they used the same method that jailbreaking uses... by exploiting a bug in the software.

Often that involves overrunning a buffer somewhere and injecting code.

Apple's spent years nailing down holes as soon as they're found, but if iOS 7 is really a big rewrite as some claim, that'll just mean new holes.
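
For anyone wondering what "overrunning a buffer" means in practice, here's a deliberately broken toy of the bug class (not the researchers' actual exploit; the function name and sizes are invented):

```swift
import Foundation

// Deliberately broken toy of the bug class, NOT the actual exploit:
// a fixed-size buffer receives attacker-controlled input unchecked.
func handlePacket(_ payload: [UInt8]) {
    let capacity = 16
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: capacity)
    defer { buffer.deallocate() }

    payload.withUnsafeBufferPointer { src in
        // BUG: copies payload.count bytes into a 16-byte allocation.
        // A larger payload overwrites adjacent memory, which is what an
        // attacker leverages to redirect execution and inject code.
        _ = memcpy(buffer, src.baseAddress, payload.count)
        // The fix is one bounds check:
        // _ = memcpy(buffer, src.baseAddress, min(payload.count, capacity))
    }
}

// handlePacket(Array(repeating: 0x41, count: 64))  // undefined behavior
```

Swift's safe collections would normally prevent this; the unsafe-pointer version just makes the classic C-era pattern visible.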

The walled garden isn't just technical protection. It protects you because Apple knows the identity of the people putting an app on the store, so crooks can put up malware, but they can't get away with it, and that stops them. Plus Apple has the ability to effectively kill an app.

That's exactly the way that other app stores work. Registration has long been the first defense, so that a malicious (or buggy) app can be traced back to its origin, and the developer notified or banned.

However, it's so easy to register as an Apple developer (look at all the people who do it, sometimes dozens of times, just to sell beta slots) that it's virtually useless as a preventative security measure. It's more about closing the door after the horse is gone.
 
Probably they used the same method that jailbreaking uses... by exploiting a bug in the software.

Often that involves overrunning a buffer somewhere and injecting code.

Apple's spent years nailing down holes as soon as they're found, but if iOS 7 is really a big rewrite as some claim, that'll just mean new holes.

I don't think it's all that major a rewrite. It's another version update, like any other, that happens to also coincide with a massive retheming of the entire OS.
 
That's exactly the way that other app stores work. Registration has long been the first defense, so that a malicious (or buggy) app can be traced back to its origin, and the developer notified or banned.

However, it's so easy to register as an Apple developer (look at all the people who do it, sometimes dozens of times, just to sell beta slots) that it's virtually useless as a preventative security measure. It's more about closing the door after the horse is gone.

Android developers have many more options to develop and distribute that are less restrictive in terms of vetting than the path required to be an iOS developer.

This is why the Android platform has nearly all the mobile malware.

Beyond Android and iOS, no other mobile OS app ecosystem is particularly relevant.
 
I don't think it's all that major a rewrite. It's another version update, like any other, that happens to also coincide with a massive retheming of the entire OS.

I agree.

Android developers have many more options to develop and distribute that are less restrictive in terms of vetting than the path required to be an iOS developer.

To me, there's really just one more basic option. iOS has a vetted store, and unofficial jailbroken options. Android has vetted store(s), official non-store sideloads (if you enable that), and unofficial rooted.

None of them are that restrictive, iOS included. The $100 fee does help keep out the hobbyist hacker, though, which is why RIM used to charge that for a BlackBerry code-signing key.
 
To me, there's really just one more basic option. iOS has a vetted store, and unofficial jailbroken options. Android has vetted store(s), official non-store sideloads (if you enable that), and unofficial rooted.

None of them are that restrictive, iOS included. The $100 fee does help keep out the hobbyist hacker, though, which is why RIM used to charge that for a BlackBerry code-signing key.

There is a big difference in the amount of vetting to create a developer account for each platform.

https://threatpost.com/accountability-not-code-quality-makes-ios-safer-android-042012/76463

The key differences between Apple’s iOS and Google’s Android are what Guido termed “design decisions” that both platform makers made that have created incentives and disincentives for mobile malware writers and cybercriminals in the intervening years, he said.

Foremost among them is Apple’s insistence that mobile application developers verify their identity before they can introduce new applications. That includes submitting actual identifying documents like a Social Security Number or official articles of incorporation.

“There’s something that gets back to you,” Guido said. “That way, when Apple finds a malicious application, there’s the possibility that you could suffer real world punishment.”

In contrast, Google’s Android Marketplace and Google Play platforms have much more generous terms for developers, who must pay a small ($25) fee and agree to abide by the company’s Developer Distribution Agreement to begin publishing. That’s a low bar that makes it easy for malicious authors to get their wares out to hundreds of millions of Android users, according to Guido.

“You can upload dozens of applications at once. If any get banned, you can just resign, sign up under a new identity and resubmit them,” Guido said.
 
It's possible to some degree with a binary as well: you can do a static analysis on the binary and look for calls that should not be there.
Determining the "should not be there" part is not that easy. Unless you use blatantly forbidden calls you typically need to understand the context in which the call is used because in many context it could be perfectly legit. Not to mention that the call doesn't need to be explicit at all, you can even "exploit" your own application if you willfully leave the right bug open. Good luck finding that... it can be very difficult even for an expert analyzing the source code.

You can do almost anything with the binary, but it becomes even more difficult, which means even more expensive. I doubt Apple will field a host of software engineers with skill and experience in reverse engineering and code analysis to verify every new application and update in the App Store; it's simply too expensive.
 
Determining the "should not be there" part is not that easy. Unless you use blatantly forbidden calls you typically need to understand the context in which the call is used because in many context it could be perfectly legit. Not to mention that the call doesn't need to be explicit at all, you can even "exploit" your own application if you willfully leave the right bug open. Good luck finding that... it can be very difficult even for an expert analyzing the source code.

You can do almost anything with the binary, but it becomes even more difficult, which means even more expensive. I doubt Apple will field a host of software engineers with skill and experience in reverse engineering and code analysis to verify every new application and update in the App Store; it's simply too expensive.

Determining the "should not be there" part is the same as in your suggestion for an automatic check of source code. The reason I added the "to some degree" part is that it's not a source code review.

Regarding your second paragraph, I'm not suggesting any reverse engineering. Static analysis as I'm referring to is not a manual process; it's exploring all code paths without executing any code.
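
A toy sketch of what that path exploration looks like (the call graph and the forbidden call are invented for the example; real tools recover the control-flow graph from the binary itself):

```swift
// Toy sketch of static path exploration: walk a call graph from the entry
// point and flag anything that can reach a forbidden call, without ever
// running the program. The graph and names are invented for the example.
let callGraph: [String: [String]] = [
    "main":            ["loadUI", "syncData"],
    "loadUI":          ["renderList"],
    "syncData":        ["uploadLogs"],
    "uploadLogs":      ["readAddressBook"],
    "renderList":      [],
    "readAddressBook": []
]
let forbidden: Set<String> = ["readAddressBook"]

func reachableForbiddenCalls(from entry: String) -> Set<String> {
    var visited: Set<String> = []
    var stack = [entry]
    var flagged: Set<String> = []
    while let node = stack.popLast() {
        guard visited.insert(node).inserted else { continue }
        if forbidden.contains(node) { flagged.insert(node) }
        stack.append(contentsOf: callGraph[node] ?? [])
    }
    return flagged
}

print(reachableForbiddenCalls(from: "main"))  // ["readAddressBook"]
```

The catch, as you pointed out earlier, is that a willfully planted bug creates an edge that never shows up in the statically recoverable graph.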
 