And the tip calculator that you mentioned earlier was really on Android, not the iPhone. As were all the apps mentioned in this study.
Actually, it turns out that the iPhone tip calculator wasn't accessing Contacts; it was one of many apps using Pinch Media's ad code, which stores and forwards the following info:
- iPhone's unique ID
- iPhone model
- OS version
- Application version (in this case, camera zoom 1.x)
- Whether the application is cracked/pirated
- Whether your iPhone is jailbroken
- Time & date you start the application
- Time & date you close the application
- Your current latitude & longitude
- Your gender (if Facebook-enabled)
- Your birth month (if Facebook-enabled)
- Your birth year (if Facebook-enabled)
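For illustration, the kind of payload described above might look something like this. This is a hypothetical sketch: the field names and values are my own inventions, not Pinch Media's actual wire format.

```python
import json
import time

# Hypothetical analytics payload of the kind described above.
# Field names are invented for illustration; they are NOT Pinch
# Media's real format.
payload = {
    "udid": "0123456789abcdef0123456789abcdef01234567",  # unique device ID
    "model": "iPhone2,1",
    "os_version": "3.1.2",
    "app_version": "camera zoom 1.x",
    "cracked": False,                 # is the app cracked/pirated?
    "jailbroken": False,              # is the device jailbroken?
    "session_start": time.time(),     # when the app was opened
    "session_end": None,              # filled in when the app closes
    "latitude": 37.33,
    "longitude": -122.03,
    "gender": "m",                    # only if Facebook-enabled
    "birth_month": 6,                 # only if Facebook-enabled
    "birth_year": 1980,               # only if Facebook-enabled
}

print(json.dumps(payload, indent=2))
```

The point is simply how much identifying and behavioral data fits in one small request, none of which the user ever sees.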
If you ignore the testing that Apple does during the approval process.
It's ignorable testing. Are you a developer and have you ever been involved in real software validation? I am and have, as have others here.
There have been numerous articles about how little testing Apple does, and how it's mostly geared towards making sure the developer didn't violate one of the App Store submission rules related to trademarks, bandwidth, morals, or duplication of Apple's own apps... NOT towards how the code works or what it might secretly do. And how could they?
Apple officially told the FCC that it had only a relative handful of approvers, which at the time worked out to 10,000 submissions a week across 40 approvers = about 6 apps per hour per approver! Time enough for morals policing? Sure. For code testing? No chance. (I hear they've since added more approvers, so it's now "only" two or three apps per hour.)
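The arithmetic above is straightforward, assuming a standard 40-hour work week (my assumption, not a figure from Apple's FCC filing):

```python
# Back-of-the-envelope check on the review workload figures quoted above.
submissions_per_week = 10_000
approvers = 40
hours_per_week = 40  # assumed standard work week

per_approver_per_week = submissions_per_week / approvers       # 250 apps
per_approver_per_hour = per_approver_per_week / hours_per_week  # 6.25 apps

print(f"{per_approver_per_hour:.2f} apps per approver per hour")
# → 6.25 apps per approver per hour
```

That is under ten minutes per app, for everything: content checks, rule checks, and whatever "testing" fits in the remainder.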
Heck, even if Apple had the source code (which it does not) for each app, it could take days or weeks to check for malevolent code. At best, right now they can probably run an automated tool looking for known code holes or unofficial API usage. Obviously, that doesn't catch hidden code such as that WiFi hotspot, though.
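An automated check of that sort can be quite shallow. A toy sketch of the idea: extract the printable strings from a compiled binary and match them against a blocklist of private API names. The blocklist entries and the scanning approach here are my illustration, not Apple's actual tooling.

```python
import re

# Illustrative blocklist of private-API symbol names (not Apple's real list).
PRIVATE_API_BLOCKLIST = {
    "_UIGetScreenImage",
    "launchApplicationWithIdentifier",
}

def printable_strings(data: bytes, min_len: int = 4):
    """Extract runs of printable ASCII, like the Unix `strings` tool."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

def scan_binary(data: bytes):
    """Return any blocklisted symbols whose names appear in the binary."""
    found = set()
    for run in printable_strings(data):
        text = run.decode("ascii")
        for symbol in PRIVATE_API_BLOCKLIST:
            if symbol in text:
                found.add(symbol)
    return found

# A fake "binary" containing one flagged symbol:
fake_binary = b"\x00\x01sendEvent:\x00_UIGetScreenImage\x00\x02"
print(scan_binary(fake_binary))
# → {'_UIGetScreenImage'}
```

A scan like this flags only symbols the tool already knows about; code that builds selector names at runtime, or behavior triggered after approval, sails right past it, which is exactly the hidden-hotspot problem.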
I'd argue that there's a very big difference between policing content before it goes into the store and the Google model of relying on consumers to report transgressions after content becomes available. One is proactive, the other reactive.
Only if there's enough staff and time. Otherwise it's mostly just good for censorship, as noted above. Some iPhone developers have called for Apple to drop even that time-consuming and frustrating process and do as the Android Market does: rely on user responses to weed out apps that really should be banned, as Google does for about 1% of submissions (copyright issues, etc.).
If you read the link you cite, it clarifies the report. It *did* send device information, subscriber identification, phone number, region, and voicemail phone number to the app developer's website. Apparently, Google is cool with this.
No, they asked him to change it, even though he was doing it to serve his customers' requests. And apparently Apple was cool with similar information (see above).
Again, there's no perfect process, and neither Android nor iOS is perfectly safe. You want safe? Go with a BlackBerry, which only allows managed Java apps, not native code, and isn't based on a Unix OS where rooting grants SU status. (Well... at least it didn't used to be; QNX will bring them into that same fold.)