The developer made a mistake and did something wrong: something that harms the security of some iPhone users (specifically, his own customers) by collecting data people didn't know was being collected and then, further, sharing that data. This data IS harmful even if anonymized, and allowing the practice would harm Apple's App Store security reputation too.

This was thoughtlessness, not evil, and not nearly as bad as many things a truly unethical developer might intentionally try to do. It's a "minor offense," and it had some interesting results. But it was still wrong and deserves a response from Apple. The app should have been pulled for now. Hopefully it will be reinstated later, without this particular "feature."
 
This is why I wish Apple would do something like the Android Market, where when you go to install an app it lists the things the app needs access to. Something like this app would send up a red flag when it asks for full Internet access.

I think Apple's POV is that if you make security too complicated for users they'll just ignore it. It's also not in the least foolproof. If you want to steal a certain kind of data, you make an app that would apparently need access to that data in order to perform its stated function. E.g., an app for kids to automatically text their friends on their birthdays could secretly send all their address book entries to a thief via MMS. Or an app like 1Password, which stores all your sensitive data, could transmit it to anyone under the pretence of performing a sync or cloud backup. So in the end, you've burdened your users, and possibly made them more paranoid or more complacent, but not protected them at all.
 
Hackers Rejoice - Another Tool in the Box

It was more like "I COLLECTED PASSWORDS."

Stop personalising it... seriously. No one was harmed, no one's had their phones unlocked, no one's had their credit card details stolen. This hysteria is hilarious to watch! :p

Harm will come of this! This is another tool in the hackers' arsenal. It wouldn't be difficult to use this mined data to run an algorithm to break the passcode. Truly, there are too many hacks and exploited loopholes as it is.
 

I hope you're being sarcastic - there are only 10,000 possible 4-digit PIN codes - any reasonably powerful computer (like my iMac) could brute-force that in minutes.

Actually, while we're at it, why don't we just use this leaked list of PIN codes stolen from Sony? http://www.positiveatheism.org/crt/pin.htm
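
To put that 10,000-combination figure in perspective, here's a minimal sketch (Python, with a hypothetical SHA-256-hashed PIN standing in for whatever an attacker might have recovered; iOS does not actually store the passcode this way) of how fast an offline exhaustive search runs:

```python
import hashlib
import time

# Hypothetical target: a SHA-256 hash of an unknown 4-digit PIN.
# This only illustrates the size of the search space, not a real
# attack on the device, which throttles failed attempts.
target = hashlib.sha256(b"7463").hexdigest()

start = time.time()
for candidate in range(10_000):
    pin = f"{candidate:04d}"  # "0000" through "9999"
    if hashlib.sha256(pin.encode()).hexdigest() == target:
        print(f"Found {pin} after {candidate + 1} tries "
              f"in {time.time() - start:.3f}s")
        break
```

All 10,000 candidates hash in well under a second on any modern machine; the real protection on the phone itself is the escalating delay iOS imposes after failed attempts, not the size of the keyspace.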
 
What's in a name?

The app is called Big Brother, after all; someone's got to be looking over its users' shoulders.

Even if the data was collected anonymously, if the app's developer didn't explicitly spell out somewhere in the terms of use that it was being collected, he was in the wrong.
And even though the nature of what was collected was benign (I don't think a four-digit PIN, with only 10,000 possible combinations, used to secure a throwaway app is very personal), it's still incumbent upon developers to let users know if anything is sent to, well, Big Brother.
 
I think Apple's POV is that if you make security too complicated for users they'll just ignore it. [...]

But it is miles ahead of what Apple has now, which is nothing. It also lets users get a better idea of what each app does.
 
This developer has done nothing wrong, besides show the stupidity of users who use passcodes such as these. The unsolicited collection of data is something that happens every day. Whenever you shop at Walmart, they record your credit card number and what you bought so they can refund you if need be. BUT they can easily bring up a purchase history and work out what your shopping style is, what you like to buy, what kinds of things you buy. That's an invasion of privacy to a degree, but do you care?

The HUGE difference here is the developer can't tie passcodes to individuals. What he wanted to do was look at the bigger picture. Apple published that they've sold x million iPads. OMG, my iPad is in those statistics! That's MY data, THEY HAVE NO RIGHT! See how stupid that is?

Information is taken from you all the time, whether or not you know it, and for most purposes it's used for seeing trends in large datasets, not to target you personally. Until your personal privacy is breached there's no need to cry. Apple are bending to consumer pressure because of a large volume of complaints they've probably received about the App.
Agreed. It is probably the same stupid users that have 0000 or 1234 as their passcodes who are all up in arms about OMG DEY STEELIN MA INFOS! You get your information stolen every day, but since it's not brought to your attention in an article, you don't care? I get so many Amazon emails "recommending similar products" that I would never use that it borders on spam. How do they know which products are similar? Oh noes! Shut down Amazon!
 
As many are saying, and almost anyone with some technical background can tell, absolutely no one was harmed by this action. In fact, if he uses the data he collected to warn people of obvious passcodes, it would actually, on average, help the user (a sketch of that idea follows this post). Only in the most extreme and hypothetical case--which would involve physically stalking someone specific because you were about to steal their phone for the data on it--could this have possibly been a security risk, and even then you'd probably do just as well to have someone glance over their shoulder and watch which numbers they hit.

That said, I fully support whatever storm comes down on this developer's head. Just because it's not actually harmful doesn't make surreptitious collection and aggregation of data okay. Period. That's the thing about slippery slopes--this wasn't, and couldn't realistically be, harmful, but what if the developer had done something with exactly the same intent that, because he hadn't thought it through, DID compromise security in some way? Or what if he were doing something outright malicious, and it just happened to be useless? Would it then be ok, because he screwed up?

Is it the intent that counts? Sony didn't WANT to compromise the personal data and credit card info of a hundred million users, so is it ok that their shoddy security did?

Is it the actual harm? So if some guy is trying to steal passwords with his malicious app, but he forgot to include code to map the passwords to a username, so they're useless other than aggregate data collection, does that make it ok?

Privacy is a sort of on-off switch--you either tell the user clearly that you're collecting data from them, or you don't collect ANYTHING. Period. It doesn't matter if it's harmless data or not, it's personal data, period. You draw a line in the sand and use it, regardless of effect or intent, or you might as well not bother with privacy at all (which is, according to their public statements, apparently what Google thinks should happen).

Unlike on OS X, there's no Little Snitch on iOS for someone geekier and more paranoid than me to notice when an app is doing shady things, so there's no accountability or gatekeeper other than Apple for making sure it doesn't happen.
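
For what it's worth, that "warn people of obvious passcodes" idea is trivial to build. A minimal sketch in Python, with an assumed top-ten list standing in for whatever the developer's data actually showed:

```python
# Assumed stand-ins for whatever the collected data actually showed.
COMMON_PINS = {"1234", "0000", "2580", "1111", "5555",
               "5683", "0852", "2222", "1212", "1998"}

def warn_if_common(pin: str) -> bool:
    """Return True and print a warning if the chosen PIN is on the list."""
    if pin in COMMON_PINS:
        print("That passcode is among the most commonly used. "
              "Consider picking something less predictable.")
        return True
    return False

warn_if_common("1234")  # prints the warning and returns True
```

The list ships inside the app, so the check costs nothing and transmits nothing.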
 

I couldn't agree more - no app should be phoning home for any reason at all without disclosing those reasons to the user and giving them the option to opt out. Also, it seems suspicious to me that the "I was going to use it to improve security by warning users" explanation only came out after the app had been pulled. He also now claims that there is no correlation between users' passcodes and the passcode entered into his app. Whilst this is strictly true (unless someone chooses the same passcode), he had a slightly different spin on things when he published the report (and he didn't help his case when he used the title "Most Common iPhone Passcodes" for the blog post...):

Because Big Brother’s passcode setup screen and lock screen are nearly identical to those of the actual iPhone passcode lock, I figured that the collected information would roughly correlate with actual iPhone passcodes.


IMO, this guy did it for 15 minutes of fame by publishing the report. He got his 15 minutes, but I'm guessing it's not what he was hoping for...
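
As an aside, the "aggregate analysis" behind a report like that is nothing exotic. A sketch of roughly what producing a most-common-passcodes table involves, assuming the developer received only bare four-digit strings with no device or user identifiers attached:

```python
from collections import Counter

# Hypothetical sample of anonymously submitted codes.
submitted = ["1234", "0000", "1234", "2580", "1111", "1234", "0000", "1234"]

counts = Counter(submitted)
total = len(submitted)
for pin, n in counts.most_common(3):
    print(f"{pin}: {n} submissions ({n / total:.0%})")
```

Which is exactly why the disclosure question, not the analysis itself, is the real issue here.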
 
All of you kids crying are stupid. [...] "...may collect and use technical data and related information, including but not limited to technical information about Your device, system and application software, and peripherals..."
A user-entered passcode is not, under any reasonable technical or legal definition, "technical or related information about your device, system, application software, or peripherals."

People may be too trigger-happy to click "ok" on fifty-page user agreements, but in this particular case, even if I had, I would still argue vehemently--up to and including in a court of law--that personally-entered passcodes and passwords, as well as a wealth of other personally-entered data, are NOT the technical information covered by that clause of the Store contract.

Said information would include hardware specs, the version of iOS you're running, what other apps you have installed, and what other hardware you have plugged into your iDevice. Not your email address, not your name, not your physical location, and certainly not the passcode you type into that app. Even if you argue that your top-level iPhone passcode qualifies as "technical information about your system" (which it isn't), that wouldn't include the info you type into the app unless the app itself has a separate agreement or pop-up notification that asks for permission to transmit anonymous data to the developer's server.

Which it might--if at some point it popped up a warning saying "I'd like to transmit some anonymous data about the settings you're using in this app for aggregate analysis," then I wouldn't complain. But so far as I know, it doesn't, and given that the information being collected is the passcode, I'd expect an even more explicit warning, just for courtesy's sake.

Further, even if the app did technically fall within the fine print of the blanket App Store user agreement, it's still common courtesy to have it ask. Maybe not strictly required from a legal standpoint, but customer relations--particularly if your customer base is by definition a paranoid one--is not the law; you deserve what you get if people get skittish about what you've been collecting.

In addition, it's been argued that even from a purely legal standpoint, massive multi-page user agreements that require you to click "ok" before using software may not be legally binding, due to the disproportionate effort involved and the implicit expectation that you, indeed, will NOT read them.
 
Is this a case where Apple should use the kill switch to disable the previous version where it's been installed?

Don't think so. The person receiving these PINs is someone whose identity and address are known to Apple. He would have to be mad to misuse that information now that the world + Apple know about it. So there is not much risk in this case.
 
Wirelessly posted (Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_2 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Mobile/8H7)

LOL. Should have kept your identity secret :\
 
I would feel bad for Daniel Amitay and his ilk if he had let people know -- clearly, in advance -- what he was doing with the information he collected. Doing it on the sly is probably what got his app pulled, and I for one would say deservedly so.
 
I don't think that Apple is being unfair. And he should not have published the data, because it will make it that much easier for people to bypass passcodes.
 
But it is miles ahead of what Apple has now, which is nothing. It also lets users get a better idea of what each app does.

I'm not sure that it's ahead at all; Android is the one with the bigger malware problem, albeit still a very minor one. Perhaps a security measure that is trivial to bypass does more harm than good?

I think users just want one source to tell them what the app does, not the app store to tell them one thing and their phone to tell them another. What kind of service is that? If the app doesn't do what it says in the app description, don't post cute little nag messages in the installer, just ban it. That's a far simpler and more effective approach than transferring responsibility to the users and hoping for the best.

As for getting a better idea for what an app does. If my OS tells me app X is trying to access the internet, well, mostly, "duh...". But which bit of the internet, what server, what protocol, what data is it sending, how is it encrypted? What kind of phone user really wants to trawl through that kind of information anyway?

The best solution is accountable developers liable to prosecution if they try to commit a serious privacy breach, random testing of suspicious apps/developers, and an effective blocking of discovered threats. Who knows whether Apple/Android have all those things? It seems Apple didn't test this one properly, and IMO anything that claims to be a security/privacy app should be top of the list for spyware tests.
 

As already pointed out by others, Apple doesn't test any apps properly. The walled garden will stop apps that aren't hiding what they're doing, but it won't stop those that do.

MyTetheringApp would get stopped, but the Handy Light app got through.

This app would have been able to keep collecting information without users knowing anything, had the developer not decided to report on his findings (which I think was a GOOD thing).
There are likely a number of apps secretly collecting data right now, and the developers of those apps will never publish what they have collected.

Liable to prosecution? For what?
More likely, Apple can try to sue developers in civil court.
 
I would feel bad for Daniel Amitay and his ilk if he had let people know -- clearly, in advance -- what he was doing with the information he collected. Doing it on the sly is probably what got his app pulled, and I for one would say deservedly so.

Right.
This is (1) common sense; (2) explicitly required by Apple's developer agreement -- a developer needs to tell users what data is being collected and what will be done with it.

LOL, imagine what it would look like. On the screen where you choose your passcode it might say:

"Your passcode will be recorded and sent anonymously to the developer. The developer may aggregate the data and share the aggregated data with the world. No information connecting your passcode to you or your device will be retained."
 
bs

This app never asked for the passcode you use on your device. Everyone is simply assuming that the code you use for this app would be the same as the one you use for your device's passcode. That's just an assumption, and we all know what happens when you assume.

1) This cannot be against Apple's terms. Many apps ask for a security code, and Apple isn't pulling all of those other apps. Apple's issue would have to be with collecting the data, but it's just the app developer capturing user data from his own app. Can't see that as an issue either.

2) Again, many apps use their own passcode. 1Password does, for example. And it isn't always correct to assume those codes are the same: I don't use the same code for 1Password as I do for my device passcode, just like you shouldn't use the same password for every website. Common sense.
 
...1) This cannot be against Apple's terms. Many apps ask for a security code, and Apple isn't pulling all of those other apps. Apple's issue would have to be with collecting the data, but it's just the app developer capturing user data from his own app. Can't see that as an issue either...

It *is* against the developer agreement.

A developer can collect this kind of information, but they must disclose the data that is being collected and how it will be used and shared. Also, the developer has to have the consent of the user to collect user data.

The guy may have been well intentioned, but if so, he didn't even bother reading the developer agreement. With a dev that sloppy, it makes me wonder whether the data he collected is really as anonymous as he thinks, and whether it is really secure.
 

This is nowhere near as big a privacy issue as the list of all apps EVER purchased (deleted or not) suddenly showing up on my devices for anybody to see. Apple, please address that misstep in a timely manner!
 
I hope you're being sarcastic - there are only 10,000 possible 4-digit PIN codes - any reasonably powerful computer (like my iMac) could brute-force that in minutes.

Exactly: 4-digit PIN codes can be brute-forced in seconds with any decent computer.

The developer did this for attention.

Knowing which ones are common is pointless, given that any 4-digit PIN is not much different from having no PIN at all.
 

He should post a report on the top ten thousand most commonly used 4 digit numerical codes.
 