The way Ars Technica phrased it, it sounded like they'd managed to slip it past Apple's review policy rather than 'find a way around it', which just sounds like they sideloaded it.

So a jailbroken device?
 
How is this even remotely considered a security issue?

Yes, every touch is logged, but none of the logs carry any semantic information about the touches.

What those guys have just demonstrated is of no use to an actual hacker. It would be like tapping a phone line and then only being able to know how many calls are placed each day.

Plot the data points on-screen, overlaid atop the keyboard, keypad, etc. Et voilà. :eek:
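For illustration, a minimal sketch of that overlay idea in Python. The keyboard geometry below is a hypothetical approximation of a 320x480-point portrait layout; the row positions and key sizes are made-up values for the example, not real iOS metrics:

```python
# Sketch: map logged (x, y) touches to letters by overlaying an assumed
# keyboard grid. Row tops and cell sizes are illustrative guesses.

KEY_ROWS = [
    ("qwertyuiop", 244),  # (keys in row, top y of row in points)
    ("asdfghjkl", 298),
    ("zxcvbnm", 352),
]
KEY_W, KEY_H = 32, 54  # assumed key cell size in points

def key_for_touch(x, y):
    """Return the letter whose cell contains the touch, or None."""
    for keys, top in KEY_ROWS:
        if top <= y < top + KEY_H:
            # center the row horizontally on a 320-point-wide screen
            row_left = (320 - len(keys) * KEY_W) / 2
            idx = int((x - row_left) // KEY_W)
            if 0 <= idx < len(keys):
                return keys[idx]
    return None  # touch landed outside the keyboard area

touches = [(55, 260), (120, 310), (150, 370)]
print("".join(key_for_touch(x, y) or "?" for x, y in touches))  # prints "wfv"
```

Nothing here requires semantic data from the log; the coordinates alone do the work once you assume a layout.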
 
Yet another NSA techie is going to slam his head into the wall while saying "****! They found yet another loophole that I inserted!"

I'll say it again... funny, but closer to reality that it may seem. I'm beginning to wonder if Apple has caved to NSA demands and, in a negligent manner that can plausibly be denied, is now building in backdoor access for clandestine use.
 
The responses of the uninformed on MacRumors always amaze me. I'm going to shout here.

NO PLATFORM IS SECURE FROM PHISHING SCHEMES INVOLVING WILLFUL USER INPUT.

Having said that, even if this were not fixed, it would be a non-issue.

A hacker would have to get the app past Apple's review and into the store. Where's all the "Walled Garden" complaints now?

Second, if it did, Apple could kill it the minute it was discovered. Some of you people want to be able to criticize Apple so bad you will do it even when they don't deserve it. And sometimes they do deserve it, but not over this.

Anybody could write a key-logging app; they just can't get it onto the App Store.
 
At the risk of seeming naive...is it not a bit more parsimonious to assume an accidental bug than some nefarious plot which includes Apple and the NSA in some secret collusion?
 
How is this even remotely considered a security issue?

Yes, every touch is logged, but none of the logs carry any semantic information about the touches.

What those guys have just demonstrated is of no use to an actual hacker. It would be like tapping a phone line and then only being able to know how many calls are placed each day.

This is basically a keyboard logger. While it's running in the background, if you log into your credit card apps or banking apps or anything with a login, they get your username and passwords.
 
If you know the exact coordinates you can simply overlay the iOS Keyboard and extract everything the user typed in, including passwords, logins or other personal information. :rolleyes: But yeah, no security issue here. LOL.

While you could use the coordinates to determine areas that fall on a keyboard, the data is useless unless you can also derive what app was being used at the time. You have no idea whether the user was playing Flappy Bird, selecting songs in iTunes, typing an email, or entering a username or password. Not to mention you would have to know the exact time a password box was displayed on the screen and then grab the resulting input. If you could identify the running app and then determine, based on coordinates, what is being pressed, then maybe there could be something more to this. You would still need the ability to parse exactly what the password is from everything typed on the keyboard.

This is another reason to use something like 1Password.
 
I know it's standard Apple practice not to comment or respond to security issues until they have a fix, but when this one settles, I'd sure like to know how long it has been there.

This has "NSA" written all over it.

The goto fail bug was too broad to have been a "back door" for the NSA. It was a screw-up by a coder. This sounds like another coding error, but not of the same urgency as goto fail.
 
This is basically a keyboard logger. While it's running in the background, if you log into your credit card apps or banking apps or anything with a login, they get your username and passwords.

Not exactly. A keyboard logger grabs only input made via the keyboard, so you know the captured data is information the user actually entered. Since this grabs all touch input, it is impossible to discern whether a given touch was made on the virtual keyboard or was just a tap on the screen in that same area. Thus, it is impossible to know whether the gibberish is a password or just tapping in a game or other program. If this also captured screen grabs, or grabbed touch input only when a specific field was being accessed, it would be highly effective.
 
...A hacker would have to get the app past Apple's review and into the store. Where's all the "Walled Garden" complaints now?

Second, if it did, Apple could kill it the minute it was discovered. Some of you people want to be able to criticize Apple so bad you will do it even when they don't deserve it. And sometimes they do deserve it, but not over this.

Anybody could write a key-logging app; they just can't get it onto the App Store.

I'm not comfortable with Apple being the sole line of defense in a world where the most powerful government on the earth has been discovered to be infiltrating devices on a wholesale level. That seems foolhardy. Criticism of the organization responsible for this defect is warranted, IMHO. I want Apple to be better at security.
 
That's what happens when you fire the head of iOS and replace him with a puppet who is not even a software engineer.
 
Not exactly. A keyboard logger grabs only input made via the keyboard, so you know the captured data is information the user actually entered. Since this grabs all touch input, it is impossible to discern whether a given touch was made on the virtual keyboard or was just a tap on the screen in that same area. Thus, it is impossible to know whether the gibberish is a password or just tapping in a game or other program. If this also captured screen grabs, or grabbed touch input only when a specific field was being accessed, it would be highly effective.

Analyzing the captured data before and after could easily reveal when a string is being entered on the keyboard. Remember, this isn't just some short point in time. This data is presumably being captured over a long timespan, and would thus reveal unmistakable patterns.
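As a sketch of that pattern analysis, assuming the logger records timestamps alongside coordinates. The keyboard boundary and timing thresholds below are illustrative guesses, not measured values:

```python
# Sketch: flag runs of touches whose timing and position look like typing.
# KEYBOARD_TOP_Y, MAX_GAP_S and MIN_RUN are illustrative assumptions.

KEYBOARD_TOP_Y = 244   # assumed top of the on-screen keyboard (points)
MAX_GAP_S = 1.5        # assumed max pause between keystrokes in one burst
MIN_RUN = 4            # ignore runs shorter than a plausible password

def typing_bursts(events):
    """events: list of (timestamp, x, y). Yield runs that look like typing."""
    run = []
    for t, x, y in events:
        in_kbd = y >= KEYBOARD_TOP_Y
        if in_kbd and (not run or t - run[-1][0] <= MAX_GAP_S):
            run.append((t, x, y))
        else:
            if len(run) >= MIN_RUN:
                yield run
            run = [(t, x, y)] if in_kbd else []
    if len(run) >= MIN_RUN:
        yield run

events = [(0.0, 100, 100), (1.0, 60, 260), (1.3, 90, 300),
          (1.7, 150, 350), (2.0, 40, 260), (9.0, 10, 50)]
print(len(list(typing_bursts(events))))  # prints 1: one four-tap burst
```

Over a long capture, bursts like this recur at the same screen positions, which is exactly the "unmistakable pattern" argument above.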
 
The responses of the uninformed on MacRumors always amaze me. I'm going to shout here.

NO PLATFORM IS SECURE FROM PHISHING SCHEMES INVOLVING WILLFUL USER INPUT.

Having said that, even if this were not fixed, it would be a non-issue.

A hacker would have to get the app past Apple's review and into the store. Where's all the "Walled Garden" complaints now?
...
Anybody could write a key-logging app; they just can't get it onto the App Store.

Of course. Obviously, this type of "hack" is 1000X easier to just do on regular Mac OS X or Windows, and probably even more likely to actually grab some "goodies". And there's no 100% signed-app requirement there that would let Apple remotely disable apps.
 
I want Apple to be better at security.

I want Apple to be perfect at security...but it ain't going to happen because coding is done by humans, and humans are fallible.

I would hope that, as much as is humanly possible, errors are caught before releasing any software to the public... with the emphasis on "humanly possible".

Could Apple do better? Anything short of perfection can always be improved.

The hypothesis that Apple is intentionally putting out buggy software seems, to me, to be a bit over the top.

Just one non-geek's opinion...:p
 
Even if Apple now knows this vulnerability exists, it doesn't mean other apps haven't already implemented it.

See this awesome game using IAP? Guess they could start loading a "touchlogger" when it's asking you for your password, if they managed to hide the code well enough. The app could probably survive for a while before being pulled from the App Store.

Unlikely, aye; impossible, nay.
 
Just wondering if there are many issues in iOS after Scott Forstall's departure?
 
Analyzing the captured data before and after could easily reveal when a string is being entered on the keyboard. Remember, this isn't just some short point in time. This data is presumably being captured over a long timespan, and would thus reveal unmistakable patterns.


OK, but what does that data mean? If you use the same username and password for every site, then yes, the likelihood of pulling out this repetitive info increases. But then you still have to know what sites/apps were being used for this info to be valuable or usable. If you use different passwords for each site, I would have to think the odds of this info being valuable are minimal, if any at all, since there wouldn't be repetitive patterns, and the odds of deciphering the exact app are close to impossible. BTW, most apps don't require you to enter the username, only the password, since the username is saved after the first time you enter it. So the only info retrieved is the password, via touch input. This further reduces the risk and the chances one can tell in which app/site those passwords are being used.

In addition, if you use 1password or something similar, you are not typing any site passwords at all. Maybe copying and pasting (if you don't use their browser).
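A rough sketch of how an attacker might hunt for a reused password in the log anyway: quantize the coordinates and count repeated short touch sequences across sessions. The grid size and sequence length here are arbitrary assumptions:

```python
# Sketch: look for a short sequence of near-identical touch points that
# recurs across sessions -- the signature of a password typed the same way.
# The 10-point rounding grid and length of 4 are arbitrary assumptions.

from collections import Counter

def quantize(seq, grid=10):
    """Snap (x, y) points to a coarse grid to absorb finger jitter."""
    return tuple((round(x / grid), round(y / grid)) for x, y in seq)

def repeated_sequences(sessions, length=4):
    """Count quantized touch subsequences of a given length across sessions."""
    counts = Counter()
    for touches in sessions:
        for i in range(len(touches) - length + 1):
            counts[quantize(touches[i:i + length])] += 1
    return counts

sessions = [
    [(55, 260), (120, 310), (150, 370), (40, 260)],   # same four taps...
    [(56, 262), (118, 309), (151, 371), (41, 258)],   # ...again, slightly off
]
top, n = repeated_sequences(sessions).most_common(1)[0]
print(n)  # prints 2: the sequence shows up in both sessions
```

This only surfaces *that* a sequence repeats, not *where* it was typed, which is the poster's point: without knowing the app, the repetition alone is of limited value.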
 
So a jailbroken device?

It does not have to be jailbroken. As has been stated many times on this forum and elsewhere, there is nothing Apple can do to prevent malicious software from entering the App Store. Except in really amateur cases, it's simply impossible to detect malware without reviewing the source code, which Apple can't do.
 
Agreed....

Honestly, security flaws are always going to be found, and here's the kicker: they're ALWAYS bound to be trumped up as more dangerous for the average user than they really are.

There's a very lucrative industry in "computer security" out there. A lot of people put food on their table primarily by issuing warnings about the latest "threat" and promising workarounds or solutions.

Except it's difficult to get people to keep paying for this stuff when "my device has always been just fine, the way it is". So they have to keep scaring people ... maybe even manufacturing a few actual hacking incidents by "leaking the info on how to exploit it, since the original manufacturer didn't respond to our demands to fix it quickly enough".


I want Apple to be perfect at security...but it ain't going to happen because coding is done by humans, and humans are fallible.

I would hope that, as much as is humanly possible, errors are caught before releasing any software to the public... with the emphasis on "humanly possible".

Could Apple do better? Anything short of perfection can always be improved.

The hypothesis that Apple is intentionally putting out buggy software seems, to me, to be a bit over the top.

Just one non-geek's opinion...:p
 
Time for Apple to invest in software Q/A

I mean, it's crazy for a company to be investing billions in sapphire-covered devices but then have ridiculously trivial security and usability flaws that any decent amount of software Q/A should have discovered.

How about not being able to receive new mail in Mavericks?
How about not being able to come out of sleep mode?
How about being able to bypass the lock screen on a phone without a password?
How about a map app that points you in the wrong direction?

I mean, these are all very trivial issues that should have been found with a moderate amount of Q/A, but obviously Apple is failing in this area.

I could see these kinds of issue on Google or Microsoft devices because these companies only make the software, and it is run on an infinite variety of hardware, so if Windows didn't come out of sleep mode on some random Chinese notebook, fine.

But Apple controls every aspect of software and hardware, so having an OS update come out that breaks an entire model line of their own hardware (from a VERY limited number of models) is inexcusable.

While being 100% secure is going to be unreachable for any company, Apple has a reputation (and also a very vocal campaign) that touts it as a leader in security and reliability; however, this is very much discredited with almost every update to OS X and iOS that they release.

There is something wrong with software development at Apple, and it's time for Apple to figure out what it is and fix it. I don't care if you make an iPhone that is wrapped in diamond; if it can be hacked into through some trivial security issue because the Q/A at Apple is lazy and broken, then I am not going to buy their products.
 
Could the errant app be using a private API? These are API calls used internally by iOS itself, and any app using them will automatically get rejected by Apple.

It'll explain why nefarious means were used to get the app past Apple's scrutineers, and will also explain how the app manages to get hold of individual touch screen presses.
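As a sketch of the kind of static check a reviewer could run: scan an app binary's strings for references to private frameworks. The symbol list below is purely illustrative (IOHIDEventSystem is a known private framework, but this is not a real blocklist), and the check is trivially defeated by building names at runtime, which is why review alone can't catch a determined author:

```python
# Sketch: naive static scan of a binary blob for private-framework strings.
# The SUSPICIOUS list is an example, not an actual App Review blocklist.

SUSPICIOUS = [b"IOHIDEventSystem", b"GSEvent", b"SBSSpringBoard"]

def flag_private_symbols(binary: bytes):
    """Return the names of any listed private-API strings found in the blob."""
    return [s.decode() for s in SUSPICIOUS if s in binary]

fake_binary = b"\x00some data\x00IOHIDEventSystemCreate\x00more\x00"
print(flag_private_symbols(fake_binary))  # prints ['IOHIDEventSystem']
```

An app that assembles the symbol name at runtime (e.g. from concatenated fragments) would sail past a scan like this, which matches the suggestion that nefarious means got the app past the scrutineers.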
 