One thing that really concerns me is when I use my password manager (LastPass) to log into a site on my iPhone. I copy the password from LastPass, then paste it into whatever app I need to log into. It seems like any app that is reading my clipboard would then have access to that password. Scary.
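For anyone wondering how low the bar is: on iOS, any app in the foreground can read the shared pasteboard with a single call. A minimal Swift sketch (nothing LinkedIn-specific, just the standard UIKit API):

```swift
import UIKit

// Whatever was last copied -- including a password from a password manager --
// sits in the shared (general) pasteboard, readable by any foreground app.
func peekAtClipboard() {
    if let copied = UIPasteboard.general.string {
        // A well-behaved app touches this only when the user taps "Paste";
        // a badly behaved one could just as easily log it or ship it to a server.
        print("Clipboard currently contains: \(copied)")
    }
}
```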
GOOD.

I hope they win and set a precedent.

I use a password manager, specifically KeePass, and I was completely baffled to see my clipboard was being spied upon. I have my passwords in there! What the heck is the point of a password manager if this is taking place?

I swear to God, Apple just did the world a massive favour in exposing this BS. I’ve just about had enough of every app being spyware. Enough is enough!


They might have your password, but what they don't have is the site that password belongs to. There are millions of sites to check it against, and they also need your user ID/login name. And on almost all really important sites, like banks and social security sites, you need two-factor ID/SMS verification and/or some other security protocol.

So, yes, there's a risk, but it's smaller than you think.
 
1. That’s absolutely common. An app doesn’t know whether the clipboard contents can be pasted without looking at them. So the developer looks at the clipboard contents to decide whether a “Paste” button should be shown at all, instead of always showing it and presenting an error message if the user tries to paste something unsuitable. There is so much nonsense posted here by people who don’t have a clue about app development.
2. No.
It seems like Apple should make an API or whatever that can tell an app what type of content is stored in the clipboard (text, image, etc.) without giving it the content. That way the app knows if it makes sense to offer a paste option without having access to the content.
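From a quick look at the UIPasteboard docs, something along these lines already exists: the pasteboard exposes hasStrings / hasURLs / hasImages flags that report the kind of content without handing over the content itself. A rough Swift sketch of gating a Paste button that way (whether these type-only checks are enough for a given app, and whether they avoid the iOS 14 banner in every case, I haven't verified):

```swift
import UIKit

// Decide whether offering "Paste" makes sense without ever reading
// the clipboard contents themselves.
func shouldOfferPaste() -> Bool {
    let pasteboard = UIPasteboard.general
    // These only report the *type* of content (text, URL, image),
    // not the actual data.
    return pasteboard.hasStrings || pasteboard.hasURLs || pasteboard.hasImages
}
```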

If Apple doesn't require the developer to provide source code, what stops a nefarious developer from, say, putting in a fake 'log in with Google/Facebook/Apple' option that collects the username and password and then returns a typo error, AND a real login option? No one would ever know that the first time they were presented with a login it was fake, and if the developer stored the information and then sent it hours to weeks later, no one would ever be able to tell that it was harvesting data.
 
....
If Apple doesn't require the developer to provide source code, what stops a nefarious developer from, say, putting in a fake 'log in with Google/Facebook/Apple' option that collects the username and password and then returns a typo error, AND a real login option? No one would ever know that the first time they were presented with a login it was fake, and if the developer stored the information and then sent it hours to weeks later, no one would ever be able to tell that it was harvesting data.
Apple tries the login during validation and sees if there was a hit on their system? I would think the code validation process would be able to find that type of nefariousness.
 
Apple tries the login during validation and sees if there was a hit on their system? I would think the code validation process would be able to find that type of nefariousness.
So intentionally alter the password on the first login attempt so it fails but capture, encrypt, sit on, and package the data in a user requested action.

There is no way Apple could know they typed the password in right, nor would they have access to external systems to know if the correct data was being sent.
 
It appears that when you sign up for LinkedIn it reads your address book and sends out invitations automatically. I signed up for this app a few years ago for about 30 minutes. I cancelled immediately because of the phone calls I was getting asking why I was sending out invitations for LinkedIn.
In fairness, if you paid attention you could have skipped the "read my contacts" step and also skipped the "send out invites" step. That you didn’t pay attention is on you...
 


The lawsuit seeks to certify the complaint as a class action based on alleged violations of the law or social norms under California law. Last week, LinkedIn claimed that the clipboard copying behavior is a bug and not an intended operation. A VP at LinkedIn commented that the contents of the clipboard are not stored or transmitted, and that a fix for the issue will be available soon.

Article Link: LinkedIn Sued for Reading Universal Clipboard Data

Actually I can believe that it is a bug. Merriam-Webster defines a bug as, amongst other things, “a concealed listening device.” The “unintended operation” was getting caught with the bug.
 
There is a good layman’s explanation of this in the “Skeptics with a K” podcast, episode #280, at roughly the 30-minute mark. It starts off with cookies and how they were manipulated, then moves on to how iOS 14 exposed the clipboard issue.
 
So intentionally alter the password on the first login attempt so it fails but capture, encrypt, sit on, and package the data in a user requested action.

There is no way Apple could know they typed the password in right, nor would they have access to external systems to know if the correct data was being sent.
Taking this to the nth degree, you might as well get off the grid. I still believe Apple could find this out during automated code validation.
 
It'll get settled for an amount that ensures the lawyers get the vacation homes they'd picked out when they saw the original LinkedIn/clipboard news.
 
Add Google to the list. This is from my Apple Developer iOS 14 beta 1 device a few days ago.

 
If Apple doesn't require the developer to provide source code, what stops a nefarious developer from, say, putting in a fake 'log in with Google/Facebook/Apple' option that collects the username and password and then returns a typo error, AND a real login option? No one would ever know that the first time they were presented with a login it was fake, and if the developer stored the information and then sent it hours to weeks later, no one would ever be able to tell that it was harvesting data.

What stops it? What you suggested would be plain criminal, so someone would involve the police about this, and then someone would go to jail. Apple can't stop every criminal. Apple most certainly cannot verify the source code for every single app in the store. For my app, I'd say a week of time by an expert is the absolute minimum. Multiply that by a million apps.

In the end, this warning that Apple is giving now in a beta version (and iOS 14 is the only OS in the world doing it) is causing so much trouble for so many people that there's a good chance it won't be there in the released version.
So intentionally alter the password on the first login attempt so it fails but capture, encrypt, sit on, and package the data in a user requested action.

There is no way Apple could know they typed the password in right, nor would they have access to external systems to know if the correct data was being sent.
I bet Apple has some Facebook accounts for that purpose, without anything of any value in it, and the password is something like 123456. If the reviewer notices the Facebook login fails, then they know something is up.
Thank you Apple for ~~exposing~~ allowing this. There, I fixed it for you.

If Apple is truly the champion of privacy they'd proactively prevent it from happening rather than just reactively informing you.
Mi7chy, you are getting absolutely daft here. _Every single operating system_ currently in use, Windows, Linux, macOS, Android, and iOS up to 13, allows this and has always allowed it.
It seems like Apple should make an API or whatever that can tell an app what type of content is stored in the clipboard (text, image, etc.) without giving it the content. That way the app knows if it makes sense to offer a paste option without having access to the content.
Yes, that would be useful. There are apps that detect URLs in the clipboard and use them. There are apps that accept plain text, but only in a very specific format. Yes, it would reduce the number of warnings. But the immediate effect is that functionality would be removed from apps.
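For the URL case specifically, iOS 14 also adds a pattern-detection API that lets an app ask whether the clipboard probably contains a web URL without reading the string (and, as I understand it, without triggering the paste banner). A rough sketch based on the UIPasteboard documentation, worth double-checking against the current beta:

```swift
import UIKit

// Ask whether the clipboard *probably* contains a web URL, without
// reading the actual string. Reading happens only after the user opts in.
func offerToOpenCopiedLinkIfPresent() {
    UIPasteboard.general.detectPatterns(for: [.probableWebURL]) { result in
        switch result {
        case .success(let patterns) where patterns.contains(.probableWebURL):
            // Show an "Open copied link" button; call UIPasteboard.general.url
            // only when the user actually taps it.
            break
        case .success:
            break // nothing URL-like on the clipboard
        case .failure(let error):
            print("Pattern detection failed: \(error)")
        }
    }
}
```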
Not sure how something that can basically be a bug can really be litigated. Even setting that aside, it seems like actual ill intent would need to be demonstrated, along with actual damages.
It's not actually a bug. It is behaviour that is perfectly legal in Windows, macOS, Linux, Android, and iOS up to 13.0.
 
I personally was always a little suspicious of LinkedIn. I quit using the app years ago, but would continue to get "invites" from friends who, after I asked them about it, said they had never sent one to me. The "bug" argument, however, with a good lawyer, could get them off the hook. Just IMO, but it seems like justice these days goes to the highest bidder.

LinkedIn was good in the beginning with the idea of a digital resume but died when the marketing teams got on there.

Now it’s worse than Facebook for ads and ‘marketing’. Everyone is an expert in how to sell and generate leads on LinkedIn by basically spamming the **** out of your clients’ feeds and inboxes.
 
So much uninformed knee-jerk reaction on here. Just about every previous thread about this issue over the last few weeks has included some very credible examples from developers of why apps may be doing this for benign reasons. That's not to say that some apps aren't using the function for questionable purposes, but the very fact that Apple provides this functionality in iOS and allows it to be used in this way in their app approval process (something that can easily be checked for) should at least suggest that there's a legitimate use for it. People are too quick to jump to conclusions that suit their preconceptions.

I can see why people get upset when they discover that data such as clipboard content is being sent off-device without their knowledge or consent, but if informing users of this just causes people to panic, then I'm not sure what the solution is. Maybe in these notifications Apple needs to let developers add custom text explaining why the app is asking for permission to access your location/photos/clipboard, etc. The trouble is that the legitimate need to access your clipboard is likely to be more technical than the other examples.
 
Good, I hate LinkedIn; at times it's a necessary evil. It was obviously going through my email address list, as I kept getting suggestions to be friends with people there. I changed my email address to the only one on my iCloud account. It keeps me from spamming my contacts, LOL! But I have occasionally had the odd spam email in that inbox, which can only have come from LinkedIn, as nobody else knows that I have an iCloud account. Even Apple does not use it to contact me, and I have NEVER signed up for anything using that address.
 
What stops it? What you suggested would be plain criminal, so someone would involve the police about this, and then someone would go to jail. Apple can't stop every criminal. Apple most certainly cannot verify the source code for every single app in the store. For my app, I'd say a week of time by an expert is the absolute minimum. Multiply that by a million apps.

In the end, this warning that Apple is giving now in a beta version (and iOS 14 is the only OS in the world doing it) is causing so much trouble for so many people that there's a good chance it won't be there in the released version.

Of course it's criminal. So is polling the clipboard and sending its contents to your server. Of course Apple can't verify every app. They can't do that now and they let things like LinkedIn slip through. There could be very dangerous apps installed on millions of phones and no one knows.

I bet Apple has some Facebook accounts for that purpose, without anything of any value in it, and the password is something like 123456. If the reviewer notices the Facebook login fails, then they know something is up.

But do they actually use it on every app? And would they really flag an app and get the police involved if the first log-in failed but the second worked? I doubt they even test every login option provided by an app.

Yes, that would be useful. There are apps that detect URLs in the clipboard and use them. There are apps that accept plain text, but only in a very specific format. Yes, it would reduce the number of warnings. But the immediate effect is that functionality would be removed from apps.

I didn't understand this.
 
Of course it's criminal. So is polling the clipboard and sending its contents to your server. Of course Apple can't verify every app. They can't do that now and they let things like LinkedIn slip through. There could be very dangerous apps installed on millions of phones and no one knows.
....
It's also criminal that some devs are selling your PII without explicitly acknowledging it in the privacy statement, but I haven't heard about any dev going to jail over it. That type of behavior is more worrisome than malware, because it can't be caught. Of course, this type of thing is not limited to iOS.
 
In fairness, if you paid attention you could have skipped the "read my contacts" step and also skipped the "send out invites" step. That you didn’t pay attention is on you...
If it was that obvious I and many others would not have fallen into this trap.
 
The grossest thing about this is that you can be signed up for ad-free premium and it’ll still collect your data. I did a free trial of The Weather Channel premium to test it out, and they still copy from your clipboard.
 
Two questions for you.
1. Why does the app request the clipboard contents without a user-initiated request?
2. Does Apple require you to submit the app source code such that they could review whether nefarious code was inserted?

1. None of them do; that's why I'm not concerned about all of these cases. We haven't been able to do a heavy investigation yet because we haven't started our iOS 14 fixes and won't until beta 3, but the fact that our apps are getting these warnings is enough to make them suspect.

2. Yes they do.
 