
MacRumors



Anthropic announced today that it is changing its Consumer Terms and Privacy Policy, with plans to train its AI chatbot Claude with user data.

anthropic-data-collection.jpg

New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes.

The popup is labeled "Updates to Consumer Terms and Policies," and when it shows up, unchecking the "You can help improve Claude" toggle will disallow the use of chats. Choosing to accept the policy now will allow all new or resumed chats to be used by Anthropic. Users will need to opt in or opt out by September 28, 2025, to continue using Claude.

Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years. If you opt in to allowing your data to be used for training, Anthropic will keep your information for a five-year period. Deleted conversations will not be used for future model training, and for those who do not opt in to sharing data for training, Anthropic will continue keeping information for 30 days, as it does now.

Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, with no information provided to third parties.

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.

Article Link: Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out
 
This has been the plan all along: create a compelling product, get you to use it, have you train it for free (while actually paying a fee), and then have it replace you a few years down the road. Why have human employees when you can have AI bots that have no rights?
 
Note: you need to delete your conversations for the 30 day window to apply.

Also, if you violate trust and safety and it gets flagged by their systems, it's 2 years of retention and 7 years of the classification score.

TL;DR don't do anything extraordinarily nefarious with any of these tools, which should be obvious, but people that might do those things are dense.

The fact that they do delete data within 30 days of you doing so is still notable and commendable. OpenAI may not train on your data if you opt out, but right now they aren't deleting anything unless you have a ZDR policy with them, due to the NYT lawsuit.

If the outcome of that lawsuit is in OpenAI's favor, they will purge the backups; if not, and especially if those backups become material for discovery, oh boy.

TL;DR #2: Don't use ChatGPT for anything sensitive at all, full stop.

Despite these limitations they're still very useful tools. Just be sensible about what you share.
 
This has been the plan all along: create a compelling product, get you to use it, have you train it for free (while actually paying a fee), and then have it replace you a few years down the road. Why have human employees when you can have AI bots that have no rights?
I'd say it's inevitable; the general population is on board for more AI in their lives. Sharing sidewalks with mechanical bipeds is not a matter of if but when.
 
This has been the plan all along: create a compelling product, get you to use it, have you train it for free (while actually paying a fee), and then have it replace you a few years down the road. Why have human employees when you can have AI bots that have no rights?
That isn't a few years down the road. I work in healthcare; our system outsourced its IT department to get access to AI and other technology tools. The fear of AI taking over jobs is already playing out. Spreadsheet readers don't care that it's still in its infancy; they just see numbers on the spreadsheet.
 
That isn't a few years down the road. I work in healthcare; our system outsourced its IT department to get access to AI and other technology tools. The fear of AI taking over jobs is already playing out. Spreadsheet readers don't care that it's still in its infancy; they just see numbers on the spreadsheet.

Interesting to read; I work in clinical healthcare, and my company has a 100%, no-exceptions, blanket ban on AI. The domains are blocked network-wide.

I don't work anywhere near IT, but they aren't going anywhere.
 
If Apple does choose an external AI partner, I hope it's not this one. I would literally switch to Android.
 
pointless and inaccurate for what, exactly?
Okay, here's a perfect example. I just asked ChatGPT to create me a map of the train system in Phoenix, AZ. The white map is the crappy, totally inaccurate AI one, and the grey one is the actual train system.

It can't handle a 3-line train system, so I am not going to trust it for much of anything else at all. Literally overhyped tech-bro BS "technology."
crappyMap.png
 

Attachments

  • 250410-7_val_msys_rail-2025_8.5x11_250311-grey.jpg (384.9 KB)
Okay, here's a perfect example. I just asked ChatGPT to create me a map of the train system in Phoenix, AZ. The white map is the crappy, totally inaccurate AI one, and the grey one is the actual train system.

It can't handle a 3-line train system, so I am not going to trust it for much of anything else at all. Literally overhyped tech-bro BS "technology."
View attachment 2541672
So it's just you not knowing how to use AI? Ok. LOL!
 
OK, first of all, AI is not magic. It's a helper, like a personal assistant. Imagine talking to your buddy: would you ask him to draw you a map of the train system, or to find you a map of the train system? It's important to know what LLMs can and can't do, and how to get the best results. The best analogy is to talk to an LLM the way you'd talk to a friend who happens to be an expert on the thing you're asking about.
 
I have really scaled down my AI usage recently. At first I was amazed and used them all the time, especially for life advice. But it literally makes you dumber. They always make small, hard-to-notice mistakes.

For example, if you discuss situations with multiple involved parties, even GPT-5 Thinking constantly confuses who did or said what and misattributes things.

Also, while it sounds smart, if you use it in complex domains you know a lot about, you realize how many mistakes it actually makes.

It is useful for boilerplate stuff and has its occasional moments of brilliance, but overall it has only marginally affected my productivity.

Maybe future models will be better but today’s stuff is dangerous.
 
OK, first of all, AI is not magic. It's a helper, like a personal assistant. Imagine talking to your buddy: would you ask him to draw you a map of the train system, or to find you a map of the train system? It's important to know what LLMs can and can't do, and how to get the best results. The best analogy is to talk to an LLM the way you'd talk to a friend who happens to be an expert on the thing you're asking about.
Can confirm. I asked ChatGPT to "find" me a map of the example above, and it spit out the correct map that's posted.

Edit: The map is slightly cut off and terrible quality (I use the free version, if that means anything).
 
Regardless of this, can they please add a password option? That's the one thing holding me back from Claude and Perplexity; it's just easier to log in to Grok and ChatGPT. I'm all for marketplace competition, so I'm happy to see many successful products. But part of marketplace competition means doing the little things that result in a happier end user, even for slightly niche use cases. Those of us who manage multiple devices and multiple user sessions benefit from passwords because they autofill easily from device to device and session to session. Without a password, we always have to go check our email, an extra minute every time. That adds up to a lot of wasted time, which could be solved by giving users the ability to set up a password. Thank you.
 
Pass on any machine learning/finite automata that deals with modeling the human condition and is not used strictly for the likes of FEA/CFD/non-linear dynamics, statistical mechanics, fracture mechanics, solar cell design, materials science, post-solid-rocket-fuel electromagnetic/superconductive room-temperature exotic-material engine propulsion design, real-time predictive control systems, etc.
 
This is why everyone wants to invade our privacy: so they can collect enormous amounts of data to feed AI. So many blind and brainwashed people out there. People are becoming more and more dependent on technology and AI. It's like becoming addicted to drugs. It's not good for you.
 