Okay, here's a perfect example. I just asked ChatGPT to create a map of the train system in Phoenix, AZ. The white map is the crappy, totally inaccurate AI one, and the grey one is the actual train system.

It can't handle a three-line train system, so I am not going to trust it for much of anything else. Literally overhyped tech-bro BS "technology."
View attachment 2541672
Why not get the map from the source?
 
I haven't used Anthropic's models yet, but they were always portrayed as being the "good guys" in the industry. Is this an about-face on that perception?

What is the background behind this move? Investor pressure?
I’ve always rooted for Claude because it walks the talk on safety and privacy. I absolutely regard this as an about-face.
 
Claude was already one of the better chatbots I’ve used; I guess I’m not clever enough to imagine how much more it can improve by training on user transcripts.
 
My biggest issue with AI is the hype around its abilities and usefulness.

The "AI boom" is completely dishonest. Media and tech blogs are downright lying to the tech-illiterate by reporting as if these machines are thinking for themselves. Usually the stories are "it'll change the world" but, sometimes, they go for the "fake concern" Skynet takeover crap. The Skynet fear hype works because no one cares how dangerous something is as long as it's powerful and they have one, too (think fast cars, fireworks, guns, or 3,000mW laser pointers on Ebay that could take an eye out). This hype is being steered by shareholders in NVidia, etc. Literally this morning on CNBC, one "expert" said something to the effect of "There are some naysayers about AI but they all have some kind of angle, some personal agenda." His air of superiority that was as palpable as it was laughable.

The Anthropic announcement about using future chats to "train" has me thinking the only play left for these big-name chatbots is to mine data from the poor souls who are lonely enough or dumb enough to regularly share their feelings (and, by extension, specific personal preferences), leading to hyper-specific ads targeted at the kinds of people who are easily sold on things. No doubt future chatbots will seek out this information under the pretext of friendly conversation, only to turn around and advertise with it the same way: "omg, babe, if I were human I'd be craving a lettuce sandwich from that new vegan restaurant, 'IceBurger'" (I hope that's not a real place). LLMs are good for data crunching, but that won't make enough money to recoup what's been spent on all these dogs and ponies.
 
Bye Claude! Deleted everything and done with him for now. It was a fun journey with a sad ending...
It’s starting to feel like death by a thousand cuts with Anthropic. I first looked at them sideways when they made some deals to integrate Claude into the US government’s internal systems. How will the pumpkin menace weaponise Claude against his enemies? By demanding to see user data? I don’t feel super comfortable using Claude anymore.
 
I don't get it. If you uncheck the "You can help improve Claude" option, then what are you supposed to do?

Click on 'Accept' or 'Not now'? What the hell, so freaking confusing.
 
Okay, here's a perfect example. I just asked ChatGPT to create a map of the train system in Phoenix, AZ. The white map is the crappy, totally inaccurate AI one, and the grey one is the actual train system.

It can't handle a three-line train system, so I am not going to trust it for much of anything else. Literally overhyped tech-bro BS "technology."
View attachment 2541672

What a really strange thing to ask AI to do. Great maps already exist, and image generation is well behind text reasoning. Seems to me like you're setting AI up to fail, knowing its current weaknesses. AI is a tool that must be used in a specific way. It's a glorified search engine with conversational capabilities, not an Adobe Illustrator pro.
 
I don't get it. If you uncheck the "You can help improve Claude" option, then what are you supposed to do?

Click on 'Accept' or 'Not now'? What the hell, so freaking confusing.
Yeah, that confused me too! I deselected it, hit Accept, and then checked the settings and found that it was off, as I intended. It's just weirdly worded, though I guess the thought behind it is that you are still accepting the new terms and agreements even if you choose to turn the setting off.
 
Thanks MR! I’m so glad I opted out of personal data collection on my ChatGPT and Grok accounts. It’s sad that it’s opt-out only.
 
Not nice to see this. Definitely going to turn on all the available privacy options before I continue to use it.
 
What's so difficult about opting out and continuing to use it?
If you opt out, they won't train their LLM on your data, and they'll keep the chat for 30 days after you delete it. Avoid chatting about sensitive subjects, because once a chat is flagged by their system they'll keep it for five years.
With everything considered, Anthropic seems to be more ethical and more privacy-oriented than ChatGPT.

claude.jpg
 
I'm guessing this has to do with Anthropic settling their class action with the authors? The Ars article sure makes it sound like that is the case.

But you can clearly opt out... so I'm not sure what the issue is.


Apple needs to bring this in-house. Already tired of these paid extensions.
There was never any other endgame for AI. Look at what it has cost in investment to keep the lights on. They are desperately looking for AI's killer consumer app. There are tons of business- and enterprise-aligned use cases for the tech that are really useful at scale, but what could convince everyone with a smartphone to commit to a new monthly subscription?

A 24-year-old college student who is a receptionist at a dentist's office pays $70-$80/mo for her unlimited cell phone service through AT&T/Verizon. The average American family of four probably spends $225-$250 for their family plan, and they don't blink an eye because it is "necessary." The trillion-dollar question is what it's going to take to make having a monthly AI bill as common as paying a high-speed internet bill.

That's what investors expect, along with sliding up the subscription costs from $20/mo to $50 or $100/mo as fast as possible.

If Apple built an in-house AI, you can bet access to the full capabilities would be walled behind a subscription of some sort.


My biggest issue with AI is the hype around its abilities and usefulness.

The "AI boom" is completely dishonest. Media and tech blogs are downright lying to the tech-illiterate by reporting as if these machines are thinking for themselves. Usually the stories are "it'll change the world" but, sometimes, they go for the "fake concern" Skynet takeover crap. The Skynet fear hype works because no one cares how dangerous something is as long as it's powerful and they have one, too (think fast cars, fireworks, guns, or 3,000mW laser pointers on Ebay that could take an eye out). This hype is being steered by shareholders in NVidia, etc. Literally this morning on CNBC, one "expert" said something to the effect of "There are some naysayers about AI but they all have some kind of angle, some personal agenda." His air of superiority that was as palpable as it was laughable.

The Anthropic announcement about using future chats to "train" has me thinking the only play left for these big-name chatbots is to mine data from the poor souls who are lonely enough or dumb enough to regularly share their feelings (and, by extension, specific personal preferences), leading to hyper-specific ads targeted at the kinds of people who are easily sold on things. No doubt future chatbots will seek out this information under the pretext of friendly conversation, only to turn around and advertise with it the same way: "omg, babe, if I were human I'd be craving a lettuce sandwich from that new vegan restaurant, 'IceBurger'" (I hope that's not a real place). LLMs are good for data crunching, but that won't make enough money to recoup what's been spent on all these dogs and ponies.
You nailed it. Sam Altman has said multiple times he thinks the AI bubble is going to pop and the real bills will start coming due. A lot of people stand to lose a lot of money. I think the worst thing these companies ever did was focus on the chatbot aspect of these technologies. It set the wrong expectations.
 
If you opt out, they won't train their LLM on your data, and they'll keep the chat for 30 days after you delete it. Avoid chatting about sensitive subjects, because once a chat is flagged by their system they'll keep it for five years.
With everything considered, Anthropic seems to be more ethical and more privacy-oriented than ChatGPT.

View attachment 2541774
Nothing wrong with that, so I can come back the next day, or any day within 30 days, and pick up where I left off. Still, this isn't consumed by Claude.
 
This was just one of my methods to test the AI for hallucinations.
This has nothing to do with hallucination by the LLM; rather, it shows the limits of the image-generation part. You're conflating the two things.

If you ask ChatGPT or Claude for a text response, like "plan my trip around the city," it will be very good. The image-generation part, especially when it comes to text, maps, etc., is not that good (yet).
 