That’s not correct, at least legally. If the “supply chain risk” designation goes through (it hasn’t been formally confirmed yet, and they have a good chance of overturning it in court), it only blocks other companies from using Anthropic specifically in the context of government contracts; all other uses by the same companies are unaffected.
Sure, it's just that nothing seems to go the "legal" route currently...
 
Anthropic’s principled stand was enough to win me over. I use Claude now when I want to use an AI.

I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
 
I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
It's hard to say; both the private-company & the elected-government can have their leaders swapped out, and then that new leader might have VERY different views from the prior ones.
 
I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
I’m not who you’re responding to, but if you don’t mind I’ll chime in.

To answer your question, no, I don’t think private companies should be able to tell the government how it can use “a specific type of technology”. But I do think companies should be allowed to dictate how their technology can be used when signing contracts, even with the government, except in very narrow circumstances.

If Anthropic was the only AI company on planet earth, or the only American AI company, then there might be justification for the government to say “we need this, and want to use it for stuff you don’t approve of, so we’re going to take it (but compensate you appropriately)”.

But Anthropic isn’t the only American company providing AI technology. And in democracies, we generally don’t force a private company to work for the government against their will, except as a very last resort. Especially when there are other companies providing the same technology who are willing to agree to the government’s terms.

To recap, this is literally what happened:
  1. DoD signs a contract with Anthropic. (Hegseth is Defense Secretary when the contract with Anthropic is signed, so he can’t blame Biden officials.)
  2. The DoD decides, after signing the contract, that they are unhappy with certain terms of the contract.
  3. The parties couldn’t mutually agree on a resolution.
  4. Instead of just canceling the contract, which, as someone who works in government contracting, I can tell you is trivial for the government to do (every contract I’ve ever been a part of, more than 20 at this point, has a clause saying it can be canceled “at the government’s convenience” with no compensation owed), the DoD decided to go scorched earth and try to destroy Anthropic’s business for insisting the government abide by terms it previously agreed to.
  5. And all of this is because the DoD wants to fire weapons autonomously using a technology that, if you ask “I live walking distance from a car wash and my car is dirty, should I walk or drive?”, will literally tell you to “leave your car at home and walk to the car wash because it’s better for the environment”. Even though the company making the technology says that’s a terrible idea and the technology isn’t designed or ready for it.
There are at least 2 other AI companies (xAI + OpenAI) who are willing to play ball with the terms the DoD requires.

I’m sorry, but there is no universe where punishing Anthropic in any way outside of canceling the contract is the correct move. And honestly canceling the contract is probably too much - the government not reading/understanding the contract they signed should be the government’s problem, not Anthropic’s.
 
ChatGPT was great until they started putting paywalls in so it can remember things or keep ads off…
Before, I could load text from notes so it could summarize everything, but it can’t even do that anymore because of a page limit. So it has become useless for me now.

Our family will be trying Claude going forward. Hopefully, it’ll replace what ChatGPT used to offer for free.
 
Am I the only one that dislikes "memory" features? I don't need a chatbot remembering stuff from other conversations.
Sometimes it’s actually useful, like if you have a hobby or ongoing project you ask about a lot, having that context saves time and makes the answers better.
For random one-off questions, it’s not all that helpful.
 
I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
Adding surferb’s reply by reference, because I agree with it. But also, yes, I don’t think the government gets to dictate the terms under which a business will offer its products for sale.
 
Can we get the ability to sign up and log in with a password? And if not that, then an additional SSO option. Login with Google / login with Apple is OK, but sometimes something like "login with Microsoft" (or another non-Google provider) as a third SSO option comes in handy, if we want something that's almost as convenient as password login. For those of us who have a default+alt setup, we may not be logged in to that Google account, but still want cross-profile access to a service and a shared history, without the added (somewhat cumbersome) step of checking an email address for a code every time we want to log in. I only suggested login with Microsoft because that's what has worked for me with other services, although maybe a more dedicated SSO could be even more helpful.

Sometimes I wish Google themselves would recognize those of us who have a default+alt setup and create a dedicated type of Google account made *just* for SSO, without even the *ability* to potentially/accidentally sync or tie photos, files, etc. to that account type.
 
Am I the only one that dislikes "memory" features? I don't need a chatbot remembering stuff from other conversations.
They'd be pretty useless without it; imagine having a conversation with a real person who forgets everything you said, and who you were, every time you spoke to them. Not only would they be useless, it would be so annoying.
 
Sometimes it’s actually useful, like if you have a hobby or ongoing project you ask about a lot, having that context saves time and makes the answers better.
For random one-off questions, it’s not all that helpful.

Why not just go back and resume that previous conversation with all that context?
 
It also cannot read the existing thread that you're in.

Not sure how ChatGPT does it, but when using Claude your entire conversation is run through the LLM each time you send a message. Meaning each response is based on the entire thread, not just the latest message.

I suppose you could exceed the bot's context window, but who talks to these things for so long without changing chats / topics?
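A minimal sketch of that pattern: the API itself is stateless, so the client resends the whole message history on every turn, which is how the model "sees" the entire thread. The message format below mirrors the role/content shape commonly used by chat APIs; `echo_model` is a hypothetical stand-in for the actual LLM call.

```python
def echo_model(messages):
    # Placeholder "model": just reports how many messages it received,
    # to show that every prior turn is included in each request.
    return f"(reply based on {len(messages)} messages of context)"

history = []  # the whole conversation lives on the client side

def send(user_text):
    history.append({"role": "user", "content": user_text})
    reply = echo_model(history)   # full thread goes to the model each time
    history.append({"role": "assistant", "content": reply})
    return reply

send("Hi")
send("What did I just say?")  # the model sees both turns, not only the latest
```

Exceeding the context window simply means `history` grows past what the model can accept in one request, at which point older turns have to be dropped or summarized.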
 
"Only $200M" now, but in the long run they won't get any new government contracts, nor deals with private-sector companies that have ties to the government. And some enterprises will follow...
I'm not saying what they did was wrong, but the business implications are huge

They're going to sue and almost certainly win, so they probably aren't too concerned about that.
 
While this is good, a better way to gain more users might be to increase the usage limit for free users and to offer a way to try out Claude without an account or logging in. For simple, quick use, many people use ChatGPT without logging in, something Claude does not currently offer.
 
Claude is down. Maybe it grew too fast... or maybe Palantir is having fun with it?
Yep. I decided to give it a try a few minutes ago but received a "rate limited" error on the FIRST prompt and a suggestion to upgrade… not the best introduction…
 
Sorry folks, but let me welcome this with a LOL!

I've been using Claude Code since May 2025, and VERY OFTEN it fails to even read and obey its own CLAUDE.md, and now they want to import memories from other tools?! WHY?! What's the point of having a hand-curated instruction file if it starts ignoring it?!

Do you want examples? Cool.

1) My projects' CLAUDE.md files have clear instructions about how to run tests, and all the examples clearly say "nox ....". Claude randomly runs tests a different way, or uses pytest directly, etc... Every time I point it out, it apologises as usual

2) There is a clear instruction that says "NEVER DO A git commit OR A git push WITHOUT MY EXPLICIT CONSENT", and it randomly commits or pushes without my permission!

I could go on for hours...
 
I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
As you phrased it, no. Anthropic has no authority to dictate how the government uses AI. They absolutely should have authority and agency to decide how their product is used though. Especially when there are multiple competing products. If a corporation is going to be afforded the same First Amendment rights as an individual in this country for political donations and support, how can you argue they don’t have agency over the use of their own products?

Finally, should a democratically elected body be able to compel a company to comply? Yes, but absent extenuating emergency circumstances, it should only be possible through the passage of legislation.
 
Something I forgot from last year: Two OpenAI executives were given Army commissions with the rank of Lt. Colonel last summer. How compelled are they to follow orders? Did the DoD use it as leverage to get OpenAI to cooperate?
 
Am I the only one that dislikes "memory" features? I don't need a chatbot remembering stuff from other conversations.

It highly depends on what you use AI for.

If you just want to use them for one off conversations, then memory adds little value.

But if you're working on a bigger project that spans tens of chats, it is incredibly helpful that AI has knowledge of the project so you don't have to explain the same things to it every time.
 
Enjoy it while it lasts. I hope I’m wrong but I don’t think that’s a spine, I think that’s opportunism.

It’s what the very last shred of functional capitalism looks like, where they still feel like they need to look like they care what users want.

How on earth do you see shunning the biggest military, with the biggest budget of any organization in existence, as opportunism?

I'd argue you have to be pretty jaded that even a principled stance with great cost looks like opportunism to you.

Then again, we all project our inner beliefs onto the world, so the world looks different to different folks.
 
I get the inclination to applaud them when we’re talking about something like mass surveillance, but do you really think a private company should have veto authority over how a democratically elected government and its appointed officials use a specific type of technology?
Unfortunately your democracy is broken, the guard rails your forefathers built have come off and now you have policies being enacted by decree of one person.

Bullying of private companies who refuse to change contracts, bullying of universities who refuse to comply, bullying other countries, threatening to start wars in European countries, starting an illegal war in the Middle East. All without any congressional oversight.

Should this methodology be used to run a country? Hell no!!!!

And the idea that Americans are comfortable with all this is frightening.
 