
MacRumors

macrumors bot
Original poster
Apr 12, 2001


Anthropic today announced the launch of Claude Opus 4.5, which it says is the "best model in the world for coding, agents, and computer use." It's improved over prior models for everyday tasks like deep research, and it is a "step forward in what AI systems can do."

anthopic-claude.jpg

According to feedback Anthropic received from early testers, Claude Opus 4.5 can complete tasks that were impossible for Sonnet 4.5, and it can handle ambiguity and reason about tradeoffs without hand-holding. The Opus 4.5 model offers better vision, reasoning, mathematics skills, and coding than prior versions of Claude.

Along with the Opus update, Anthropic is updating its apps, the Claude Developer Platform, and Claude Code. There are tools for longer-running agents, and options to use Claude in Excel, Chrome, and on the desktop.

In the Claude apps, users will no longer run into limits during a long conversation. Claude is able to automatically summarize earlier context, which means the conversation can keep going endlessly. Claude for Chrome is available to all Max users, and Claude for Excel beta access is now available to all Max, Team, and Enterprise users.

Claude Code is now available in the desktop app, and with Opus 4.5, it is able to build more precise plans and execute them more thoroughly. Claude is able to ask clarifying questions upfront and then build a user-editable plan before executing.

Claude Opus 4.5 is available today across Anthropic's apps and its API. Opus-specific caps have been removed for Claude and Claude Code users with access to Opus 4.5, and for Max and Team Premium members, overall usage limits have increased.

Article Link: Anthropic Launches Claude Opus 4.5 With Improved Coding and Agent Capabilities
 
Opus 4.5 is awesome and the usage limits are now finally generous. There is some great detail about what they've done and the videos for this release are really worth watching, especially the one comparing Sonnet 4.5 vs. Opus 4.5 on context usage and problem solving / opening 'locks'.

If you've used Claude Code, you'll be familiar with compacting; it now happens automatically in chats.

I'm not sure whether this is good for very long contexts. I tend to prefer summarizing at the end and carrying that forward manually, and I imagine the context window slides once the compaction happens, so you probably won't be able to get a real summary like you could before. But for people who change topics or keep a long thread open, it will probably be a welcome change. I'd like this to be toggleable per chat, but I doubt that will happen.
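For anyone curious what this kind of compaction might look like mechanically, here's a minimal sketch: when the running token estimate exceeds a budget, older messages get collapsed into a single summary message and the recent tail is kept verbatim. Every name here (`Message`, `compact`, `summarize`) is illustrative only; this is not Anthropic's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

def estimate_tokens(messages):
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return sum(len(m.content) for m in messages) // 4

def summarize(messages):
    # Placeholder: a real system would ask the model to write this summary.
    return "Summary of %d earlier messages." % len(messages)

def compact(messages, budget, keep_tail=4):
    """Collapse older messages into one summary once over the token budget."""
    if estimate_tokens(messages) <= budget or len(messages) <= keep_tail:
        return messages
    head, tail = messages[:-keep_tail], messages[-keep_tail:]
    # Replace the head with a single synthetic summary message.
    return [Message("system", summarize(head))] + tail
```

The tradeoff I mentioned is visible right in the sketch: once `compact` runs, the original head messages are gone, so a later "summarize everything" request only sees the summary plus the tail.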

Max users get significantly more Opus usage than before, as much as Sonnet previously. Opus is the default model for me for the first time ever in the UI, which is great. It even defaulted to extended thinking (!).

I'm in the middle of a 3-hour session now and have used less than 1% of my week; easily 1/3+ less than usual. VERY happy with this from Anthropic and can't wait to use Opus 4.5 with WebStorm in a few weeks to refactor some TypeScript.
 
Hmm... did you know that everything you are using - your phone, computer, home tech, etc. - uses some form of AI? Perhaps you shouldn't even be talking on this board either.
People are afraid of AI taking their jobs so they bury their heads in the sand hoping that this is just a bubble that will collapse soon and they can go back to pre-2022.

The only people who won’t be employed and won’t be able to compete are the ones who don’t use AI to assist them.
 
Will try this out. Looks like the improvements are significant. Glad to see changes to the usage limits.
 
What a contrast (at least for me). Even just from the articles here on MR, with seemingly every update, OpenAI is so focused on making its models blow more smoke up your ass and getting involved in commerce, while Anthropic is over here, by all accounts, making its models better at producing code (one of the only things I use these tools for) and following prompts while raising usage limits. Really looking forward to trying this out over the next week.
 
What a contrast (at least for me). Even just from the articles here on MR, with seemingly every update, OpenAI is so focused on making its models blow more smoke up your ass and getting involved in commerce, while Anthropic is over here, by all accounts, making its models better at producing code (one of the only things I use these tools for) and following prompts while raising usage limits. Really looking forward to trying this out over the next week.
OpenAI has close to 1 billion users, I’m sure most of them are not interested in coding.
 
What a contrast (at least for me). Even just from the articles here on MR, with seemingly every update, OpenAI is so focused on making its models blow more smoke up your ass and getting involved in commerce, while Anthropic is over here, by all accounts, making its models better at producing code (one of the only things I use these tools for) and following prompts while raising usage limits. Really looking forward to trying this out over the next week.
OpenAI's GPT-5 Codex was the best coding model until Gemini 3, and likely now Opus 4.5. Not sure what you're talking about.

Anyways, OpenAI is also gunning for more than just coding. They're gunning for social media, web search, and probably hardware soon. They command a bigger valuation than Anthropic precisely because they have 1 billion users.
 
I’m in the process of comparing models with a simplistic coding challenge. I’ll look at adding this to the list, though the price may put me off.
 
People are afraid of AI taking their jobs so they bury their heads in the sand hoping that this is just a bubble that will collapse soon and they can go back to pre-2022.

The only people who won’t be employed and won’t be able to compete are the ones who don’t use AI to assist them.
I don’t want just to ‘like’ this post, as it’s an emotional and concerning subject - employment and AI - and will affect nearly everyone, as you say.

But I agree.
 
Tested it today (to make a short lesson), and it's VERY promising, IMHO.

Now, one issue: I hit the usage limit on the free tier relatively fast. Does anyone know if the Pro plan is (considerably) more generous? The pricing page just says: More usage*

...and that asterisk isn't going anywhere...

And the link in "Additional usage limits apply. Prices shown don’t include applicable tax." wasn't super clear... nor updated after Opus 4.5 was released.
 
The only people who won’t be employed and won’t be able to compete are the ones who don’t use AI to assist them.
I agree with you that AI is here to stay and will replace jobs. But you overestimate the benefits of being early. AI tools are still in their infancy, but they improve incredibly fast. It is not necessary to become "experienced", as this won't be required at all in just a few iterations. It is as if you were the first to adopt driving a 19th-century car, with profound mechanical skills and the ability to frequently repair the engine, drive, park, etc. And two years later the whole technology is already replaced by self-driving cars that do everything voice-controlled and without getting your hands dirty.

The person who will keep their job is the one with the best business domain knowledge, analyst, and communication skills, who can therefore achieve the most with whatever limitations AI will still have. Not the one guy who tinkered with every model since GPT-2. Everyone will just use similar tools in the same ways. Back to a car analogy: there certainly was once a doctor who was the first to own a car and was therefore considered the best doctor far and away, because he was the only one who could travel long distances quickly. As soon as every other doctor bought a car too, the skills that mattered were doctor skills, not who had owned their car the longest.
 
Now, one issue: I hit the usage limit on the free tier relatively fast. Does anyone know if the Pro plan is (considerably) more generous? The pricing page just says: More usage*
If you want to get things done with ANY AI tool, don't use the free tier. You have to pay to play right now. If you're doing even medium-scale development and using these tools regularly, you have to pay for Max, not Pro. It might not be worth it to you; I've saved easily $40k in developer costs in the last week alone, and it cost me about $1,000 in usage.

For trying it out a bit more, I'd recommend you give Pro a shot and see if it meets your needs. They do pro-rate you if you upgrade.

I agree with you that AI is here to stay and will replace jobs. But you overestimate the benefits of being early. AI tools are still in their infancy, but they improve incredibly fast. It is not necessary to become "experienced", as this won't be required at all in just a few iterations. It is as if you were the first to adopt driving a 19th-century car, with profound mechanical skills and the ability to frequently repair the engine, drive, park, etc. And two years later the whole technology is already replaced by self-driving cars that do everything voice-controlled and without getting your hands dirty.
Self-driving cars are actually a great analogy, because they work under similar technological hopes and dreams, yet you still need to know how to drive a car unless you're living in a specific city where everything is mapped. So I think your analogy is somewhat apt, but your conclusion is a little skewed.

The person who will keep their job is the one with the best business domain knowledge, analyst, and communication skills, who can therefore achieve the most with whatever limitations AI will still have. Not the one guy who tinkered with every model since GPT-2. Everyone will just use similar tools in the same ways.
I don't think this is true; it won't be dumbed down enough. For general use, yes, like office tasks etc. For programming and architecting you will need to understand the domain and the technology. Using the models now can help you get up to speed for the future that's coming; I absolutely do not think it's a waste of time. The main caution I'd have is relying too much on the output and not understanding what is actually occurring behind the scenes or in the technology being implemented. E.g., if an AI writes some module, it behooves you to really dive in and learn what that module does and why it's structured that way, and to press on it (with or without AI's help) to understand it and also how to improve it yourself.

As I said in another thread, "coders" are not "Computer Scientists" and that difference is going to become enormously stark very quickly. Anyone in this field should be sprinting to catch up unless they are close to retirement because the luxury salaries and jobs are going away for the regular coders.

Specifically on your point about "analyst and communication" you are 100% correct. This is critical and it's something programmers suck at. Similarly, most business people also suck at programming. The rare person who bridges those gaps, has good high-level knowledge, and can understand when something doesn't make sense is going to 10x their output and improve their product quality, but it won't be the norm.


A curious person who bridges the gap in a very similar way to "the intersection of liberal arts and technology" that Steve Jobs talked about... we're exactly at the "bicycle for the mind" part of all of this, and it is thrilling. The number of people dumping on AI who do not understand it just further proves that willful ignorance is a major problem, despite there being objective issues with the technology, from ethics to sociological impacts to societal surveillance and control. The thing is, that stuff is happening whether we want it to or not.

If this is someone's career (not you specifically) and they're early or even later mid-stage and ignoring AI, and they don't work in a very narrow domain that is extremely uncommon (RTOS, HPC, compilers, PhD frontier research), their trajectory is going to get absolutely obliterated if they don't learn, and learn fast.


edit: the cowards who just post a laugh emoji and don't engage in discussion are hilarious :)
 
OpenAI has close to 1 billion users, I’m sure most of them are not interested in coding.

I have to say, ChatGPT is really lacking (sucks) compared to Gemini and Claude. Claude creates much nicer graphs and is far more capable when it comes to reasoning than either ChatGPT or Gemini.
 
Hmm... did you know that everything you are using - your phone, computer, home tech, etc. - uses some form of AI? Perhaps you shouldn't even be talking on this board either.
That's not true. Maybe machine learning, but not AI. Machine learning is a subset of AI and has been around for a very long time.
 
For programming and architecting you will need to understand the domain and the technology. Using the models now can help you get up to speed for the future that's coming; I absolutely do not think it's a waste of time. The main caution I'd have is relying too much on the output and not understanding what is actually occurring behind the scenes or in the technology being implemented. E.g., if an AI writes some module, it behooves you to really dive in and learn what that module does and why it's structured that way, and to press on it (with or without AI's help) to understand it and also how to improve it yourself.

As I said in another thread, "coders" are not "Computer Scientists" and that difference is going to become enormously stark very quickly. Anyone in this field should be sprinting to catch up unless they are close to retirement because the luxury salaries and jobs are going away for the regular coders.
As someone in the autumn of his career, I agree with you. I'm not close enough to retirement to coast on home, and I would not want to anyway: the folks who are going to be producing stellar work are the ones who learn to use these tools.

Right now, I'd put "AI coding tools" in the category of "somewhat smarter than RPC marshaling code generators": you use them to write a lot of startup boilerplate, and then you dig in hard. No one writes RPC marshaling code by hand anymore, and they haven't for decades. The real value add is solving a problem in an efficient way, and some models are astonishingly good at it, and some aren't. All of them will hallucinate from time to time, and if you do not know what you are doing and blindly trust them, you're going to have a bad time.

If I can have a conversation in natural language with an agent and it can identify problems and fix things for me in response to (sometimes very detailed) prompts, I'm going to be a lot more productive than I am if I stick my head in the sand and hope I don't get left behind. I still have to be aware of data structures and algorithms and their costs and tell the agent to use what I want it to use, so that it does not make a dumb, non-performant choice. I also still have to be aware of the underlying problem I'm trying to solve, and make sure that when I prompt the agent, the prompt is specific enough to get me a good chunk of the way there.

"Agent, solve world hunger" is not necessarily going to do what you want. Having a conversation with an agent to learn about the issues involved in the economics, scalability, and distribution of food is going to be more productive (though you have to check its work--they hallucinate). Then you can go in to specifics of solving different parts of the problem.
 