

macrumors bot
Original poster


Anthropic today updated its Sonnet model to version 4.6, and the company says it is the most capable Sonnet model to date with upgrades across coding, computer use, long-context reasoning, agent planning, knowledge work, and design.


Claude Sonnet 4.6 is the default for users on the Free and Pro plans, and it has an updated 1M-token context window.

Sonnet 4.6 improves consistency and instruction following in coding, performs better at computer use tasks, and can complete office tasks that previously required an Opus model. Anthropic says Sonnet 4.6 has human-level capability for tasks like navigating a complex spreadsheet or filling out a multi-step web form.

According to Anthropic, Sonnet 4.6 has "a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment." It offers Opus-level intelligence at a more affordable price point, making it practical for a wider range of tasks. Opus 4.6 is still the better option for agentic coding, agentic computer use, and multidisciplinary reasoning, but Sonnet 4.6 offers measurable improvements over Sonnet 4.5.

Claude Sonnet 4.6 is available as of today for all Claude plans, and Anthropic has also extended file creation, connectors, skills, and compaction to free users.

Article Link: Claude Sonnet 4.6 Brings Improved Coding, Computer Use, and Office Tasks
 
  • Haha
Reactions: Z-4195
Claude remains the best AI for coding; it's the only one I have tried that, when given something complex, hasn't taken multiple attempts to figure out the answer. I don't think any AI is really ready to generate a complete program without massive debugging, but Claude seems to work if you give it little bits to chew on and manage the output to fit it back into your codebase.

I know people are big on "vibe coding"...we just had someone do some of that at work and it made such a mishmash of operational code that it managed to crash an entire computer. It did pull off what they intended, but hacking something to demonstrate a function is far different than making a robust operations system, and AI just is not there yet from what I've seen.
 
Last edited by a moderator:
Well, Apple’s partnership with Google for Gemini seems to give them a stake in this game, but if other models turn out to be significantly better at some tasks that is interesting.

Anthropic seems to be making steady progress…
 
  • Like
Reactions: KurtWilde
Apple should have bought Anthropic. From my experience, Claude is the best, at least at coding and complex reasoning.
There's no ROI with Anthropic; any LLM provider is a bad / risky investment now.

I look at it like Apple Maps in a way: Apple will work on the front end and collect all the data they need, and then eventually deploy their own purpose-built solution - hopefully with fewer bumps in the road.
 
Because people are waking up and calling out the AI Slop Machines for what they are. The sentiment has extended to non-tech enthusiasts. Tech Bros, Finance Bros, and media puppets are the only ones still pushing it. And many Finance Bros appear to be souring, considering the tech stocks lately. Anthropic and OpenAI are panicking.

Look at the public backlash over Windows 11 and Copilot. Still, Microsoft's "AI chief executive" Mustafa Suleyman, only a week ago, had the gall to claim "AI" will replace all white-collar jobs within 18 months. Seriously? Does anyone really believe that?

 
Last edited by a moderator:
Because people are waking up and calling out the AI Slop Machines for what they are. The sentiment has extended to non-tech enthusiasts. Tech Bros, Finance Bros, and media puppets are the only ones still pushing it. And many Finance Bros appear to be souring, considering the tech stocks lately. Anthropic and OpenAI are panicking.

Look at the public backlash over Windows 11 and Copilot. Still, Microsoft's "AI chief executive" Mustafa Suleyman, only a week ago, had the gall to claim "AI" will replace all white-collar jobs within 18 months. Seriously? Does anyone really believe that?

That's interesting, didn't Microsoft just announce a walk back of some of the integrations in Windows 11 as well?

I agree with you 100% - there's just no ROI for these LLM products. Anthropic has really tightened down the reasoning usage but there's no way they earn enough revenue to continue to burn on new models / inference at scale as long as competitors exist.
 
  • Like
Reactions: SqlInjection
That's interesting, didn't Microsoft just announce a walk back of some of the integrations in Windows 11 as well?

I agree with you 100% - there's just no ROI for these LLM products. Anthropic has really tightened down the reasoning usage but there's no way they earn enough revenue to continue to burn on new models / inference at scale as long as competitors exist.
Look up Project StarGate; BlackRock has a large equity division building AI infrastructure. Now they are buying New Mexico's largest power company, with secret deals with the regulatory agency. Deals like that are happening in many states.
 
  • Like
Reactions: Agit21
Because people are waking up and calling out the AI Slop Machines for what they are. The sentiment has extended to non-tech enthusiasts. Tech Bros, Finance Bros, and media puppets are the only ones still pushing it. And many Finance Bros appear to be souring, considering the tech stocks lately. Anthropic and OpenAI are panicking.

Look at the public backlash over Windows 11 and Copilot. Still, Microsoft's "AI chief executive" Mustafa Suleyman, only a week ago, had the gall to claim "AI" will replace all white-collar jobs within 18 months. Seriously? Does anyone really believe that?

Outside of the tech bubble, AI has a huge public relations problem. Sure, people use ChatGPT as a search engine and for writing help, but selling these systems as “job killers” to amp up valuations in anticipation of IPOs has poisoned the well quite profoundly. Add in data center construction becoming a huge flashpoint for local municipalities and communities, and there’s a huge disconnect between what you see in the markets and on Twitter versus how everyday people are interpreting this shift.
 
Outside of the tech bubble, AI has a huge public relations problem. Sure, people use ChatGPT as a search engine and for writing help, but selling these systems as “job killers” to amp up valuations in anticipation of IPOs has poisoned the well quite profoundly. Add in data center construction becoming a huge flashpoint for local municipalities and communities, and there’s a huge disconnect between what you see in the markets and on Twitter versus how everyday people are interpreting this shift.
Apple has completely dropped the ball on AI, and you can make up all the excuses you want for why that *might* be a good thing. But the fact of the matter is that while generative and agentic AI might have its issues, it is *the* central technology for making computing powerful yet simple and user-friendly -- all central to Apple's core philosophy. Something Apple has belatedly realized with its embarrassing capitulation to Google and the introduction of Gemini as the intelligence behind the most critical of "Apple" "Intelligence" features - Siri. Any other CEO would have been fired for such blatant incompetence.
 
Everyday people are using this technology; the user numbers don't lie. There are tons of problems but it's myopic to think the common sentiment is what you read online, which I would actually still classify as an echo chamber.

There are about a billion people using Generative AI; this isn't going anywhere, for better or worse, no matter how many clickbait articles or blog posts try to make us think otherwise.

I'm more sick of the anti-AI crowd than the AI-hype crowd at this point. There are barely any original thoughts anymore among the commentariat, and it's depressing that I can't learn about research because it's drowned out by hot takes that are effectively meaningless as people scream into a void to try to gain a toehold on relevance.

Anthropic is doing fine. Better than fine if you look at their income streams. I'd be concerned about OpenAI much more, especially since advertisements are a real issue unless they're handled with a truly transparent code of ethics. We know from decades of Google lying to everyone how that goes. I'd put a reasonable chance on their recent purchase of OpenClaw being a turning point for that company in a very negative way.

World models aren't even really public yet; we haven't seen anything. There will be another paradigm shift, as well as some consolidation in the market, but this technology is just too useful to too many people to go away. Haters, however justified, be damned.

RealID verification is coming whether we like it or not; it's the only way to protect Open Source, and we can't sacrifice the entire software industry, so we'll soon start seeing even privacy-forward people support it to some degree. Unfortunately.

As far as this thread is concerned, Opus 4.6 is excellent and I expect Sonnet 4.6 will be too although I don't use Sonnet much anymore.

Anthropic really needs to fix their voice interface situation, though. They removed the ability to use it with the high-end models, and that is an enormous error, particularly when OpenAI has a new version coming very soon. Right now Claude routes all voice users to Haiku, a change made about 10 days ago. I wish someone would report on that because it's a big deal for certain workflows.
 
Last edited:
Because people are waking up and calling out the AI Slop Machines for what they are. The sentiment has extended to non-tech enthusiasts. Tech Bros, Finance Bros, and media puppets are the only ones still pushing it. And many Finance Bros appear to be souring, considering the tech stocks lately. Anthropic and OpenAI are panicking.

Look at the public backlash over Windows 11 and Copilot. Still, Microsoft's "AI chief executive" Mustafa Suleyman, only a week ago, had the gall to claim "AI" will replace all white-collar jobs within 18 months. Seriously? Does anyone really believe that?

Microsoft also acts all proud that 30% of Windows is written by AI. Yeah, we can tell, Microsoft. Not something you should be proud of with the constant issues lately.
 
  • Haha
Reactions: amartinez1660
Glad to see it. Opus 4.6 has actually been really great, at least for my little hobby projects, so hopefully this will follow suit.
 
We know from decades of Google lying to everyone how that goes. I'd put a reasonable chance on their recent purchase of OpenClaw being a turning point for that company in a very negative way.

I really appreciated your post. Could you elaborate a bit more on the point above? I’m curious to hear how you see things taking a very bad turn.
 
  • Like
Reactions: novagamer
Will try it out soon. Nice that it is available for all users. It would be great if they increased the limits of the free plan.
 
  • Like
Reactions: mganu
Apple should have bought Anthropic. From my experience, Claude is the best, at least at coding and complex reasoning.
Considering Apple’s recent mishaps and the fact that Anthropic is doing exceptionally well, I wouldn’t be too surprised if it were Anthropic buying Apple instead in a few years.
 
Look at the public backlash over Windows 11 and Copilot. Still, Microsoft's "AI chief executive" Mustafa Suleyman, only a week ago, had the gall to claim "AI" will replace all white-collar jobs within 18 months. Seriously? Does anyone really believe that?
And even if it did, that would crash the economy. I don't understand the end game! To make it so that roughly two-thirds of adults lose their jobs in the US? I've been hammering this point online for years now: If AI is successful, the economy crashes. If AI is not successful, the economy crashes. Either way the economy is gonna crash.

I was listening to a story on the Ride Home podcast (with Brian McCullough) in a recent episode, and they were talking about research finding that as companies scale up AI use among software developers, the developers are burning out. Why? Because they're doing 10x the work and the cognitive load is too much to bear. Even with AI doing a lot of the heavy lifting, they are having to make so many huge, mission-critical decisions multiple times per day, reviewing so much code and praying that the AI didn't screw it all up along the way if they can't keep up with it. It's insane. These companies could still 5x their work and give these employees a 4-hour work day, but no. That's not how capitalism works. And for that reason, see above, the economy is gonna crash. In a utopia AI works. It doesn't work in capitalism. These are incompatible systems. Either one is gonna have to go, or the other. They cannot co-exist.
 
  • Love
Reactions: amartinez1660
Apple has completely dropped the ball on AI, and you can make up all the excuses you want for why that *might* be a good thing. But the fact of the matter is that while generative and agentic AI might have its issues, it is *the* central technology for making computing powerful yet simple and user-friendly -- all central to Apple's core philosophy. Something Apple has belatedly realized with its embarrassing capitulation to Google and the introduction of Gemini as the intelligence behind the most critical of "Apple" "Intelligence" features - Siri. Any other CEO would have been fired for such blatant incompetence.
How does anything I wrote make any excuses for Apple and its AI problems? Apple is not even mentioned once in my comment about the public perception problem AI has solely because of AI-adjacent CEOs, executives and online boosters.
 
And even if it did, that would crash the economy. I don't understand the end game! To make it so that roughly two-thirds of adults lose their jobs in the US? I've been hammering this point online for years now: If AI is successful, the economy crashes. If AI is not successful, the economy crashes. Either way the economy is gonna crash.

I was listening to a story on the Ride Home podcast (with Brian McCullough) in a recent episode, and they were talking about research finding that as companies scale up AI use among software developers, the developers are burning out. Why? Because they're doing 10x the work and the cognitive load is too much to bear. Even with AI doing a lot of the heavy lifting, they are having to make so many huge, mission-critical decisions multiple times per day, reviewing so much code and praying that the AI didn't screw it all up along the way if they can't keep up with it. It's insane. These companies could still 5x their work and give these employees a 4-hour work day, but no. That's not how capitalism works. And for that reason, see above, the economy is gonna crash. In a utopia AI works. It doesn't work in capitalism. These are incompatible systems. Either one is gonna have to go, or the other. They cannot co-exist.

Neither extreme scenario is going to happen.

The models are now no longer improving at the rate they were from 2022-2024, when they were ingesting massive amounts of data. Now the improvements are in the low-single-digit percentage range twice a year, and sometimes there are regressions in their own benchmarks.

According to their own benchmarks their best models today are around 80% competent at coding compared to an experienced and skilled programmer, not including the times when the model just outputs garbage or lies for unknown reasons.

Simulating every kind of job on the planet with machine learning alone is a fantasy. It's not even science fiction, because science fiction authors were smart enough to understand that computers and robots tend to fall over themselves, break down, and have bugs forever and ever.

We can't even get a bug-free file manager in 2026, so we can stop suffering from a delusional vision of a future where an AI model can do anything. We are talking about software and robots, not living lifeforms.

ChatGPT and Claude might be "smarter" today than last year, but the user experience is still bad, with fake links and wrong answers given often. Even when you ask the model to stop giving fake links, it says sorry and gives you more useless links.

What will happen is the top of the mountain will be so steep that the machines find it harder and harder to improve. Then there will be stasis. AI will trundle along forever without reaching the level of the extreme predictions some made. Workers in all fields will work alongside these imperfect models and will always be correcting and double-checking the output. In many jobs and sectors, AI is just not even a thing. They don't care about it.
 