I switched to Gemini Pro, mainly because I can get it from Verizon for $10 a month minus a 12% military discount. I'm cheap, but willing to pay $8.80 a month.
 


OpenAI today added a new subscription tier, which the company says is meant to support increasing Codex use. Codex is OpenAI's AI coding agent that's integrated into ChatGPT, and it competes with Anthropic's Claude Code.


The new $100/month Pro tier provides 5x more Codex usage than the $20/month ChatGPT Plus plan. OpenAI says that it is best for longer, high-effort Codex sessions. ChatGPT also has a $200 Pro tier with a 20x higher usage allowance, and the $100/month plan is a new middle-tier option. Both the $100 and $200 plans share the "Pro" name.

Pro subscribers will have access to all Pro features, including the Pro model and unlimited access to Instant and Thinking models.

To celebrate the launch of the new plan, OpenAI is increasing Codex usage for a limited time. Through May 31, customers who subscribe to the $100/month Pro plan will get up to 10x the Codex usage of the ChatGPT Plus plan.

In addition to introducing the new plan, OpenAI is "rebalancing" Codex usage in Plus to support more sessions throughout the week, instead of longer sessions in a single day. OpenAI says the ChatGPT Plus plan is the best offer for steady, day-to-day usage of Codex, while the more expensive $100/month plan is a "more accessible" upgrade path for heavier daily use.

With the $100 plan, OpenAI has pricing tiers similar to Anthropic. Anthropic has a $20/month Pro plan, a Max 5x plan for $100/month, and a Max 20x plan for $200/month.

Article Link: OpenAI Adds New $100/Month ChatGPT Subscription Tier for Heavier Codex Use

OpenAI needs to reckon with the fact that Anthropic isn't their real competition.

The smart ones are the Chinese.

Selling $10 plans, like GLM and Kimi. While their AI is not as good as Anthropic's, it's about as good as Anthropic's was six months ago. They're cheap and can be used for simpler tasks, because Anthropic is expensive.
 
I wish the macOS Claude desktop app was as good as ChatGPT's. I don't like how the app constantly appears in the Dock. ChatGPT's app only appears in the menu bar (unless you expand the mini chat window).

Overall, Claude is a better AI in my opinion. Just needs some fine tuning with its desktop app.

Try the latest Claude macOS app... it has Claude Code and Cowork now.

But I agree with the Dock icon. Also, the Claude app takes too much RAM!
 
I like Claude. The Pro plan does get used up quickly, as others have noted.

The biggest problem I have with Claude is that they keep banning my account. I’m not doing anything other than using it as a Swift coding assistant (Claude Code) plus a few simple chats. I’m 100% confident I’ve not broken their T&Cs.

Really, they are way out of line with it. There’s no customer support; they use an automated system for banning people and don’t provide any way to get support. It’s kind of terrible, tbh.
 
I see the excitement when people talk about alternatives, and all I can think is: am I the only one who doesn't "get it"? The only time I use AI is when Google shoves AI answers at me when I search for things.
Why do you want this in your life?
 
I see the excitement when talking about alternatives and all I can think of is, am I the only one who doesn't "get it"? The only time I use AI is when google shoves AI answers to me when I search for things.
Why do you want this in your life?
Exactly the same; as a designer, I only ever use duck.ai in Firefox to give me quick answers to things and save me from wading through websites.

I guess most here on MR are coders/developers that need a premium subscription. I fail to see how it’s of any benefit to the average person to pay for it.
 
Mistral Le Chat Pro is $14.99/month and doesn’t support Sam Altman. Dumped ChatGPT a couple months ago for several reasons, including quality and policies.
Mistral AI doesn't really seem to care about le Chat. I'm really disappointed by the lack of updates and the lack of transparency for consumers.
 
Exactly the same, as a designer I only ever use duck.ai in Firefox to give me quick answers to things, save me from wading through websites.

I guess most here on MR are coders/developers that need a premium subscription. I fail to see how it’s of any benefit to the average person to pay for it.
Everyone has a different attitude to work. Some just want to 'win' a productivity competition against peers, some want to enjoy the craft. Unfortunately, those who want to enjoy the craft are going to get priced out of the market and find themselves trying to sell hand-coded C++ on Etsy, while those who want to win are going to be competing against ever more cybernetic competitors, until the market decides the human part of the arrangement isn't meaningfully contributing anymore.
 
Pardon my noobiness, but why do people bother with vibe/AI coding anyway, much less pay for it, if they can’t even rely on the quality of the code?

Maybe people don’t want to bother to learn coding themselves, so they think they’ll get something for nothing (or next to nothing) by having an AI do it?
 
I see the excitement when talking about alternatives and all I can think of is, am I the only one who doesn't "get it"? The only time I use AI is when google shoves AI answers to me when I search for things.
Why do you want this in your life?
That’s why I no longer use Google for web searches and pay for a monthly subscription to Kagi instead. No AI (unless you really want to enable it), no ads. I don’t miss Google’s shove-it-down-your-throat approach to AI “assistants” one bit.
 
Considering the recent limits on Claude, and Anthropic playing the "let's gaslight the users" game, I might consider switching to Codex.

Opus is good, but it's getting unusable even on the 200 EUR plan, and I'm not even coding.
 
Pardon my noobiness, but why do people bother with vibe/AI coding anyway, much less pay for it, if they can’t even rely on the quality of the code?

Maybe people don’t want to bother to learn coding themselves, so they think they’ll get something for nothing (or next to nothing) by having an AI do it?
You definitely can rely on AI code - but you have to be proficient in the language you're using, to be able to review and spot issues.

An AI coding agent is like a freshly graduated, top-of-the-class intern: not much experience, but eager, and it remembers its CS courses really well. Well, like a whole bunch of them, actually.

It's 10x'ing for a developer, 0.1x'ing for 'vibe coder'.
 
Already switched to Claude and not looking back
I switched to Claude more than a year ago, but I notice that even the 20x plan is not enough for today's agentic challenges. I started maxing out my Codex Pro every week because my Claude Max 20x usage is used up by day 5 or 6 of the week.
 
You definitely can rely on AI code - but you have to be proficient in the language you're using, to be able to review and spot issues.

An AI coding agent is like a freshly graduated, top-of-the-class intern: not much experience, but eager, and it remembers its CS courses really well. Well, like a whole bunch of them, actually.

It's 10x'ing for a developer, 0.1x'ing for 'vibe coder'.
Hmm... so if you still need to know about the language and be able to spot errors, wouldn't it be faster and more efficient to just do it yourself, if you have the skill? What if the AI takes a wrong turn and writes less efficient code than it should? I guess we just accept the bloat?
 
Hmm... so if you still need to know about the language and be able to spot errors, wouldn't it be faster and more efficient to just do it yourself, if you have the skill? What if the AI takes a wrong turn and writes less efficient code than it should? I guess we just accept the bloat?
This would explain all the poop Windows updates lately by MS.
 
Hmm... so if you still need to know about the language and be able to spot errors, wouldn't it be faster and more efficient to just do it yourself, if you have the skill? What if the AI takes a wrong turn and writes less efficient code than it should? I guess we just accept the bloat?
My guess is the latter. AI craps out a bunch of mediocre, unoptimised code, which ends up being copied and pasted verbatim because it technically works on paper, and developers can't be bothered to take the time to improve it, because that would defeat the whole point of having an LLM generate it for you.
 
My guess is the latter. AI craps out a bunch of mediocre, unoptimised code, which ends up being copied and pasted verbatim because it technically works on paper, and developers can't be bothered to take the time to improve it, because that would defeat the whole point of having an LLM generate it for you.
Seems about par for the course for a civilization that tends to prioritize speed and cost over quality.

As the saying goes, out of "fast", "good", and "cheap", you can have two, but almost never all three...

Even the $100/month cost of ChatGPT's new Pro Codex plan (and of similar plans from the competing AI services) is probably a bargain for those who want to crank out code without paying someone to write it, or without caring too much about how optimized or efficient it is.
 
Hmm... so if you still need to know about the language and be able to spot errors, wouldn't it be faster and more efficient to just do it yourself, if you have the skill? What if the AI takes a wrong turn and writes less efficient code than it should? I guess we just accept the bloat?
No, it would not.

A lot of coding is scaffolding and boilerplate, and AI is amazing at those.

You don't one-shot the code from a vague idea (that way you WILL get crappy code and bloat).

First you run through an analysis/planning/DRY-KISS/review/critique pipeline just to create the spec. Then you tell it to design and implement a test suite for the thing you're building.

That way you have a detailed plan to feed the agent; you implement it, then run de-slop/critique/review workflows.
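
For the curious, that loop can be sketched as a plain shell script. To be clear about assumptions: `claude -p` is Claude Code's real non-interactive (print) mode, but every prompt and file name here is invented for illustration, and the script only echoes each command (a dry run) rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch of a spec-first agent pipeline. 'claude -p' is Claude Code's
# non-interactive (print) mode; the prompts and file names are invented.
set -eu

run() { printf 'would run: %s\n' "$*"; }   # swap the echo for "$@" to execute

# 1. Analysis/planning/critique pass: produce a reviewed spec, not code.
run claude -p "Read FEATURE.md; draft spec.md, then critique it for DRY/KISS violations"

# 2. Tests first, designed from the spec.
run claude -p "Design and implement a test suite for spec.md"

# 3. Implement against the plan, then a separate de-slop/review pass.
run claude -p "Implement spec.md until the test suite passes"
run claude -p "Review the diff: remove bloat, duplication, and dead code"
```

The same shape works with any agent CLI; the point is that the agent sees a reviewed spec and a test suite before it writes a line of implementation.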

Still orders of magnitude faster than hand coding, and generally of better quality. It literally does a mid-level engineer's week of work in 30 minutes now.

With modern models and agents it's mind-blowing, tbh (I'm not coding much now; I've been doing it since the '90s and am still amazed by how good AIs are getting).

IMO, the main vibe (hah!) of "AI code bad, hur hur" comes from "vibe coders" who have NFI what they're doing and are disappointed when they get crap in response to "AI, maek me cool app".

P.S. At the place I'm working, we've almost stopped hiring juniors/interns, because AI is way better than they would be and costs 10-20 times less. Now that is something to be scared about, because in several years' time we will have no mid-level engineers, and after that no seniors...
 
The whole thing is kind of a fraud if you think about it: they always claim "x times more usage than y", but you can't really tell how much that actually is, or where you stand in any given month. Plus, you're paying them to train their AI; they should be the ones paying us 😅
 
Pardon my noobiness, but why do people bother with vibe/AI coding anyway, much less pay for it, if they can’t even rely on the quality of the code?

Maybe people don’t want to bother to learn coding themselves, so they think they’ll get something for nothing (or next to nothing) by having an AI do it?

This is the problem.

People who don't know much about something don't have enough experience to know what is bad or wrong, so everything looks OK to good. This leads to a self-reinforcing loop and overconfidence, which becomes a problem when something gets unmanageably complicated or genuinely goes bad.

And that's where I come in and take all the money they were planning to spend on LLM tokens...
 
I tested local AI on my old Lenovo P52 with 30B-class models, and on a MacBook Pro M5 Pro with 64 GB using 70B models. It's "free": pay once, use forever. The quality is similar to ChatGPT two years ago. Why people pay anything for AI is a wonder to me.
Hybrid is the future: a local LLM with access to the world, plus agents scraping current data online (occasionally using online providers), alongside a local LLM with no access to the outside world and agents learning from local private data and maybe doing some of the work.
Paying thousands of dollars to compromise your privacy and security, versus paying once and using it forever with full control: which do you choose?
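
As a sketch of what the pay-once setup looks like in practice, assuming Ollama as the local runner: the model tag below is just an example (use whatever checkpoint fits your RAM), and while Ollama does serve an OpenAI-compatible endpoint on port 11434 by default, verify that against your install:

```shell
# Pay-once local setup, sketched with Ollama. The model tag is an example;
# use whatever checkpoint fits your RAM.
MODEL="${MODEL:-llama3.1:70b}"      # ~70B class for a 64 GB machine
PULL_CMD="ollama pull $MODEL"       # one-time download
RUN_CMD="ollama run $MODEL"         # offline chat from then on

# Hybrid: point any OpenAI-compatible agent at the local server instead of a
# paid API. By default Ollama serves one at this base URL.
BASE_URL="http://localhost:11434/v1"

printf '%s\n' "$PULL_CMD" "$RUN_CMD" "agent base URL: $BASE_URL"
```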
 
No, it would not.

A lot of coding is scaffolding and boilerplate, and AI is amazing at those.

You don't one-shot the code from a vague idea (that way you WILL get crappy code and bloat).

First you run through an analysis/planning/DRY-KISS/review/critique pipeline just to create the spec. Then you tell it to design and implement a test suite for the thing you're building.

That way you have a detailed plan to feed the agent; you implement it, then run de-slop/critique/review workflows.

Still orders of magnitude faster than hand coding, and generally of better quality. It literally does a mid-level engineer's week of work in 30 minutes now.

With modern models and agents it's mind-blowing, tbh (I'm not coding much now; I've been doing it since the '90s and am still amazed by how good AIs are getting).

IMO, the main vibe (hah!) of "AI code bad, hur hur" comes from "vibe coders" who have NFI what they're doing and are disappointed when they get crap in response to "AI, maek me cool app".

This is completely missing the point.

If there's a lot of scaffolding and boilerplate, then the technology is broken. We've built so much complexity into things these days through overbearing layered abstractions that it's insane. I mean, we're not far off early J2EE levels of complexity now; it's just YAML and package frameworks around it rather than XML. AI can solve those problems, but all it does is paper over the fact that they are still there. And at some point, when something goes pear-shaped (which it will, from direct observable experience on a very large software project), it's going to be 10x harder to unpick if you don't have the knowledge to do so, or didn't build it mindfully with humans in the loop.

What we're paid to do as engineers is solve problems. And so far, we can't even solve problems cheaply and reliably with LLMs that you could bang out reliably in an hour with Microsoft Access 2.0 in 1994. At best everything is mediocre, and that's a problem in a mature market.

Also, chat is the worst interface for defining systems. Most engineers have enough trouble communicating requirements and translating them into code even when they are at the helm.

"AI code bad, hur hur" here comes from about 35 years of working on and managing large, well-known, safety-critical software projects in the defence and finance industries. Good luck, everyone 🙂

---

Mini complexity rant. I needed to get some data into a database from an SFTP server. The old way of doing it: a cron job, validate the file first with a Python script, then bcp it into the database engine; job done. Along comes a cloud architect who wraps it in several layers of AWS Lambda, S3, Kinesis and SNS, and now it ****s up at least twice a month and incurs hours of debugging effort. That's what the LLM told him to do when he tried to change how it operated. I had to roll it back to the original version, make the change, and untangle that mess. Now for another 10 years of no issues...
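
For contrast, the "old way" is small enough to sketch whole. The host, paths, and table names below are invented, and the commands are only echoed (a dry run); `bcp` is SQL Server's real bulk-copy tool:

```shell
#!/bin/sh
# Dry-run sketch of the cron-based pipeline described above. Host, paths, and
# table names are invented; bcp is SQL Server's bulk-copy tool.
# crontab entry:  15 2 * * * /usr/local/bin/load_feed.sh
set -eu

run() { printf 'would run: %s\n' "$*"; }   # swap for "$@" to actually execute

run sftp datauser@sftp.example.com:/outgoing/feed.csv /data/incoming/     # fetch the file
run python3 validate_feed.py /data/incoming/feed.csv                      # fail fast on bad rows
run bcp Staging.dbo.DailyFeed in /data/incoming/feed.csv -S dbhost -T -c  # bulk load
```

`set -eu` is what encodes "validate first": if the Python check exits non-zero, the bulk load never runs.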
 