$20 a month has been an absolute steal for how much time I've saved solving some major data analytics and coding issues. The money saved from that time alone can't even be described. I typically work alone, and it's like having a "coworker" with me who knows a whole lot more than I do. It's meant to be a tool, not a crutch, but it's been beyond helpful for me. I wish I could afford the $200 Pro tier, but that's a bit beyond my current needs. Even $200 is a drop in the bucket for someone getting a good income in return from using ChatGPT to solve their issues, compared with what most other resources would cost them. I do prefer Claude in some ways, but its file limit is absolutely atrocious. You can barely upload anything without it telling you you're over the limit, load something smaller, etc.
In five years I can see this aging well. For $20 a month they could just hire an intern and replace you, lol 🤣
Kidding, but also a little serious. It's a great tool and I use it all the time, but when a company needs to cut costs while increasing their bottom line 🤷‍♂️
 
Good to know about the changes. Will definitely check this out once it becomes available on the free tier.
 
My company has a blanket ban on using AI, too, and all the AI domains are blocked.

It is fairly comical to see people rant and froth at the mouth about Google and privacy, then willingly and happily give OpenAI all their data.
lol, if you've been using Google's products (Chrome, Maps, etc.) for years… they literally know everything you search for, and when, and where from, and how often your searches translate to purchases, and exactly how much you spent, etc.
Years and years and years' worth of carefully crafted data points from their suite of sites.
What is MORE than "fairly comical" is you thinking that you could possibly cede that same amount of data to an AI app in moments.
Bro…. do you think it can scan your web history, and that you have a decade of cookies and website data currently on your machine??? 😂
lol, the data Google has on you has been gleaned over YEARS. That data hasn't been on your phone or computer since before Obama's first term. Thinking ChatGPT can access the data Google has compiled on you over the years and years and years just by you clicking a button is silly and absurd.
 
It is crazy that they're asking for $200/month to access the latest features. Even $20/month for Plus is ridiculous. I mean, I get that it doesn't have ads to support it, but at least they charge for API access from services that use it.
Considering Apple charges $10/month for Fitness+ (basically a bunch of videos), I'd say $20/month for one of the most advanced and computationally intensive services is a bargain, especially considering they have a very generous free tier.
 
lol, if you've been using Google's products (Chrome, Maps, etc.) for years… they literally know everything you search for, and when, and where from, and how often your searches translate to purchases, and exactly how much you spent, etc.
Years and years and years' worth of carefully crafted data points from their suite of sites.
What is MORE than "fairly comical" is you thinking that you could possibly cede that same amount of data to an AI app in moments.
Bro…. do you think it can scan your web history, and that you have a decade of cookies and website data currently on your machine??? 😂
lol, the data Google has on you has been gleaned over YEARS. That data hasn't been on your phone or computer since before Obama's first term. Thinking ChatGPT can access the data Google has compiled on you over the years and years and years just by you clicking a button is silly and absurd.

I don't use Chrome, nor Google search, nor Google Maps, nor YouTube, and I've had their tracking ads and cookies firewalled for years, bro.

Maybe you keep a decade of web history and cookies on your machine; mine gets scrubbed every 30 days.

..and I don't use LLMs.
 
The reason Anthropic is gonna win in the market is that OpenAI does not know how to build products. How is a normie supposed to understand what to do with each model?
 
Except when you search through ChatGPT, you never know if what it finds is real or if it hallucinates. You would have to re-check everything anyway to make sure that it didn’t hallucinate.
That's been the case with Google for a while now. You just don't know if you're in an algorithmic or sponsored-post hellhole.
 
If you've linked your OpenAI Pro account to Siri, it seems she will.
Still, I find the interface for this on the Mac to be so stupid...
You just get one super small window, with no way to read previous entries and no easy way to add images or other files.

With the current implementation, it is only useful for quick and very simple questions.
 
It is crazy that they're asking for $200/month to access the latest features. Even $20/month for Plus is ridiculous. I mean, I get that it doesn't have ads to support it, but at least they charge for API access from services that use it.
You're not the target audience. For people who have uses for the Pro models, that $200 is pennies. Yes, Deep Research is that good.
 
Does it have better "business sense"? I deploy AI to companies and the use case isn't chat, it is doing business analysis. For example, consider a case where you have a flood of support tickets. The LLM can look through all of them in seconds and identify the common issue. If there are multiple issues, it can prioritize them automatically, and suggest ways to fix the root cause.
This "find groups and prioritize" pattern applies to pretty much every business. My company implements this in a variety of industries and we see massive gains. Totally worth the API cost, by orders of magnitude.
If 4.5 does this better, then it's worth the increased cost. But the current models from all the providers are really quite good, so I'm pretty skeptical.
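As a rough illustration of that "find groups and prioritize" pattern (a minimal sketch, not the poster's actual system), here is how a single pass could look with the OpenAI Python SDK; the ticket text, prompt wording, and model choice are all assumptions made up for the example:

```python
# Minimal sketch: ask an LLM to cluster a batch of support tickets by root
# cause, rank the clusters by size, and suggest a fix for each.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Hypothetical ticket text, made up for illustration.
tickets = [
    "App crashes on launch after the 2.3.1 update",
    "Crash when opening the app since yesterday's update",
    "Password reset email never arrives",
    "Billing page shows the wrong currency",
]

prompt = (
    "Group these support tickets by likely root cause, rank the groups by "
    "how many tickets they contain, and suggest one fix per group:\n\n"
    + "\n".join(f"- {t}" for t in tickets)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works; pick per cost/quality needs
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In a real deployment you would batch or summarize tickets to stay within the model's context window and feed the grouped output into an existing triage workflow, but the core pattern is just one structured prompt over the whole batch.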
 
The thing with ChatGPT is that you don't even know, because it depends on where you use it. If I go to the website it shows me a mini version that I guess is worse than the non-mini version.

That's not dependent on where you use it; it depends on whether you're a free or paid user and, if the former, on whether you've exceeded your daily limits for 4o and o3-mini.
 
That's not dependent on where you use it; it depends on whether you're a free or paid user and, if the former, on whether you've exceeded your daily limits for 4o and o3-mini.

That’s not true. If I go to the website it used to say mini, which it didn’t on the iPhone.
 
That’s not true. If I go to the website it used to say mini, which it didn’t on the iPhone.

Were you logged in?

The default behaviour for free users who are logged in is that you get limited usage of 4o, followed by unlimited 4o Mini usage. This applies to both the web and the apps.

If you’re a free user (logged in) you can’t see or access the model picker until you’ve received a reply at which point you can access it and see which model produced the reply.
 
Were you logged in?

The default behaviour for free users who are logged in is that you get limited usage of 4o, followed by unlimited 4o Mini usage. This applies to both the web and the apps.

If you’re a free user you can’t see or access the model picker until you’ve received a reply at which point you can access it and see which model produced the reply.
No, I don’t have an account. So like I said, when I went to the website it showed mini but on the iPhone it didn’t.
 
No, I don’t have an account. So like I said, when I went to the website it showed mini but on the iPhone it didn’t.

I vaguely recall reading that non-account free users only get access to 4o Mini. I think. This would be because it is the cheapest model for OpenAI to serve to low-value users.

But free (logged in) users will get bumped down to Mini across any access point anyway.

I'm not sure how you're determining which model is being used because, as far as I can see, this information is not displayed or accessible when using the website while not logged in. Where did you see the model displayed?

To me, the speed of the response indicates it is 4o Mini.

I don’t think there’s any reason that OpenAI would choose to have app or website free users (logged-in or not) be served different models. It’s unified across the access points.

If you want to be sure which model you're using, just set up and log in with an account.
 