
macrumors bot
Original poster
OpenAI CEO Sam Altman has announced plans to streamline the company's AI offerings and provided details on the upcoming releases of its GPT-4.5 and GPT-5 large language models.

[Image: open-ai-new-typeface.jpg]

GPT-4.5 will be OpenAI's final non-chain-of-thought model, said Altman in a post on X (Twitter). "After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks," he added.

To achieve this, GPT-5 will integrate multiple OpenAI technologies, including o3, which will no longer be available as a standalone model. Meanwhile, the free tier of ChatGPT will get unlimited access to GPT-5 at a "standard intelligence setting," and Plus and Pro subscribers will gain access to higher intelligence levels with additional features like voice, canvas, search, and deep research capabilities.
"We want AI to 'just work' for you," said Altman. "We realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence."
Currently, Apple Intelligence's ChatGPT integration, which doesn't require a ChatGPT account, uses the GPT-4o model. When users who aren't signed into an account reach daily limits, the system switches to a basic mode, likely powered by the GPT-4o mini model. Given OpenAI's roadmap, Apple device users who make use of Apple's suite of AI tools should benefit from the new models when they're rolled out.
Altman didn't provide specific release dates for GPT-4.5 and GPT-5, but he suggested in a follow-up post that they should arrive within "weeks / months," respectively.

Article Link: OpenAI Reveals GPT-4.5 and GPT-5 Roadmap, Promises Simplified AI Experience
 
So many terms that I don't even know what it all means. Is this how my dad felt in the '90s?
It’s mostly marketing jargon that confuses people. In basic terms:

Currently: separate models for generating text and for thinking through complex problems, and users have to manually choose

Future: model intelligently chooses between them and we get charged a crap ton of money to access the smarter parts.
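To illustrate that "future" point, here's a toy sketch (entirely hypothetical: the heuristic and model names are made up, not anything OpenAI has published) of what "the model chooses for you" could look like under the hood:

```python
# Hypothetical router: send a prompt to a slow "reasoning" tier only when
# it looks hard, otherwise use the cheap fast tier. Purely illustrative.

REASONING_HINTS = ("prove", "step by step", "debug", "why does", "plan")

def route(prompt: str) -> str:
    """Pick a model tier for a prompt (tier names are invented)."""
    text = prompt.lower()
    # Long prompts or prompts with "hard problem" keywords get the slow tier.
    if any(hint in text for hint in REASONING_HINTS) or len(text.split()) > 100:
        return "reasoning-model"   # slower chain-of-thought tier
    return "fast-model"            # cheap general-purpose tier

print(route("What's the capital of France?"))                  # fast-model
print(route("Prove that sqrt(2) is irrational step by step"))  # reasoning-model
```

The real system would presumably use a learned classifier rather than keyword matching, but the user-facing effect is the same: one entry point, no model picker.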
 
It's so weird reading things like this. I haven't found any use for it in my personal or work life, and my wife tried it in her law practice but pretty quickly gave it up as well.

Ah well!
I think it really depends on what your work is and what stage of career you're in as well. People who do a lot of grunt work going through lots of text and writing semi-formulaic things/code benefit most from these. Once your value-add transitions more to your experience and expertise, and you have people working below you handling all that, it becomes a lot less useful.
 
Looking forward to these updates. I primarily use GPT-4o, but given the impressive intelligence levels already demonstrated, an AI that dynamically selects the optimal reasoning mode seems like a logical step forward. It always felt like OpenAI was balancing model segmentation for monetization while ensuring performance remained a key focus.
If executed well, this shift toward a unified, adaptive system could make AI interactions feel much more seamless.
 
It's so weird reading things like this. I haven't found any use for it in my personal or work life, and my wife tried it in her law practice but pretty quickly gave it up as well.

Ah well!
Prompt engineering is something worth investing time in to make any LLM work better for you.

It certainly cut the process-related work and desktop research parts of my job down to about 10% of the time they used to take.
 
Prompt engineering is something worth investing time in to make any LLM work better for you.

It certainly cut the process-related work and desktop research parts of my job down to about 10% of the time they used to take.

It would only be for personal things, as we're not allowed to use AI at work or for work-related things...so not really any advantage for me. All the domains are blocked on our company network.

I don't work in tech.
 
Obviously this forward-looking stuff has been prompted by DeepSeek; OpenAI is a bit rattled and needs to get its messaging out that it is still innovating. But it's still exciting to me, and it is astonishing, really, how AI has become normal within just a couple of years. I use it all the time.
 
It’s mostly marketing jargon that confuses people. In basic terms:

Currently: separate models for generating text and for thinking through complex problems, and users have to manually choose

Future: model intelligently chooses between them and we get charged a crap ton of money to access the smarter parts.

Why don't they just call it "basic", "advanced" and "pro" or something? Instead, we get hieroglyphs.
 
Why don't they just call it "basic", "advanced" and "pro" or something? Instead, we get hieroglyphs.
I think that’s what they’re moving towards based on the article. But yes, needlessly complicated at the moment.
 
It's so easy:
All models that start with an "o" cost $200 a month to be usable and take a month to reply.
All other models forget everything you asked more than two messages ago.
 
My main use is replacing Siri for all the things Siri can't do that seem like simple lookups to me. I asked the other night how many square feet the White House is. Siri said it's 16 feet tall. So I said, "ChatGPT, how many square feet is the White House?" The answer was 55k. It seems like a simple thing to handle, but Siri routinely gets it wrong or lacks the information. It's great at following commands like turning lights on/off, reminding me to do things, etc., but not great outside of the ecosystem. Don't get upset; I know there are exceptions and fine examples of more. My point is, I wish Siri were at least as good as the basic level of ChatGPT.
 
... as he watches now-excellent open-source models drain the piggy bank. The ability to create highly targeted, fully private, offline models that surpass OpenAI's best offerings isn't going to be answered by a streamlined selection process.
 
Still wild to me that a company can get users to pay for a tool that was built entirely off stolen content. But alas, I don't know if I'm too surprised that such is the 30-year endpoint of access to such an open portal of information in a world that lives and dies by capitalism.

Remember when OpenAI was still attempting to use the goodwill branding that comes with being a nonprofit? Man. Btw, I promise AI/crypto bros are as annoying to everyone in the younger generations as they are confusing to the older generations, lmfao.
 
It's so weird reading things like this. I haven't found any use for it in my personal or work life, and my wife tried it in her law practice but pretty quickly gave it up as well.

Ah well!
It has completely changed the way I work. I would have laughed at you if you had told me that six months ago.

The other poster is right about prompt engineering: figuring out how to ask the right question is key. Also figuring out a use case. For example, I need to do a lot of competitor research as part of my job, and ChatGPT has shaved hours off the time I spend researching each week. It's like having an intern/junior analyst I pay $20 a month for.
 
It has completely changed the way I work. I would have laughed at you if you had told me that six months ago.

The other poster is right about prompt engineering: figuring out how to ask the right question is key. Also figuring out a use case. For example, I need to do a lot of competitor research as part of my job, and ChatGPT has shaved hours off the time I spend researching each week. It's like having an intern/junior analyst I pay $20 a month for.

As I posted after that, my company has a strict no-AI-at-work policy. We're a hospital, and there is too much chance of leaking PHI or getting wrong info from unverified sources. Since I can be held personally liable for incorrect data, I go to the source and nowhere else.

I can't even access the domains of AI providers when on a company network.
 
As I posted after that, my company has a strict no-AI-at-work policy. We're a hospital, and there is too much chance of leaking PHI or getting wrong info from unverified sources.

I can't even access the domains when on a company network.
Makes total sense. I do work with clients and we would never use it with client data (or honestly, even about a client without their data); but a good part of my job involves winning new business, researching competitors, etc. and that's where it has proven invaluable.
 
I do think a lot of the usefulness depends on your situation. I have confidential data so I can't use any of the online models to help with that even though I would love to see what it could do in terms of some of the data work I need to do. I do find it helpful for code (which is easy to verify if right or wrong), but I don't do a lot of that.

With that said, I've asked it many times for things it just doesn't get right. For example, I asked about some photography math, and it got both the logic and the arithmetic wrong. So I'd be worried to rely on these models for anything I didn't already have a good understanding of.
 