
MacRumors

macrumors bot
Original poster
Apr 12, 2001


OpenAI today upgraded GPT-5 to GPT-5.1, the next-generation version of the AI model that powers ChatGPT. There are two versions of GPT-5.1: Instant and Thinking.


Instant is warmer, more intelligent, and better at following instructions, according to OpenAI, while GPT-5.1 Thinking is easier to understand, faster on simple tasks, and more persistent on complex tasks.

OpenAI says that users can expect a more enjoyable communication experience, with options to more easily customize ChatGPT's tone. There are new presets for tone, including Professional, Candid, and Quirky. The new presets join the existing Default, Nerdy, Cynical, Friendly (previously Listener), and Efficient (previously Robot) options. ChatGPT can also proactively offer to update preferences during conversations when you ask for a certain tone or style, and there are fine-tuning options to adjust how concise, warm, or scannable responses are, along with how often it employs emojis.

By default, GPT-5.1 Instant is warmer and more playful, and more likely to adhere to parameters that you set. GPT-5.1 Thinking is able to adapt thinking time more precisely to the question, and responses are clearer with fewer undefined terms. It's also warmer and more empathetic than before.

Questions will continue to be routed to the most suitable model using GPT-5.1 Auto. GPT-5.1 Instant and Thinking are rolling out to users today, with paid Pro, Plus, Go, and Business users set to get access first, then free and logged-out users. Enterprise and Edu users will get a seven-day early-access toggle, after which GPT-5.1 will become the default model.

OpenAI plans to roll out GPT-5.1 gradually to keep performance stable, so not all users will see it right away.

GPT-5 will remain available in the legacy models dropdown for paid subscribers for the next three months.

Article Link: OpenAI Launches Smarter, More Conversational ChatGPT 5.1
 
I know it's heresy around here, but I've switched to Google AI (which I'm assuming is some form of Gemini) in the search bar of Safari on my phone. Usually the first answer understands and gives what I need, and then I'll click through the links as necessary. It's a glorified search for me now, and I'm happy with that. (I sometimes click "Dive Deeper", but only for certain projects.)

I guess I was never the intended audience. 🫤
 
The conversational features need to be full duplex. It's annoying that the slightest background noise causes the response to pause.
 
The conversational features need to be full duplex. It's annoying that the slightest background noise causes the response to pause.
It's impossible for me to work with this...it pauses after only a couple words spoken...over and over. Why is this so hard to fix?
 
It's not "smart." That's the marketing used to sell it, because if they simply said it was an improved version 2.1, 2.2, 2.3, etc., no one would get excited about it. It is an energy-hungry enhanced search feature algorithm. It relies on everyone else's data as free raw material, pushes higher electricity costs onto everyone else, and then wants to charge the source of its raw material for its use. What a deal! I'm sure, behind the scenes somewhere, they're chuckling about this being the mother of all subscription services.
 
It was hard to make it worse. Between the constant censorship, the ignored instructions, and the hallucinations, I turned to other options a long time ago. Even translations are censored now.
 
As a software dev I'm finding GPT-5 better than the built-in Gemini in Android Studio.
I see some posting that Grok and Gemini are better.
Each to their own.
I've got DeepSeek R1 and gpt-oss running on my Mac Mini M4 Pro for when there's a network outage.
They are fine too, just a bit slower but still usable.
They are all based on the same technology and right now the competitors in this space are just brand building.
OpenAI seems to be the leader in this respect - in brand building to be clear!
Unless there's another major breakthrough in LLM research then they will all be much the same.
Nothing's been as impressive as what we saw a couple of years ago.
It's looking more like the smart phone market with little new in each update.
5->5.1 is just a tweak to keep things looking fresh and moving forward while building the brand value.
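For what it's worth, local models like the DeepSeek R1 and gpt-oss setup described above are usually reached over an OpenAI-compatible HTTP API. Here is a minimal sketch of querying such a fallback; the endpoint (Ollama's default on localhost:11434) and the `gpt-oss:20b` model tag are assumptions — adjust both for however the models are actually being served:

```python
import json
import urllib.request

# Assumed: an Ollama-style OpenAI-compatible endpoint serving the local model.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat-completion payload as a JSON body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")


def ask_local(prompt: str, model: str = "gpt-oss:20b") -> str:
    """Send the prompt to the locally hosted model and return the reply text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request shape matches the hosted APIs, swapping between the cloud model and the local one during an outage is just a change of endpoint and model name.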
 
Oh this is going to be a level-headed, well reasoned thread full of informed people definitely speaking from a place of deep knowledge.

Anyway, briefly testing this it's decent and for the time being thinking is much faster which is nice.

Some of the edge cases that it was updated to handle seem more in line with Claude and that is a good thing.

Grok, if you use it, has gotten incredibly bad and now hallucinates as much as LLMs did 2-3 years ago. I only throw stuff into the free version on the web because it was pretty good at technical answers. Unfortunate, because it was so fast at searching external sources.
 
Nearly there. JP Morgan and BoA started kicking it in the last couple of days after SoftBank made the first move. House of cards about to go.
There is a bubble in some instances, mainly large LLM companies, but the technology is bigger than the internet itself. We've deployed internal LLMs and processes at our company and we are starting to see productivity gains accelerate. Ultimately people won't get replaced by AI (maybe robots doing manufacturing or warfare drones), but people who don't use AI will get replaced by those who do. Our company is going to monitor use and make employment decisions based on productivity and use.
 
As a software dev I'm finding GPT-5 better than the built-in Gemini in Android Studio.
I see some posting that Grok and Gemini are better.
Each to their own.
I've got DeepSeek R1 and gpt-oss running on my Mac Mini M4 Pro for when there's a network outage.
They are fine too, just a bit slower but still usable.
They are all based on the same technology and right now the competitors in this space are just brand building.
OpenAI seems to be the leader in this respect - in brand building to be clear!
Unless there's another major breakthrough in LLM research then they will all be much the same.
Nothing's been as impressive as what we saw a couple of years ago.
It's looking more like the smart phone market with little new in each update.
5->5.1 is just a tweak to keep things looking fresh and moving forward while building the brand value.
I agree. I've used Gemini and Grok and keep coming back to my GPT-5 Plus account. I have minor frustrations, such as uploading a CSV and asking a question, then having it request the CSV again when I ask a new question; it should store it in cache. But the quality of answers is amazing. I've even been playing with the voice feature: my toddler, who is 3, was shy of it at first and wouldn't talk with it, but it has since explained to her why things she finds scary aren't scary, such as the bath drain sound, and now she can stand there watching the water disappear because of the explanation.
 
There is a bubble in some instances, mainly large LLM companies, but the technology is bigger than the internet itself. We've deployed internal LLMs and processes at our company and we are starting to see productivity gains accelerate. Ultimately people won't get replaced by AI (maybe robots doing manufacturing or warfare drones), but people who don't use AI will get replaced by those who do. Our company is going to monitor use and make employment decisions based on productivity and use.
The bubble talk has now gotten to the point where it's even more annoying than the AI hype cycle imo. Everyone thinks they are a genius because there are really obviously mediocre companies wrapping foundation models and they are all going to fail. And the media will decry a bubble, and stocks will go down, and everyone will have validation and nothing about the deeper aspects of the market or the technical trajectory long-term will change.

The core foundation model companies aren't going anywhere for many years, outside of some consolidation maybe. Anthropic, which someone on here told me wasn't going to last out a year, just today announced a $50B datacenter build-out. Their funding must be really running out and they are on their last legs, clearly.

We haven't even seen World Models begin to roll out at scale across domains or mature yet. Ignorance is everywhere; even if it is driven by very understandable concerns about the technology and its sociological implications, it is frustrating to endure in every single area where these tools are discussed.
 
There is a bubble in some instances, mainly large LLM companies, but the technology is bigger than the internet itself. We've deployed internal LLMs and processes at our company and we are starting to see productivity gains accelerate. Ultimately people won't get replaced by AI (maybe robots doing manufacturing or warfare drones), but people who don't use AI will get replaced by those who do. Our company is going to monitor use and make employment decisions based on productivity and use.

Some thinking points for your faith argument from an analytical perspective (this is my job):

1. Your company will be in trouble when the LLM token pricing goes through the roof.
2. Your company will be in trouble when the LLM company changes the model and your prompts do not function correctly.
3. Your company will be in trouble when the LLM company goes down the toilet and the other LLM company gets an influx of traffic they can't handle with their hardware provision. This also incurs point 1 and 2 as a damage multiplier.
4. It's a tangible business risk building on technology which has absolutely no working revenue model. It may disappear tomorrow.
5. You do not have the cash, hardware, or resources to train your own model, make an ROI on it, and run it yourself, even on a cloud platform.
6. You are likely to reach regulatory and legal problems when it comes to making employment decisions based on automation of this class (chain of proof).
7. Robots and manufacturing have near-zero use for LLMs. There are some specific AI use cases in inspection; that is it. Having humanoid robots working in a factory setting is science fiction. Production is required to be 100% deterministic, and LLMs are not.
8. You can't replace people with AI. But you can replace people with AI spending and watch your stock prices rise while burying the lay off.

This whole thing is faith without empiricism.
 
The bubble talk has now gotten to the point where it's even more annoying than the AI hype cycle imo. Everyone thinks they are a genius because there are really obviously mediocre companies wrapping foundation models and they are all going to fail. And the media will decry a bubble, and stocks will go down, and everyone will have validation and nothing about the deeper aspects of the market or the technical trajectory long-term will change.

The core foundation model companies aren't going anywhere for many years, outside of some consolidation maybe. Anthropic, which someone on here told me wasn't going to last out a year, just today announced a $50B datacenter build-out. Their funding must be really running out and they are on their last legs, clearly.

We haven't even seen World Models begin to roll out at scale across domains or mature yet. Ignorance is everywhere; even if it is driven by very understandable concerns about the technology and its sociological implications, it is frustrating to endure in every single area where these tools are discussed.

Your knowledge on this is rather naive. The $50bn doesn't exist. It's a promise meant to drive stock value up by demonstrating the need for more capacity, which lets them live a few more months and keeps NVDA up.

The thing is the investors want to cash in their return now. And there isn't one. And they're starting to get rather vocal about it. The big investment companies have already dumped their own stock onto bagholders and are now cutting their losses.

Ignoring the technical and social issues, the basic finances don't work. The technology has no tangible return on the spend, only loss. It's so bad that pissants like Altman went begging to the US gov for a bailout.

It's not a bubble, it's market fraud.
 
Some thinking points for your faith argument from an analytical perspective (this is my job):

1. Your company will be in trouble when the LLM token pricing goes through the roof.
2. Your company will be in trouble when the LLM company changes the model and your prompts do not function correctly.
3. Your company will be in trouble when the LLM company goes down the toilet and the other LLM company gets an influx of traffic they can't handle with their hardware provision. This also incurs point 1 and 2 as a damage multiplier.
4. It's a tangible business risk building on technology which has absolutely no working revenue model. It may disappear tomorrow.
5. You do not have the cash, hardware, or resources to train your own model, make an ROI on it, and run it yourself, even on a cloud platform.
6. You are likely to reach regulatory and legal problems when it comes to making employment decisions based on automation of this class (chain of proof).
7. Robots and manufacturing have near-zero use for LLMs. There are some specific AI use cases in inspection; that is it. Having humanoid robots working in a factory setting is science fiction. Production is required to be 100% deterministic, and LLMs are not.
8. You can't replace people with AI. But you can replace people with AI spending and watch your stock prices rise while burying the lay off.

This whole thing is faith without empiricism.
Some great insights. Thank you for sharing.
 