They also nerfed the context window: you only get 32k tokens unless you pay $200/mo.
o3/o4-mini had 64k
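For anyone wondering what a 32k cap means in practice: once a conversation's history exceeds the window, the oldest messages have to be dropped (or summarized) before each request. A minimal sketch, using the common ~4 characters/token rule of thumb rather than a real tokenizer:

```python
# Naive history trimmer: drop the oldest messages once the estimated
# token count exceeds the plan's context window. The ~4 chars/token
# figure is a rough heuristic, not an exact tokenizer.
def trim_history(messages, max_tokens=32_000):
    est = lambda text: len(text) // 4  # crude token estimate
    total = sum(est(m) for m in messages)
    trimmed = list(messages)
    while trimmed and total > max_tokens:
        total -= est(trimmed.pop(0))  # drop oldest first
    return trimmed
```

At 64k (o3/o4-mini) the same conversation would keep roughly twice as much history before trimming kicks in.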
 
But the big question remains: is ChatGPT still overly sensitive and censored about even the most basic questions?
 
I gave GPT 5 the same prompt and it told me 22. Apparently there's an R hiding somewhere in New Mexico.
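For what it's worth, letter counting is exactly the kind of task where a few lines of deterministic code beat a language model. The original prompt isn't quoted above, but assuming it was something like "how many US state names contain the letter R", this settles it (and confirms New Mexico is R-free):

```python
# Count US state names containing the letter "r" (case-insensitive).
# Assumes the prompt was about state names; the thread doesn't quote it.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_r = [s for s in STATES if "r" in s.lower()]
print(len(with_r))             # 21, not 22
print("New Mexico" in with_r)  # False -- no hidden R
```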

Setting aside the incredible waste of resources and damage to the environment, these things are inherently unreliable. It's not something these companies can fix because it's the nature of the tech they're built on. It's easy to take potshots at these things because they mess up all the time.

It really depends on what you’re trying to use them for.

Cherry picking things they fail at is missing the point.

While you’re all laughing at LLMs making mistakes in tasks that are pretty irrelevant, others are accelerating real jobs with them and getting real results.

Laugh it up while you can, but I’d suggest figuring out how to use this tech sooner rather than later or very suddenly it won’t be funny for you anymore.
 
Curious what you use it for? Genuine question.
Lately I’ve been using it more and more to ask questions in these areas:
. Medical (of course I see a doctor if it’s more serious, but it at least gives me basic knowledge beforehand. And it can reference and search for studies that back up its points.)
. Diet / food / nutrition / recipes
. Exercise
. Therapy - you can literally talk to it when you’re feeling down, if you don’t have a friend to talk to at that moment. It can help you work through your thoughts if you explain what’s bothering you.
. Image generation
. You can take a screenshot of a conversation you had with someone and it will help you understand the social dynamics… or even give you an idea for a funny reply if you aren’t able to come up with something on your own. (Yeah I don’t rely on this all the time of course, but sometimes it can be helpful)

Those are a few. Of course, you have to know that AI can hallucinate, so if it’s important you might want to double-check things. But more and more, it includes links to the sources where it got its information.

Lately I’ve noticed I can get the context around a situation much more quickly through ChatGPT than I would by simply searching Google.
 
I'm sorry, but you should not use any form of therapy where the therapist can hallucinate
 
Yes, yes, of course… we should only use professional services… But we live in an imperfect world. The price is a fraction of an actual therapist’s, and I feel like “you get what you pay for” doesn’t apply as much here, because generally speaking, when I use ChatGPT this way, it asks me questions that simply allow me to organize my thoughts and express myself… and I end up feeling better after a 7-minute “conversation”. So… it works for me. If it doesn’t work for you, then fair enough.

For me it’s better to be aware that it can hallucinate, and take what it says with a grain of salt.

Versus paying exorbitant fees for a licensed therapist… (especially considering I don’t think I have serious problems). If I were actually depressed or not functioning with the day-to-day activities… then yes I should probably pay to see a professional.
 
We're starting the refinement stage of LLMs: not just getting smarter, but refinements all around. I think OpenAI has always been the most user-friendly of the bunch, and this feels like a nice step forward in how it answers (or doesn't answer) questions, with supposedly a lot fewer hallucinations.
 
Sorry, but that means nothing. I see the occasional Grok success anecdote on Twitter, which is meaningless in an ocean of people swearing by other models such as Claude.
1. You said "everyone," and then suddenly "I see occasional people not swearing by Claude." So not really "everyone". That's an exaggeration.
2. What you see is anecdotal, even if it looks like an "ocean" to you.

This is why we rely on benchmarks rather than your ocean viewing. Sorry, what you said doesn't mean much.
 
We will. A decade ago, teachers of mine who study this field told me it will most likely be reached in the 2040s. Sam Altman agrees. As time passes and hundreds of billions are poured into this, I’m inclined to believe them.
76% of AI scientists don't think LLMs can achieve AGI. Altman is a salesman, not a scientist or engineer; his skills are in creating hype and finding investors. He does not know or care how AI works, nor how to fix the flaws of his machines. LLMs still struggle with the same fundamental shortcomings known since 2018, with no solution in sight. I believe AGI is possible, but not via LLMs; those are a dead end. Given the current, completely unrealistic hype surrounding the LLM bubble, I fear we will first see another deep AI winter before there is any chance of AGI. If I had to bet, I'd say not before 2050.
 
That's old thinking. How does a brain work? It's a network of neurons, which this AI tech is directly modeled after.

The question to ask is "Is intelligence and an identity of self that closely related?"

No, these LLMs are not alive, not organic, not self-aware, but... could they meet the definition of intelligence?
I have a strong belief that one of the biggest findings that will come from these LLMs is the discovery that our “intelligence” is not different from the “intelligence” of complex LLMs. Therefore the conclusion will be that we are not really “intelligent” and that the brain is simply a very very capable pattern storing and pattern matching machine.
 
Let's split the difference: brains use pattern storing and matching, but they do it in ways pretty different from how LLMs do it, and when a brain is working well, that allows it to outperform LLMs on many tasks.
 
If this was the best of the best, why did OpenAI/Sam desperately raise cash right before releasing ChatGPT 5?

Generally companies raise cash *after* releasing something good. Sounds like Sam got scared of Elon's initiatives with xAI's Grok 4.

You don't understand how valuations are done in venture capital. A private company gets a higher valuation based on hype and FOMO just before a product release. Once a product is released, customer feedback and revenue become the yardsticks. FOMO gets replaced with "wait and see". Valuations become based on actuals, not projections and optimistic potential. Investors are no longer in a rush to get on board before the company takes off.
 