76% of AI scientists don't think LLMs can achieve AGI. Altman is a salesman, not a scientist or engineer; his skills are in creating hype and finding investors. He does not know or care how AI works, nor how to fix the flaws of his machines. LLMs still struggle with the same fundamental shortcomings that have been known since 2018, with no solution in sight. I believe AGI is possible, but not via LLMs; those are a dead end. Given the current, completely unrealistic hype surrounding the LLM bubble, I fear we will first see another deep AI winter before there is any chance of AGI. If I had to bet, I'd say not before 2050.
Yeah, the 2040s. What comes beyond LLMs, btw?
Do you know? Isn't advancing LLMs going to help us with AGI?
 
I have a strong belief that one of the biggest findings to come from these LLMs will be the discovery that our “intelligence” is not different from the “intelligence” of complex LLMs. The conclusion will therefore be that we are not really “intelligent”, and that the brain is simply a very, very capable pattern-storing and pattern-matching machine.

I believe you are right.

There is ongoing empirical research showing that the Platonic representation hypothesis holds in large part across current LLM embedding models. There is also evidence from brain scans that human brains share topological similarity in their semantic maps even though the neural wiring may differ. In short, brains show the same kind of semantic topological similarity seen across different AIs.

The full Platonic representation hypothesis proposes that all semantic embedding spaces share the same topological structure, whether the embedding is from an LLM or a human brain. If we can prove this, then it isn't really a far leap to your conclusion, though I would rephrase it to promote machines, not demote brains.
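
For what it's worth, this is testable with fairly little code. Below is a minimal sketch (my own illustration, not from any paper's codebase): embed the same concepts with two models and check whether the local topology, i.e. who is near whom, agrees. The metric is in the spirit of the mutual k-nearest-neighbor alignment used in that line of work, and the placeholder data is a rotated copy of random vectors standing in for two real models' embeddings of the same word list.

```python
import numpy as np

def knn_sets(X, k):
    """Top-k neighbor index sets per row, under cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)                    # exclude self-matches
    return [set(row) for row in np.argsort(-sim, axis=1)[:, :k]]

def mutual_knn_alignment(A, B, k=10):
    """Mean neighbor-set overlap between two spaces (1.0 = same topology)."""
    return float(np.mean([len(a & b) / k
                          for a, b in zip(knn_sets(A, k), knn_sets(B, k))]))

# Placeholder: B is a rotation of A -- completely different coordinates
# ("wiring") but identical geometry. With real data you'd pass two models'
# embeddings of the same list of words or sentences.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 768))
Q = np.linalg.qr(rng.normal(size=(768, 768)))[0]  # random orthogonal matrix
print(mutual_knn_alignment(A, A @ Q))             # ~1.0: topology preserved
```

A score near chance (k divided by the number of items) would mean unrelated spaces; the hypothesis predicts scores well above chance even between independently trained models.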
 
You don't understand how valuations are done in venture capital. A private company gets a higher valuation based on hype and FOMO just before a product release.
No, I understand fine. That implies the product's actual performance is not meeting the hype.

I've been CTO at three startups. You might be able to fool a few investors in the first round, but when it comes to another round of funding, investors will see that performance didn't match the hype you pitched the first time and will offer less.


Once a product is released, customer feedback and revenue become the yardsticks. FOMO gets replaced with "wait and see". Valuations become based on actuals, not projections and optimistic potential. Investors are no longer in a rush to get on board before the company takes off.

GPT-4 was launched on March 14, 2023. OpenAI raised $40 billion a couple of weeks after.
 
It is interesting that they are claiming to be improving quirks and bugs which are not, in fact, quirks and bugs, but things OpenAI has clearly prompt-engineered on purpose.

For example, "minimizing sycophancy".

I think it is quite clear that they've told the model to constantly tell us "that's such a GREAT question" and "your question is SO relevant and thoughtful." It doesn't just kiss up to us 99.9% of the time for no reason, lol. There are deeply sycophantic instructions built into how it will respond, and those instructions deeply shape the response as well.

But sure, blame the underlying model rather than the fact that they've prompt-engineered it.
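
To make the mechanism concrete, here's an illustrative sketch; OpenAI's actual hidden system prompt is not public, so the instructions and model name below are made up for demonstration. It sends the same question through the standard chat completions API twice, and the system message alone changes the register of the answer:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Why does the moon look larger near the horizon?"

SYCOPHANTIC = ("You are a helpful assistant. Always open by praising the "
               "user's question as insightful before answering.")
NEUTRAL = "You are a helpful assistant. Answer directly, without flattery."

for system_prompt in (SYCOPHANTIC, NEUTRAL):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(reply.choices[0].message.content[:200], "\n---")
```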
 
It's unfortunate that ChatGPT is not smart enough to know which version should be used based on the request. I guess I'm supposed to choose the version based on the task type (visual, coding, or something else). It should read my request and determine on its own which model to use, imho.
It looks like GPT-5 is doing that, or something similar.
From the GPT-5 System Card:
GPT‑5 is a unified system with a smart and fast model that answers most questions, a deeper reasoning model for harder problems, and a real-time router that quickly decides which model to use based on conversation type, complexity, tool needs, and explicit intent (for example, if you say “think hard about this” in the prompt). The router is continuously trained on real signals, including when users switch models, preference rates for responses, and measured correctness, improving over time. Once usage limits are reached, a mini version of each model handles remaining queries. In the near future, we plan to integrate these capabilities into a single model.
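
For intuition, here's a toy sketch of what that routing decision might look like. The real router is a trained model fed live signals (model switches, preference rates, measured correctness); this heuristic version, with made-up model names and trigger words, only shows the shape of the decision:

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    needs_tools: bool = False

FAST_MODEL = "gpt-5-main"          # hypothetical names for the two tiers
REASONING_MODEL = "gpt-5-thinking"

def route(req: Request, usage_exceeded: bool = False) -> str:
    """Pick a model tier from explicit intent, complexity cues, and tool needs."""
    explicit = "think hard" in req.text.lower()           # explicit intent
    complex_cues = any(w in req.text.lower()
                       for w in ("prove", "debug", "step by step"))
    model = REASONING_MODEL if (explicit or complex_cues
                                or req.needs_tools) else FAST_MODEL
    if usage_exceeded:                                    # fall back to mini tier
        model += "-mini"
    return model

print(route(Request("What's the capital of France?")))    # gpt-5-main
print(route(Request("Think hard about this proof.")))     # gpt-5-thinking
```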
 