Sour grapes. 'Apple's research team'? Shouldn't they be working to improve things instead of attacking successful companies?

And they spent resources to prove something obvious?

And the timing of this. My goodness.

Which companies did Apple attack?
 
There’s a line in I, Robot where Will Smith asks the robot, “Can you paint a masterpiece? Can you compose a symphony?” and the robot responds, “Can you?”

This is where we’re at. We have lots of people who can’t compete with an LLM saying LLMs are useless. :)
 
Sweet
/s

Can you at least understand why that's not exciting to everyone?
Sure, but that doesn’t make it true that LLMs are useless. And I have to compete in the world that exists. And the world that exists means I can be more productive with less expense. I simply can’t go backwards.
 

I wonder if humans will ever have "enough rate of progress"?
(note, I didn't say enough progress ... just enough rate of it)

I'm always thinking to myself ... will we ever get to "relax a bit" or will there be a future of working every single day to constantly make "more" ... to "buy more", I guess?

None of it maps to a great, healthy, happy life.
It just makes rich people richer.

Tools displacing the work of real people just make it even worse, not better, even as the overlords market them to everyone.
 
I actually agree with this. And I’m actively working on a philosophical/political project to make a difference regarding this. I just published my first book on the topic. But I still exist in the world as it is today.
 
Lots of bad takes in these comments by people who clearly have never worked in AI or *with* (to any great extent) AI, and who are (ironically!) repeating what they've heard from others and from the parts of the media that also don't understand. Your own (wet) neural network is as much a prediction engine as an LLM is. They are NOT just doing next-word statistics. They are very high-dimensional neural networks that currently implement *some* brain-like features and that haven't fully caught up to us. YET. All this "AI is fake," "LLMs don't think at all," "they're just programmed" stuff is ignorance.

For the record: I have worked both with and on AI systems. Professionally.
 

Ok -- sure.

I always love the "you don't get it" attempt to gotcha everyone who disagrees.
 
I’d love it if they spent half of today’s keynote on how most consumer-targeted AI is nonsense, and on how they’re going to stop trying to shoehorn it into absolutely everything. Wishful thinking, unfortunately.

Regardless, they still have a big issue: professionally focused AI is going to shrink the market for their laptops. So they can’t lean into it too hard, else they’ll be wishing much of their hardware lineup out of existence.
 
Which companies did Apple attack?
Indirectly. They aren't attacking companies directly, but their work, which is the same thing.

This whole thing is sadly obvious. Attack the competitor just before you release your new version. Well, I suppose that's standard company procedure: show people that the 'other guy's' work is just 'smoke and mirrors'.
 
I’ve experienced this firsthand with ChatGPT for medical reasoning. It just makes up data in extremely convincing arguments. Then when you correct it, it just agrees with you but doesn’t learn from it.

Because of this, it just cannot be trusted without manually double checking the accuracy of its response.

So yeah, this Apple study is spot on. ChatGPT is a data aggregator at best. It really makes all the marketing Altman is doing look like bad faith.
 
Amuses me greatly that the people who advocate most stridently that AI is the best thing ever and is going to replace all the 'useless people' are themselves the ones who can most easily be replaced by the crummy models we have. All these people do is parrot back what they've heard, show zero real flexibility in their thinking, and are easily influenced by outside inputs because they don't have any real convictions... just like an LLM.
 
I have many years of experience working with PhD CS researchers in a professional capacity and have managed millions of dollars in autonomic, genetic, and agentic academic and government research. I saw deterministic implementations of technology that behaved similarly to LLMs years before the public knew about transformers.

All of that is to say in my personal opinion possibly the greatest error the industry at large is making is mapping our limited understanding of human brain function to this relatively rudimentary technology.

Emergent behavior is not complex reasoning, and these models do not “think” any more than an algorithm is equivalent to a human following a process document. There are similarities to a degree but the perception people are having (and being sold) of a model “understanding” is a dangerously false notion.

The technology is still useful as long as you understand the limits, but not implicitly to a scale that we’re being sold, especially when it comes to future scaling and very complex tasks.

Agentic workflows will be useful, robotics is going to improve, but novel autonomous self-learning is not going to happen with this current technology, or any scaled version of it.

edit:

This might be a more useful analogy. A complex, massively expensive LRM may be able to synthesize disparate data that already exists into a PDF document. That is useful. It can, and very likely will, miss key points and possibly add some of its own algorithmic slop into the mix, such that you shouldn’t rely on the output for anything mission- (or life-)critical.

Now, take the concept of creating the PDF format. Absolutely none of the extant technology, even scaled to a nearly infinite degree, would ever come up with that novel idea given the circumstances and constraints present at the time it was invented.


In 5+ years, when / if world models exist? Then we might see movement toward “true” AI, or what some knuckleheads are calling AGI. But anyone thinking ChatGPT 10.0 is going to solve all of our problems or obviate humans is not on the right path.

You need experience with this technology in depth, insulation from those trying to profit off of the current hype cycle, and a deep understanding of the biological and neurological realities of how humans perceive the world. How we think, reason, and invent. We are, for now, one of a kind.
 
I don’t know that anyone should be taking Apple’s word on anything AI-related right now, least of all a non-peer-reviewed whitepaper.

Interesting findings, if they can be reproduced and validated in a more rigorous way, but I’m taking it with a huge grain of salt.
 

There's a huge difference between attacking companies and taking issue with the underlying technology, reasoning, methods, etc. currently used. Apple is doing the latter, believing it has a better approach; that's hardly an "attack." That's what research is about, and it's how progress is made in a variety of technical and scientific fields.
 
The human brain still remains the most powerful ‘computer’ in the world. You don’t say!

AI is only as powerful as the person telling it what to do for them.
 
Most current AI is actually a very sophisticated search engine, capable of combining information from many sources into one (largely) coherent answer. Still very useful, but it's important to understand the limitations.
 
I tried their reasoning skills a few months ago by asking variations of the "get things across the river in a boat" logic puzzles. They didn't do well except on the variations that are well known and published. Presented with a fresh puzzle, they couldn't do it. I even tested whether they could recognize when there was no possible solution, but they would suggest a solution that clearly violated the rules.
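For what it's worth, these puzzles are mechanically easy for a computer: they're a tiny state-space search, and a breadth-first search either finds a shortest plan or proves no solution exists. Here's a minimal sketch for the classic wolf/goat/cabbage variant (my own illustration, not anything from the study or the posts above; the names and structure are assumptions):

```python
from collections import deque

ITEMS = ("wolf", "goat", "cabbage")

# Pairs that cannot be left alone on a bank without the farmer.
FORBIDDEN = {("wolf", "goat"), ("goat", "cabbage")}

def safe(state):
    """state: frozenset of things on the left bank (possibly incl. 'farmer')."""
    everyone = frozenset(ITEMS + ("farmer",))
    for bank in (state, everyone - state):
        if "farmer" not in bank:
            for a, b in FORBIDDEN:
                if a in bank and b in bank:
                    return False
    return True

def solve(start=None):
    """BFS over bank configurations; returns a shortest list of crossings,
    or None when the variant has no legal solution."""
    everyone = frozenset(ITEMS + ("farmer",))
    start = start if start is not None else everyone  # all on the left bank
    goal = frozenset()                                # all on the right bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        farmer_left = "farmer" in state
        bank = state if farmer_left else everyone - state
        # The farmer crosses alone, or with one item from his current bank.
        for cargo in [None] + [x for x in bank if x != "farmer"]:
            moved = {"farmer"} | ({cargo} if cargo else set())
            nxt = state - moved if farmer_left else state | moved
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                direction = "->" if farmer_left else "<-"
                queue.append((nxt, path + [(cargo or "nothing", direction)]))
    return None  # exhausted the state space: unsolvable variant
```

Because `seen` covers every reachable configuration, the `None` return is a genuine proof of unsolvability for the given rules, which is exactly the "recognize when there is no possible solution" check described above.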
 