There’s something fundamentally human and intelligent your daughter can absolutely do that no computer can: use her imagination.
If I were to ask my daughter to imagine a wholly new recipe using eggs, flour, sugar & water, she’d be completely stumped.

I recently asked ChatGPT to do this, and it gave me three novel recipes, each with a full ingredient list, instructions, and photos of the expected end results.
 
... ChatGPT has completely changed my work flow and eliminated my need for junior employees ...
Funny thing about eliminating junior employees: Eventually, all of the "senior" employees age out of existence. If you have no "junior" employees learning the ropes from the senior employees, you eventually go bankrupt.

But perhaps you're viewing this from the perspective of a small business, and aren't actually thinking long-term and sustainably. In which case, good for you. But I would argue that, in that event, your use case is not a particularly good generalization for the larger workforce.
 
Funny thing about eliminating junior employees: Eventually, all of the "senior" employees age out of existence. If you have no "junior" employees learning the ropes from the senior employees, you eventually go bankrupt.

But perhaps you're viewing this from the perspective of a small business, and aren't actually thinking long-term and sustainably. In which case, good for you. But I would argue that, in that event, your use case is not a particularly good generalization for the larger workforce.
I’m a small business. About 45% of employees in the US are employed by small businesses.
 
Not true. However, it's true that they're not as intelligent as we are, and they clearly lack some architectural capabilities they'll need. But the blanket claims in this thread that LLMs have no intelligence are wrong.

I guess it depends on how you define "intelligence". It's a philosophical question. Intelligence in the human sense relies on some measure of self-awareness. Dolphins demonstrate self-awareness and a high level of intelligence. An ant? No self-awareness, but in practice it has a demonstrated level of executive function driven by instinct. I don't consider an ant to be intelligent; it's just following its "programming".

But humans, dolphins, and ants all benefit from their own functions for practical purposes. LLM-based AI may not meet that threshold of intelligence, but it has practical uses. That much is undeniable, because millions of people use it to augment their own skills. Granted, others mistakenly use it to do all of the work, which it's not reliable enough for.
 
If I were to ask my daughter to imagine a wholly new recipe using eggs, flour, sugar & water, she’d be completely stumped.

I recently asked ChatGPT to do this, and it gave me three novel recipes, each with a full ingredient list, instructions, and photos of the expected end results.
I don’t mean this offensively, but do you spend a lot of time on the floor playing pretend with your child? Seeing the wonder and creativity?

Humans don’t fit neatly into benchmarks; my point is that creative play plants the seeds that germinate into actual human intelligence, reasoning, and novel invention.

My other posts in this thread better elucidate my thoughts on the technology, but applying reductive technological analysis to humanity, and especially the reverse (the idea that we are neural networks purely following desire paths), is, I believe, not just wrong but deeply depressing.
 
I don’t mean this offensively, but do you spend a lot of time on the floor playing pretend with your child? Seeing the wonder and creativity?

Humans don’t fit neatly into benchmarks; my point is that creative play plants the seeds that germinate into actual human intelligence, reasoning, and novel invention.

My other posts in this thread better elucidate my thoughts on the technology, but applying reductive technological analysis to humanity, and especially the reverse (the idea that we are neural networks purely following desire paths), is, I believe, not just wrong but deeply depressing.
Of course I do :) and I’m not disparaging my daughter in any way. I’m simply pointing out that “intelligence” is being used in this thread in ways that haven’t been well defined.
 
Humans don’t fit neatly into benchmarks; my point is that creative play plants the seeds that germinate into actual human intelligence, reasoning, and novel invention.

They sure don't, and it's really disheartening to see so much technical and financial effort being spent trying to make them.
 
That’s a bingo. LLMs are incredibly unreliable for anything that requires accuracy and consistency.

Your second point is a philosophical one that I think is more important than people realize. AI is incapable of any true creativity. Its responses are just a pastiche of information originally created by human beings, effectively plagiarizing human work. Who wants to live in a world where a person does all the legwork and creativity only for a computer to steal from that human?

I agree for the most part, though I'd insert "currently" before "incapable of any true creativity". With quantum computing now able to generate truly random numbers, this may change.
 
I make fun of Apple for lying, or "overpromising," about AI. But when it comes to LLMs, as good and fun as they are, they are nowhere near ready for prime time. There is a lot of hype, and I do not think anyone is "behind"; Apple will be able to buy up their AI tech as the bottom falls out of these unprofitable money pits.

Let's remember that BILLIONS of dollars have been invested in these open-source albatrosses, and none of them have made money. Apple isn't behind; they have dodged a bullet. They're letting everyone else pay for the R&D, and they can swoop in on fire-sale items when the bottom falls out, since it seems highly unlikely that LLMs are the future of AI.
 
There's a huge difference between attacking companies and taking issue with the underlying technology, reasoning, methods, etc. currently used. Apple is doing the latter, believing they have a better approach — hardly an "attack." That's what research is about, and how progress is made in a variety of technical and scientific fields.

Semantics. It's a defensive move: diverting blame and highlighting deficiencies in the other guy's product. The media is 'attacking' them constantly.

They are in trouble. It shows quite a bit.
 
How did Siri fare in these tests?
I would speculate: not well at all, since it isn't an LLM. In fact, by most current metrics, Siri only barely qualifies as "AI."

I'm pretty sure Siri is essentially a (comparatively) tiny database of human-curated query/response pairs with a little bit of fuzzy logic on the matching, which then falls back to a Google search when the database can't produce an adequate match. Not the same kind of beast as all of this "LLM" stuff.
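To make the guessed architecture concrete, here's a toy sketch of that curated-lookup design: a small query/response table, fuzzy-matched, with a search fallback when nothing scores high enough. The table contents, the 0.6 cutoff, and the `answer` function are all my own illustrative assumptions, not anything Apple has documented.

```python
# Toy sketch of a curated query/response lookup with fuzzy matching
# and a web-search fallback. All names and values are illustrative.
import difflib

RESPONSES = {
    "what time is it": "It's 3:42 PM.",
    "set a timer for ten minutes": "Timer set for 10 minutes.",
    "what's the weather today": "Expect sunshine and a high of 72.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the canned response for the closest known query,
    or fall back to a search when no match clears the threshold."""
    matches = difflib.get_close_matches(
        query.lower(), RESPONSES, n=1, cutoff=threshold
    )
    if matches:
        return RESPONSES[matches[0]]
    return f'Here\'s what I found on the web for "{query}".'
```

Slightly misspelled queries still land on the canned answer ("whats the weather today" matches the weather entry), while anything outside the table drops to the search fallback — which is roughly the behavior the post describes.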
 
Semantics. It's a defensive move: diverting blame and highlighting deficiencies in the other guy's product. The media is 'attacking' them constantly.

They are in trouble. It shows quite a bit.

Not in trouble, since Apple's implementation will be far different and focused on user privacy.

Of course people are attacking Apple; it's one of the most successful tech companies in the world, with over a billion active and repeat customers.

People on tech forums have been predicting Apple's demise for the last 20+ years, day in and day out. It's nothing new, and it's entirely predictable.
 
The reality is that what we're seeing now is the worst it's gonna get; it's an uphill trajectory from here, with notable improvements.

And of course it has flaws, but choosing to discredit it and being overly sceptical in this context is somewhat strange, especially if your company is being criticized for missing the AI train.
I doubt that LLMs can improve much. We're fast approaching the point where the only new data to train them on will be generated by LLMs. This will inevitably lead to continually accelerating corruption of the training datasets and the results provided by LLMs trained on them. It's not AI, nor has it ever been.

It's just the same thing we saw with Google: as the web was SEO-optimized, search results got less and less useful, because more and more of the results were designed not to provide genuinely useful information but simply to show up at the top of the rankings and serve ads.
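The feedback loop described above — each generation trained only on the previous generation's output — can be illustrated with a toy experiment that has nothing to do with real LLMs: repeatedly fit a Gaussian to samples drawn from the previous fit. The sample sizes and generation count are arbitrary assumptions; the point is only that the fitted spread drifts toward collapse, the statistical analogue of the dataset corruption the post predicts.

```python
# Toy model-collapse demo: each "generation" is trained (fit) only on
# samples produced by the previous generation's model. With small
# samples, the estimated spread steadily collapses toward zero.
import random
import statistics

def one_generation(mu, sigma, n=10, rng=random):
    """Draw n samples from N(mu, sigma), then refit mu and sigma
    from those samples alone (the next generation's training set)."""
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.pstdev(samples)

rng = random.Random(0)          # fixed seed for reproducibility
mu, sigma = 0.0, 1.0            # generation 0: the "real" data
history = [sigma]
for _ in range(300):            # 300 generations of self-training
    mu, sigma = one_generation(mu, sigma, rng=rng)
    history.append(sigma)

# Later generations have lost almost all of the original diversity.
print(f"start sigma={history[0]:.4f}, end sigma={history[-1]:.4g}")
```

The collapse comes from estimation error compounding across generations; nothing re-injects the original distribution, just as nothing re-injects human-written data once the web is dominated by model output.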
 