In my opinion AI is grossly overrated at the moment but I am very concerned about the implications on society and economics should they eventually perfect it. I do not think people are ready for the changes that would have to be made to these two items alone for the world to move forward.
 

They won't perfect it much further. There is an asymptote here. The bottom is falling out of the market because throwing 10x the cash at it didn't make it 10% more accurate or 10% more useful. The gains were pitched as linear in the investment, and that is not paying off. Big investors are done and will leave the idiots to clean up the market (eat the loss).
 
I recently checked out this other arXiv paper that shows the exact opposite: https://arxiv.org/abs/2407.01687
Seems there is still some work to do before a consensus emerges. I wouldn't take any single paper as the absolute truth.

It really doesn't matter what consensus there is. There are a number of compromising problems which are killing the market dead:

1. There is an asymptote on accuracy gains. Something that is 60-70% accurate is not good enough for the market.

2. To train a model large enough to test (1) two more steps out, the amount of money now involved is so large that no one will underwrite the risk any further, because there is no ROI.

3. There are environmental and social concerns around the technology which put it at regulatory risk.

Typically these are things no one wants to poke with a 6 foot long pointy stick.

As for the paper in question, I don't know how you can write a conclusion claiming generalisation when the task is specific and the model performs better on one specific sub-case than on the generic case. They write any old crap in papers now. If Claude **** out a shift-cipher proof and then applied it to all values of n, there might be a point. But it doesn't. It's really good at ROT-13 and terrible at ROT-15. There is no generalised reasoning here, and to write that in the paper's conclusion is disingenuous.
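For what it's worth, a genuinely general shift cipher is only a few lines of code, and it treats every n identically. A toy sketch (my own, not from the paper) of what "all cases of n" actually means:

```python
def rot_n(text, n):
    """Shift each letter n positions, wrapping within the alphabet.
    The same code handles ROT-13, ROT-15, or any other n."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)  # leave spaces, digits, punctuation alone
    return "".join(out)

print(rot_n("Hello", 13))  # → Uryyb
```

A system that had really internalised the general rule would be equally accurate for every value of n; accuracy that varies sharply between ROT-13 and ROT-15 is the signature of pattern-matching on training data, not of applying the rule.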

Ohhh there we go, the author is from a corporate paper mill that has been selling AI hard for the last couple of years, rather unsuccessfully...
 
That's what I've been thinking since this wave of AI hype started based on what I've learned about artificial intelligence in college. Feels great to be validated for once.
 
If this surprises you, you've been lied to. Next, figure out why they wanted you to think "AI" was actually thinking in a way qualitatively similar to humans. Was it just for money? Was it to scare you and make you easier to control?
Many of the people who work in AI actually think this is how our minds work. They think we are LLMs, just a little more refined.
 
Nonsense. I asked the same question and got a full and correct answer. Siri quoted Wikipedia. Maybe you should try speaking more clearly. And have you trained Siri to recognize your voice yet? Baloney on your post.
Maybe you should stop attacking people for calling out how awful Siri is.

Many of the people who work in AI actually think this is how our minds work. They think we are LLMs, just a little more refined.

I think you want to go and read some psychology papers and then compare results.

At best our understanding is somewhere along the lines of a typical thought experiment: a goose walks behind a wall and a woman walks out, therefore walls cause geese to turn into women. In a rush to capitalise on that finding lots of papers suddenly appear to reinforce this perspective using similar but poorly contrived experiments. Eventually this all lands in a meta-analysis and is picked up by a quite frankly dumb corporation or the press and turned into a marketing opportunity. This is turned into naive soundbites like "many of the people who work in AI actually think this is how our minds work".

In reality, what we get is some postgrad who can't get a research position in the field because the market is saturated with fad graduates. They get hoovered up by a corporation and end up pumping out more noise like the paper cited above which does nothing for the art of the science of cognition, mathematics or information theory.
 
Since when is anyone listening to Apple when it comes to AI? They have no credibility in this field whatsoever.

Whaaaaat. They have literally been shipping more ML hardware than any other vendor on the planet, and have been doing it for about 7 years. A quick extrapolation shows at least 750 million ML processors in the field.

Granted it can't tell the difference between cows and horses, but neither can anything else!
 
Why has no one else reported this? It took the “newcomer” Apple to figure it out and to tell the truth?

Well thanks for that, Apple. Not that others aren't well aware of these limitations, and have been for years, but your help is much appreciated.

This is why Apple has been careful not to jump on the "AI" (as in "artificial intelligence") bandwagon like the other big names — because they were aware the technology doesn't yet fully employ true "intelligence", but is really just a very sophisticated pre-programmed machine that can compose content on-the-fly. It's still a mechanical workhorse following instructions.

It's a careful balancing act. They needed to release their own version for PR reasons — as "Apple Intelligence" — but still didn't rush it out the door. In the end, they are still putting customers first, rather than profits first.

Wouldn't it be amazing for Apple to come out with a Siri that employs this newer approach and blows the competition out of the water by its raw accuracy?
 
I suspect these problems will become less and less relevant as the models improve. Also, nobody really understands what goes on in our brains, so it could well be that we are also just very good pattern finders and that's all our 'reasoning' is.

While we're awake, we're building new neuron connections all day long. While asleep, dreams are the result of our brain reorganizing those connections, strengthening those that have significance and archiving those that don't. That's why dreams are so bizarre. Sleep is an essential part of building intelligence and memory archives. Maybe AI implementations need to have an exclusive "sleep" (self-maintenance / housekeeping) phase, too?

User: "What was the name of the guy that travelled the seven seas?"
AI: "Sorry, but I'm sleeping right now. I'll talk to you tomorrow."
 
Since when is anyone listening to Apple when it comes to AI? They have no credibility in this field whatsoever.

Much of what is now branded "AI" is really plain Machine Learning. Many experts in the field cringe at everything being called "AI" when they know much of it is not related to intelligence at all, but merely pattern recognition and applying learned processes to new input.

Apple has been employing Machine Learning (ML) for decades to various degrees. Any time you hear about "models", those really have nothing to do with AI, but are the output of a ML process.

Input > Learning > Model

Then the Model is applied to new input to create new output.

Input > Model > Output
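The two phases above can be sketched in a few lines. This is a toy nearest-centroid classifier with made-up data — nothing to do with Apple's actual models — just to make the Learning phase (build a model from labelled input) and the inference phase (apply the model to new input) concrete:

```python
def learn(samples):
    """Learning phase: Input > Learning > Model.
    The 'model' here is just a per-label average (centroid) of the inputs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def apply_model(model, features):
    """Inference phase: Input > Model > Output.
    Output the label whose centroid is nearest to the new input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical "image" features: (brightness, edge_count)
training = [((0.9, 0.1), "beach"), ((0.8, 0.2), "beach"),
            ((0.2, 0.9), "city"),  ((0.3, 0.8), "city")]
model = learn(training)                     # Input > Learning > Model
print(apply_model(model, (0.85, 0.15)))     # Input > Model > Output → beach
```

Real image-recognition models are vastly more complex, but the shape of the pipeline — train once, then apply the frozen model to each new input — is the same.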

When you snap a photo, that photo is instantly passed through zero or more pre-trained models to do things like colour correction, etc. That photo is also passed through image-recognition models to identify subjects, such as people, places and pets. So you can then ask you phone "Show me all photos of my vacation to Spain" and it can produce some useful results.

When you type on your keyboard, the characters are being passed through pre-trained models to do spelling correction, word prediction, etc.
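Word prediction follows the same train-then-apply shape. A minimal sketch — a bigram counter, far cruder than any real keyboard model — of how "learn from text, then predict the next word" works:

```python
from collections import Counter, defaultdict

def train_predictor(corpus):
    """Learning: count which word follows which (a tiny bigram model)."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Inference: the most frequent follower seen in training, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_predictor("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → cat
```

Production keyboards use far richer models, but the principle is identical: the model is frozen statistics distilled from past input, replayed against new input.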

This latest craze of "AI" just involves vastly larger models (Large Language Models — LLMs) that require much more processing power and specialize in much larger tasks such as historical research and textual composition.

Apple Intelligence is Apple's endeavour into employing these larger models, which is also mandating minimum hardware requirements.

Does that help?
 
The current "AI" systems' biggest success so far is fooling people into thinking it is AI and should be invested in.
We should invest in this because language models are extremely useful, for example in finding patterns. However, no language model will claim that it is intelligent; each one explains what it is. The "AI" fad was created by journalists and the media. After all, it gets clicks.

The question remains: what is intelligence? Assuming that we can imitate brain functions, consider that a person makes mistakes. A student who does not know the answer makes something up. An employee who did not finish something on time makes something up. Not every person can do everything, and not every model can do everything. They are perfect in conversation, but why do they make mistakes? To err is human. ;)
 
If this surprises you, you've been lied to. Next, figure out why they wanted you to think "AI" was actually thinking in a way qualitatively similar to humans. Was it just for money? Was it to scare you and make you easier to control?

Love your icon. Ahh Portal. What an experience.
Have you seen this? Volume up.
 
I hope this is part of Apple's diplomatic precursor to justify dropping ClosedAI and switching to Anthropic. It's still maddening and so off-brand that Apple is about to force its users into ClosedAI. Choosing GPT over Claude might be up there with the biggest brand and strategic errors since firing Jobs and opening their hardware to clones.
 