Pure ********. I've tried all the problems in ChatGPT-4 and it had 100% success.
That's fine and all, but it sounds like sour grapes coming from Apple, who still has NOTHING. In terms of what they offer users, they are years behind even freaking Grammarly.
That’s the beauty of these fragile tools. We can input the same thing and get two wildly different responses. It’s almost like they’re not reliable 🙄
Missing the /s tag, I'm 100% sure.
In my opinion AI is grossly overrated at the moment, but I am very concerned about the implications for society and economics should it eventually be perfected. I do not think people are ready for the changes that would have to be made in those two areas alone for the world to move forward.
Nonsense. I asked the same question and got a full and correct answer. Siri quoted Wikipedia. Maybe you should try speaking more clearly. And have you trained Siri to recognize your voice yet? Baloney on your post.
I recently checked out this other arXiv paper that shows the exact opposite: https://arxiv.org/abs/2407.01687
Seems there is still some work to do before a consensus emerges. I wouldn't take any single paper as the absolute truth.
Maybe you should stop attacking people for calling out how awful Siri is.
Many of the people who work in AI actually think this is how our minds work. They think we are LLMs, just a little more refined.
Since when is anyone listening to Apple when it comes to AI? They have no credibility in this field whatsoever.
Why has no one else reported this? It took the “newcomer” Apple to figure it out and to tell the truth?
Well, thanks for that, Apple. Not that others aren't well aware of these limitations, and have been for years, but your help is much appreciated.
I suspect these problems will become less and less relevant as the models improve. Also, nobody really understands what goes on in our brains, so it could well be that we are also just very good pattern finders and that’s all our ‘reasoning’ is.
The current "AI" systems' biggest success so far is fooling people that they are AI and should be invested in.
We should invest in this because language models are extremely useful, for example in finding patterns. However, no language model will say that it is intelligent; each one explains what it is. The AI fad was created by journalists and the media. After all, it is clickable.
If this surprises you, you've been lied to. Next, figure out why they wanted you to think "AI" was actually thinking in a way qualitatively similar to humans. Was it just for money? Was it to scare you and make you easier to control?
Yeah, like this is news to anyone. Guessing the research paper is just an excuse for taking so long to match competitors…
It was news to me! If this is true, then it really devalues the current state of AI.