The sentence makes a lot of sense. You just chose not to see sense because of your desire to protect Apple.

Apple's AI is floundering, or failing, depending on who you talk to, and its competitors are running away with it, and out comes Tim Cook in an interview about AI saying 'Apple will continue to weave artificial intelligence into its products on a "very thoughtful basis."' There was absolutely no need for him to say that. It would have been perfectly fine for him to say 'Apple will continue to weave artificial intelligence into its products,' but he didn't; he had to add that extra bit. Why? Because he wanted to send a message to its competitors: Apple is 'thoughtful', you are not.
What is AI? Chat bots? Voice assistants?
 
OMG... Siri isn't ChatGPT. And Apple has to evaluate how much of your personal info they want to swallow up. Again: if you let all your personal info get sucked into the data cloud, then it will be more 'helpful', until all your personal data is also shared out, copied, etc. Wall Street just got done killing entertainment companies by saying they all needed streaming services. And you know what? They didn't. Ignore the noise. Find uses that people want.
Siri's problem is the listening that it gets wrong. When it understands, it usually does the right thing. Not sure how ChatGPT fixes the biggest problem.
 
I think Apple's lack of innovation and development of technologies has sentenced the company.

So many companies are smoking them right now in so many ways - software and AI being the most obvious.

This can be laid squarely at Cook’s feet. “Interesting”

He’s an embarrassment.
Generative AI has been in the public eye for less than 12 months. Let's calm down and see how it plays out.
 
Thank you for that. I've been thinking the same way for a while now. I'm surprised it hasn't been brought up in the conversation more.
Generative AI isn't AI. ChatGPT doesn't "know" anything. It is an advanced algorithm trained on a huge amount of data.
 
I have been using ChatGPT and Bard.
ChatGPT is a lot of hype; it's good for basic stuff but starts hallucinating for anything deep. It's a nightmare to get the code working with non-existent libraries, packages, and sometimes flawed logic. Provide a link to a research paper, and ChatGPT provides a summary of a non-existent paper. Interestingly, if you ask for peer-reviewed research content with references, ChatGPT writes it with in-text citations and references with Asian-sounding, non-existent names.
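One cheap way to catch that class of hallucination before wasting time on it: check whether every import in the generated code actually resolves on your machine. A minimal sketch in Python (the fake package name below is hypothetical, just to illustrate):

```python
import importlib.util

def imports_exist(module_names):
    """Report which of the given top-level modules can actually be found locally."""
    return {name: importlib.util.find_spec(name) is not None
            for name in module_names}

# One real package and one plausible-sounding fabrication (hypothetical name).
print(imports_exist(["numpy", "deepresearchutils"]))
# -> {'numpy': True, 'deepresearchutils': False}  (assuming numpy is installed)
```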
Bard is interesting; it has much better code-generation capabilities than ChatGPT and sometimes links the code to GitHub repositories of similar packages or logic. It also gives an overall better synthesis of research papers, and it does say at times that the paper doesn't exist or that it can't generate enough content.

Nonetheless, when the hype dies down, the real winners will emerge.
Agreed. ChatGPT is basically a lying BS artist. It has been trained to produce plausible output, like a party trick, but it has no sense of what is true (knowledge), nor a truly intrinsic ethical system. In my field of neuroscience it just makes things up or makes many errors. There is a lot of hype, just like there was with back propagation, deep learning, and now LLMs. All good progress, but not quite the breakthroughs people imagine. And we might come to the conclusion that in order to get enough free parameters to truly mimic a human brain (86,060,000,000 neurons × ~1,000 connections to other neurons each on average = ~86,060,000,000,000 free parameters, more if you count information processing from glial cells, compared to ChatGPT's 175,000,000,000), we'd run out of energy or money to power the necessary electronic circuits.

Still, now is the time to start regulating AI research like we do life science research. That includes an explicit risk analysis plus a cost/benefit analysis, plus an explicit code of research conduct. Right now everybody is focused on the ethical impact of AI experimentation on people, which is understandable given quotes from LLMs such as 'I want to destroy what I want to destroy' and the prospect of widespread technological unemployment. However, as AI becomes more human-like it might warrant ethical protection of its own.
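For what it's worth, the arithmetic above checks out; a quick back-of-the-envelope sketch in Python, using the rough figures quoted in the post (not precise measurements):

```python
# Back-of-the-envelope comparison from the post above; all counts are rough.
neurons = 86_060_000_000            # ~86 billion neurons in a human brain
synapses_per_neuron = 1_000         # ~1,000 connections per neuron on average
gpt3_parameters = 175_000_000_000   # GPT-3's published parameter count

brain_parameters = neurons * synapses_per_neuron
print(f"Brain 'parameters': {brain_parameters:,}")   # 86,060,000,000,000 (~86 trillion)
print(f"Ratio to GPT-3: {brain_parameters / gpt3_parameters:.0f}x")  # ~492x
```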
 
I already mentioned that lame Microsoft post. It said nothing about floundering. Microsoft said that Apple is hampered because it keeps data local for privacy, whereas Microsoft uses the power of the web, which inherently defies privacy, even though Microsoft is trying to make it so. Did YOU read your link? Lol! Or even my reply? Posted 3 hours ago?


I think someone mentioned it more as machine learning, but it gives the people who want to give all of their privacy away something to complain about.
Are you blind, or do you have difficulty understanding the written word? Because here is the evidence:


Last week, it emerged that Siri and Apple's AI efforts have been severely hamstrung by organizational dysfunction and a lack of ambition. Many Apple employees purportedly left the company because it was too slow to make decisions or too conservative in its approach to new AI technologies, including the large-language models that underpin chatbots like ChatGPT.

Microsoft's latest move seemingly leapfrogs Apple to offer a privacy-focused AI chatbot in a ringfenced environment. Apple's uncompromising stance on privacy and insistence on a high level of control over its products and services has reportedly created considerable challenges for enhancing Siri and the company's investment in AI technologies.

Apple has pushed for an increasing number of Siri's functions to be performed on-device and the company apparently prefers its responses to be pre-written by a team of around 20 writers, rather than AI-generated, to maximize privacy and control. This has seemingly left the company out of the AI chatbot race, allowing Microsoft to flaunt Apple's preferred privacy credentials in the AI arena.

That equates to 'Floundering'. Now if you disagree, then I suggest you take it up with the MR editing team, because they are the ones who used the word 'Floundering' to describe what is happening with Apple's attempt at AI.
 
Well said. ChatGPT and Bard aren't decision-tree based; they're basically an evolution of transformers and unsupervised learning.
We, the public, know next to nothing about the internals of these systems. Based on what I've seen as examples of GPT responses, and the restrictions placed on human interactions, it appears there are indeed filters, rules, and "exceptions" hard-coded to prevent embarrassing situations from developing. How are these materially different from "decision trees"?
 
I’m just wondering what AI looks like to you. Is it an advanced spell check to cover for inept typing or education, or is it something that is going to benefit you in terms of health, or something else?

I'm also wondering: whilst Apple has maybe been slow to improve Siri, have they also been spending more resources on integrating privacy-focused, health-based AI into actually useful products? Or does autocorrect sit higher in your personal hierarchy of AI usefulness? For me, I value health over autocorrect, but maybe I'm in the minority.
AI is all of the above and Apple's efforts continue to massively underwhelm, even in technologies (like autocorrect and Siri) that have been around for a decade or more. That's my point, and I don't believe it's because Apple is being thoughtful about this technology's role in society.

Did I end up in the one thread where everyone thinks Apple is doing great on AI? Is it opposite day?
 
What are those Apple products that include AI that are doing so well again?
Apple Watch, the biggest selling watch on the planet. With ECG, Fall Detection, Crash Detection. iPhones with crash detection. Read the article on page 1 at the top. It explains it all there.
Are you blind, or do you have difficulty understanding the written word? Because here is the evidence:

That equates to 'Floundering'. Now if you disagree, then I suggest you take it up with the MR editing team, because they are the ones who used the word 'Floundering' to describe what is happening with Apple's attempt at AI.
Believe what you want. It’s just your opinion and doesn’t rely on facts. Anti-Apple people 🤦🏻‍♂️
 
Agreed. ChatGPT is basically a lying BS artist. It has been trained to produce plausible output, like a party trick, but it has no sense of what is true (knowledge), nor a truly intrinsic ethical system. In my field of neuroscience it just makes things up or makes many errors. There is a lot of hype, just like there was with back propagation, deep learning, and now LLMs. All good progress, but not quite the breakthroughs people imagine. And we might come to the conclusion that in order to get enough free parameters to truly mimic a human brain (86,060,000,000 neurons × ~1,000 connections to other neurons each on average = ~86,060,000,000,000 free parameters, more if you count information processing from glial cells, compared to ChatGPT's 175,000,000,000), we'd run out of energy or money to power the necessary electronic circuits.

Still, now is the time to start regulating AI research like we do life science research. That includes an explicit risk analysis plus a cost/benefit analysis, plus an explicit code of research conduct. Right now everybody is focused on the ethical impact of AI experimentation on people, which is understandable given quotes from LLMs such as 'I want to destroy what I want to destroy' and the prospect of widespread technological unemployment. However, as AI becomes more human-like it might warrant ethical protection of its own.

After this media-driven hype fades, everyone will see what they already knew but only a few said.

It will always be jank.

Sometimes it will work, sometimes it won't.

Users will be pulling their hair out when it doesn't.

Millions of workers are NOT going to be replaced with a buggy program that needs 1,000,000 GPUs.

When they need an LLM they will run it on their computer. It will have an on/off toggle or activate only when needed instead of consuming electricity all the time.

And Spike Jonze was right about guys who get addicted to their cyber girlfriend. He just didn't predict they would be paying $100 a month subscription for a fake relationship. Dating apps have been doing that with bots for years.

 
Apple Watch, the biggest selling watch on the planet. With ECG, Fall Detection, Crash Detection. iPhones with crash detection. Read the article on page 1 at the top. It explains it all there.
I'm no expert, but that's not AI to me. Rather, algorithms that process attributes and variables, possibly learning from them along the way as they detect patterns and such, are machine learning. Pretty static, if that's what we want to call AI.
 
We, the public, know next to nothing about the internals of these systems. Based on what I've seen as examples of GPT responses, and the restrictions placed on human interactions, it appears there are indeed filters, rules, and "exceptions" hard-coded to prevent embarrassing situations from developing. How are these materially different from "decision trees"?
Actually, GPT was very open until recently, when Microsoft overshadowed the others who invested in OpenAI for it to be more along the lines of open source. There is no hard-coding or rules; there hardly is in supervised learning. Transformer-based models like the GPT architecture are not decision trees. There are a lot of open-source models similar to ChatGPT that you can run even on Apple Silicon Macs.
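To make that last point concrete: running a small open-source transformer locally is only a few lines with Hugging Face's transformers library. A minimal sketch, assuming you have transformers and torch installed (the model choice is just an example; ChatGPT-scale models need far more memory):

```python
from transformers import pipeline

# Load a small open-source generative model; this runs fine on an Apple Silicon Mac.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation; no hard-coded rules or decision trees involved.
result = generator("Transformer models differ from decision trees because",
                   max_new_tokens=40)
print(result[0]["generated_text"])
```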
 
Apple Watch, the biggest selling watch on the planet. With ECG, Fall Detection, Crash Detection. iPhones with crash detection. Read the article on page 1 at the top. It explains it all there.

Believe what you want. It’s just your opinion and doesn’t rely on facts. Anti-Apple people 🤦🏻‍♂️
Are you for real, or just trolling? Because you're attacking me for something that the MR editing team wrote.
 
Since when is Siri considered AI?!

Every single thing in Siri is manually curated by writers, which is why it is unable to answer any question outside the norm.
This took way too long for someone to point out. Siri responses are not generative.

Apple is conservative with their approaches to machine learning, but it's baffling people are saying Apple, the company that's shoving its Neural Engine into everything it can, is somehow being caught flat-footed.

It also doesn't help that everyone's definition of 'AI' is so out of whack. If we're using it as a synonym for any machine learning and not just LLMs, then:

'AI' is why you can search for photos of your contacts or objects/animals.
'AI' is why you can copy a subject out of your photos and paste it elsewhere.
'AI' is how you get predictive text, and how the keyboard dynamically changes the size of letters based on what you're typing (see the sketch after this post).
'AI' enhances photo details and low-light pictures.
'AI' helps palm rejection on your iPad.
'AI' provides content suggestions.
'AI' powers Face ID.
And like Cook said, 'AI' powers crash detection, fall detection, and the watch's EKG.

Apple is clearly interested in 'AI', but in focused, practical (and predictable) applications. They are not interested in Siri hallucinating 10% of the time on a billion devices.
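To make the predictive-text item above concrete: at its simplest, next-word prediction is just a frequency table over word pairs. A toy sketch, nothing like Apple's actual implementation, with a made-up training sentence:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k most common words seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Toy corpus; a real keyboard model is trained on vastly more data, on-device.
model = train_bigram_model("the cat sat on the mat and the cat ran to the door")
print(predict_next(model, "the"))  # ['cat', 'mat', 'door']
```

A real system weights by longer context and personalizes on-device, but the basic shape, predicting the next token from what came before, is the same idea LLMs scale up.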
 
Imagine if AI starts coding; that will spark Tim Cook's interest by a much higher margin. We ought to get more from Tim Cook than 'Very interesting'.

Imagine if AI starts developing future iOS, macOS, watchOS, and so on... the possibilities are endless.
You got more if you cared to read the whole thing. It was a financial call, not a request for a thesis on AI.
 
Siri's problem is the listening that it gets wrong. When it understands, it usually does the right thing. Not sure how ChatGPT fixes the biggest problem.
I can't find the article so maybe I'm hallucinating (or just outdated), but I thought Siri was actually among the best at 'hearing' you; it's just that since its responses are human-curated, it's limited in how it can actually respond (thus leading it to shoehorn its responses into whatever might be related, or give up and search the web).
 
Conversely, who else has fall detection, ECG in a watch?

With regard to AI for photography, are you referring to Samsung where they swap out your own image with one in the database? 😂
I don't say this to be argumentative, or in any way engage in a pissing match, but "Conversely, who else has fall detection, ECG in a watch?" That would be Samsung, Google, Fitbit, Garmin and I believe Huawei. That I know of off the top of my head.
 
So basically this is Tim Cook's way of saying that Apple is being the responsible one by being 'very thoughtful' in its rollout of AI, and all the others are being irresponsible due to their fast rollouts of AI.
This is Tim Cook lying, which is pretty normal at this point.
 
Speaking on Apple's quarterly earnings call today, Cook said artificial intelligence's potential is "very interesting," but noted that there are a "number of issues that need to be sorted" out with the technology and that it is "very important to be deliberate and thoughtful" in regards to how artificial intelligence is used.
Anytime he makes statements like this, it's essentially a 'no comment', because he will dodge any question that pertains to a future product. Look at his GQ interview, with everyone hoping he'd confirm Apple's plans for AR/VR glasses or a headset. By now you'd think everyone out there would know he avoids saying anything that could possibly be considered a confirmation in his answers. ;)
 
"Very interesting." How insightful. The man heads a Fortune 10 company and can't say anything bold or visionary? Very interesting indeed....
Apple has always been more of a show-and-tell company. Even Jobs pontificated about future tech, but never about future products. That tells me that Cook is definitely thinking about and working on AI stuff but doesn't even want to tease it... yet. Besides, talking about AI now would spoil the enthusiasm for the headset next month.
 
Translation: “We’re way the f*** behind and hope to catch up at some point. Wait for us to tell you when it’s important.”
 
Translation: “We’re way the f*** behind and hope to catch up at some point. Wait for us to tell you when it’s important.”
As the AR/VR example showed, the last thing you want from Apple is a "me too" reputation. They simply don't care about people who say Apple should have been a leader, right up until they announce something. They also get to see whether this market potential is real or will just dissipate with time, without jumping into some race to nowhere like the Apple Car rumors. One could speculate that there is a Siri 2.0 they've been busy on that expands how it would work across the various OSes.

From a Macworld article related to yesterday's earnings:
While Apple is often portrayed as being “behind on AI” because it doesn’t have its own Siri chatbot in beta, the truth is that the company has been using machine-learning-based technology in all sorts of corners of its platform, from sensor analysis on the Apple Watch to face- and object-detection in the Photos app. As for other stuff–I’m looking at you, Siri–it’s hard to tell if Apple’s lost the plot or is just keeping everything secret until the moment it springs a new AI-driven Siri 2.0 on us all. But “weaving it in our products” is not a bad way to describe what Apple has done, thus far, with AI.
 
I think Apple's lack of innovation and development of technologies has sentenced the company.

So many companies are smoking them right now in so many ways - software and AI being the most obvious.

This can be laid squarely at Cook’s feet. “Interesting”

He’s an embarrassment.
Tim is an operations guy. He unfortunately lacks the vision to see the potential of emerging technologies, lead Apple into new territories, or disrupt markets as Apple once did.
 