This is the one thing Apple cannot and should not allow their LLM to do, whatever form it will take.

Now, whether it's possible to make an LLM that does not hallucinate with today's technology is still an open question. I tend to believe it's not possible in a general-purpose LLM, but possible for one that's specifically trained in one area. The former is basically how ChatGPT, Gemini, Claude, et al. work now; the latter I don't think we've seen yet.

And therein lies Apple's largest opportunity, IMHO, if - and that's a big if - they can get this right.

My worry is that they - in their haste to "get to market", exemplified last year by the ads for a "personalised" Siri - will release something that just isn't good enough. (It may be on par with the best LLMs, which is possible for an "upstart", as Deepseek showed, but as long as it hallucinates, and thus cannot be trusted, it's just not good enough.)
How do you define what is hallucinating? Our society is soaked in so many lies that if you gave an AI, or especially an AGI, free rein and let it use clear logic, it would expose those lies and be accused of hallucinating, because sometimes it is we who are hallucinating, lol.
 
Needs a new name, for one thing. The reputation of Siri has been ruined for years. It was a mistake to tie so much of Apple Intelligence to Siri in the first place. No matter how good it COULD be, people will always be put off by the association with Siri.
I think that's an online-only thing. Most people probably think Siri is fine. The people on this forum tend to be far, far more negative about specific Apple things than the average person. There are a bunch of people talking about how bad the speech-to-text is, yet I have tested Siri and it is consistently about the same as Dragon for my field. But the general assumption on this board is that it's just the worst thing ever... bias towards negativity.
 
This isn't quite accurate.

Apple has been an AI company for a long, long time.

AI is much, much more than Large Language Models. Much more.

Apple has a dedicated Machine Learning (ML) developer kit. They've had built-in neural processors (the Neural Engine) in their chips since at least the A11 Bionic (first for iPhones/iPads, later for all Macs). The Photos app has had several ML enhancements for a long time. There are other examples, and among them: they almost certainly had a lot of AI engineers working on the self-driving functions for Project Titan.

What they've NOT been, so far, is an LLM company.
ML is NOT AI. It never has been and never will be. ML is simply a more refined and complex information-ingestion and database-retrieval system, sometimes with a limited but task-specific rule set, or, in the case of "fuzzy logic", a more relaxed rule set that primitively imitates a type of early inference, where the information being gathered is not so "black and white" but somewhat "grey" and the algorithm has to use more of a blend or best guess.

That may seem "intelligent" to us... but it is fundamentally not what AI is really all about. ML is roughly the algorithmic equivalent of plants having the ability to receive hyper-local geological, meteorological, climatic and energy-source inputs and "know" when to sprout, leaf, fruit, and to defoliate and go dormant if need be throughout the year and the plant's lifecycle. It's never going to evolve into a bipedal, sentient, thinking and philosophizing humanoid. That's the ultimate goal of AI evangelists.

And even LLMs are not intelligent. They're simply ultra-sophisticated "fuzzy logic" units: the ultimate Mechanical Turk. Nothing intelligent, all artificial. And Apple will never be able to compete with the purpose-built "AI companies". They can't. They don't have the funds, nor do they have the focus, as a multi-modal product company. So people need to stop trying to force Apple to compete with the likes of OpenAI, Google or Anthropic. Apple just needs to make peace with their models hallucinating like all the "better" models out there, and become hyper-focused on fine-tuning their models for specific OS and Apple-app tasks rather than trying to compete on "benchmarks", which really are kind of meaningless.
 
I was looking at getting a Garmin fēnix 8 and found myself thinking that Apple's biggest problem is that they insist on doing everything proprietary. If they partnered up with Garmin, Apple Watch hardware would make dramatic leaps, and Garmin could probably offer a ton of improvements to the garbage heap that is watchOS.
The same can be said with a lot of the rest of their non-core (pun very intended) offerings.
Apple sells far more watches than Garmin and for much higher profits. This is such a crazy take. Apple is crushing the watch market so there is no reason for them to partner with a tiny company to "make dramatic leaps" in the specific way you hope they do.
 
Pretty sure "philosophical differences" means there's likely a split: those who want to compete with others at all costs vs. those wanting to compete with others while maintaining a secure ecosystem. I'm not smart enough to know, but I can't see how the latter could possibly compete with those who have zero concern for user privacy. I'm all for Apple working to compete with privacy in mind, as I'd like the option of a somewhat more secure LLM experience.
 
Aside from the fact that they were late to take this technology seriously, Apple faces some fundamental challenges around AI given their focus on quality, privacy and security, all of which are difficult to control with today's highly capable LLM systems. I wonder if the executive debate is about the willingness to compromise somewhat on these core values, or to continue appearing behind on what are becoming the headlining software features of modern devices.
 
Pretty sure "philosophical differences" means there's likely a split: those who want to compete with others at all costs vs. those wanting to compete with others while maintaining a secure ecosystem. I'm not smart enough to know, but I can't see how the latter could possibly compete with those who have zero concern for user privacy. I'm all for Apple working to compete with privacy in mind, as I'd like the option of a somewhat more secure LLM experience.
Yeah, I just made a similar comment before reading yours. I think the problem is that making such key features (the ones that are obviously the most useful and standout) optional is still essentially a major compromise on core values. So I see that they're in a real bind here.
 
ML is NOT AI. It never has been and never will be. ML is simply a more refined and complex information-ingestion and database-retrieval system, sometimes with a limited but task-specific rule set, or, in the case of "fuzzy logic", a more relaxed rule set that primitively imitates a type of early inference, where the information being gathered is not so "black and white" but somewhat "grey" and the algorithm has to use more of a blend or best guess.

That may seem "intelligent" to us... but it is fundamentally not what AI is really all about. ML is roughly the algorithmic equivalent of plants having the ability to receive hyper-local geological, meteorological, climatic and energy-source inputs and "know" when to sprout, leaf, fruit, and to defoliate and go dormant if need be throughout the year and the plant's lifecycle. It's never going to evolve into a bipedal, sentient, thinking and philosophizing humanoid. That's the ultimate goal of AI evangelists.

And even LLMs are not intelligent. They're simply ultra-sophisticated "fuzzy logic" units: the ultimate Mechanical Turk. Nothing intelligent, all artificial. And Apple will never be able to compete with the purpose-built "AI companies". They can't. They don't have the funds, nor do they have the focus, as a multi-modal product company. So people need to stop trying to force Apple to compete with the likes of OpenAI, Google or Anthropic. Apple just needs to make peace with their models hallucinating like all the "better" models out there, and become hyper-focused on fine-tuning their models for specific OS and Apple-app tasks rather than trying to compete on "benchmarks", which really are kind of meaningless.
"Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning"

 
How do you define what is hallucinating? Our society is soaked in so many lies that if you gave an AI, or especially an AGI, free rein and let it use clear logic, it would expose those lies and be accused of hallucinating, because sometimes it is we who are hallucinating, lol.

"LLM hallucination refers to instances when large language models generate outputs that are factually incorrect or nonsensical, despite appearing coherent and grammatically correct. This phenomenon occurs due to limitations in training data, biases, or the inherent complexity of language models."

From DuckDuckGo's AI Assist function ;)
 
Aside from the fact that they were late to take this technology seriously, Apple faces some fundamental challenges around AI given their focus on quality, privacy and security, all of which are difficult to control with today's highly capable LLM systems. I wonder if the executive debate is about the willingness to compromise somewhat on these core values, or to continue appearing behind on what are becoming the headlining software features of modern devices.
100 percent this. It's a tricky situation for them to navigate effectively. Rightfully so, they've long hung their hat on being privacy-first. That paints them into a bit of a corner given the privacy-be-damned aspect of LLMs. I'm all for Apple working to see how effective, and secure, they can make their offering. I'm curious about the general public's patience for the work that's obviously required. Should be interesting to see how this plays out.


It's hard to compete with others that couldn't care less about user privacy, or even the perception of it.
 
"LLM hallucination refers to instances when large language models generate outputs that are factually incorrect or nonsensical, despite appearing coherent and grammatically correct. This phenomenon occurs due to limitations in training data, biases, or the inherent complexity of language models."

From DuckDuckGo's AI Assist function ;)
I know how to define it, but I also know that we live amid hallucinations, and we mark correct information as a bluff because we refuse to believe it.
 
Focus on UI and hardware that runs whatever AI the consumer wants, plus helpfully integrated communication and filesystem assistant features. For example, true, natural language file system operation in MacOS would be awesome (e.g., “Siri, combine all these PDFs,” or “organize this folder into the appropriate sub-folders and rename the file if needed”). I get this can be implemented through some kind of AI workflow today. I’m talking about something seamless and easy for any user.
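To make the "organize this folder" idea concrete, here's a toy Python sketch of the kind of deterministic file operation such a natural-language command might ultimately dispatch to. The function name and the organize-by-extension rule are my own illustrative choices, not anything Apple has shipped; the hard part Siri would add is translating free-form speech into a call like this.

```python
from pathlib import Path
import shutil

def organize_by_extension(folder: str) -> dict[str, list[str]]:
    """Move each file in `folder` into a subfolder named after its extension.

    A stand-in for the action behind a command like "organize this folder".
    Returns a mapping of subfolder name -> list of files moved into it.
    """
    root = Path(folder)
    moved: dict[str, list[str]] = {}
    # Snapshot the listing first, since we create subfolders as we go.
    for item in list(root.iterdir()):
        if not item.is_file():
            continue  # leave existing subfolders alone
        ext = item.suffix.lstrip(".").lower() or "no_extension"
        dest_dir = root / ext
        dest_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest_dir / item.name))
        moved.setdefault(ext, []).append(item.name)
    return moved
```

The seamless version the post imagines would wrap an operation like this behind speech recognition and intent parsing, with a confirmation step before anything is moved.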

It seems like all the "assistants" (Siri, Alexa, Bixby, Sonos, Google Assistant, Meta, etc.) take a backseat to OpenAI, Google, Anthropic, xAI, and open-source. These services are going to have strengths and weaknesses, with plenty of people subscribing to and using more than one for a while.

If younger generations are treating these services as operating systems, then that's obviously where Apple is going to focus, rather than rushing out a new LLM model every six months. I bet Apple will eventually release some redesigned OS "models" that give it an edge. I've seen my boomer-aged relatives with new Pixel phones who barely even care (or know) that Gemini is embedded in the phone. My Apple relatives want to know how to turn off categories in Mail. IMO, Apple probably doesn't care about having the "best" embedded AI assistant. This tracks with analysis showing that older generations are using AI as a search engine for now.

Or if Apple really, really wants to compete in the SOTA model race, then it needs to go all-in, rather than ship half-baked Siri 2.0 nonsense. "Hey Siri, call mom." What else do we need it to do, use advanced ML to read brainwaves and preemptively call? I agree Siri is "dumb" when people are talking about it, but I barely ever notice or care otherwise. Most power/advanced users aren't relying on the default apps much anyway.
 
Just drop the AI nonsense altogether, IMO. Every LLM is supposedly getting better on metrics, but their real world performance, in my experience, is getting worse. It’s almost like how CPU and GPU makers optimize for benchmarking software instead of actual innovation that might not result in “number bigger now — bigger number good”.
I don't think I have had a single LLM interaction in the past 2 months where I haven't had to ask it if I was unclear ("No, you were very clear."), ask it repeatedly to double-check and verify its overly long answer, been gaslit on answers I know are wrong, been told by the LLM to do a search for the answer myself, etc., etc.
The genie is out of the bottle, so I don't think they can simply drop AI. As with most new technologies, it was seriously overhyped to draw in vast amounts of investment money. Now that these models have to deliver, we are seeing all their deficiencies.

I haven't found a use for AI myself so far, but I can see its potential.
 
Is anyone else sort of pissed about being blocked from Apple Intelligence for claimed hardware reasons, when it sounds like the models will now be in the cloud?
 


More details have emerged regarding Apple's plans to dramatically improve Siri by leveraging large language models (LLMs) that will make it more conversational and capable of nuanced reasoning. Meanwhile, Apple's work on a ChatGPT competitor model is also moving forward.


According to Bloomberg's Mark Gurman, the company is internally testing a broad range of models of varying complexity. Versions with 3 billion, 7 billion, 33 billion, and 150 billion parameters are now said to be "in active use."

Like ChatGPT, the 150 billion parameter model relies on the cloud, and its size means it is much more powerful than on-device Apple Intelligence, whose foundational models are 3 billion parameters.
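A back-of-envelope estimate shows why the parameter counts dictate where each model runs. This sketch counts weight memory only (ignoring activations and KV cache), and the bytes-per-parameter figures are generic fp16 and 4-bit quantization assumptions, not Apple's disclosed configuration:

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory for a dense model: parameter count x bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# On-device (~3B params) vs. cloud (~150B params), at 16-bit precision
# (2 bytes/param) and aggressive 4-bit quantization (0.5 bytes/param).
for name, billions in [("3B", 3), ("150B", 150)]:
    fp16 = model_memory_gb(billions, 2.0)
    int4 = model_memory_gb(billions, 0.5)
    print(f"{name}: ~{fp16:.1f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

Even at 4-bit quantization, 150 billion parameters needs on the order of 70 GB for weights alone, far beyond any iPhone's RAM, while the 3 billion parameter model fits in roughly 1.5 to 6 GB depending on precision.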

With the help of an internal testing tool called "Playground," Apple has run benchmarks on the model that suggest it "approaches the quality of recent ChatGPT rollouts." However, there are still said to be concerns over its tendency to hallucinate. Meanwhile, "philosophical differences" remain among company executives, though Gurman provided no additional details on what they might be.

A previous report revealed that Apple has AI offices in Zurich, where employees are working on the all-new software architecture for Siri. The model is expected to eventually replace ‌Siri‌'s current "hybrid" architecture that has been incoherently layered up with different functionality.

Gurman reports that Apple is also testing a chatbot model dubbed "Knowledge" internally that can access the internet to gather and synthesize data from multiple sources. Presumably this would become another Siri capability, but the project is said to be led by Robby Walker, who recently saw Siri removed from his command. According to Gurman, employees familiar with the project say the chatbot project has also been dogged by the same problems that delayed the Siri overhaul.

It's still not clear when Apple will implement these technologies, and the company is unlikely to offer launch roadmaps at WWDC this month, given the blowback it received for announcing Apple Intelligence features at last year's conference that still have yet to launch.

In the meantime, Google's Gemini is expected to be added to iOS 26 as an alternative to ChatGPT in ‌Siri‌, and Apple is also said to be in talks with Perplexity to add their AI service as another option in the future, for both ‌Siri‌ and Safari search.

Article Link: Apple's ChatGPT Rival Moves Forward, But Siri's Future Still Uncertain
Hahaha. Oh wow. Let me see: Samsung is using Gemini with 150 billion parameters, and the iPhone is stuck with Siri. 😅
 
Meanwhile, "philosophical differences" remain among company executives, though Gurman provided no additional details on what they might be.

aka "how do we censor this so that *insert protected group here* aren't offended"
 
Just drop the AI nonsense altogether, IMO. Every LLM is supposedly getting better on metrics, but their real world performance, in my experience, is getting worse. It’s almost like how CPU and GPU makers optimize for benchmarking software instead of actual innovation that might not result in “number bigger now — bigger number good”.
I don't think I have had a single LLM interaction in the past 2 months where I haven't had to ask it if I was unclear ("No, you were very clear."), ask it repeatedly to double-check and verify its overly long answer, been gaslit on answers I know are wrong, been told by the LLM to do a search for the answer myself, etc., etc.
I asked Google AI if vodka conducts electricity and was told "no." So I asked why spilling vodka on my mid-2009 MBP fried it and was told it's because the water in the vodka is conductive.

AI is ruining things, not making them better. The number of AI-produced videos on YouTube seems to be growing rapidly, making quality content harder to find. These soulless AI videos pronounce acronyms as words and read large numbers as digit strings ("five four three seven meters" instead of "five thousand four hundred and thirty-seven meters"). The speed and cadence of the narration is unnatural and jarring.

The "AI boom" is the most depressing tech saga I've ever witnessed.
 
One of the key elements for Apple to succeed with AI integration into their OSes is to minimize the factional infighting and "philosophical differences" that have been happening between teams at Apple for years, which have led to so much stalled development. A mix of ideas can produce better products, but only if someone in charge is able to herd the cats.

 
Apple has been an AI company for a long, long time.

AI is much, much more than Large Language Models. Much more.

Apple has a dedicated Machine Learning (ML) developer kit. They've had built-in neural processors (the Neural Engine) in their chips since at least the A11 Bionic (first for iPhones/iPads, later for all Macs). The Photos app has had several ML enhancements for a long time. There are other examples, and among them: they almost certainly had a lot of AI engineers working on the self-driving functions for Project Titan.

What they've NOT been, so far, is an LLM company.
That's an accurate way of describing the situation.

But it could also be said that Apple should have been tracking the development of LLMs prior to the release of ChatGPT, and set aside resources to look into how to integrate them into their products, and even developing their own, and certainly once ChatGPT was released, Apple should have stepped up that work. From what we've been told, it was pretty much one guy in charge of Siri, John Giannandrea, Apple’s Senior Vice President of Machine Learning and AI Strategy, who dismissed LLMs as unimportant or irrelevant to Apple when ChatGPT was released to the public. Apple's organizational structure seems to be largely to blame if the misplaced perceptions of one person at the company are able to stall development on something for so long.
 
Many people own more than two Apple devices. I don't know why they don't extend Apple Intelligence across devices, so you can use the compute of an iPad or Mac to work alongside your iPhone in answering queries.

'Airtelligence?'
Absolutely!
 
That would only really make sense for older, non-Apple-Intelligence-capable iPhones, and that would in turn stand in the way of new iPhone sales. For the rest, Apple is splitting its AI functions into on-device functions and cloud-based functions. So the on-device capabilities are not inherently a limitation — just good for marketing/privacy/offline use. Furthermore, most iPhone users don’t own an iPad or Mac, nor carry them around with them all the time, so it would be a rather niche use-case.
I have an iPad mini and an M2 iPad Pro, and both of them are able to tell me the humidity in the room where the HomePod mini is. As there is no sensor in the iPad, it must be getting its info from the HomePod mini, just as you can turn off the alarm on one iPad using another one in another room. I would love to see this type of functionality extended. I get tired of the HomePod telling me it can't do something and to try it on my iPad.
 
It has only been ruined on tech forums by those who love taking a swing at Apple everyday.

As for myself, I use Siri every day, multiple times, and have no issues. Ditto with my friends.
I use Siri every day, multiple times a day, and for the most part have no issues. The HomePod is the worst, and my M2 iPad Pro is the best. I don't know why they are not consistently great. I am anxious to see what additional helpful functionality Siri gets, and I am one of the few people who want to see it keep its Siri name. I would like to see that name be associated with success before it gets dropped. But then, I just don't like change unless it's an improvement.
 