Without LLMs, every single use case has to be separately integrated by an Apple developer. It’s not much more than sentence pattern matching currently.

Even with LLMs, it won’t be easy to make all sensible use cases work. I would expect many things to still not work when it launches in 2026, and to be added bit by bit over time.
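To make the "sentence pattern matching" point concrete, here's a hypothetical Python sketch (the patterns, intent names, and utterances are all made up for illustration) of how a pre-LLM assistant handles requests: every use case is a hand-written rule a developer must add and maintain separately, and anything outside the rules simply fails.

```python
import re

# Hypothetical sketch: each supported use case is a separate hand-wired pattern.
INTENT_PATTERNS = [
    (re.compile(r"set (?:a )?timer for (\d+) minutes?", re.I), "set_timer"),
    (re.compile(r"remind me to (.+)", re.I), "add_reminder"),
    (re.compile(r"play (.+) on apple music", re.I), "play_music"),
]

def match_intent(utterance: str):
    """Return (intent, captured args), or None if no developer anticipated it."""
    for pattern, intent in INTENT_PATTERNS:
        m = pattern.search(utterance)
        if m:
            return intent, m.groups()
    return None  # unanticipated requests fall through with no graceful handling

print(match_intent("Set a timer for 10 minutes"))   # ('set_timer', ('10',))
print(match_intent("Help me brainstorm a topic"))   # None
```

An open-ended request like "help me brainstorm" matches nothing, which is exactly why each new capability needs its own integration work.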
Whatever it takes, I guess? If they’re marketing an AI that can do a lot, then seemingly simple requests like that should be possible. Otherwise, it sucks. I’m being harsh, but really, it can either do those “seemingly” simple things or people lose confidence in it.
 
I was excited to see Apple Intelligence being added to macOS 15.1, but I found it pretty useless without the 15.2 GPT integration.

I thought Siri was going to be able to provide me with some ideas, or help develop a topic, but it was only able to do some simple automations (such as adding reminders and whatnot).

I think the GPT integration will make Siri at least a little useful.

I don't want to toot Microsoft's horn (a company I'm very critical of), but I found the free version of Copilot that comes with Microsoft Edge much more useful, especially for teaching me how to do things or developing creative ideas, so much so that I installed Edge on my Mac just to use Copilot.

The thing is that Apple has such great potential to make a clean OS with some useful AI built in.

Windows 11 is super bloated, and the operating system feels like a big commercial for Microsoft apps.
I also find the implementation of Copilot confusing, as there's a free tier, a paid tier, a special tier for adding some of Copilot's functionality to Office, and another tier with Copilot Studio.

Apple's implementation of AI within the system seems much cleaner and more elegant; all they'd need is a competitive in-house LLM.

I'd even pay for a premium version of the tool, if a discount were made accessible via Apple One.
 
Apple has to start working on making iPhone 16 Pro Max last longer than 2 hours.
My iPhone 13 is more than 3 years old, but I've noticed quite the reduction in battery life since I installed iOS 18.

Not long ago, I could leave my phone on the night stand with 30% of battery and find it with 20% the next morning.

These days I find the phone almost dead each morning, with the same average battery starting point.

And I have been very gentle with my charging: 45% of the time with a slow 7.5W Qi charger, 45% of the time with a super slow 5W cable, and only the remaining 10% of the time with a 20W cable.

Going out for any reasonable amount of time, with no power bank, has become impossible.
 
A bit behind the ball even for Apple, but if Apple Intelligence can't make Siri smarter, what will?
I'm quite sure the new Siri will be quite good, but the 2026 release date clearly demonstrates what happened.

Apple was no doubt caught off guard by the absolute insanity GPT-3 was at launch.
Microsoft (and other companies) decided to build around OpenAI, while Apple clearly decided to use the ChatGPT integration as a temporary measure.

And that's a smart idea, as having no real AI in their OS by late 2025 would have been hell for the company.
 
At this point, I'll be happy with any improvements. I mean, Apple needs to get a lot more serious about this since Siri often can't even find music in my Apple Music library.
I grow ever so frustrated seeing Apple expanding into questionable new ventures (such as TVs and smart home hubs).

I'd invest all the budget allocated for these things into updating Siri; otherwise the company will be destined to forever play catch-up with OpenAI.
 
Here's Siri today, on my MacBook Pro M1 running macOS 15.2 beta 4, while Chrome was the frontmost app, Apple Intelligence was theoretically active, and Siri was linked to my paid ChatGPT account:

[Attachment: Screenshot 2024-11-21 at 4.44.10 PM]


I mean, as if it had to repeat it twice.
 
Even hardcore fans have to admit that the 14-year pioneering role with Siri was a bit of a waste :confused:
Yes and no.
Yes, Apple was the first company to create (purchase) a modern voice assistant, then spent years dropping the ball.
On the other hand, I don’t think any other company has really been doing much impressive.
And I don’t necessarily count ChatGPT in that, that’s a totally different animal.
But I more meant the things Siri has actually been advertised to do. The basic household tasks like timers and alarms, home control, Music.
When it comes to all of these tasks, from my experience, Siri, Alexa, and the various incarnations of Google Assistant/Google Now are all still and always have been hit or miss.
Alexa has a lot of advantages over the other two, but even it has the trade-off of being riddled with advertisements after every couple queries.
But none of them are great, they all are just passable.
 
My iPhone 13 is more than 3 years old, but I've noticed quite the reduction in battery life since I installed iOS 18.

Not long ago, I could leave my phone on the night stand with 30% of battery and find it with 20% the next morning.

These days I find the phone almost dead each morning, with the same average battery starting point.

And I have been very gentle with my charging: 45% of the time with a slow 7.5W Qi charger, 45% of the time with a super slow 5W cable, and only the remaining 10% of the time with a 20W cable.

Going out for any reasonable amount of time, with no power bank, has become impossible.
The battery life after upgrading to iOS 18.1 is fracking terrible; it's a joke on the 13 (16) Pro Max. It isn't the phone, it's an iOS issue.
 
The iPhones need 16–24 GB of RAM first, then they can worry about AI. Plus, what happens when Siri gets pissed off because she doesn’t have enough RAM?
Even more: long term, this idea of on-device processing is pretty dumb. Is the network latency really that high that we need local LLMs?
 
Whatever it takes, I guess? If they’re marketing an AI that can do a lot, then seemingly simple requests like that should be possible. Otherwise, it sucks. I’m being harsh, but really, it can either do those “seemingly” simple things or people lose confidence in it.
Generative AI (like an LLM) is just a text/image/sound generation and transformation machine. Connecting it to actual functionality (such as controlling all kinds of iOS and app functions) in a robust way is not straightforward at all. It doesn’t have any concept of agency, or of time passing. The same goes for maintaining state/memory for it, because it doesn’t remember anything by itself: everything it might need to know from prior contexts has to be re-fed to it internally for every single interaction. This is why most present-day AI functionality is oriented around generating or transforming some text or media, or around search.

There’s a lot of work to be done to make it perform functions well that would seem straightforward to a human.
 
We are also assuming that the competition will still be around two years from today. A lot can happen in that time. I recall Microsoft made quite a splash with their flashy AI announcements and investment in OpenAI, and today we see more problems crop up with the company and controversy surrounding Copilot.

As the saying goes - he who laughs last, laughs best.
 
Even more: long term, this idea of on-device processing is pretty dumb. Is the network latency really that high that we need local LLMs?
On-device processing will become even more desirable in the long term, especially with increases in the amount of data crunched and the number of times we use AI in a day. There are several reasons, among them being:

• Network latency can be minimal in most uses, but there are and will continue to be situations where it's worse, including spotty or no Internet connection. And even with a good connection, responses when we type or speak to an AI can still sometimes be annoyingly slow; on-device processing is supposed to reduce this. You wouldn't want extra, annoying lag while using Face ID, language translation, augmented reality, etc.

• Many people are concerned about privacy issues in which personal data they reveal about themselves is sent to the cloud while "conversing" with an AI. On-device processing can minimize this issue.
 