I remember when Apple Maps was the most disappointing embarrassment in software… this Siri/AI business dwarfs that. And Forstall got canned for that mess. Who’s taking the fall for this?
 
There are millions of people out there with law degrees who have never practiced law. His MBA was a prerequisite for getting his job.

You’re right, though, that the buck stops with him, and he’s likewise responsible for all of Apple’s recent debacles and failures.

Failures like the Apple Lisa, iPod Socks, and the iPod Hi-Fi?

Regarding Apple Intelligence... It's hardly a failure as it hasn't been brought to market yet.

You may not realize that product introductions occasionally suffer from delays. All tech companies experience that.

That's certainly understandable considering Apple's *privacy focused* entry into AI. I would much rather have a delay on an Apple AI product than have it introduced prematurely with major issues - especially issues that are privacy related.

What's really funny is that the people who are unhappy about Apple's AI delay are the same ones who vociferously claim, in multiple threads, that they don't want Apple AI on their Apple products and will quickly turn it off.

Why be unhappy with something delivered late that you don't want to begin with? And will immediately turn off? That makes no sense.
 
Compare that to Carl Pei's recent comments on AI when talking about the new Nothing 3a phones. He was asked why AI isn't mentioned in the marketing (the way other companies do), and he said AI should only be used to bring meaningful benefits to the user, not to bandy the phrase around as a marketing buzzword. The 3a does use AI, but they've found a genuinely useful feature for it, "Essential Space", which isn't specifically marketed as AI even though it is. Apple in the past would have been more like this. The whole Apple Intelligence fracas is so un-Apple and so disappointing.
I haven’t really followed Pei’s career, beyond picking up a pair of his original Nothing Ear (1) earbuds, but he comes across well in quite a few interviews I’ve read/seen.

And this comment is very much along those lines. You’d hope he will make the shortlist when Cook is eventually forced into retirement.
 
I suppose the truth is that personal context is easy to do if you give up privacy. However, Apple's stance on privacy means that they need to do it on device or via the private cloud, which is a lot harder to do.

Which goes back to my earlier point: why announce before it's ready? Apple has never been first to market. Heck, they didn't even announce the iPhone until they were basically ready to ship. Why are they doing this crap early now? What's the rush to AI?
 
"This feature is in development and will be available with a future software update."

Apple should have done this from the very start, since the features weren't going to be available on day one of the iOS 18 release.
True. But instead, it was HELLO APPLE INTELLIGENCE everywhere you looked...
 
I suppose the truth is that personal context is easy to do if you give up privacy. However, Apple's stance on privacy means that they need to do it on device or via the private cloud, which is a lot harder to do.
Somehow I don't think privacy is the entire problem. For one, the personal context AI needs to scan all incoming messages/emails/communications for interesting tidbits and classify them appropriately.
So, if your dad is flying in next week, the AI knows it's a calendar event. And if your kid asks you to buy avocados on your way back from work, it's classified as a shopping list item.
You'd think a larger model running in the cloud might do a better job at classifying this stuff.

But as far as I understand, LLMs construct their answers by working out how strongly different bits of information are associated in the material they were trained on.
Imagine you ask Gemini the year Steve Jobs died. Gemini has been trained on a massive amount of data in which the association of Jobs and 2011 recurs many times, so it's very likely to give you the correct answer.
Now imagine you ask the LLM what you need to get at the grocery store on your way back home.
Your kids asked for avocados.
Your wife asked for toothpaste, but she also asked if you want to get pizza on Saturday night (today is Wednesday).
There's a party at the office and someone asked who's getting the cake.
Here there's no strong association driving the LLM's answer. It has to pick avocados and toothpaste, ignore the pizza, and maybe remind you that there's an open question about the cake.

One problem is how to let an LLM in the cloud securely scan all your info. But I think the other problem is that personal-context answers are simply harder for LLMs to construct correctly.
Now, if only someone with real experience with LLMs would chime in...
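To make the weak-association point concrete, here's a toy sketch. All messages, categories, and keyword lists are invented for illustration, and a real assistant would use an LLM rather than keyword matching; the point is just how awkwardly mixed-intent personal messages fit a single label:

```python
# Toy sketch of the "personal context" classification problem described
# above. Everything here is made up for illustration; a real system
# would use an LLM, not keyword matching.

MESSAGES = [
    ("kid",    "can you buy avocados on your way back?"),
    ("wife",   "grab toothpaste! also, pizza on saturday night?"),
    ("office", "party friday -- who's getting the cake?"),
]

# Naive keyword routing: works only when the association is strong.
SHOPPING_HINTS = {"buy", "grab", "get"}

def classify(text: str) -> str:
    # Split "?" off the last word so it can be matched on its own.
    words = set(text.lower().replace("?", " ?").split())
    if words & SHOPPING_HINTS:
        return "shopping"          # avocados, toothpaste... but also pizza?
    if "?" in text:
        return "open question"     # needs a reminder, not a list item
    return "ignore"

for sender, text in MESSAGES:
    print(f"{sender:6} -> {classify(text)}: {text}")
```

Note how the wife's message collapses to a single "shopping" label, so the pizza question is silently dropped: exactly the mixed-intent case that a strong-association lookup handles poorly.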
 
Not really. Life will go on. Apple will get the features into a release they feel works the way they want it to.

Two posters. One says it’s a disaster, the other says it’s not. People who comment on Apple each have their own views.
My view is that I spent $1,200 on a phone that is supposed to have intelligence, and now that intelligence isn’t going to arrive until the iPhone 17 comes out.
It’s amazing that they launched ad campaigns, etc., around this when they had no clue when it would be done.
Fraud.
 
My guess is that the CEOs decided the iPhone 16 had to have AI when it was already too late, leaving the software team without enough time to build properly functioning models. "Built for Apple Intelligence" is nothing but a lie.
The "CEOs" - there's only one CEO, Tim Cook.
 
Somehow I don't think privacy is the entire problem. For one, the personal context AI needs to scan all incoming messages/emails/communications for interesting tidbits and classify them appropriately. […] Now, if someone with real experience on LLMs chimed in...
Disclaimer: I'm not an LLM/AI expert. In fact, I'd consider myself a bit of an LLM/AI novice. But I previously worked on AI (back when we called it ML) and have picked up some knowledge...

Basically, you're correct. But it's possible to give a higher weight to user-submitted data and rank it above generic training data. And if your wife asks for pizza on Saturdays, and the LLM can read that from your text-message database, it can infer that on Saturdays you'll probably want to get a pizza.

That's where the privacy nightmare starts:

Because it's not only me who can ask it about dinner on Saturday nights. A malicious actor (or an app with text-message privileges) can ask it what I like to eat on which nights, and the LLM will happily tell them that my wife likes pizza on Saturdays with 89% probability. Pizza Hut could then use that data to send her pizza ads on Friday evening or Saturday afternoon, making sure that on Saturday night we order from Pizza Hut.

The worst part is that algorithms would be mining my phone for my wife's data and targeting her based on data I own. So every friend you talk to is a potential vector for a privacy leak. It's not just your own data security you need to worry about, but your friends' too.

With everything having AI "cores" nowadays, it's not computationally expensive to run the algorithm overnight. If the app profiling you doesn't need 100% accuracy, but is OK selling you ads based on 85% accuracy, or even 75%, it can use a much smaller model, meaning you might not even notice that your phone's apps are spying on you.
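To show how cheap that kind of profiling can be, here's a toy sketch. The message log and the resulting numbers are entirely invented; the point is that a habit like "pizza on Saturdays" falls out of simple frequency counts, no large model required:

```python
# Toy sketch of the profiling risk described above: "learning" a habit
# purely from message frequency counts. All data is invented.

# (weekday, message) pairs, as an app with message access might see them.
LOG = [
    ("sat", "pizza tonight?"), ("sat", "usual pizza order?"),
    ("sat", "pizza again :)"), ("sat", "let's cook instead"),
    ("wed", "pasta ok?"),      ("wed", "pizza maybe?"),
]

def habit_probability(log, day: str, word: str) -> float:
    """Fraction of messages on `day` that mention `word`."""
    day_msgs = [m for d, m in log if d == day]
    hits = sum(word in m for m in day_msgs)
    return hits / len(day_msgs) if day_msgs else 0.0

p = habit_probability(LOG, "sat", "pizza")
print(f"P(pizza | saturday) ~ {p:.0%}")  # 75% on this invented log
```

A few counters over a message database is all it takes, which is why an advertiser content with rough accuracy doesn't need your phone's big on-device model at all.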
 
My view is that I spent $1,200 on a phone that is supposed to have intelligence, and now that intelligence isn’t going to arrive until the iPhone 17 comes out.
It’s amazing that they launched ad campaigns, etc., around this when they had no clue when it would be done.
Fraud.
I spent the same $1,200, and mine has useful, basic stuff that serves me well.
 
Embarrassing. It feels like they’ve been announcing things that don’t yet work and have the audacity to use those features as the main selling points for new products.
 
Expecting 100% perfection, 100% of the time, especially when humans are involved, is not reasonable. Stuff happens.

There are laws in place to protect people from false advertising.

It’s not about things going wrong. They literally sold a device based on false information and lied to consumers.

Now they are removing all evidence from their site lol.
 
There are laws in place to protect people from false advertising.

It’s not about things going wrong. They literally sold a device based on false information and lied to consumers.

Now they are removing all evidence from their site lol.

And absolutely nothing will happen, as Apple stated its AI will be rolled out gradually once the remote server infrastructure is in place. In the meantime, some things are handled on the phone, and that works great for me.

And... all the people here state they don't want AI on their phones and will be turning it off when it comes. So there's nothing to get upset about if it comes late. Nobody here wants it.
 
If this becomes vaporware, many, many customers are going to be very angry. And they will show up with pitchforks at Apple Stores.
 
The summarized emails and messages, Genmoji, and Image Playground are not basic stuff, and they can't be done except on the iPhone 15 Pro Max and newer.
All that stuff is a mess in my opinion, especially the summaries. Seems like every reviewer has turned the summaries off because they are so dumb and misleading.
Cheers.
 
It’s not impossible to promise something to someone who doesn’t really know the process of AI and all it encompasses.

Tim Cook even says he delegates a lot of responsibility to his people and deals only with the broad strokes. So he has to trust someone almost blindly in this case, as he is no AI expert and wouldn’t understand what technical processes are involved.

That said, it was damn concerning to run advertising for features that weren’t yet fully baked. This is AirPower all over again.
 
I have to wonder if 8GB is simply not enough to do this on device.
At this year's WWDC, Craig will introduce Apple Intelligence 2, which requires 16GB of RAM and is only available for devices with at least 512GB of storage.
 