I don't care about the generative super-duper fancy AI features. For now, I just want my dumb Siri to get better at understanding my simple requests.
Stuff like:
"Turn on TV and turn off corner light" (combining 2 actions)
"Close the blinds every day at 2pm this week"
 
If this has been many years in the making, they probably understood the hardware requirements a looooong time ago, no?
Who said they have spent years on this?

GPT-4 and Google Bard came out in March 2023. I am positive Apple did not even start development until after that point, because there was no AI hype before then. Apple does not invent. They eye the market, figure out what the world is doing that they could do better, and then perfect it. That's been their MO for decades, which is why everyone says they are so far behind.

Meanwhile, iPhone 15 development started in Fall 2022 after the iPhone 14 launched. No one was talking about AI in 2022.

There is no conspiracy here. Hardware and software development both take time and the calendar just doesn't line up.
 
Is this thing anywhere close to being better than Gemini ?

Gemini is pretty good, but it does change its mind.
 
It makes perfect sense if you know anything about LLMs. To run models locally you need a lot of RAM; there's no way around it. The 15 Pro is the first iPhone with 8GB, which allows it to run a 3B-parameter model locally. Apple avoids saying it's about RAM, but in reality this is the sole reason all the M-chips and the A17 Pro support it.
You only need about 1.5 GB of RAM to run a 3B LLM with 4-bit quants.
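The arithmetic behind that 1.5 GB figure is simple: weight memory ≈ parameters × bits per weight ÷ 8. A quick sketch (illustrative only; a real runtime needs extra headroom for the KV cache, activations, and the inference framework itself):

```python
def model_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory for an LLM, ignoring runtime overhead."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal GB

# A 3B-parameter model at different precisions:
print(model_weight_gb(3, 16))  # fp16:  6.0 GB
print(model_weight_gb(3, 8))   # int8:  3.0 GB
print(model_weight_gb(3, 4))   # 4-bit: 1.5 GB
```

So the weights alone fit in 1.5 GB at 4-bit, but on a phone the model shares RAM with the OS and foreground apps, which is why the practical bar sits higher than the raw weight size.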
 
I don't care about the generative super-duper fancy AI features. For now, I just want my dumb Siri to get better at understanding my simple requests.
Stuff like:
"Turn on TV and turn off corner light" (combining 2 actions)
"Close the blinds every day at 2pm this week"

That’s why I’d wait to see if all this Apple Intelligence pans out before worrying about why it’s not on older hardware. Even on new hardware it’ll be years before we see all of it.
 
Quite funny how the defenders are basically saying either Apple was late to the party with AI, or late to the party with HW requirements to run AI. Either way, they "win" by making the features exclusive to new devices.
I don't see anyone defending it. The only thing I see is a few people who have at least a minimal understanding of LLMs and know that Apple Intelligence genuinely cannot run locally on older devices.
 
It'll be interesting to see if Apple outright prevents users of older iPhones from using Apple Intelligence or simply allows them to use it with a less-than-ideal experience.
Apple clearly said it decided that Apple Intelligence would not be supported on the iPhone 15 and below. But again, not because those devices couldn't run it, but because they'd be slower to respond to users' requests.
 
A16 = 16-core Neural Engine capable of 17 TOPS
M1 = 16-core Neural Engine capable of 11 TOPS

So it's just the 2GB of RAM then..?

I've yet to see anywhere what TOPS means directly when it comes to what you can do. I know what the definition is (trillion operation per second), but not how it applies to the end user.

Can I sit down at a machine and see/feel the difference when using something capable of 11 vs 17 TOPS? What can I do, specifically, with 17 over 11?

It sounds like MHz wars all over again.
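For what it's worth, TOPS does set a rough ceiling on inference speed: generating one token takes roughly 2 operations per model parameter, so you can estimate a theoretical token rate. This is strictly back-of-the-envelope (real throughput is usually memory-bandwidth-bound and far lower), but it shows what 11 vs 17 TOPS would mean in the best case:

```python
def peak_tokens_per_sec(tops: float, params_billion: float) -> float:
    """Theoretical token generation rate if compute were the only limit.
    Assumes ~2 operations per parameter per generated token."""
    ops_per_token = 2 * params_billion * 1e9
    return tops * 1e12 / ops_per_token

# Hypothetical 3B-parameter model on an 11 vs 17 TOPS Neural Engine:
print(peak_tokens_per_sec(11, 3))  # M1-class:  ~1,833 tokens/s (theoretical)
print(peak_tokens_per_sec(17, 3))  # A16-class: ~2,833 tokens/s (theoretical)
```

Both ceilings are enormous compared to real-world speeds, which is consistent with the point above: the bottleneck is memory, not raw TOPS.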
 
My wife had her last Pixel (model 2) for 6 years. She never reset it, and it was working fine when she retired it for her 8. It was not noticeably slower than the day she got it.
Strange, my Pixel 2 slowed down considerably after 1 month!
 
Remains to be seen. It is curious, though, that they restricted AI to only the newest iPhone chip. The M1 is supported yet has a weaker Neural Engine than the A16, for example, so it has to just be RAM, if we believe they aren't simply using it as a scheme to sell new phones.
It could also be running on the CPU, not just the NPU. In terms of multicore performance, the A17 Pro still hasn’t caught up to the M1.

Hopefully the high requirements mean a better model. Who knows, maybe they’re pushing even the A17 and M1 to their limits and designed it mostly with future hardware in mind? I hope that’s the case; Siri really needs to leapfrog the others to fix its reputation.
 
Years of "iPhones don't need more RAM because iOS is more efficient than Android" have now come to bear. RAM is the issue: they got away with putting small amounts of RAM in their devices and charging a lot for RAM upgrades on the Mac, compared to Android phones which offer up to 16GB of RAM at the same price point.

The Google Pixel 8a, which is a low-end device, can run AI models comfortably while an iPhone 14 Pro Max can't because of its tiny 6GB of RAM. Time for an upgrade, I guess, which is a plus for Apple because they get to sell more devices.
 
I've yet to see anywhere what TOPS means directly when it comes to what you can do. I know what the definition is (trillion operation per second), but not how it applies to the end user.

Can I sit down at a machine and see/feel the difference when using something capable of 11 vs 17 TOPS? What can I do, specifically, with 17 over 11?
Interesting question. To test this with certainty, you would first have to know which tasks are even sent to the NPU rather than being processed by the CPU or the GPU. Not sure that's even possible on macOS.

Pixelmator has some workflows that are made possible with "AI". Maybe those are good candidates for comparing Apple SoCs from different generations (e.g. M1 vs Mx).
 
Google Pixel 8a which is a low-end device can run AI models comfortably while an iPhone 14 Pro Max can't because of tiny 6GB RAM.
Good point. Gemini Nano must also require 8 GB RAM, which the Pixel 8a has. Interestingly, Copilot+ requires 16 GB RAM.
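One way to frame all these cutoffs: a resident model has to fit alongside the OS and foreground apps, not just in RAM on paper. A toy fit-check under assumed numbers (the ~2.5 GB model footprint, covering weights plus KV cache and activations, and the ~4 GB system reservation are my illustrative guesses, not Apple's or Google's figures):

```python
def fits_on_device(device_ram_gb: float, model_gb: float = 2.5,
                   system_reserved_gb: float = 4.0) -> bool:
    """Rough check: does a resident model fit next to the OS and apps?
    Both default budgets are illustrative assumptions, not vendor figures."""
    return device_ram_gb - system_reserved_gb >= model_gb

print(fits_on_device(6))  # 6 GB device (e.g. iPhone 14 Pro Max): False
print(fits_on_device(8))  # 8 GB device (e.g. 15 Pro, Pixel 8a):   True
```

Under those assumptions, 8 GB is exactly where a ~3B on-device model starts to fit comfortably, which would line up with all three cutoffs mentioned above.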
 