This is interesting considering the announcements that Bing and Google just had. Wonder what Apple has up their sleeve?

My bet is that Bard and ChatGPT are mirages that won't really go anywhere. So Apple simply has to continue doing what they are already doing: integrate AI into their products such that it benefits users in small but meaningful ways.
 
Once Microsoft, Google, and Apple integrate AI into their browsers and AI assistants, complex algorithms will tell you what you're searching for. Once Meta integrates AI into Facebook, complex algorithms will tell you who you're searching for. Disembodied AI may end up more consequential than the AI-powered robots of science-fiction!
 
I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.
 
Unfortunately, when your manager is still deciding how to put you to use, he'll know "there's an app for that".
 
It’s a quote from HAL, an AI in the Stanley Kubrick movie “2001: A Space Odyssey”
But calling HAL an App would get you a ticket out the airlock lol
 
Two things I really want Apple to discuss internally.

Fixing & Improving Siri
Fixing iOS! Fixing all the nonsense bugs
Like last year, I'd be happy if Apple banished and ENDED Siri.

She’s polite and cute and for the first few years served us well. But she’s like Patty from Charlie Brown.

The bugs in iOS should get fixed.

Personally:
Why the hell does iOS 16.2/3 make Home-connected lights show up as buttons you have to open just to raise or lower brightness?! Moreover, why, when I adjust one light from Control Center, does it jump right to the next light I'm about to adjust manually?! Grrrr.

Also, why do lights auto-hide from Control Center for no damn reason?!
 
The key point here is that it's private, ONLY for Apple employees. I find that a bit unsettling.

Apple's reshuffle of executives showed that there are issues internally, and the continually poor quality of iOS 15, 16, and macOS Ventura is not inspiring confidence. Heck, we don't even see Craig anymore. Wonder if he's on timeout...

Craig was sidelined after the whole CSAM backlash. If you recall, he was the one who presented it during a special event and also in a few interviews, in particular with The Verge. After that it was close to 7 months without hearing anything from him at Apple events.

Siri is just a frontend. The search itself uses Bing, and Microsoft is going to marry Bing with ChatGPT, so Siri will get some benefit as well.

Apple outsourcing even their "AI"... 🤣
Where did you get that Siri uses Microsoft's Bing? I had thought IBM was working with Apple years ago to help bolster its power for business applications, with the potential for IBM's Watson??

Agreed! Going to miss these Apple transitions. They were fun!


I always found it strange that Apple, which is against anything sexual, keeps fronting Craig like he's some tween girls' heartthrob. It's icky at best and really getting creepy as time goes on.

I posted about this months ago and it seems Apple has stopped. For now.
 
"AI Summit"

The funny thing here is that the Mac isn't a machine learning platform. Apple banned Nvidia and introduced its own Metal-based machine learning support.

But "the world" uses Nvidia and CUDA to develop and train AI solutions. Apple banned Nvidia a long time ago to "lock out competition". Funny thing is that OpenGL (also marked deprecated by Apple) is still part of Ventura, because simply no one wants to use Metal.
This is the result of Apple's strategy: it is sitting all alone on its island while the world around it evolves, and Apple is falling behind.

I asked Apple a dozen times: "Why don't you support open standards?", "Why doesn't Apple simply implement the BEST OpenGL/Vulkan support on earth?", "Why doesn't Apple support CUDA solutions?". The point is: Apple doesn't want any competition on its system, because then it would have to compete.
 
Because
- no one has an *LLM* that reliably does arithmetic. Wolfram Alpha does, but it's a different sort of creature…
- LLMs are LARGE. The AI models that Apple deploys on-machine are substantially smaller and substantially more limited; the largest are probably the image+text models that classify and label images, and those are a few tens of MB.

This stuff is not secret if you switch off the snark long enough to spend a few hours reading the pages on the Apple Machine Learning website…

Maybe I’m dumb but doesn’t the iPhone have a calculator app Siri could use? The voice recognition is supposed to work offline, couldn’t it just take that as input?

I know we’re talking about AI here but there is so much low hanging fruit with Siri we don’t have to get fancy to make improvements.
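For what it's worth, the arithmetic part of that really is trivial once the text exists. Here's a minimal sketch of the idea (not how Siri actually works internally, just what's being suggested), using Foundation's NSExpression to evaluate a transcribed phrase locally:

```swift
import Foundation

// Sketch of the idea above: once speech-to-text has produced a string like
// "2 plus 2", evaluating the arithmetic itself needs no AI at all.
// NSExpression (part of Foundation) can parse and evaluate simple expressions.
// This assumes the transcript is already a clean arithmetic phrase; a real
// implementation would validate it first, since NSExpression raises an
// Objective-C exception on malformed input.
func evaluateArithmetic(_ transcript: String) -> Double? {
    let cleaned = transcript
        .lowercased()
        .replacingOccurrences(of: "what is ", with: "")
        .replacingOccurrences(of: "plus", with: "+")
        .replacingOccurrences(of: "minus", with: "-")
        .replacingOccurrences(of: "times", with: "*")
    let expression = NSExpression(format: cleaned)
    return (expression.expressionValue(with: nil, context: nil) as? NSNumber)?.doubleValue
}

print(evaluateArithmetic("What is 2 plus 2") ?? "couldn't parse")   // 4.0
```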
 
Apple appears to be falling far behind in the AI race, but at least they’ll be holding this summit in a cool building. That’s got to count for something.
 
Maybe I’m dumb but doesn’t the iPhone have a calculator app Siri could use? The voice recognition is supposed to work offline, couldn’t it just take that as input?

I know we’re talking about AI here but there is so much low hanging fruit with Siri we don’t have to get fancy to make improvements.
As I understand it, right now
- the audio model (translate audio signal to text) happens locally
- the language model (translate text into "what needs to be done") happens on Apple servers
So it's irrelevant whether "2+2", after "understanding", is handled on the server or by sending a message to the calculator app; the important issue is that the language model is currently too large to execute locally.
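For the curious, the local half of that pipeline is roughly what Apple's public Speech framework exposes. This sketch covers only the audio-model step; `audioFileURL` is a hypothetical recording, and deciding what to do with the resulting text is the part that still goes to Apple's servers:

```swift
import Speech

// Sketch of the on-device half of the pipeline described above.
// requiresOnDeviceRecognition (iOS 13+) keeps the audio model local; what
// comes back is only text. Working out "what needs to be done" with that
// text is the separate, server-side step. A real app would also need to
// call SFSpeechRecognizer.requestAuthorization first. `audioFileURL` is a
// hypothetical recording of someone asking "what is 2 plus 2".
func transcribeLocally(audioFileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition not available for this locale")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
    request.requiresOnDeviceRecognition = true   // audio model runs locally

    _ = recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        // At this point we only have text, e.g. "what is 2 plus 2".
        // The language-understanding step still has to happen somewhere.
        print(result.bestTranscription.formattedString)
    }
}
```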
 

I thought the thing about these models is that training them takes massive resources and huge datasets but, once trained, they can fit and be executed on a phone. Maybe I'm wrong but that's what I've heard on the tech podcasts.

But to take a step back, does it really take an advanced AI language model to do the basic things offline that Siri has for years not been able to do? Once the text is translated, the device should be able to act on that text in the traditional Unix way without requiring advanced AI.

Isn't that essentially what the Shortcuts app is?
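To illustrate the "traditional Unix way" idea: something like the sketch below, which matches the transcript against a handful of patterns and dispatches locally, with no language model involved. The handlers named here are hypothetical placeholders, not real system APIs:

```swift
import Foundation

// Rough sketch of rule-based dispatch on an already-transcribed request.
// A handful of patterns cover a lot of everyday commands with no language
// model at all. The handlers named in the print statements are hypothetical
// placeholders, not real system APIs.
func dispatchLocally(_ transcript: String) {
    let text = transcript.lowercased()

    if let match = text.range(of: #"set a timer for (\d+) minutes?"#,
                              options: .regularExpression) {
        let minutes = String(text[match])
            .components(separatedBy: CharacterSet.decimalDigits.inverted)
            .joined()
        print("startTimer(minutes: \(minutes))")          // hypothetical handler
    } else if text.hasPrefix("remind me to ") {
        let task = text.dropFirst("remind me to ".count)
        print("addReminder(\"\(task)\")")                 // hypothetical handler
    } else {
        print("No local rule matched; hand off to the full assistant")
    }
}

dispatchLocally("Set a timer for 10 minutes")   // startTimer(minutes: 10)
dispatchLocally("Remind me to buy milk")        // addReminder("buy milk")
```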
 
Apple appears to be falling far behind in the AI race, but at least they’ll be holding this summit in a cool building. That’s got to count for something.
Only if you insist on comparing other companies' vaporware against Apple's shipping products...

For the most part (not always, but for the most part) Apple likes to ship stuff when it's ready, and to announce/preview/demo nothing until that point.

The thing you have to ask yourself is WHY hold a meeting like this? It's not like Apple especially cares whether or not the entire company believes Apple is doing well in AI. The logical answer is that they have a new and much improved set of AI capabilities and APIs, and think these are significant enough that the entire company should be aware of them and consider how this functionality might be used. The AI team can only create baseline capabilities; they can't also be responsible for imagining how these might be used by Maps or TV.app or Mail or Spotlight. The goal is probably something like
(a) let the entire company work on this so that higher-level (e.g. Maps-appropriate) APIs are ready for WWDC, and
(b) discover whether there are any missing pieces before the low-level APIs are described at WWDC.
 
I thought the thing about these models is that training them takes massive resources and huge datasets but, once trained, they can fit and be executed on a phone. Maybe I'm wrong but that's what I've heard on the tech podcasts.

But to take a step back, does it really take an advanced AI language model to do the basic things offline that Siri has for years not been able to do? Once the text is translated, the device should be able to act on that text in the traditional Unix way without requiring advanced AI.

Isn't that essentially what the Shortcuts app is?
LARGE Language Models are, by definition, LARGE!
Yes, training them is a massive task, but they remain massive even at that point. GPT3 takes 800GB to store!
Now, once a model works, you can try to optimize its size in various ways, like pruning and quantization. This was done for vision models a few years ago, and we got image recognition on our phones. But consider that the "mainline" model Apple uses for imagery (there's a single model with different "heads" that does everything from the camera-side of taking photos to recognizing people in photos to handling image search) is about 30MB in size. That's a HUGE difference...
How much would Apple be willing to sacrifice in storage for a really good language model? Maybe, I don't know, 10GB of storage and 1GB of RAM? GPT3 is nowhere close to that, and while (perhaps) it could be shrunk to that point, right now, server execution is the only option.
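The back-of-the-envelope arithmetic behind those numbers, for anyone curious: GPT-3's published size is roughly 175 billion parameters, and storage is roughly parameter count times bytes per parameter, ignoring checkpoint overhead, so treat these as rough lower bounds:

```swift
// Rough storage arithmetic for a GPT-3-scale model (about 175 billion
// parameters, per the published figure). Real checkpoints carry extra
// metadata and optimizer state, so these are lower bounds, not exact sizes.
let parameters = 175_000_000_000.0

let precisions: [(name: String, bytesPerParameter: Double)] = [
    ("float32", 4.0),
    ("float16", 2.0),
    ("int8 (quantized)", 1.0),
    ("4-bit (quantized)", 0.5),
]

for p in precisions {
    let gigabytes = parameters * p.bytesPerParameter / 1_000_000_000
    print("\(p.name): ~\(Int(gigabytes)) GB")
}
// float32 ~700 GB, float16 ~350 GB, int8 ~175 GB, 4-bit ~87 GB.
// Even aggressively quantized, that is nowhere near a 10GB phone budget,
// and still thousands of times bigger than the ~30MB on-device vision model.
```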
 
My bet is that Bard and ChatGPT are mirages that won't really go anywhere. So Apple simply has to continue doing what they are already doing: integrate AI into their products such that it benefits users in small but meaningful ways.
I wouldn't say that. I'd rather say that LLMs are part of the solution, but other parts need to be worked out to get to better search. LLMs are already pretty good (not perfect, but pretty good) at the tasks for which they were designed, but being a front-end to web search was not one of those tasks.

The real problem is that these things are not oracles, but they are being treated as such, especially by some of the press and internet, which are being astonishingly silly in how they think of them.
We've gone from ignorant people in the 50s seeing computer arithmetic and thinking "computers understand math" to ignorant people in the 2020s seeing computer language processing and thinking "computers understand language", with zero learning from those 70 years!
 
LARGE Language Models are, by definition, LARGE!
Yes, training them is a massive task, but they remain massive even at that point. GPT3 takes 800GB to store!
Now, once a model works, you can try to optimize its size in various ways, like pruning and quantization. This was done for vision models a few years ago, and we got image recognition on our phones. But consider that the "mainline" model Apple uses for imagery (there's a single model with different "heads" that does everything from the camera-side of taking photos to recognizing people in photos to handling image search) is about 30MB in size. That's a HUGE difference...
How much would Apple be willing to sacrifice in storage for a really good language model? Maybe, I don't know, 10GB of storage and 1GB of RAM? GPT3 is nowhere close to that, and while (perhaps) it could be shrunk to that point, right now, server execution is the only option.
And it's not just the storage that the phone would need, right? To work efficiently, the model probably should be loaded to RAM.
 
GPT3 takes 800GB to store!

How much would Apple be willing to sacrifice in storage for a really good language model? Maybe, I don't know, 10GB of storage and 1GB of RAM? GPT3 is nowhere close to that

Siri’s language model runs on the cloud, not on the device. Siri doesn’t work without an internet connection.

Nobody is suggesting Apple is going to try to run a GPT-like language model directly on your Watch. Not in the foreseeable future, anyway!
 
Uh, people ARE suggesting it! That's what this entire damn thread is about! I am responding to comments like
"Maybe I’m dumb but doesn’t the iPhone have a calculator app Siri could use? The voice recognition is supposed to work offline, couldn’t it just take that as input?"
I am trying to explain what pieces need to run in the cloud for a request like "Hey Siri, what is 2+2" to work.

But if there's one thing that's VERY clear from this thread it is that most people
- have zero interest in understanding HOW/WHY anything works but
- don't let that stop them from having "expert" opinions on how it can and should be changed/fixed.
 
Imagine if Siri was even half as smart as ChatGPT… 🤔
ChatGPT isn't smart at all, as it doesn't understand anything. Still, its answers may sound smart to people who aren't that smart. This is both dangerous and unhelpful. In any case, I don't want to have to constantly check my AI assistant to make sure everything is correct. Sure, for producing cheap disposable content ChatGPT is OK.
 