Apple is not just sitting idly by enjoying this arrangement. They are developing their own version of ChatGPT; down the road they will announce their version and end the relationship. They did that with Internet Explorer, and then suddenly Safari was released.
While that sounds nice… building a competent model that even comes close to ChatGPT is not easy. In fact, it's harder now than it was before ChatGPT launched, because you can't scrape for data nearly as easily: lots of websites and web apps have implemented AI firewalls that block access from systems like ChatGPT unless there is human intervention or a specific API integration.
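For what it's worth, a lot of that blocking is as simple as a robots.txt rule naming the AI crawlers' user agents (GPTBot is OpenAI's documented crawler, CCBot is Common Crawl's; the file below is a made-up example, not any real site's policy). A quick Python sketch of how such a rule plays out:

```python
import urllib.robotparser

# Hypothetical robots.txt of the kind many sites now publish:
# AI crawlers are disallowed, ordinary browsers are not.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```

Of course robots.txt is only advisory; the "firewalls" mentioned above (Cloudflare-style bot detection and the like) enforce it, but the user-agent matching idea is the same.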

It will cost Apple way, way more money to get the same volume of training data, and even then they still have to train the models with human validation, which can be slow and painful. It will take them years, and by then how advanced will ChatGPT be?

They're hoping for a miracle, having missed the AI boat big time. Likely they will try to entice developers to use Apple chips to develop AI on their hardware, but those chips still can't hold a candle to Nvidia's performance for machine-learning operations… I just don't see an easy way "out" of a ChatGPT arrangement now that they've committed to it.

Maybe they will trial sending some queries to their own GPT while still using ChatGPT, but that is still going to result in many queries being handled less elegantly.
 
Why does Apple need their own LLM? They never made a search engine and have done just fine.
 
As someone mentioned before in this thread, Apple's model is really good already: https://machinelearning.apple.com/research/introducing-apple-foundation-models
 
Why does Apple need their own LLM? They never made a search engine and have done just fine.
Oh, there are lots of reasons they would want their own. Frankly, most major enterprises are currently building or acquiring LLMs for proprietary use cases, and no corporation feels warm and fuzzy about blindly sending its data into a ChatGPT void.

But since ChatGPT is so advanced and capable, it's almost crippling to try to offer comparable feature sets through a new LLM…

In the end they may not be able to build a competing solution that displaces ChatGPT… I can't predict the future in this case, but if they spend enough money and stick to it, it may take years, but they will likely pull something off.
 
As someone mentioned before in this thread, Apple's model is really good already: https://machinelearning.apple.com/research/introducing-apple-foundation-models
None of those models have been benchmarked against GPT-4o, and we all know that even 4o isn't as advanced as what OpenAI has cooking as we speak.

It's good they have a framework; they needed one. But all those models together don't add up to a single LLM anywhere near ChatGPT's current capability. They're more "foundational".

Good for local, on-device processes, but genuinely intelligent queries and conversations will have to happen via OpenAI until some breakthroughs occur to accelerate model development… specifically, advances in training.
 
ChatGPT will get less capable of producing anything resembling accurate information the longer it exists. We are at peak ChatGPT now because most of the unlabeled text data it first scraped was mildly accurate.

The model it uses has little capability of classifying information as “accurate” vs “inaccurate” and this will continue to get worse not better. OpenAI has bamboozled uninformed people into believing GPT resembles thinking. Apple is wise to pay nothing, and I hope by iPhone 18 the GPT bubble has burst.
 
Yes, Apple has two LLMs: one for on-device processing and one for server-side. Contrary to what a lot of people are saying, those models are quite good. The local one is state of the art and the server one is not far behind GPT-4.
No one's calling those models bad. But let's face it, nobody's really tested them except for Apple themselves. I'm totally on board with Apple's work in on-device models for iOS 18, but what's their local model bringing to the table that Gemini nano on Android or nano 3.8B in the next release can't do?

Apple's chart cherry-picks data without laying out the specifics of what tasks those LLMs were tackling. So, we're left in the dark about just how capable those models really are.

Who exactly were the human evaluators?

And what about all the other benchmarks: MATH, MMLU, MMMU, HellaSwag, Natural2Code, GPQA, GSM-8K, Big-Bench Hard, WMT23, FLEURS, EgoSchema? Did Apple toss their models into the Chatbot Arena?

They skipped comparing to Llama, the big name in open models.

They're not making an appearance on any benchmark leaderboards, and you can even submit entries anonymously.

There are more questions than answers at this point about what those models are capable of. Their big models are not close to what GPT-4o or Gemini Advanced 1.5 Pro can do in terms of multimodality, but that's OK; they don't need to be for the tasks Apple wants them to do. AI is not their business, iPhones are.

Company tier list:
1. Apple
2. OpenAI
3. Google
4. Microsoft

Those above receive money and/or services from those below.
Microsoft owns 49% of OpenAI
 
1.5B people don’t own the hardware required to access Apple Intelligence.

Yes, but those 1.5B people will eventually have access to Apple Intelligence; that is the beauty of it, it does not happen overnight! It may take 5 years or more, which is fine for Apple. Time does not stand still!
 
ChatGPT will get less capable of producing anything resembling accurate information the longer it exists. We are at peak ChatGPT now because most of the unlabeled text data it first scraped was mildly accurate.
So it will become more human-like is what you're saying.
 
Google is also paying Apple for Safari search default engine
Apple is getting paid
Google is not "paying" Apple. Apple gets a percentage cut of the revenue generated from Google being Safari's default search engine. It makes sense: it drives Apple users to Google search by default, you get advertised to when you use it, and Google gets all the data from that as well.
 
Apple has extensive AI expertise spanning back more than fifteen years. They don't broadcast their plans until they have long-term visions already in place, and even then it is revealed bit by bit over the course of several years.
 
I don't quite understand how Apple's system is set up.

  • There is on-device processing for most tasks
  • There is cloud processing for some heavy tasks on Apple-owned & operated servers
  • Some requests on "world data" are optionally sent to ChatGPT (in a way that OpenAI can't build up a picture of your requests over time)
So, does this mean that Apple have developed their own genAI framework and trained an LLM in 18 months? Or have Open AI provided the ChatGPT framework to Apple which they then run on Apple servers and train with their own LLMs? Is this going to be like Maps where the Apple servers/LLMs start out quite limited (hence "world data" questions are sent to ChatGPT), but over time Apple will add more servers and train the LLMs with wider data sets so they can eventually operate an end-to-end Apple AI service?
Apple has their own language model. They have been building it for over a decade, and with the advancements of the last year or two they were able to drastically overhaul its structure; that is what Apple Intelligence is. If a request is outside the scope of Apple Intelligence, you can optionally send it to ChatGPT.
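The tiered flow described above can be sketched roughly like this. This is purely illustrative pseudologic in Python; the function name, decision inputs, and fallback behavior are my assumptions about the publicly described design, not Apple's actual API:

```python
from enum import Enum, auto

class Route(Enum):
    ON_DEVICE = auto()      # Apple's local model
    PRIVATE_CLOUD = auto()  # Apple's Private Cloud Compute
    CHATGPT = auto()        # optional OpenAI hand-off

def route_request(needs_world_knowledge: bool,
                  fits_on_device: bool,
                  user_approved_chatgpt: bool) -> Route:
    """Illustrative three-tier routing: on-device first, Apple's
    servers for heavier tasks, and ChatGPT only for world-knowledge
    queries the user explicitly approves each time."""
    if needs_world_knowledge:
        # Without per-request user consent, stay on Apple's own model.
        return Route.CHATGPT if user_approved_chatgpt else Route.ON_DEVICE
    return Route.ON_DEVICE if fits_on_device else Route.PRIVATE_CLOUD

print(route_request(True, True, True))    # Route.CHATGPT
print(route_request(False, False, False)) # Route.PRIVATE_CLOUD
```

The point of the sketch is just that ChatGPT is a leaf in the decision tree, not the backbone of the system.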
 


Alongside its Apple Intelligence feature set, Apple on Monday announced a partnership with OpenAI that will allow Siri to access ChatGPT directly in iOS 18, iPadOS 18, and macOS Sequoia to provide better responses in relevant situations. But according to a new Bloomberg report, nobody is paying cash to anybody for the arrangement.


From Mark Gurman's report:
Apple isn't paying OpenAI as part of the partnership, said the people, who asked not to be identified because the deal terms are private. Instead, Apple believes pushing OpenAI's brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments, these people said.
Despite the agreement, the partnership for OpenAI could become expensive, as more and more Apple device users tap into ChatGPT's capabilities and suck up more of the company's compute power and budget.

Apple Intelligence doesn't run on hundreds of millions of devices. If it is confined to the A17 Pro (iPhone 15 Pro), iPhone 16, and M-series devices, that won't be "hundreds of millions" except over an extended period of time, and if Apple has pruned off the vast majority of the current "billion devices" from exposing ChatGPT, there is not much direct advertising here.
It is a bit nuts to deliver services now for advertising Apple might do 1-3 years into the future.

That "millions of users" is off by an order of magnitude: tens, not hundreds, at least for the intermediate future.

Unless current-tech Siri is getting a major upgrade to start hyping ChatGPT, the vast bulk of the Apple user base isn't going to see it past the "hype bubble" of the WWDC introduction and odd-ball user-manual references to features their device can't use. Sure, Apple execs are saying things like "We think ChatGPT is the best", but they are also saying they are open to deals with other companies. When we get to the state where Apple asks users "do you want to use ChatGPT, Foo, or Bar", that whole "best" thing is going to get watered down.



Additionally, Apple plans to increase its earnings from AI through revenue-sharing agreements with its partners. According to the report,

To the extent Apple subtly or actively herds folks into paid OpenAI subscriptions, that pragmatically is paying. (OpenAI is probably getting the bulk of the subscription money that Apple would collect. The bigger haggling would have been over how high Apple's percentage was.)

If classic, non-"Apple Intelligence" Siri implementations start hustling OpenAI subscriptions, then it effectively is paying over the whole user base.

But the demos so far show a "stop the interaction flow, ask the user every single time" transition to ChatGPT, with warnings that the data needs to be considered suspect on every transition as well. I guess it is like USA cigarette advertising.


The terms of the deal are secret, but "as part of the partnership" may not be the whole deal. Something like Apple putting money toward cloud vendor X, whose hardware OpenAI then gets to use to service some of the load, could be looped in there as well. Not directly "cash", but a bartered exchange.
 
You missed my point. The money that Google has been paying Apple could be the money that's helping Apple to not come up with any AI solution of its own.

More likely Google's money is one reason the Apple Car project burned billions before Apple killed it. But that actually was, in part, an "AI solution". (A car with no steering wheel is almost certainly using AI in some way.)

AI is not the same thing as large language models (LLMs). Apple has even mentioned on numerous occasions that they are doing machine learning (ML).

Google would have to be delusional to think that none of the billions they were giving Apple went into any AI project at Apple (of which there are dozens: health ML, Vision Pro ML, Apple's generative voice replacement (Personal Voice), etc.).

Practically free money that falls out of the sky, and that Apple has to do next to nothing for (Apple's users are doing the work and are the "product"), is exactly the kind of money Apple can throw at somewhat speculative research that may or may not pay off.
 
Anyone else read this through the same lens as influencers trying to get free stuff from various vendors?

They're not going to pay, but they'll do it for the exposure.
 
So Apple is paying with exposure? LMAO.

I think this only works as long as Google is their common enemy. Apple was probably like “Yo, we hate Google, you hate Google, I think we can kill search together.”

This will surely become cost prohibitive. I like the idea of building it such that you can plug in other AI, but I can’t help but think this is only a short term deal until Apple has their world model up and running and makes that the default for Siri, with the option to pick something different, like a search engine. Or maybe you could just ask Siri to use whichever one based on the context.
 
Is everyone missing the point here? OpenAI gets billions of queries into their system to add to ChatGPT's training data. The data they will gather on those queries is invaluable compared to pennies per transaction. Then, as they advance their model way past anyone else's, it will be nearly impossible for anyone else's LLMs to catch up.

Microsoft, on the other hand, practically needs to pay users to engage with their Copilot features, or shove them down users' throats via app integration. I don't think OpenAI believes they will get nearly as much data from the Microsoft arrangement…

Another possibility is that Microsoft is paying for more exclusive access, which they offer to enterprise customers for data segregation. Apple, conversely, offers no such feature and instead is likely going to try to make or acquire some other LLM… I question how successful they will be given the rate at which these platforms are evolving.
Microsoft's agreement with OpenAI is very different from what Apple has. It's obvious that Apple will provide a larger pool of users compared to MS, but the agreement with MS provides the infrastructure and datacenters that the Apple deal doesn't. In the end, OpenAI benefits from both agreements.
Microsoft, on the other hand, practically needs to pay users to engage with their Copilot features, or shove them down users' throats via app integration. I don't think OpenAI believes they will get nearly as much data from the Microsoft arrangement…
We could say Apple is "shoving" ChatGPT by integrating it into iOS and macOS. How is that different from MS's approach with Copilot?
 
So, does this mean that Apple have developed their own genAI framework and trained an LLM in 18 months?

Apple has had LLMs (large language models) for years. What has evolved in the last couple of years is just how large "large" is. Apple has been more focused on making LLMs with a smaller footprint for inference (when end users are getting helpful assistance with a specific problem). OpenAI and some others have been more focused on piling it higher and deeper: bigger and bigger models. Bigger is double-edged. It means more useful data, but it also means more "junk" (e.g., the potential to simply make things up, to "hallucinate", etc.).
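One way to see why "bigger" fights with on-device inference is simple arithmetic: weight memory is roughly parameter count times bytes per parameter. The two model sizes below are illustrative round numbers, not Apple's or OpenAI's actual figures:

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight-only memory footprint of an LLM in GiB."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A ~3B-parameter model quantized to 4 bits (0.5 bytes/param):
phone_sized = model_memory_gb(3, 0.5)        # ~1.4 GB: fits on a phone
# A hypothetical ~1T-parameter frontier model at fp16 (2 bytes/param):
datacenter_sized = model_memory_gb(1000, 2)  # ~1860 GB: many GPUs' worth

print(f"{phone_sized:.1f} GB vs {datacenter_sized:.0f} GB")
```

Activations, KV cache, and the OS all want memory too, which is why on-device models end up small and aggressively quantized.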

There is now an "arms race" over who can build the biggest model that "knows everything". Apple appears not particularly interested in that race (for several good reasons).


Or have Open AI provided the ChatGPT framework to Apple which they then run on Apple servers and train with their own LLMs? Is this going to be like Maps where the Apple servers/LLMs start out quite limited (hence "world data" questions are sent to ChatGPT), but over time Apple will add more servers and train the LLMs with wider data sets so they can eventually operate an end-to-end Apple AI service?

You can think of it as Apple building more specialist language "AI". It knows how to casually talk about issues associated with Apple products, but it isn't going to get into a deep conversation about the differences between Nietzsche's and Socrates' perspectives on philosophy. It can do a quick math problem, but not explain how to do it yourself.

There is an open issue of just how "large" a model needs to be in order to be highly accurate and useful enough.
Apple's track is more about building a larger number of "just big enough" models that are specialists in relatively narrow areas. Apple's models are mostly going to run only on Apple hardware.

LLM training does not have to happen on the same hardware as LLM inference. What Apple is doing on inference doesn't constrain what they are doing on training; it is a substantively different compute and storage problem. (Apple's Private Cloud Compute throws away the data at the end of each compute session. You don't do that in training at all.)
 
So, does this mean that Apple have developed their own genAI framework and trained an LLM in 18 months?

We know Apple has at the very least a small LLM because they have open-sourced it: Apple calls it OpenELM. You can download it and run it on whatever computer you like. It is quite good in that it compares with the 8B-parameter Llama 2 while using far fewer parameters, and it is sized to run "on device".

I got a copy and quickly read through the code to see what it does, and there were some interesting comments about Apple's training infrastructure. First off, training runs on Linux, but there is some effort to convert it to macOS: they are porting the Slurm Workload Manager to the Mac but had not finished. This tells us they are in the process of doing all of the above. They have the on-device model out the door, but work continues; they don't yet have the infrastructure in place to compete with OpenAI and are just now setting up a Mac-based system to do that. It looks like they don't want to have to pay Nvidia for hardware when they can make their own.

My experience is that the Mac does well relative to Nvidia if the measure of goodness is computation per watt. I'm running the Llama 3 LLM on a Mac M2 Pro and getting about 20 tokens per second.
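Throughput numbers like that are easy to measure yourself: count tokens generated, divide by wall-clock time. A minimal sketch; the stub "model" below just sleeps ~50 ms per token to mimic a ~20 tokens/sec local model, it is not a real decoder:

```python
import time

def tokens_per_second(generate_token, n_tokens: int = 40) -> float:
    """Time a per-token generation callable and report throughput.
    In real use, generate_token would be one decode step of a local
    LLM runtime (llama.cpp, MLX, etc.); here it can be anything."""
    start = time.perf_counter()
    for _ in range(n_tokens):
        generate_token()
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Stub standing in for a model that decodes ~20 tokens/sec.
rate = tokens_per_second(lambda: time.sleep(0.05))
print(f"{rate:.1f} tokens/sec")
```

Use `time.perf_counter()` rather than `time.time()` for this, since it is monotonic and high-resolution.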
 
Agree with this, not least because I think OpenAI's business model will change over time when they decide it's time to make money. They're in the grow-your-user-base period at the moment, giving their product away for free, but eventually they'll expect users to pay for it, either directly or through adverts. Now, Apple aren't going to want OpenAI's adverts appearing in Siri, so at that point what happens?
Exactly. OpenAI needs to grow their user base and needs distribution. They will learn a lot about B2C by working with Apple and will have their tech integrated to a level that's not possible through a third-party app in the App Store.

They might even partner with Apple to offer Sora capabilities through Final Cut Pro.

In 2-3 years foundational models will have been commoditised for 80% of B2C use cases. At that point Apple won't need OpenAI, and OpenAI will have found product-market fit, whether it's in gaming, video generation, B2B, or augmented reality.
 
ChatGPT will get less capable of producing anything resembling accurate information the longer it exists. We are at peak ChatGPT now because most of the unlabeled text data it first scraped was mildly accurate.
I disagree. LLMs are getting dramatically smarter each year. There is no evidence as yet that LLM researchers are failing to maintain the hygiene of their training data.

There is a whole separate discussion, of course, about what happens as the years go by and the majority of content is either generated by LLMs, or at the very least quality-checked by them, and how that will impact the training data being used. It's not about accuracy, which is easier to control, but about the extent to which we want training data to self-reinforce.

If neural networks can inadvertently develop behaviour remarkably reminiscent of human intelligence and reasoning, purely by sampling text excerpts of human activity, it suggests that we do not understand human intelligence.

Perhaps the core of human cognition fundamentally operates on similar principles under the hood - a widely distributed self-learning system that progressively distils the statistical patterns of our experiences into higher-level models of abstraction, reasoning and generalised intelligence.


This will surely become cost prohibitive. I like the idea of building it such that you can plug in other AI, but I can't help but think this is only a short term deal until Apple has their world model up and running and makes that the default for Siri, with the option to pick something different, like a search engine. Or maybe you could just ask Siri to use whichever one based on the context.
ChatGPT is free to use though. The free tier of service is what is being baked into Apple's devices, with the same privacy afforded to enterprise customers. If you want the full-fat GPT-4o model (with its higher context size and higher usage limits) you still need to pay $20/mo for ChatGPT Plus and link it to your Apple Account.
 