As a developer working in the private/local AI space, I find it disappointing that, evidently, neither Google nor Apple is providing this technology for third-party developers to integrate with and leverage. In Apple's case, I really don't know what the implementation context for the technology is at all. Ostensibly, the Foundation Models run entirely on-device, which is why the models are puny, with puny context, really only appropriate for highly localized, targeted purposes (not to mention the ridiculously strict guardrails). Does this imply Apple is going to deploy more substantial models, server-side, through the Foundation Models framework, that would leverage its Private Cloud Compute? Not in any documentation I could find, for either Foundation Models or Private Cloud Compute. Similarly, Google's only stated intent for its Private AI Compute seems to be in service of its own products. What's interesting about the pitch, in that context, is that Google is essentially promoting how its tech protects you from... them. Of course, a Pixel user interacting with Google's AI features has no visibility into the source code, so were Google to want to gather a user's personal information, it could simply send it over a parallel channel to an unsecured environment, either at that moment or in batch at a later time.
 
It is time to call Apple Intelligence “Apple Doomsday.”
Apple Intelligence is absolutely useless; it just consumes space, CPU, and memory doing useless stuff on the phone.
Not even half-baked, it is an alpha demo at most.
Even photo editing produces nightmares.
Please Apple, just lower iPhone and Watch prices, because they are just color variations.
“Ready for AI” was and has been a lie.
 
Kudos to the designer of the article’s picture for the illusory motion when scrolling. I have to imagine that was by design.
 
“For decades, Google has developed privacy-enhancing technologies (PETs) to improve a wide range of AI-related use cases...”

Yeah… we trust you Google!
 
Private…

It all depends on how that word is defined.
Only you, Google’s employees, and a select few intelligence agencies and their contractors from the Five Eyes and Israel can access this. Plus some black-hat hackers, but we won’t detect them for four years anyway, so they’re harmless. What you don’t know can’t hurt you! (Don’t worry, they’re just running it to train targeting data for drones. Google’s a military company now; you can, uh, trust them!)
It’s fine! Why do you need privacy anyway? You’re proud of who you voted for, right? What do you need to hide, anyway?!

Privacy is when you don’t know what they know about you.
 
This is what I want companies to be competing on: who can make their OS and services the most private. I hope they allow third parties to verify the privacy, like Apple does.
 
Whenever I think “this is open source, it must have solid privacy standards,” I remind myself that Android is an open-source data collection and extraction tool.
 
But I thought Apple was hopelessly behind on AI. Now I am starting to believe Apple is ahead!
 
But I thought Apple was hopelessly behind on AI. Now I am starting to believe Apple is ahead!
Apple, in my opinion, was always ahead in AI and machine-learning technologies. They were behind in generative AI, like image generation and LLMs. This isn't because Apple was incapable; it just wasn't a priority. Their machine learning was focused on things like the camera, on-device search, etc. In addition, Apple gets a different level of scrutiny because of its customer base, so certain things aren't tolerated. Gemini and OpenAI made up entire laws and headlines, no problem; Apple's news-notification summaries are incorrect, and it's "disastrous headlines everywhere."

But Apple has ALWAYS had a key advantage no other company can match: Apple has been including neural accelerators in its phones since the iPhone X's A11 chip. So Apple's ability to deploy AI models to devices and run them offline is significantly better. Plus, a better-developed mobile NN API than anything on Android helps as well.

I know the person I'm replying to knows this, but it's a pet peeve of mine. I have plenty of critiques for Apple, but AI isn't the top one. If anything, I want them to slow down.
 
As a developer working in the private/local AI space I find it disappointing that, evidently, neither Google nor Apple are providing this technology to third-party developers to integrate with and leverage. In the case of Apple, I really don't know what the implementation context for the technology is at all. Ostensibly, the Foundation Models run entirely on-device,

It really isn't a developer facility.

" ...
First, we intentionally did not include remote shell or interactive debugging mechanisms on the PCC node. Our Code Signing machinery prevents such mechanisms from loading additional code, but this sort of open-ended access would provide a broad attack surface to subvert the system’s security or privacy. Beyond simply not including a shell, remote or otherwise, PCC nodes cannot enable Developer Mode and do not include the tools needed by debugging workflows.
..."

No debugger access. No additional-code access. The node is a complete black box in operational deployment. You can send data in, but it largely goes there to terminate (there is no persistent storage either), with just an 'answer' sent back.

It isn't really local-only, either: a stated objective of PCC is that the larger models live in the cloud.

"...When Apple Intelligence needs to draw on Private Cloud Compute, it constructs a request — consisting of the prompt, plus the desired model and inferencing parameters — that will serve as input to the cloud model. ..."

It's more that access is exclusive to Apple Intelligence, which decides when to escalate to a larger model. (A substantial number of LLMs are very reluctant to say "I don't know"; this hand-off/escalation metric is something Apple has worked on.)


It seems pretty close to how Apple transparently delegates ProRes requests to an Afterburner card (or M-series fixed-function logic) or to non-accelerated code. An application calls Apple's library, and the library decides where to send the requested work.
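The transparent-delegation pattern described above (one entry point, with the library deciding whether work stays on a small local model or escalates to a larger remote one) can be sketched roughly as follows. Every name, threshold, and confidence heuristic here is hypothetical; Apple's actual request format and escalation criteria are not public.

```python
# Illustrative sketch only: a caller-facing entry point that decides
# internally whether to serve a request locally or hand it off.
# All names, fields, and thresholds are made up for illustration.
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    prompt: str
    model: str        # desired model identifier (hypothetical)
    max_tokens: int   # example inferencing parameter


def run_local(req: InferenceRequest) -> tuple[str, float]:
    # Stand-in for an on-device model: returns an answer plus a
    # confidence score (here faked from prompt length).
    confidence = 0.9 if len(req.prompt) < 40 else 0.3
    return f"local-answer:{req.prompt}", confidence


def run_remote(req: InferenceRequest) -> str:
    # Stand-in for a PCC-style call: the request goes out, nothing
    # persists on the far side, and only an answer comes back.
    return f"remote-answer:{req.prompt}"


def infer(prompt: str, threshold: float = 0.5) -> str:
    """Single entry point; escalation is decided internally."""
    req = InferenceRequest(prompt=prompt, model="base", max_tokens=256)
    answer, confidence = run_local(req)
    if confidence >= threshold:
        return answer
    return run_remote(req)
```

The point of the sketch is the shape, not the heuristic: the caller never chooses local vs. cloud, which is why both sides of the hand-off can be swapped out later without breaking clients.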

which is why the models are puny with puny context, really only appropriate for highly localized, targeted purposes (not to mention ridiculously strict guardrails). Does this imply Apple is going to deploy more substantial models, server-side, through the Foundation Models framework, that would leverage it's Private Compute?

Back closer to the introduction of the article.

"... However, to process more sophisticated requests, Apple Intelligence needs to be able to enlist help from larger, more complex models in the cloud. For these cloud requests to live up to the security and privacy guarantees that our users expect from our devices, the traditional cloud service security model isn't a viable starting point. Instead, we need to bring our industry-leading device security model, for the first time ever, to the cloud. ..."

Apple knows their on-device models are small. This is their workaround for the small size and/or limited power budget of those base models.



Not in the documentation that I could find, for either FM or Cloud Compute. Similarly, it seems Google's only stated intent for its Private AI Compute is in service of its own products. What's interesting about the pitch, in that context, is that essentially Google is promoting how its tech protects you from...them.

Apple never published the inner workings of Afterburner or its embedded A/V accelerators either. Similar 'boat'.

Without this abstraction, it would also have been much harder to later sub in Anthropic/Google models on the PCC side. Folks building hand-offs would have baked in presumptions that are not likely to hold up at this point: the criteria for the model hand-off (when to escalate/punt) and the data matchup are likely to change from when Apple-only presumptions were involved in building both sides of the transition.


Of course, a Pixel user interacting with Google's AI features has no visibility into the source code, and so were Google to want to gather a user's personal information they could simply send it on a parallel channel to an unsecured environment, at that moment, or in batch at a later time.

That is the role Apple's PCC validation VM images are supposed to play. Google could provide an image to source-code validators to inspect, but since it runs natively on Google hardware, that is tougher than Apple's case, where representative hardware is far more broadly deployed.

Finally, given Apple PCC's status as 'free' access, I'm not sure why folks were thinking that arbitrary code (with arbitrary workloads) would be run by Apple entirely free of charge. If the workloads are consistent, then load balancing and resource management all get substantively less expensive. Apple is certainly going to 'gate' what gets loaded here to control costs.
 
Privacy is important, and it's good to see Google following a similar approach. Hopefully what Google is saying will remain true and will actually improve privacy.
 
Most likely this was part of the deal where Apple pays Google a billion dollars, but on the premise that Google gets its privacy sh*t together.

Pretty good chance Apple isn't paying $1B in cash here at all. As part of the Apple-Google deal on default search, Google owes Apple about $20B. Instead of cash, Apple takes $1B in barter trade: Apple gets the porting services and rights to use the software, along with updates, while Google takes some amount resembling $1B to pay staff and to train models for Apple.

The cloud privacy service for Apple's web services runs on Apple hardware, not in Google's data centers (as this thread's source article outlines).

I think this arose more because of the publicity around Apple running inference in a private mode and Google not having a similar common end-user service. It doesn't necessarily work exactly like Apple's model.


I haven't dug deeply into the Google Cloud services archives, but it wouldn't be surprising if they already had something very similar for training (rent training compute on their cloud where they don't snoop the training data). They have various tools for encrypting models under development and an option to manage your own keys. It is more technically complicated and user-involved than this new end-user inference service.

[I wouldn't be surprised if Apple was doing model training on Google Cloud services in a similar barter trade earlier. There was a brouhaha about Apple not buying buckets of Nvidia hardware to do training. Apple didn't have to if Google had sufficient Tensor capacity to lease out.]


This inadvertently helps Android users. Not bad at all.


This Google Cloud core service doesn't seem to be hooked solely to Android. The demonstrative examples are a couple of Android apps, but there is little in the tech overview that ties the client to being Android-only. Google Cloud apps can probably use this too and be relatively platform-neutral. I doubt Google Cloud services is going to provide free access to just any app, though.
 