
gold333

macrumors member
Original poster
Feb 2, 2010
Forgive me but I don't understand this privacy thing. Why do people care if their AI correspondence is on device or sent to an AI cloud?

I have a Windows PC, a MacBook Pro, and an iPad Pro 13" M4. Ever since the option became available years ago, I have had personalised tracking switched off on macOS, iOS, and iPadOS, and in Google Dashboard.

I also use a cocktail of ad blockers.

The very few ads that get through are never personalised. I have never (ever) seen an ad relevant to me, and have never spent a single dollar on something I saw in an ad.

If you just switch off tracking (offering that choice is legally required) and never see a relevant ad, what difference does it make whether AI requests are handled on device or in the cloud?

I may seem naive about this, but I genuinely don't understand.
 
That is not the point, I think. The point is that, depending on what you are doing and the level of detail you provide when prompting an AI to do work, you might leak data to it (either proprietary or private), and it is unclear what is or isn't used to train the model.

If it is all on device and it does not leave your device, it is - well, on your device.

Not the same thing, but for example: my company blocks Grammarly by corporate policy, because if it is used on work devices, bits of text, documents, etc. might be sent to a place that we do not have control over.

It can also be a data governance issue.

IMO, Apple would do well to provide a way to disable integration with 3rd-party "in the cloud" AI models via MDM controls (and, actually, user-set toggles as well).
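
For what it's worth, Apple already ships an MDM Restrictions payload (com.apple.applicationaccess), and my understanding is that recent iOS 18.x releases added keys covering the Apple Intelligence features, including the ChatGPT hand-off. The exact key names below are from memory, so treat them as an assumption and verify against Apple's MDM documentation, but a profile fragment along these lines is the kind of control I mean:

    <dict>
        <!-- Restrictions payload (com.apple.applicationaccess) -->
        <!-- Key names are my assumption; confirm in Apple's MDM docs before deploying -->
        <key>allowExternalIntelligenceIntegrations</key> <!-- blocks the third-party cloud AI (ChatGPT) integration -->
        <false/>
        <key>allowWritingTools</key>
        <false/>
        <key>allowGenmoji</key>
        <false/>
        <key>allowImagePlayground</key>
        <false/>
    </dict>

On the consumer side, the ChatGPT extension is opt-in and, if I remember right, can be toggled off under Settings > Apple Intelligence & Siri, but only an MDM restriction would enforce it across managed devices.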
 
People seem to be under the impression that any and all AI would be trained against their data. I understand that concern; after all, we've seen companies like Google and Meta hoovering up user data for over a decade, and ChatGPT and Gemini are upfront about using user conversations to help improve their models.

We tend to forget that Apple has a lot of weight (and money) to throw around. Tim says they negotiated anonymized access to GPT-4o, and I'm inclined to believe them. It serves no purpose for Apple to lie on Sam Altman's behalf.
 
Besides the privacy concerns... any digital assistant that constantly needs to be connected to remote servers becomes pretty much useless when you have a slow, flaky, or nonexistent internet connection. I've had moments with iOS 16 and 17 when the internet was down for quite some time and Siri could not be used for something simple like turning on lights, especially when using Siri on the HomePods.

Actually... I would very much like to see something like a Mac mini server that could support some of these services on the local network. If Apple Intelligence does use Apple Silicon servers, why not have supporting services on the local network too, using an M3/M4 Mac for example? A local server that could process Siri requests, keep the data local, and also do some work for HomeKit, iCloud, pre-downloading new updates (I miss the old SUS), and so on. It might even support older devices that don't have enough power on their own to run Apple Intelligence, like older HomePods.
 
"We tend to forget that Apple has a lot of weight (and money) to throw around. Tim says they negotiated anonymized access to GPT-4o, and I'm inclined to believe them. It serves no purpose for Apple to lie on Sam Altman's behalf."

Didn't know this, and it's good to know.
 