It’s literally sending the photo to Google or ChatGPT; I don’t see why every iPhone wouldn’t be able to do that.
> Honest question: has anyone actually even used it on the 16 series? Because I know I haven’t.

I’ve tried it twice and forgot it’s there, just like the camera-not-button.
> As the owner of a 15 Pro, I’m happy to hear this, although I’m assuming this is less of a “let’s throw a bone to the 15 Pro owners” and more of a “the adoption rate on this AI stuff is behind target, let’s get more people using Visual Intelligence so we can get more data for training and development.”

Great point!
> Not that I was one of the people that said that, but how do you know it's not a nerfed-down model or a much more optimized model that they just finished cooking up? At the time, it probably was limited by hardware.

Because the major Visual Intelligence feature is not done on device but in the cloud by other services.
> Because if DeepSeek taught us anything, it shows that we can get higher performance in smaller models. Apple found a way to bring something that wasn't originally planned for the 15 Pro.

No, what DeepSeek has taught us is that with better ways of training you can get similar performance using less hardware and data to train. DeepSeek's SOTA models are similar in size to what Google and OpenAI have. Getting higher performance from similar-sized or smaller models is not a new phenomenon.
> I guess anything that shows APPLE IS BAD AND GREEDY, we'll just beat that drum.

We don't need to defend a multi-trillion-dollar corporation that was simply withholding features from iPhone 15 Pro users to market them exclusively to the iPhone 16 because of the camera button. Now that the 16e model without the button has been released, they can bring the feature to iPhone 15 Pro models. It's greed; that is literally the modus operandi for any corporation.
> Other features like plant and pet ID have been part of Visual Look Up since iOS 15, on iPhone X and up.

You're not understanding the difference between Visual Look Up and Visual Intelligence. Visual Look Up looks up the category of the object. So if you point it at a plant, Visual Look Up on device will recognize that it's a plant. It then connects to the internet and uploads a picture to identify the plant.
> Wouldn’t mind them throwing a bone to the 14 Pro too…

They could port it to the XS if they wanted; it’s not even a new feature, just repackaged into a cleaner interface.
> Many recognition tasks are done on device, offline, then ping the cloud for additional data, like from Foursquare.

There is no difference; reviews have compared Visual Look Up and Visual Intelligence and they get the exact same results. There is no software change.
You're not understanding the difference between Visual Look Up and Visual Intelligence. Visual Look Up looks up the category of the object. So if you point it at a plant, Visual Look Up on device will recognize that it's a plant. It then connects to the internet and uploads a picture to identify the plant.

Visual Intelligence, on the other hand, attempts to identify the plant on device. If it can't, it expands the task to the cloud.
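Side note: the on-device "recognize the category" step described above maps onto a public API anyone can try. Vision's VNClassifyImageRequest classifies an image against a fixed, built-in taxonomy entirely on device, which is a reasonable stand-in for what a predefined-category lookup looks like (this is just the public framework, not a claim about Visual Look Up's internals):

```swift
import Vision
import CoreGraphics

/// Classify an image against Vision's built-in, predefined taxonomy.
/// Runs entirely on device; no network access is involved.
func onDeviceCategories(for image: CGImage) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    // Keep only reasonably confident labels, e.g. "plant", "flower", "animal".
    return observations
        .filter { $0.confidence > 0.3 }
        .map { ($0.identifier, $0.confidence) }
}
```

Anything outside that fixed label set is exactly the case where a lookup has to leave the device.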
You're not understanding. It increased the accuracy of models at the same size, so that when you distill or quantize them, you get roughly the same performance as the bigger non-DeepSeek models with lower compute and memory requirements.
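For readers wondering what "distill" means here: knowledge distillation trains a smaller student model to match a larger teacher's softened output distribution instead of only the hard labels. A minimal sketch of the classic temperature-scaled distillation loss, in plain Swift with illustrative constants (nothing here is DeepSeek's or Apple's actual recipe):

```swift
import Foundation

/// Softmax with temperature T: higher T produces a softer distribution,
/// exposing the teacher's knowledge about near-miss classes.
func softmax(_ logits: [Double], temperature: Double) -> [Double] {
    let scaled = logits.map { $0 / temperature }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }   // subtract max for numerical stability
    let sum = exps.reduce(0, +)
    return exps.map { $0 / sum }
}

/// Cross-entropy H(p, q) = -sum_i p_i * log(q_i); the distillation term
/// pushes the student's distribution q toward the teacher's p.
func crossEntropy(target p: [Double], predicted q: [Double]) -> Double {
    zip(p, q).reduce(0) { $0 - $1.0 * log(max($1.1, 1e-12)) }
}

/// Distillation loss for one example: a weighted mix of
/// (a) matching the teacher's soft targets and (b) the usual hard-label loss.
func distillationLoss(studentLogits: [Double],
                      teacherLogits: [Double],
                      hardLabel: Int,
                      temperature: Double = 4.0,   // illustrative values
                      alpha: Double = 0.7) -> Double {
    let softTeacher = softmax(teacherLogits, temperature: temperature)
    let softStudent = softmax(studentLogits, temperature: temperature)
    let softLoss = crossEntropy(target: softTeacher, predicted: softStudent)

    var oneHot = [Double](repeating: 0, count: studentLogits.count)
    oneHot[hardLabel] = 1
    let hardLoss = crossEntropy(target: oneHot, predicted: softmax(studentLogits, temperature: 1))

    // Scaling the soft term by T^2 keeps gradient magnitudes comparable (Hinton et al., 2015).
    return alpha * temperature * temperature * softLoss + (1 - alpha) * hardLoss
}
```

Usage is straightforward: feed per-example teacher and student logits to distillationLoss and minimize it while training the student.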
> There is no difference; reviews have compared Visual Look Up and Visual Intelligence and they get the exact same results. There is no software change.

I literally explained the difference. Visual Look Up uses limited, predefined models to categorize objects. Visual Intelligence uses a generalized model and expands to the cloud when it needs a larger model to identify objects.
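Whatever Apple actually ships, the behavior being described here ("try a small on-device model, expand to the cloud only when it is not confident") is a standard tiered-inference pattern. A hedged sketch of that dispatch logic, with made-up types and thresholds:

```swift
import Foundation

enum RecognitionSource { case onDevice, cloud }

struct Recognition {
    let label: String
    let confidence: Float
    let source: RecognitionSource
}

/// Anything that can put a label on an image; both tiers conform to this.
protocol ImageRecognizer {
    func recognize(_ imageData: Data) async throws -> Recognition
}

/// Tiered dispatch: try the small on-device model first and only expand
/// the task to a larger cloud model when the local answer is not confident.
struct TieredRecognizer {
    let local: any ImageRecognizer      // small, predefined-category model
    let remote: any ImageRecognizer     // larger generalized model behind an API
    let confidenceThreshold: Float      // e.g. 0.6, purely illustrative

    func recognize(_ imageData: Data) async throws -> Recognition {
        let localResult = try await local.recognize(imageData)
        if localResult.confidence >= confidenceThreshold {
            return localResult          // good enough, the image never leaves the device
        }
        return try await remote.recognize(imageData)
    }
}
```

A real implementation would plug a Core ML classifier in as local and an HTTP client as remote; the confidence threshold is the whole policy.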
> I literally explained the difference. Visual Look Up uses limited, predefined models to categorize objects. Visual Intelligence uses a generalized model and expands to the cloud when it needs a larger model to identify objects.

That's not how it works. There is no difference between the models behind Visual Intelligence and Visual Look Up; it's only the ability to share the image with ChatGPT and Google that is different. 🤦
> That's not how it works. There is no difference between the models behind Visual Intelligence and Visual Look Up; it's only the ability to share the image with ChatGPT and Google that is different. 🤦

🤦‍♂️
> I have turned off Apple Intelligence on all my devices. It is beyond useless.

It’s actually not, as you can access ChatGPT via Siri and actually get something half useful out of it.
> When the differences between 15 Pro & 16 Pro are so minimal ... holding back something artificially and then advertising that same feature as one of the few "new" things you get when going 16 Pro ... is just a bad, bad look.
> [screenshot attachment]

Pretty much. Bad business decision for sure.
> Pretty much. Bad business decision for sure.

Or Apple found a way to quantize the models with similar performance and is now bringing it back to older devices, yet people still complain.
> Or Apple found a way to quantize the models with similar performance and is now bringing it back to older devices, yet people still complain.

What are you talking about? The models were designed with the iPhone 15 Pro in mind already; both the A17 Pro and the A18 Pro have the same 16 NPU cores and 35 TOPS. All the on-device performance data they provided was based on the models running on the iPhone 15 Pro.
> Many recognition tasks are done on device, offline, then ping the cloud for additional data, like from Foursquare.

Let me address this screenshot you posted. In the video you got this screenshot from, Stephen Robles tells you that Visual Intelligence does not recognize that location until you move closer to it, which makes them believe it is using GPS and map data, not visual recognition. The whole video is just him stating how Visual Intelligence did not trigger anything, but ChatGPT and Google search worked better.
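The GPS-and-map-data theory is easy to illustrate: given only a device location, a points-of-interest query against a places database returns nearby business names without looking at a single pixel. A sketch using MapKit's public POI search (offered to illustrate the claim, not as evidence of how Visual Intelligence is actually implemented):

```swift
import MapKit
import CoreLocation

/// Return the names of businesses near a coordinate using MapKit's
/// points-of-interest search. No image analysis is involved at all.
func nearbyBusinessNames(around coordinate: CLLocationCoordinate2D,
                         radius: CLLocationDistance = 100) async throws -> [String] {
    let poiRequest = MKLocalPointsOfInterestRequest(center: coordinate, radius: radius)
    let search = MKLocalSearch(request: poiRequest)
    let response = try await search.start()
    return response.mapItems.compactMap { $0.name }
}
```

Returning the tenants of a building you are merely standing near is the behavior the post attributes to GPS plus a places database rather than to vision.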
What are you talking about? The models were designed with the iPhone 15 Pro in mind already; both the A17 Pro and the A18 Pro have the same 16 NPU cores and 35 TOPS. All the on-device performance data they provided was based on the models running on the iPhone 15 Pro.

The on-device model already uses low-bit quantization: a mix of 2-bit and 4-bit weights averaging 3.7 bits per weight.
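For scale, here is the arithmetic behind that 3.7 bits-per-weight figure, assuming a roughly 3-billion-parameter on-device model (Apple has described its on-device foundation model as about that size; the exact count and the 4-bit/2-bit split below are assumptions for illustration):

```swift
/// Approximate weight-storage footprint for a model at a given average bit width.
func weightStorageGB(parameters: Double, bitsPerWeight: Double) -> Double {
    parameters * bitsPerWeight / 8 / 1e9
}

let params = 3e9   // assumed ~3B-parameter on-device model

let fp16  = weightStorageGB(parameters: params, bitsPerWeight: 16)    // ≈ 6.0 GB
let mixed = weightStorageGB(parameters: params, bitsPerWeight: 3.7)   // ≈ 1.4 GB

// One illustrative mix that averages to the quoted figure:
// 85% of weights at 4 bits, 15% at 2 bits -> 0.85*4 + 0.15*2 = 3.7 bits per weight.
let averageBits = 0.85 * 4 + 0.15 * 2

print(fp16, mixed, averageBits)   // roughly 6.0, 1.39, 3.7
```

Shrinking the weights from about 6 GB to under 1.5 GB is consistent with the poster's point that the 15 Pro hardware was never the obstacle.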
They intentionally marketed the feature as exclusive to the iPhone 16 line to sell the camera button, but since they introduced the 16e without the camera button, now they want to enable the button shortcut on the 15 Pro. It is really that simple.
Let me address this screenshot you posted. In the video you got this screenshot from, Stephen Robles tells you that Visual Intelligence does not recognize that location until you move closer to it, which makes them believe it is using GPS and map data, not visual recognition. The whole video is just him stating how Visual Intelligence did not trigger anything, but ChatGPT and Google search worked better.

From that same video, Visual Intelligence is "recognizing" CVS and other businesses inside the Kaseya Center, which again means it's using GPS and not visual recognition. They are relying on databases they have in Maps and in the cloud for Visual Intelligence. The only things I will say work on device are optical character recognition related functions like text recognition.

[screenshot attachment]

Apple tells you themselves that the images iPhone uses to identify objects and places are not stored on device and are only shared with Apple to process what's in view, no doubt using their Private Cloud Compute.

[screenshot attachment]