It's not replacing anything; it recognizes what it sees and enhances it as best it can (upscale resolution, details), which is why results vary. Google and Samsung have AI-based features that fix blurred photos; does that mean the resulting photos are fake? NO, ABSOLUTELY NOT. Samsung has a feature that removes shadows or reflections from photos; does that mean the result is a fake photo?

I'd be interested in seeing an example of disproving the original experiment. Not being able to reproduce it is not the same as disproving it, by the way. The Twitter link doesn't say anything other than "it didn't work as well for me".

There are ways to deconvolve blur (traditional super-resolution) and enhance details in shadows and through reflections (HDR, for example) that don't involve faking anything, but yeah, creating detail that doesn't exist in the scene is a fake photo. That's what "generative" means in generative AI-- it's generating/creating/making stuff up.
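For instance, here is a minimal sketch of the non-generative kind of deblurring I mean, using Richardson-Lucy deconvolution from scikit-image. The 5x5 box-blur kernel is a made-up stand-in; a real pipeline would estimate the point-spread function from the optics.

import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

image = color.rgb2gray(data.astronaut())   # sample grayscale image in [0, 1]
psf = np.ones((5, 5)) / 25                 # assumed point-spread function (box blur)

# Simulate the blur we want to undo
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Deconvolution only redistributes light the sensor actually recorded;
# unlike a generative model, it cannot invent detail that was never captured.
deblurred = restoration.richardson_lucy(blurred, psf, 30)  # 30 iterations

The key point is in the last comment: deconvolution solves an inverse problem over the recorded data, while a generative model samples plausible detail from its training set.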

That's a statement of fact, not a statement of value. It might serve the goal of making a happier customer with a better picture, or saving that customer from having to spend 20 minutes in photoshop to do the same task, but it is absolutely fake.


This little line at the bottom of the Samsung explainer is rather telling:

"Samsung is continuously improving Scene Optimiser to reduce any potential confusion that may occur between taking a picture of the real moon and an image of the moon."​
Samsung is trying to make it harder to trick their system with photos of images versus photos of the physical moon. Why wouldn't they want to also improve photos of photos? My guess is that it's because then it's apparent that the output doesn't match the input.
 
Kuo explained that the company will likely not dedicate much time to discussion of AI during its earnings call due to its lack of progress in the area. There is reportedly no sign that Apple has plans to launch or integrate AI computing or hardware products in 2024, indicating that AI is unlikely to boost the company's stock price or supply chain in the immediate future.

..... Cook also said that Apple views AI as "huge," and plans to "continue weaving it in products on a very thoughtful basis."


Kuo is a bit clueless here. There is a substantive difference between actually doing something with AI and getting on the AI-hype train to boost your stock price.

The Vision Pro's major differentiating features all critically depend upon AI processing. That is coming in 2024. So saying that there is no "major AI" coming from Apple in 2024 is largely nonsensical if we're talking about actual technology usage. Is Apple going to spin the Vision Pro to get onto the AI-stock-price hype train??? Nope.

Similarly, at WWDC 2023 Apple outlined a major change coming in keyboard text insertion. Does autocorrect run on Oompa-Loompas and pixie dust? Or an ML/AI algorithm? Again, it is announced as a feature, and unless you look carefully at what really makes the feature work, you are going to miss the 'AI' usage there.

WWDC 2023: live transcription of voicemail as it comes in... Oompa-Loompas or AI?

Is Apple going to do nothing to push computational photography/videography forward in 2024? Probably not.

Apple also has a tendency to want to sell some features as "magical" / "magic" / "wondrous". If you explain something, it is harder to sweep up that characteristic.


Apple puts a lot more work into AI inference than AI training, mainly because it is more preoccupied with delivering the features to common/general end users.


Large 'chatty' language models are not the whole scope of AI. Siri is lagging. (Autocorrect is/was even worse, though. Once it snags on misspellings or odd context assumptions, it won't let go.) [An LLM is not necessarily a 'silver bullet' that will solve Siri's major failings of understanding/scope.]
 
I'd be interested in seeing an example of disproving the original experiment. Not being able to reproduce it is not the same as disproving it, by the way.
What's important is the premise of that experiment and how, after so many months, users who don't own a Samsung phone with AI Moon Mode are still convinced that the resulting pictures are just something Samsung found online and slapped on the original moon. That's what I disproved and showed to be incorrect beyond any doubt.

The Twitter link doesn't say anything other than "it didn't work as well for me".
It says enough. It proves that it's an AI model with varying results, not just a "replace the original moon with some picture from the internet" mode (= a completely fake photo), as was being suggested here. Pay attention at least.

There are ways to deconvolve blur (traditional super-resolution) and enhance details in shadows and through reflections (HDR, for example) that don't involve faking anything, but yeah, creating detail that doesn't exist in the scene is a fake photo. That's what "generative" means in generative AI-- it's generating/creating/making stuff up.
The details exist, they were just blurred in that particular case. The Moon is real and looks the same; it's a single object, so the AI was able to recreate those blurred details in that case.
Also, take another look at the tweet below: the S23 Ultra's hardware is perfectly able to resolve quite a lot of detail when pointed at the Moon, so in any real case (the actual real Moon) its AI just enhances what the hardware sees; it doesn't create "fake photos". The conversation was general (feels weird to have to point that out), not just and only about that "experiment".

That's a statement of fact, not a statement of value. It might serve the goal of making a happier customer with a better picture, or saving that customer from having to spend 20 minutes in photoshop to do the same task, but it is absolutely fake.
Not in general it's not. For the regular consumer it's just an AI-enhanced photo of the real Moon, not a "fake photo".


Samsung is trying to make it harder to trick their system with photos of images versus photos of the physical moon. Why wouldn't they want to also improve photos of photos? My guess is that it's because then it's apparent that the output doesn't match the input.

No, it doesn't try to do that; that's a malevolent interpretation. All the drama started with a blurred photo of the Moon, but in reality I haven't noticed an instance where the phone was not able to focus on the Moon (the real Moon), so most likely what Samsung is improving is: when the Moon is blurred even when focused on, it's most likely fake.
 
At least they aren't instantly basing their whole business strategy around it like Microsoft is.
 
If Apple truly is so far behind the others on AI, I believe it might be the thing that ends Apple's dominance as we know it.

Nobody will choose a smartphone based on a slightly better or worse camera, but when a smartphone comes with a truly smart assistant built in, that will be the new be-all end-all killer app.

I would truly be surprised if Apple really is so far behind on this.
 
What's important is the premise of that experiment and how, after so many months, users who don't own a Samsung phone with AI Moon Mode are still convinced that the resulting pictures are just something Samsung found online and slapped on the original moon. That's what I disproved and showed to be incorrect beyond any doubt.

It says enough. It proves that it's an AI model with varying results, not just a "replace the original moon with some picture from the internet" mode (= a completely fake photo), as was being suggested here. Pay attention at least.

The details exist, they were just blurred in that particular case. The Moon is real and looks the same; it's a single object, so the AI was able to recreate those blurred details in that case.
Also, take another look at the tweet below: the S23 Ultra's hardware is perfectly able to resolve quite a lot of detail when pointed at the Moon, so in any real case (the actual real Moon) its AI just enhances what the hardware sees; it doesn't create "fake photos". The conversation was general (feels weird to have to point that out), not just and only about that "experiment".

Not in general it's not. For the regular consumer it's just an AI-enhanced photo of the real Moon, not a "fake photo".
There's no further to go here. No point arguing about what's "fake" and what "replacing" means. You want a narrow definition for fake and replace, but seem to take issue with the phrasing used about "from the internet". I didn't read that initial comment to mean that Samsung does a "moon picture" Google search every time you push the shutter, because that would be dumb. I read it as "AI training is very data intensive, so Samsung likely scraped the internet for training data, which got encoded into its CNN and is later inserted into the user image where detail needs to be faked enhanced."
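To make concrete what "encoded into its CNN" means, here's a minimal SRCNN-style sketch (after Dong et al.'s 2014 super-resolution paper) in PyTorch. To be clear, this is a generic illustration of the mechanism, not Samsung's actual model, which is unpublished.

import torch
import torch.nn as nn

class TinySRCNN(nn.Module):
    """Toy detail-enhancement network in the style of SRCNN."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch feature extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Trained on (blurry crop, sharp crop) pairs, e.g. scraped moon images, the
# sharp-image statistics end up stored in the conv weights. At capture time
# the network can then paint plausible craters onto mush it has never seen.
model = TinySRCNN()
blurry = torch.rand(1, 1, 128, 128)   # stand-in for a blurry moon crop
restored = model(blurry)              # "detail" comes from the learned weights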

Maybe I'm wrong, maybe Samsung set about creating a private database of images of the moon they commissioned.

Doesn't matter in this case-- Samsung is not merely enhancing existing detail, they're creating content. That was the message I took from the original post and that appears to be true.

No, it doesn't try to do that; that's a malevolent interpretation. All the drama started with a blurred photo of the Moon, but in reality I haven't noticed an instance where the phone was not able to focus on the Moon (the real Moon), so most likely what Samsung is improving is: when the Moon is blurred even when focused on, it's most likely fake.

Fake? So adding detail that doesn't exist in the scene isn't fake, but blurring and removing detail that does exist is fake? We have very different definitions of fake...

That aside, why bother detecting fake moons if all you're really doing is enhancing detail that is already there?
 
Well, the thing is, the current ChatGPT isn't even perfect yet. Sometimes it gives outdated or inaccurate information. It may seem very smart, but it's still not perfect, and most of the time it just makes up answers, so it can't even be considered a reliable source.

So I don't really want a ChatGPT-like AI from Apple, as the current tech is not even ready for that yet, but I want them to at least improve Siri. Siri is already pretty good for basic stuff (e.g. setting alarms, notes, reminders, etc.), though it does have some flaws, as it sometimes mishears you or gets activated by accident, which is pretty annoying. I'd like them to at least fix this, and maybe also give it more integration with third-party apps.
And then there's the fact that it usually just throws web results at you when asked about something.. like, what's the point of that when I could've simply looked it up on Google? At least make it more capable than that. Make it actually give you an ANSWER instead of throwing a bunch of web results and calling it a day.
 
The thing is, machine learning makes massive sense in the Apple ecosphere, is partially already in use (e.g. Photos, transcription and other services), and will find its use in much of the software and many of the services the company offers. Remodeling Siri will probably be the hardest part, as that software is something else entirely and needs to be re-invented from the ground up yet still be «siri-esque». This alone might touch almost any Apple product these days, most notably the Vision Pro, which will be able to make good use of a digital (voice) assistant along the lines of Iron Man's «Jarvis» ;-).
But there will also be lots of other smaller things where «AI» might be fun:
— scheduling and tasks in the calendar (à la Motion, but deeply integrated with Reminders, Contacts, Mail and other apps)
— task management / todo / project management
— Translation and auto-correction (already partially implemented) but also anything else to do with writing and editing text.
— Text to speech / Speech to Text
— Telling the system what to do and getting results (e.g. programming a workflow or automation just by describing what you want it to do)
— composing music in logic
— creating playlists in Apple Music

and so on. Rather than have Apple clone GPT, Midjourney and other already existing applications, or ship a ham-fisted add-on as with Bing, I'd rather see them use the almost magical possibilities large language models offer in the way they have always implemented technological progress – in the most natural, subtle and charismatic way, so that it feels as if it had almost always been there, even if it is brand new. This human-scaled, easy touch is what makes Apple so magical – it all feels like an extension of you, your work, of what you already do. No manual needed (moooostly), intuitive and easy. It just works. It might take some time, but until then we have GPT and all the other toys to play with.

Except, of course, when it does not work at all.
Which might also be an option :-D
 
AI hype was in the spring.

It's already dying now.

Francois Chollet predicted the interest would die quickly because the claims being made were over the top.

We saw the result. Tons of AI spam all over social media. AI grifters lying through their teeth.

AI generated porn on unrelated hashtags. AI "photography" on unrelated hashtags. AI generated ******** articles with errors all over them.

Troll farms paid by AI companies to threaten creatives that 'their jobs are going to die unless they subscribe monthly to this service'.

Untalented morons claiming to be 'artists' now because a machine spat out random images.

Unknowledgeable morons claiming movies could be generated with prompts. Whole movies! 24 frames per second! In high-definition 4K! The people making these claims don't even understand how many GPUs would be needed, how much electricity would be consumed, and how many errors there would be.

And we thought NFT people were dumb, fake and corrupt. These people took it to another level.
Exactly! While AI might seem like "the future" for the most part, it actually does have lots of problems. AI can now generate photos, videos, music, art.. well, what about the actual people who have those skills!? In a few years, you probably won't be able to tell if a drawing was made by AI or a real human anymore. As an artist myself, this alone makes me very worried.. :(
It only keeps getting more dangerous as it evolves, and will probably end up replacing lots of human jobs.. leaving a bunch of people jobless.

And while we thought all those bot/automated fake accounts and the NFT/Crypto scammers were bad, turns out it could get even more dangerous than just this..
 
There's no further to go here. No point arguing about what's "fake" and what "replacing" means.

I agree, my explanation is logical and there's no point in continuing.
All this never-ending talk about taking photos of blurred photos is pointless; I take photos of the real Moon, and that's what every single Samsung user who has capable hardware does.
Also, Samsung's AI is great at recognizing all kinds of other things (for example, the Sun); it will even recognize a single white circle or a square and upscale the digitally zoomed photo (and no, it doesn't add a Moon on top like some users here would believe). Besides the Moon, what Samsung's AI is great at is text: especially with zoom beyond 30x, it does everything it can to enhance the text and make it as readable and recognizable as possible. By the logic of some users here, such resulting photos would be "fake".

Maybe I'm wrong, maybe Samsung set about creating a private database of images of the moon they commissioned.
Well you are just spinning in circles with implausible suppositions because you don't actually want to admit you are wrong.

Doesn't matter in this case-- Samsung is not merely enhancing existing detail, they're creating content. That was the message I took from the original post and that appears to be true.
They "created content" in a certain specific scenario but users don't usually take pictures of blurred Moon pictures so in the actual intended use scenarios Samsung is indeed merely just enhancing existing detail. Case closed.

That aside, why bother detecting fake moons if all you're really doing is enhancing detail that is already there?
😂 Yeah sure.
Anyway, this is a picture I took of a blurred Moon photo just now. What I noticed: if I just slightly blur the Moon, when I make it smaller it becomes more clear and easier to distinguish even if it's blurred. This is also true with Scene Optimizer off, so no AI; the phone lowers the exposure a lot, and that enhances the contrast, making the dark spots pop. To notice this you need to have the proper phone for it, which is generally not the case for the most avid users accusing Samsung here. Also, taking a photo of a photo in a room from a small distance doesn't feel right; the real scenario is different.
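Roughly, the exposure effect works like this (a toy numpy model with made-up radiance values, not Samsung's actual processing):

import numpy as np

# At a normal night exposure the Moon's disk saturates the sensor, so the
# bright surface and the darker maria both clip to white and contrast is lost.
scene = np.array([3.0, 2.2])              # hypothetical radiance: disk vs. dark spot
normal = np.clip(scene, 0.0, 1.0)         # both clip to 1.0 -> featureless white disk
lowered = np.clip(scene * 0.3, 0.0, 1.0)  # 0.9 vs 0.66 -> the dark spots pop
print(normal, lowered)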
[Attached photo: 20230802_230850.jpg]
 
For example, I can say "Hey Siri, set kitchen lights to 85%" and the dimmer is adjusted to the desired brightness. If I change any of those words, there is a chance of error. So I learn to say the phrases exactly as Siri wants. Same with sending texts and setting a destination in the Maps app. You need to use exactly the "correct" words and speak each word clearly.
And specifically relating to home automation, a LOT of what makes that successful is the work that goes into setting up all the rooms and relationships. In that specific instance, the personal assistant is really only as smart as the person who forgot that they didn't configure one device or another to a specific room and is wondering why Siri doesn't work. :)
 
AI can now generate photos, videos, music, art.. well, what about the actual people who have those skills!? In a few years, you probably won't be able to tell if a drawing was made by AI or a real human anymore. As an artist myself, this alone makes me very worried.. :(

I comfort myself by looking at how artists have adapted in the past— think of painters having to adapt to photography, and photographers and journalists then having to adapt to photoshop. Musicians having to adapt to sampling, drum machines and auto tune. Woodworkers adapting to CNC.

I’m not sure that this next step is the same as those, but I suspect the fear is mostly about the unknown and in a decade we’re going to see these tools used, by humans, to great effect to bring new forms and character to art and the messages they want that art to convey.

What makes a painter a painter isn’t that they have a paint brush, it’s that they can use it better than the ordinary bloke. I would expect AI to be a tool in the same way— some people will be able to generate art with AI better than the ordinary bloke because, well, they’re artists.
 
It IS Tim Cook's biggest failing. If he were not so focused on the Vision Pro, which a large number of Apple users cannot afford and wouldn't want if they could, then he might see that this feature, which is available in every device and is the face of these devices, should be the absolute best thing about all of them.
So you are saying you would prefer to have Siri control your information rather than you controlling what information Siri gives you? I don't believe Siri is a fail at all. Could it be better? Sure. Could AI be better? Hell yeah. The difference between Siri's failings and AI's failings is huge.

AI failings are worse than shooting an arrow in the air and expecting it to fall on the intended target. Siri is only about using the correct syntax.

The funny thing is, Apple refers to all of this as what it is: Machine Learning.
Machine Learning is a far better expression than Artificial Intelligence. It is all learning from recovered data. A.I. cannot understand human context through its own intelligence. I think the example of the Trolley Problem shows this.
 
So you are saying you would prefer to have Siri control your information rather than you controlling what information Siri gives you? I don't believe Siri is a fail at all. Could it be better? Sure. Could AI be better? Hell yeah. The difference between Siri's failings and AI's failings is huge.

AI failings are worse than shooting an arrow in the air and expecting it to fall on the intended target. Siri is only about using the correct syntax.


Machine Learning is a far better expression than Artificial Intelligence. It is all learning from recovered data. A.I. cannot understand human context through its own intelligence. I think the example of the Trolley Problem shows this.
At the end of the day, all the marketing terms aside, ALL we currently have is applied statistics. There is no knowledge anywhere in these models, be it a language model like ChatGPT/Bard/whatever or an image or video "AI" like Stable Diffusion or Adobe's tools. No brains, just statistics.

Normally I'm not a stickler for terminology, but the fact that in 2023 so many people think ChatGPT "knows" anything at all is scary to me. It offers the comfort of handing over decision making, which is NOT what we need. This is just a beefed-up version of ELIZA from the '70s, itself an updated version of psychology experiments from the '50s & '60s (tools, not manipulations).


 
At the end of the day, all the marketing terms aside, ALL we currently have is applied statistics. There is no knowledge anywhere in these models, be it a language model like ChatGPT/Bard/whatever or an image or video "AI" like Stable Diffusion or Adobe's tools. No brains, just statistics.

Normally I'm not a stickler for terminology, but the fact that in 2023 so many people think ChatGPT "knows" anything at all is scary to me. It offers the comfort of handing over decision making, which is NOT what we need. This is just a beefed-up version of ELIZA from the '70s, itself an updated version of psychology experiments from the '50s & '60s (tools, not manipulations).


If ChatGPT does not know anything, how does it manage to answer so many questions correctly (arguably way more questions than any human can answer)?
 
I agree, my explanation is logical and there's no point in continuing.
All this never-ending talk about taking photos of blurred photos is pointless; I take photos of the real Moon, and that's what every single Samsung user who has capable hardware does.
Also, Samsung's AI is great at recognizing all kinds of other things (for example, the Sun); it will even recognize a single white circle or a square and upscale the digitally zoomed photo (and no, it doesn't add a Moon on top like some users here would believe). Besides the Moon, what Samsung's AI is great at is text: especially with zoom beyond 30x, it does everything it can to enhance the text and make it as readable and recognizable as possible. By the logic of some users here, such resulting photos would be "fake".


Well you are just spinning in circles with implausible suppositions because you don't actually want to admit you are wrong.


They "created content" in a certain specific scenario but users don't usually take pictures of blurred Moon pictures so in the actual intended use scenarios Samsung is indeed merely just enhancing existing detail. Case closed.


😂 Yeah sure.
Anyway, this is a picture I took of a blurred Moon photo just now. What I noticed: if I just slightly blur the Moon, when I make it smaller it becomes more clear and easier to distinguish even if it's blurred. This is also true with Scene Optimizer off, so no AI; the phone lowers the exposure a lot, and that enhances the contrast, making the dark spots pop. To notice this you need to have the proper phone for it, which is generally not the case for the most avid users accusing Samsung here. Also, taking a photo of a photo in a room from a small distance doesn't feel right; the real scenario is different.
I’m not sure you understand pixel interpolation & extrapolation. It’s case closed only if you have a closed mind on this issue.

Your last para is hilarious: "when I make it smaller it becomes more clear and easier to distinguish even if it's blurred".
That's only because your eye cannot distinguish the detail, not because it gets clearer as it gets smaller.
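A quick numpy sketch of that point, with random texture standing in for fine lunar detail: shrinking a blurred image hides the blur from the eye, but the lost detail never comes back.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(0)
detail = rng.random((64, 64))               # stand-in for fine lunar texture
blurred = gaussian_filter(detail, sigma=2)  # the blur in question

small = zoom(blurred, 0.25)                 # "make it smaller"
back = zoom(small, 4.0)                     # enlarge again to compare

# Both errors stay large: downscaling changed nothing about the information.
print(np.abs(detail - blurred).mean(), np.abs(detail - back).mean())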

If ChatGPT does not know anything how does it manage to answer correctly so many questions (arguably way more questions than any human can answer)?
It's a very clever search engine that returns results in a more informal (human) style. That doesn't make it more accurate than any other (group of) search engines.
 
If ChatGPT does not know anything how does it manage to answer correctly so many questions (arguably way more questions than any human can answer)?
Being able to respond to a prompt does not mean it conceptually "knows" something. Statistically, it hands you the most likely result for your language prompt.

ChatGPT is a *language* model; it generates speech. It is not "smart", it's just good at generating human-sounding speech after being trained on it. Being able to spit out a fact does not mean it "understands" a given topic.
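A toy bigram model makes "hands you the most likely result" concrete. Real LLMs are transformers over subword tokens, but the training objective is the same flavor: predict the most probable continuation, not "know" anything. (The corpus here is made up for illustration.)

from collections import Counter, defaultdict

corpus = "the moon is bright . the moon is full . the sky is dark .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                # raw word-pair frequencies

def most_likely_next(word):
    return counts[word].most_common(1)[0][0]

print(most_likely_next("moon"))           # -> "is": pure frequency, no facts
print(most_likely_next("is"))             # -> "bright" (first among tied counts)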
 

Attachment: Chomsky on ChatGPT.pdf
I’m not sure you understand pixel interpolation & extrapolation. It’s case closed only if you have a closed mind on this issue.

Your last para is hilarious: "when I make it smaller it becomes more clear and easier to distinguish even if it's blurred".
That's only because your eye cannot distinguish the detail, not because it gets clearer as it gets smaller.


It's a very clever search engine that returns results in a more informal (human) style. That doesn't make it more accurate than any other (group of) search engines.
It's not a search engine. A search engine can only search for stuff on the internet. ChatGPT has a lot of internal knowledge, but it can also do things that no search engine can; for example, it can code (i.e. write programs).
 
Being able to respond to a prompt does not mean it conceptually "knows" something. Statistically, it hands you the most likely result for your language prompt.

ChatGPT is a *language* model; it generates speech. It is not "smart", it's just good at generating human-sounding speech after being trained on it. Being able to spit out a fact does not mean it "understands" a given topic.
Based on your definition, humans also just generate speech. How would you explain ChatGPT's ability to write programs when given a spec? It definitely involves understanding. One can't devise a solution without understanding the problem.
 
Based on your definition, humans also just generate speech. How would you explain ChatGPT's ability to write programs when given a spec? It definitely involves understanding. One can't devise a solution without understanding the problem.
It doesn't. Its input is from programming books acquired from dark web libraries. The programs written by ChatGPT are very simple and don't follow security best practices - almost as if it's a novice programmer.

As for speech, parrots can also generate human-sounding speech.
 
Based on your definition, humans also just generate speech. How would you explain ChatGPT's ability to write programs when given a spec? It definitely involves understanding. One can't devise a solution without understanding the problem.
Being able to follow rules doesn’t speak at all to why they exist, what they mean, etc. Applying mathematical rules quickly doesn’t even remotely approach understanding something.

A sleeker version of Clippy does not equate to *understanding* a concept.
 