Yeah, if it was sharpening, then, as there’s no detail in that gray square at all, it should be sharpened to a gray square, not have a moon texture added to it
So it’s an overlayed image and the original image combined?
I can’t argue with that. But essentially, it is using known images to convert a new image into a combined image. Something Photoshop has been able to do for many years.

"Overlayed" is the problematic term here... Samsung doesn't show how their Scene Optimizer works other than to claim it's a convolutional neural network-- if that's true, and there's no reason to believe it isn't, then what's happening is less an image overlay and more inpainting from moon images the network was trained on-- believable data being inserted and blended.
The CNN here is essentially a transform that takes the source image and outputs a moon tailored to match the source context.
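To make the "sharpening can't do this" point concrete, here's a toy sketch in plain Python (everything below is hypothetical, purely for illustration): a classic 3x3 sharpening kernel has weights that sum to 1, so a perfectly flat gray patch passes through unchanged-- any moon texture appearing in the output must come from somewhere other than the input, e.g. a network's training data.

```python
# Toy demonstration: a sharpening convolution cannot invent detail.
# The 3x3 "sharpen" kernel below has weights summing to 1, so any
# perfectly flat (constant) region maps to the same constant region.

SHARPEN = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2D grayscale image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A featureless gray patch: sharpening leaves it exactly as it was.
flat = [[128] * 5 for _ in range(5)]
assert convolve3x3(flat, SHARPEN) == flat

# A patch with an edge: sharpening exaggerates the existing contrast.
edge = [[100] * 5 for _ in range(2)] + [[200] * 5 for _ in range(3)]
assert convolve3x3(edge, SHARPEN) != edge
```

Real camera pipelines are far more elaborate, but the mathematical point stands: convolution-style sharpening only amplifies detail that is already there.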
People here are suggesting Samsung is doing some kind of magic with AI, which it isn’t, otherwise it could do the same with any scene, faces etc.
Yeah, so many people actually think ChatGPT is smart and that it’s a reliable source of information while it’s clearly not, as it makes stuff up and has lots of outdated information. Even students are now starting to use it for writing essays, which is a very bad idea considering it doesn’t always provide accurate information. It’s better to actually do some research from trusted sources; a “Generative AI” doesn’t seem like a trusted source to me though lol.

This lawyer used ChatGPT. It made up the cases cited and now he has been sanctioned and may get disbarred. 🤣
New York lawyers sanctioned for using fake ChatGPT cases in legal brief
A U.S. judge on Thursday imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an artificial intelligence chatbot, ChatGPT.
www.reuters.com
I hate doing case research but I'd NEVER even think of using ChatGPT to do my research. LexisNexis does a fairly good job if you know how to use it. It's much better than going to a library like in the olden days.
I agree with pretty much what you said. The part about what’s better… I also agree: it’s completely subjective. I do, however, find it more impressive to take an image and read the text, enabling it to be copied and pasted, like I use regularly on the iPhone (and probably every other smartphone out there), and even more impressively, take the image on the fly and translate it to your preferred language.

It's a question of what your definition of magic is... It's pretty remarkable that it can do that infill without human intervention and give you a more gratifying picture between the moment you press the shutter and when it gets to your library. The concept isn't new, but automation of it is.
Everything is something that people were kind of able to do before...
I don't know what other ways they're using AI in their camera, but it's possible they do clean up other parts of images to give results people like. The moon is a good use of something like this though because it doesn't really change much-- we're always looking at essentially the same face of it and it's far enough away we don't get perspective distortions or parallax changes to cope with.
It got attention because it was easy to spoof and learn a bit more about what it's doing and how-- and it's a good example to point to in the debate about where the lines should be in digital photography. Is this acceptable, or is it cheating? Is there a difference between auto exposure, white balance, HDR, contrast and vibrance adjustments and moon fixing, or are they all matters of degree?
As someone who enjoys photography, I like to have control of the image so I'm not keen on the machine creating something that wasn't actually there. If I was someone who just wanted a cute instagram photo that showed a romantic night out, then maybe I'd welcome the help in making the moon in the sky like a big pizza pie...
I seem to remember another example of a building facade where the camera converted the window frames into Chinese characters. Maybe it was Korean, I can't remember. And I can't find that image with a search so I don't know if it was Samsung or a different phone.
People have lambasted me for comparing it to a search engine, but it is exactly that: it apparently puts the information together in some clever way. It’s not clever; it's just programmed to use syntax (grammar) commonly used by normal people, to make it look like it’s clever. The way it brings information or images together may be clever, but it’s not AI.

Yeah, so many people actually think ChatGPT is smart and that it’s a reliable source of information while it’s clearly not, as it makes stuff up and has lots of outdated information. Even students are now starting to use it for writing essays, which is a very bad idea considering it doesn’t always provide accurate information. It’s better to actually do some research from trusted sources; a “Generative AI” doesn’t seem like a trusted source to me though lol.
Personally, I’d never rely on ChatGPT for any type of research lol. I just don’t trust it. You can now ask it to write a positive review about a product on Amazon, and it’ll actually do it despite not being told what the product even is.. 💀
I agree with pretty much what you said. The part about what’s better… I also agree: it’s completely subjective. I do, however, find it more impressive to take an image and read the text, enabling it to be copied and pasted, like I use regularly on the iPhone (and probably every other smartphone out there), and even more impressively, take the image on the fly and translate it to your preferred language.
Doing this moon/pizza thing might mean amore to some people but I am less impressed.
People have lambasted me for comparing it to a search engine, but it is exactly that: it apparently puts the information together in some clever way. It’s not clever; it's just programmed to use syntax (grammar) commonly used by normal people, to make it look like it’s clever. The way it brings information or images together may be clever, but it’s not AI.
It's no more AI than me spending half an hour each in several car yards over a day and suddenly getting new-car ads in my feeds.
A very thoughtful and wonderful reply. I acknowledge everything you say, and will certainly and honestly use your post as an introduction to learn more. 👍 I’m not aware of any machine yet passing the Turing Test, but I also wonder: if something can imitate a human, is that enough to assume it has any kind of human intelligence, regardless of where that threshold is?

It's worth being cognizant of the fact that we keep moving the goalposts on what's really "intelligent".
If you did a lot of moonlit honeymoon shots on the Venetian canals, I'd imagine you might hire a moon specialist who was very good at Photoshopping better moons into images. Someone who was quite good at it, and could give good results quickly might be able to make a living at such a thing. We'd call that person talented.
No test is without criticism, but one common go-to for these things is the Turing test. I think these chat bots probably pass depending on how we choose to define it.
It's interesting that a lot of the attention now is on the human participants in the Turing test and how the humans can convince the judge that they're truly human to not be confused with the machine. It's as though the reason programs are passing, or nearly passing, the Turing test isn't because they're good at acting human but because humans are bad at it.
Turing chose natural language as a key indicator. He apparently felt that knowledge and facts weren't sufficient indication, and the ability to express that information conversationally was important. Sure enough that was very, very hard... until now.
The Chinese Room view is that it's not sufficient to be able to process symbolic information; there needs to be some concept of understanding that information. As presented in that scenario, it's reasonably clear what the problem is: given input written in Chinese and an English instruction manual explaining how to convert the incoming symbols to response symbols, it's possible for a non-Chinese speaker to appear conversant without actually understanding any of the conversation.
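A minimal sketch of that thought experiment, assuming a toy rule book (the symbol pairings below are invented purely for illustration): the operator produces plausible responses by pure lookup, with no understanding anywhere in the loop.

```python
# Minimal Chinese Room sketch: the "operator" maps input symbols to
# output symbols by rule lookup alone. The rule book entries here are
# made up for illustration; no understanding is involved anywhere.

RULE_BOOK = {
    "你好": "你好！",          # a greeting -> a greeting back
    "你会中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def operator(symbols: str) -> str:
    """Return the scripted response, knowing nothing of its meaning."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

assert operator("你好") == "你好！"
assert operator("完全陌生的话") == "请再说一遍。"
```

The dictionary lookup is the point: the room answers correctly for the cases its rules cover, yet nothing in the program models what any symbol means.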
It gets a little less clear when you look at a neural network where the path from input to output isn't a linear program but a complex function where the weights are determined by exposure to examples and trial and error-- not the same but not all that different from how we learn. Would there be a difference in how it "understands" Chinese and how a human would learn to understand it as a means for communicating knowledge back and forth in a Turing test?
What makes things a bit harder yet is that we don't really understand how our own intelligence works. Our consciousness sits on the surface of a very deep system and we're not terribly good at introspection.
I like Chomsky's point that many of these systems can't really explain how they reached the conclusion they did when answering a question. This is a problem in making functionally safe autonomous vehicles. It's not always clear from examining a neural network why it gave the output that it did. This makes safety people crazy because the system can't be made provably safe-- even if it outperforms a human in practical tests.
The truth is humans very often reach a conclusion first and then construct a reasoning framework to explain it later. Humans do stupid **** all the time, but we strangely hold them to a lower standard than we do our machines-- I believe it's because we have an empathy for human failings that we don't for machines that we imagine to be designed for a purpose.
Anyway, I don't think it's helpful to view intelligence as a threshold, because we just keep moving the threshold. It can compute, but can't communicate. It can communicate but by rote program. It can learn, but doesn't understand. It's convincing at one task, but lacks the ability to generalize. We keep looking for why we're better.
I think it's better to just acknowledge that machines are getting more intelligent with time. We need to be careful we don't give credit to the machines for having more intelligence than they do, but also be careful we don't give ourselves more credit than we deserve.
We should also start to acknowledge that we're easily convinced by smooth talking machines and realize that probably also means we're easily convinced by smooth talking humans-- both of which strike me as vulnerabilities.
If Apple truly is so behind others on AI, I believe it might be the thing that will end Apple’s dominance as we know it.
Nobody will choose a smartphone based on a slightly better or worse camera, but when a smartphone comes with a truly smart assistant built in, that will be the new be-all, end-all killer app.
I would truly be surprised if Apple really is so far behind on this.
A very thoughtful and wonderful reply. I acknowledge everything you say, and will certainly and honestly use your post as an introduction to learn more. 👍 I’m not aware of any machine yet passing the Turing Test, but I also wonder: if something can imitate a human, is that enough to assume it has any kind of human intelligence, regardless of where that threshold is?
I think we have to be careful when we say machines are getting more intelligent. I would prefer to think machines are getting better access to data and able to handle and communicate that data with more precision.
You said we ask why we are better? I will add something that Steve Wozniak said a couple of years ago: does this so-called AI ask itself each day, "What will I do today?" We talk about self-awareness, but more than that, we are intelligent enough to make a genuine choice, not based on reason or randomness, but on a feeling, or just because we want to. For me, that’s why Steve Wozniak says AI doesn’t yet have the intelligence of an ant. It is not enough to be self-aware. Descartes said, "I think, therefore I am." In reply, Nietzsche asks, "Do we really think?" (and a whole lot more…) Being able to show a level of apparent intelligence because we have access to a vast library of data and grammar does not make us actually intelligent. Actual intelligence (no matter how dumb we are) and knowledge are two entirely different things.
So let's lay to rest this ridiculous saga with Samsung's Moon Mode.
I will provide a bunch of images that haven't been seen before because, you know, I own an S23U and I can test the phone and all its features (unlike the users that were arguing with me here).
I said a few things in my comments which are very important, and I will now prove beyond any doubt that: Samsung's hardware is absolutely capable of taking very good photos of the Moon without any problems (no tripod or anything, just plain handheld) and without any special help (especially not cheating, like it's being suggested here), and that taking pictures of the real thing is not the same as taking pictures of a picture.
I will start with a picture I took with Pro Mode. Now, Pro Mode does very little processing (even for plain JPEGs), so obviously no Moon Mode. I admit I did a lazy job; I didn't have the patience to play with the settings enough or I would have gotten a better picture (especially in terms of detail). Also, in Pro Mode digital zoom maxes out at 20x, so you can't get that close.
Anyway I made it obvious the pic was taken with the S23U and Pro Mode.
Now, the second picture is the real star. I took it with Gcam. I hope people here know what Gcam is, and the very obvious, undeniable fact that it doesn't have any special optimization or code to take better pictures of the Moon-- and even so, I took quite an impressive Moon pic with this 3rd-party software. Now, Gcam is obviously worse than Samsung's stock camera app at this; even if I can zoom just as close, I can't decrease the brightness enough, but the picture I got is way, way better than any non-Samsung phone can hope to take-- it's not even close. This pic was also taken handheld; I only tapped on the Moon, decreased the brightness to the minimum and took the photo. Oh, and I took 3 other pictures just like this one after that, so it's not just some miracle or anything like that. Also, this and the picture above were taken at the same time, because I first took the pic in Pro Mode (quickly adjusted some settings) and after that quickly opened Gcam, zoomed in and snapped another quick pic.
This third picture is something I don't think I've seen much of, and that the original random Reddit user didn't bother to explore: a Scene Optimizer On and Off comparison. Quite crazy that it wasn't explored enough, but hey, I can suspect why. Anyway, even with Scene Optimizer Off (so no Moon Mode) the S23U still takes amazing Moon pictures; the phone makes the same camera settings adjustments when you zoom very close on the Moon (or any bright object for that matter, like a light bulb-- I mention this so I don't get any crazy gotcha theories again). The picture with the Moon Mode option turned On does look better-- obviously enhanced, but not by a crazy amount (not like users here believe and theorize about with conviction). I would say it's maybe not even absolutely necessary in the first place (but I suspect Samsung really wanted to promote the Moon-picture-taking capabilities of their phones).
These last 2 pics are with Scene Optimizer Off. The thing is, I didn't realize I took these pics without Moon Mode turned On until I did the comparison above; at some point I turned Scene Optimizer Off and didn't realize it. Still, the Moon pictures were so good I didn't notice they weren't enhanced.
Now I get it: Samsung's zoom capabilities are so impressive that a lot of people (especially those with an obvious bias against Samsung) don't believe them to be real (I remember seeing users on Twitter calling 30x pics from the S23U fake because they were too good). The thing is, does Samsung need to cheat in order to objectively get the best Moon pics of any smartphone? NO, absolutely not. Did that random user prove in a very concise, technical manner, beyond any doubt, how Samsung's Moon Mode works behind the scenes or under the hood-- that Samsung cheats, that its Moon pictures are fake, that Samsung only overlays things over the original, presumably terrible Moon pictures the phones are actually able to take? Definitely NOT. Otherwise, the pictures I took with Scene Optimizer Off, Gcam and Pro Mode wouldn't be possible. The idea is that I concentrated on the real thing, like any normal user that has this feature would. I also suspect that Samsung didn't really get involved in this because, at the end of the day, it's just child's talk, nothing more-- kids online making assumptions about things they don't own or know.
Well Steve you tried but you obviously don't have any fangs left.

Okay, I’ll bite…
Well Steve, you did imply I'm close-minded and after that tried to humiliate me by making fun of me because I described what happens when you zoom on the Moon with a Galaxy S Ultra, so let's say this is karma.

But I’m not sure about this obsession with the moon.
Nobody really cares or asked you. The discussion is about how a camera feature works.

I’ve taken like a dozen ever, with a real camera. My favourite use for a long zoom is landscape. Can the Samsung take images as good of mountain ranges, or does it suffer from chromatic aberration and lack of colour contrast? Instead they pick a monochromatic subject. The amazing thing isn’t the quality; it’s that they managed to squeeze a near-useless zoom into a phone.
Well Steve, it was obviously implied. I already explained that the feature enhances what the hardware sees, and clearly you guys didn't agree with that.

That said,
1. No one said you couldn’t take a decent photo with the Samsung. They look fine for a mobile phone. Maybe even good for a mobile phone. Very good objectively? Well…
Well, again, it was implied (and I was talking about Samsung, not myself): if the "moon photos are fake", if Samsung does something fishy in the background to "enhance" the photos beyond what they explained in the description of the feature, if it just adds the details because the hardware can't actually capture them, that's obviously cheating. For example: "that’s not the Samsung type of AI where using AI to 'enhance' your picture means finding a better one on the internet and replacing it with that."

2. I haven’t seen anyone say you were cheating.
They are not blurry, just not super sharp (I mean, it is a phone, and it's 100x digital zoom enlarged to 12MP). Anyway, I already explained it: handheld (in a non-optimal position, I would add), and I didn't take my time.

3. Why do your photos look blurry?
Gcam, I already explained it.

4. Photo 2 is low contrast, over-exposed.
Yeah, but how steady they can hold the device and how much patience they have can make a difference in how detailed the result is. Anyway, even a really quick Moon pic with zero AI help from the S23U is well beyond any phone on the market. I don't understand the problem here; the software obviously has a decent amount of data to work with, "it doesn't have to make up data".

5. Most people would take an image hand held anyway. Base this on the Sunny 16 rule.
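For reference, the Sunny 16 rule mentioned in point 5 can be written as a one-line calculation (a rough rule of thumb sketched here for illustration, not a metering substitute): at f/16 in full sun, shutter time is about 1/ISO seconds, and changing the aperture scales the time by the square of the f-number ratio.

```python
# Sunny 16 rule of thumb: in full sunlight at f/16, shutter time is
# roughly 1/ISO seconds. Exposure time scales with the square of the
# f-number, so t = (N / 16) ** 2 / ISO for f-number N.

def sunny16_shutter(iso: int, f_number: float = 16.0) -> float:
    """Approximate shutter time (seconds) for a sunlit subject."""
    return (f_number / 16.0) ** 2 / iso

assert sunny16_shutter(100) == 1 / 100       # f/16: the classic 1/ISO
assert sunny16_shutter(100, 8.0) == 1 / 400  # two stops wider -> 4x faster
```

For the moon specifically, the sunlit lunar surface is usually shot with the related "Looney 11" variant: same idea, with f/11 in place of f/16.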
Well Steve, if you can prove that it adds images over the original, and how it adds them, I'm all ears and eyes. But I really, really doubt it. And no, I don't want to see again that post from 5 months ago from that random user; surely you can come up with new info like I did-- that is, if you are right.

To make this clear: I don’t know anyone who has said that the camera isn’t capable of taking decent photos. I am only aware that there is some subjectivity as to how it manipulates or adds images to improve a less-than-optimum image.
Of course you wouldn't be surprised. I only took like 3-4 photos that night anyway. I mean, you don't even know how it works, but here you are making assumptions, so I will help you: I will show you how I did it, as I don't just talk for the sake of talking.

I would be surprised if you didn't pick the best of a bunch that you took, which is fine.
And for those people who want a photo of the moon and didn’t take a decent one, they are happy with Samsung having the ability to take one from its extensive resource and using it to improve the one they took. Can you deny it has that ability?
The best thing about this, is that it’s got you fired up to take good images. And that’s great. Samsung have worked their magic in inspiring you to fight for them. Cool. 👍
I can’t agree. I don’t believe you’ve proven anything.

Well Steve you tried but you obviously don't have any fangs left.
I really don't understand the purpose of your reply; you didn't add anything to the subject, just took what I wrote and tried to spin it somehow, hoping maybe it lands against me.
And you can think what you like. I could very easily show you how the 10X on the S23U isn't useless at all (both video and photo), but I'm sure it's not worth the effort. I recommend https://www.reddit.com/r/S23Ultra_Photography/ if you aren't "close minded" like I supposedly am.
And of course I'm saying that the Moon pictures look good (very good, whatever) for a phone; I even made it clear at the end. Especially if you compare the S22 and S23 Ultra with any other phone, it's like a 1-vs-10 difference.
Actually, the photos I took with Scene Optimizer Off and Gcam are quite decent (definitely not unusable) and look good on social media or anything that's not a big screen (27+ inches, full screen). No other smartphone users can get such pictures of the Moon; Samsung's special mode only makes these pictures even nicer. It doesn't turn crap into something OK, as the pictures from the start obviously aren't crap.
Well Steve, I'm generally inspired to fight for the truth (as cliché as it sounds), unlike some users here.
Also I will go a little further. Remember those posts that started all this drama on Reddit? Well this is what I've got.
And it's funny that in the first reply I was asked to provide a "better source than Samsung", as "Samsung is untrustworthy". But a random user on Reddit is definitely trustworthy. Well, maybe if I had continued to take pictures I would have eventually gotten something similar to his (and just to be clear, all the pictures you see when I open the gallery are like the one I took, and I can prove it if it's necessary). That's what he recommended to a user in the comments: keep taking pictures and he should eventually get one like his. That's how rock solid his experiment was.