What did people think was happening? The lens can't resolve features that fine. The only option the phone has is to invent the missing data. It was pretty clear it wasn't getting that enhancement from sub-pixel shifting and super-resolution.
One of the things Samsung claimed about their moon shot feature is that it uses multiple frames to create the final image, but since this experiment used a permanently blurred source image, there was no extra detail for multiple frames to recover, so the phone can't have reconstructed a sharper image that way. I'd be interested to hear how Samsung explains that discrepancy.
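For anyone unsure why multi-frame capture can't rescue this, here's a minimal sketch, assuming a Gaussian blur stands in for the deliberately destroyed detail. Stacking frames averages away sensor noise, but it can never restore detail that was absent from every frame.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

moon = rng.random((256, 256))             # stand-in for a detailed moon image
source = gaussian_filter(moon, sigma=8)   # the permanently blurred image shown on the monitor

# Each frame the phone captures is the blurred source plus fresh sensor noise.
frames = [source + rng.normal(0.0, 0.05, source.shape) for _ in range(16)]
stacked = np.mean(frames, axis=0)

# Stacking converges on the *blurred* source (the noise averages out)...
print("mean error vs blurred source:", np.abs(stacked - source).mean())
# ...but gets no closer to the original detail, because that detail was
# never present in any frame to begin with.
print("mean error vs original detail:", np.abs(stacked - moon).mean())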
Anyone have a link to this claim? This was a great experiment to prove that claim wrong...
 
How could someone think this is in any way comparable to computational photography on iPhone? Like, would you look at someone who got a minor rhinoplasty and is wearing low coverage foundation, and call them the same amount of fake as someone who literally ripped someone else’s face out and is wearing it as a mask?
 
Folks are surprised by this? Well I guess people with limited knowledge of digital photography would be surprised.

I figured that zoom was actually a cropped image a while back. (The only logical possibility.) I figured AI was involved because there ain't no way you can get a clear image with the amount of shaking at 100x zoom. It's hard enough to get a clear image with a tiny sensor, but using only a small part of the sensor? Unpossible. The technology doesn't exist yet. We'll have to wait until Kirk's time (prime timeline, no STD, no JarJar Abrams).

Anyhow, in-camera/phone processing isn't anything new. Compare RAW from the phone/camera to a JPEG from the same camera. The RAW looks flat and dull, whereas the JPEG pops. While I don't like the idea of AI altering an image, it's gonna be a thing in the future. I'll forever be a RAW shooter, though.
 
Oh, and this whole conversation is stupid. Funny how Apple fans are so quick to jump all over this.
YouTubers made full-length videos about how iPhone pictures sometimes look oversharpened and flat, and people wouldn't shut up for almost a month about how iPhone cameras "actually suck." Samsung literally superimposes a JPEG of the moon whenever it sees a nondescript light gray circle, and the most it gets is a short Verge article and a post on an Apple fan site where even half of the commenters will still do the mental gymnastics to somehow equate it to light-to-medium post-processing.
 
I remember reading an article a couple of years ago about some Chinese brand (or maybe it was Samsung, I don't recall) saying that if the camera sensed a known landmark, e.g. the Golden Gate Bridge or the Empire State Building, it would actually overlay an "AI"-conceived version, better than what the optics could actually manage, onto the areas of the image containing the landmark. So it appeared far more detailed than it really should have been, and it was no longer just an image from the sensor. Summarized: No thank you! 🤮
Edit also: What if phones were capable of a dual focus, where you could tap to lock two areas of the photo before you take the picture? Another main camera? (3D pictures ❤️) The issue I see with moon shots is that the phone either can't focus or can't keep the focus, because it is trying to reconcile the sky around the moon at the same time. This would also be handy for sunsets. Phones can only expose for the brightest or the darkest area, so the image is only half a representation of the sunset, even with HDR, because it's missing the other area you didn't focus on, i.e. the darker or the lighter one. Just imagine dual focus next time you take a shot and you'll see how it could be game-changing.
 
Let's not let iPhone photography off the hook here. They might not be inserting fake images, but the post-processing is so over the top now that I hesitate to even open my phone camera around sunset. "Golden hour" photos of people take on this horrible faked HDR look that is so processed it looks like an illustration more than a photo.
No, let’s. Because what is happening in this case with “space zoom” on these Galaxy phones is not at all what is being done on the iPhones. On the Galaxy phones with this feature, they are [almost certainly] using a machine learning model that has been trained specifically on the moon to do the equivalent of “Oh! That’s the moon! Let me just grab this high resolution photo and sort of blend it in here.” Really it is probably just recognizing the form and structure and hallucinating details in various areas because it knows what is supposed to go there: it knows it is the moon, and it is specifically enhancing the moon in the direction of structure it understands is there. And this was demonstrated brilliantly by the commenter in the Reddit post. (Bonus chuckle for all the people commenting about “deconvolution” without understanding at all how deconvolution works and why it cannot explain what is happening here.)

But this process is decoupled from data captured by the phone, and it is producing images which the phone, its sensor, and its lenses never could have captured short of riding on the back of a telescope or being attached to some other sort of high-magnification optics.
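To make that concrete, here is a deliberately crude sketch (my own toy illustration, nothing to do with Samsung's actual code or any trained model) of what "recognize the moon and blend in known detail" means in its simplest possible form:

Code:
import numpy as np

def looks_like_the_moon(frame: np.ndarray) -> bool:
    # Hypothetical stand-in for a trained classifier: here just
    # "a bright, roughly moon-sized blob on a dark background."
    bright = frame > 0.8
    return 0.02 < bright.mean() < 0.25

def enhance(frame: np.ndarray, reference_moon: np.ndarray, strength: float = 0.6) -> np.ndarray:
    # Assumes the stored reference is already aligned to the frame.
    if not looks_like_the_moon(frame):
        return frame
    # Blend stored high-frequency detail into the capture. None of this
    # detail came from the phone's sensor or optics.
    detail = reference_moon - reference_moon.mean()
    return np.clip(frame + strength * detail, 0.0, 1.0)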

What the iPhone is doing is capturing a whole mess of exposures in different regards (no need to go into the details as Apple has covered it well) to work around concerns like the limited dynamic range, limited light collection, etc. and enhance the image, using the lens and camera it has, beyond the limits of what it could have achieved with a single exposure. It's a more aggressive approach that shares something with the “high resolution mode” on some modern cameras, where multiple exposures are combined to reduce noise and enhance resolution through sub-pixel sampling. Except in the case of the iPhone, it's also capturing more data, enhancing dynamic range by exposure stacking, etc. Some challenging interpretation is still necessary, in that the software needs to know how to blend that data to keep the expanded dynamic range natural and workable (to name one consideration), but it is nothing close to the same sort of thing.
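By way of contrast, a toy exposure-stacking merge looks something like the sketch below. The weighting scheme is an arbitrary choice of mine rather than Apple's undisclosed pipeline, but the point stands: every output pixel is derived from light the sensor actually recorded.

Code:
import numpy as np

def merge_exposures(frames, exposure_times):
    """frames: list of arrays scaled to [0, 1]; exposure_times: relative exposure of each."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weights = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        # Trust mid-tones most; clipped or near-black pixels carry little information.
        w = 1.0 - np.abs(frame - 0.5) * 2.0
        acc += w * (frame / t)        # back out to scene-referred brightness
        weights += w
    radiance = acc / np.maximum(weights, 1e-6)
    return radiance / radiance.max()  # naive tone mapping back to [0, 1]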

Now, say, we take this to the next level. We use machine learning to create an upscaling or detail-enhancing model, sort of as is done with Topaz products like Gigapixel or DeNoise. In this case, the machine learning model is trained on photography and begins to recognize common structures like hair, leaves, or a human eye, and in sharpening will make choices informed, to some capacity, by what those structures look like. You could throw this sort of thing at the moon and it could make informed choices in bringing out detail around craters and the rim of the moon in sharpening. And this would still be an interesting and honest (provided it is presented honestly) sort of feature, and it would still fall short of the dishonesty Samsung is employing in this case. Why? Because this sort of model isn’t saying, “Oh, this is the moon. I know what this is supposed to look like, so let’s go ham adding details that are supposed to be there.” Instead it is saying, “Okay, these details can be pushed a bit in this direction, and this looks like a contiguous ridge or edge, so let’s push it in this direction.”
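In code terms, the difference is roughly the edge-gated sharpen below (assumed radius and amount, no vendor's real algorithm) versus the reference-blending sketch above: it only amplifies gradients the capture already contains and invents nothing where the frame is featureless.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_aware_sharpen(img, radius=2.0, amount=1.5):
    blurred = gaussian_filter(img, sigma=radius)
    high_freq = img - blurred                  # detail the capture already contains
    gy, gx = np.gradient(blurred)
    edge_strength = np.hypot(gx, gy)
    gate = edge_strength / (edge_strength.max() + 1e-12)
    # Amplify existing edges only; featureless regions are left alone.
    return np.clip(img + amount * gate * high_freq, 0.0, 1.0)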

This is just the predictable latest move from a company (Samsung) that has repeatedly demonstrated an enthusiasm for misrepresenting its products for the sake of marketing. The same company has been caught multiple times using DSLR photographs as examples of images captured with its phone cameras. And I suspect something similar, actually, is being done with the Milky Way on these phones.
 
So Samsung was working on their cameras and discovered something that every photographer everywhere knows: their sensors and software were unable to deal with the moon being lit by full daylight while the rest of the frame sits in nighttime darkness. It's beyond the ability of their system to make a single image that gets a good exposure of both (that gap is called dynamic range). I'm sure they can do multiple exposures in some cases for a kind of HDR, but this is about something different. Basically the AI threw up its hands and said "this moon is too blurry, so just pop the fake moon in while processing and make it look sharp, because we touted this phone camera as being great; problem solved."
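For a ballpark sense of how wide that gap is (rough, commonly cited luminance figures, not measurements):

Code:
import math

# Very rough orders of magnitude; treat as illustration, not data.
full_moon_surface = 2500.0   # cd/m^2, sunlit lunar surface
night_foreground  = 0.01     # cd/m^2, dim nighttime landscape

stops = math.log2(full_moon_surface / night_foreground)
print(f"~{stops:.0f} stops between the moon and the scene around it")
# Roughly 18 stops: more than a small phone sensor can hold in one exposure,
# so either the moon clips to a white disc or the foreground goes black.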

They add sharp, detailed and well-exposed photos of the moon to the mix and "improve" the photos their users take. Well I guess that's one way to make your customers happy.

The linked Reddit post has comments from someone who wondered why they took a photo of the moon and it looked great but streetlights in a photo were a blurry mess. Well, if Samsung's AI also had a library of stock streetlight images to rely on, that could have fixed their problem too, I guess. 🤣
 
I think the point is, though, that because we only ever see one side of the moon, a "full moon" image is the easiest one to fake surreptitiously. I think anyone wanting to show what the sensor is capable of (rather than just capturing a full moon) would shoot the moon being occluded by a hill, tree, or other object, OR a non-full moon.
A shot of the moon I took with my Samsung in 2021.

[Attached image: the poster's 2021 Samsung moon shot]

Haven't tried it on my S23 Ultra because I just don't care about looking up and taking a moon shot, but I expect it to be better than this. The zooms are truly impressive and I don't think it fakes the images.

The Reddit post includes links to Samsung's own documentation that clearly states they fake these images, using machine learning that relies on many images of the moon as we would see it from earth, not just the full moon. I assume that means Samsung's AI would be able to account for a building, tree, hill, etc. blocking part of the moon and simply add in sharp, well exposed moon imagery to the parts that are seen in the photo. The linked document says they've been doing this since the S21.
 
Indeed - so much so that now I really don't want to use my phone for photos because what comes out usually looks... too good. A while back people said that they wanted Apple to make an SLR, but it would be awful - all that processing added to full frame images? Helping you accentuate what is there is fine, but making up detail or replacing detail really doesn't seem like photography anymore.

(I feel the same way about photographers who replace skies in images. You're an artist that works with photography, but the output is no longer a photo)
Even editing of existing skies in images bothers me. People neglect to think about the fact that the sky is a big part of the original "mood" of the surroundings when the photo was snapped. If it's kind of "bleah" looking on an overcast day? Attempts to brighten it up or change the hue to make it more visually pleasing are dishonest.

I see so much of that in photo uploads of places people visit, I can't even tell what the real weather conditions were or the time of day the picture was taken.
 
Let's do a level set here. Most of the time, companies brag about their AI computational photography. Apple does it, Google does it, and people love the outcome.

I think Samsung's mistake here was not being upfront and honest about it.

I mean, are bokeh effects fake? Are Google's Magic Eraser pictures fake?

Look over in the iPhone pictures thread here on MR. How many of those are edited in some kind of editing software to enhance the original? Does that make them fake?
 
What, Samsung misleading customers? No, couldn't be. They should hire that "Dude, you're getting a Dell" guy for Samsung.

Seriously, why do this? Their cameras are decent, but even with all the extra pixels, they still lose out to iPhone cameras on professional photography review sites (or score even, depending upon your scale of results, so pretty close). Turns out teeny tiny itty bitty weenie pixels are not that great versus bigger pixels.

But they do make good cameras and get good results
 
What the iPhone is doing is capturing a whole mess of exposures in different regards (no need to go into the details as Apple has covered it well) to work around concerns like the limited dynamic range, limited light collection, etc. and enhance the image, using the lens and camera it has, beyond the limits of what it could have achieved with a single exposure.
Yeah, no kidding. We all get what HDR photography is. The effect is being applied very heavy-handedly at times -- to the point where the resulting image looks overcooked and unnatural.

What we're all used to is that sometimes parts of a photo will naturally be over- or under-exposed, which reflects real-life lighting conditions. That's what we've all been used to seeing in photos for decades. Apple is HDR stacking so aggressively that it tries to weed out all clipped parts of an image, and the result at times just doesn't end up looking like a naturalistic photo at all, but more like some freakish AI interpretation (which is what it is). Again, go out on a nice golden hour when it's clear, plop someone in the direct sunlight, and see what happens to their face when you take a photo of them with a recent iPhone. It's... not great.
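A toy illustration of that "weed out all clipping" tendency, with a tone curve of my own arbitrary choosing rather than anything Apple has published:

Code:
import numpy as np

def aggressive_tone_curve(x, lift=0.5):
    # lift=0 leaves the image alone; higher values pull shadows up and
    # squash everything toward the middle.
    return (1.0 - lift) * x + lift * (x ** 0.35)

scene = np.array([0.02, 0.10, 0.50, 0.95])  # deep shadow, shadow, midtone, near-clip
print(aggressive_tone_curve(scene))
# The 0.02 "face against a bright window" value gets pushed way up, which is
# exactly the strangely illuminated look described in the quote below.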

I'm not the only one who thinks so:
In January, I traded my iPhone 7 for an iPhone 12 Pro, and I’ve been dismayed by the camera’s performance. On the 7, the slight roughness of the images I took seemed like a logical product of the camera’s limited capabilities. I didn’t mind imperfections like the “digital noise” that occurred when a subject was underlit or too far away, and I liked that any editing of photos was up to me. On the 12 Pro, by contrast, the digital manipulations are aggressive and unsolicited. One expects a person’s face in front of a sunlit window to appear darkened, for instance, since a traditional camera lens, like the human eye, can only let light in through a single aperture size in a given instant. But on my iPhone 12 Pro even a backlit face appears strangely illuminated. The editing might make for a theoretically improved photo—it’s nice to see faces—yet the effect is creepy. When I press the shutter button to take a picture, the image in the frame often appears for an instant as it did to my naked eye. Then it clarifies and brightens into something unrecognizable, and there’s no way of reversing the process. David Fitt, a professional photographer based in Paris, also went from an iPhone 7 to a 12 Pro, in 2020, and he still prefers the 7’s less powerful camera. On the 12 Pro, “I shoot it and it looks overprocessed,” he said. “They bring details back in the highlights and in the shadows that often are more than what you see in real life. It looks over-real.”
 
Hi Folks,

Below, some of my own personal insights and knowledge with telephoto lunar photography. FWIW, my experience in observing and recording the night sky spans a lifetime of amateur astronomy since before I got my first telescope in 1967.

A few quick items that folks should be educated on and familiar with before casting one's public opinion on this thread's topic...

1. The, er, "evidentiary" image of the moon purportedly shot with a Sony a7R III/200-600mm lens here...

Fake Samsung Galaxy S21 Ultra moon shots debunked - MSPoweruser
https://mspoweruser.com/fake-samsung-galaxy-s21-ultra-moon-shots-debunked/

...speaks to either A. the photographer's inability to know how to use that camera/lens combo for excellent results, and/or B. an attempt to use fake or misleading evidence for their argument. A simple search of the Sony 200-600mm moon images will reveal such...

Search: Sony 200-600mm moon | Flickr
https://www.flickr.com/search/?group_id=14620456@N20&view_all=1&text=moon

...clearly, that lens and pretty much any Sony body can provide a much more detailed lunar image than the one posted as evidence.

2. Combining numerous images and using deconvolution methods in post-processing has been a, um, "thing" in the world of astrophotography since the dawn of digital imaging...

moon deconvolution at DuckDuckGo
https://duckduckgo.com/?t=ffab&q=moon+deconvolution&ia=web

My take on Samsung's "100x Space Zoom" feature is that it's nothing more than an automated stacking and deconvolution software routine labeled/marketed as "AI".
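For reference, that conventional stack-then-deconvolve workflow looks roughly like the sketch below (frame alignment omitted, PSF assumed to be a known Gaussian, which real workflows have to estimate); this is the general shape of the technique being described.

Code:
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iterations=30):
    # Classic Richardson-Lucy iteration; it sharpens only detail the data supports.
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def stack_and_deconvolve(frames):
    stacked = np.mean(frames, axis=0)                # stacking beats down the noise
    return richardson_lucy(stacked, gaussian_psf())  # deconvolution recovers blurred detail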

TL;DR...There is no trickery here except on the part of folks claiming such of Samsung.

I hope that helps clear the air of any misconceptions on this subject and heads off any erroneous conclusions and/or opinions based on unfamiliarity with it.

Best,
Jimmy G
 
I assume that means Samsung's AI would be able to account for a building, tree, hill, etc. blocking part of the moon and simply add in sharp, well exposed moon imagery to the parts that are seen in the photo. The linked document says they've been doing this since the S21.
Normally, I’d think the same; it’s just that there’s a dearth of examples of these types of photos from Samsung phones. That makes me think they haven’t fared well (plus, there's the example posted where someone drew a smiley face on the moon and Samsung turned it into smiley-face craters :) ).
 