Let's not let iPhone photography off the hook here. They might not be inserting fake images, but the post-processing is so over the top now that I hesitate to even open my phone camera around sunset. "Golden hour" photos of people take on this horrible faked HDR look that is so processed it looks like an illustration more than a photo.
No, let’s. Because what is happening in this case with “space zoom” on these Galaxy phones is not at all what is being done on the iPhones. On the Galaxy phones with this feature, they are [almost certainly] using a machine learning model that has been trained specifically on the moon to do the equivalent of “Oh! That’s the moon! Let me just grab this high-resolution photo and sort of blend it in here.” Really, it is probably just recognizing the form and structure and hallucinating details in various areas because it knows what is supposed to go there. It knows it is the moon, and it is specifically enhancing the moon in the direction of structure it understands to be there. And this was demonstrated brilliantly by this commenter in the Reddit post. (Bonus chuckle for all the people commenting about “deconvolution” without understanding at all how “deconvolution” works and why it cannot explain what is happening here.) But this process is decoupled from the data captured by the phone, and it is producing images which the phone, its sensor, and its lenses never could have captured short of riding on the back of a telescope or being attached to some other sort of high-magnification optics.
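To make that distinction concrete, here is a toy sketch in Python of what a subject-specific “enhancer” boils down to. This is my illustration, not Samsung’s code; the classifier, the blend weight, and the “learned detail” image are invented stand-ins. The point is simply that once the software decides “this is the moon,” the detail it adds comes from a learned prior, not from the capture.

    import numpy as np

    def looks_like_moon(crop: np.ndarray) -> bool:
        # Stand-in for a trained classifier: roughly, a bright disc on a dark sky.
        bright = crop > 0.5
        return 0.05 < bright.mean() < 0.9 and crop[~bright].mean() < 0.1

    def enhance(crop: np.ndarray, learned_moon_detail: np.ndarray, amount: float = 0.7) -> np.ndarray:
        # crop and learned_moon_detail are same-shaped grayscale arrays in [0, 1].
        if looks_like_moon(crop):
            # The added "detail" is decoupled from anything the sensor recorded.
            return np.clip((1 - amount) * crop + amount * learned_moon_detail, 0.0, 1.0)
        return crop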
What the iPhone is doing is capturing a whole mess of exposures at different settings (no need to go into the details, as Apple has covered it well) to work around concerns like limited dynamic range, limited light collection, etc., and to enhance the image, using the lens and camera it has, beyond the limits of what it could have achieved with a single exposure. It is a more aggressive approach that shares something with what some modern cameras can do in “high resolution mode,” where they combine multiple exposures to reduce noise and enhance resolution through sub-pixel sampling. Except in the case of the iPhone, they’re also capturing more data, enhancing dynamic range by exposure stacking, etc. Some challenging interpretation is still necessary, in that the software needs to know how to blend that data to keep the expanded dynamic range natural and workable (to name one consideration), but it is nothing close to the same sort of thing.
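For contrast, here is a minimal sketch of what exposure stacking amounts to, assuming bracketed frames and a generic weighting (a simplification of the general technique, not Apple’s pipeline). Every value in the merged result is derived from pixels the sensor actually recorded; nothing is pulled in from outside the capture.

    import numpy as np

    def merge_exposures(frames, exposure_times):
        # frames: list of same-shaped arrays normalized to [0, 1];
        # exposure_times: the corresponding shutter times in seconds.
        acc = np.zeros_like(frames[0], dtype=np.float64)
        weight_sum = np.zeros_like(acc)
        for frame, t in zip(frames, exposure_times):
            # Trust mid-tones most; clipped shadows and highlights carry little information.
            w = 1.0 - np.abs(frame - 0.5) * 2.0
            acc += w * (frame / t)      # per-frame estimate of scene radiance
            weight_sum += w
        # The weighted average gives an extended-dynamic-range radiance estimate.
        return acc / np.maximum(weight_sum, 1e-6)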
Now, say, we take this to the next level. We use machine learning to create an upscaling or detail-enhancing model, sort of as is done with Topaz products like Gigapixel or DeNoise. In this case, the machine learning model is trained on photography and begins to recognize common structures like hair, leaves, or a human eye, and in sharpening it will make choices informed, to some capacity, by what those structures look like. You could throw this sort of thing at the moon and it could make informed choices in bringing out detail around the craters and the rim of the moon as it sharpens. And this would still be an interesting and honest (provided it is presented honestly) sort of feature, and it would still fall short of the dishonesty Samsung is employing in this case. Why? Because this sort of model isn’t saying, “Oh, this is the moon. I know what this is supposed to look like, so let’s go ham adding details that are supposed to be there.” Instead it is saying, “Okay, these details can be pushed a bit in this direction, and this looks like a contiguous ridge or edge, so let’s push it in that direction.”
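To show the difference in kind, here is a hedged sketch of that content-agnostic sort of enhancement. I am using a plain unsharp mask instead of a learned model to keep it short, but the property that matters is the same: the detail being pushed is whatever structure is already in the capture, not something injected because the software knows what the subject “should” look like.

    import numpy as np

    def box_blur(img: np.ndarray, radius: int = 2) -> np.ndarray:
        # Simple box filter over an (H, W) float image.
        k = 2 * radius + 1
        padded = np.pad(img, radius, mode="edge")
        out = np.zeros_like(img, dtype=np.float64)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def sharpen(img: np.ndarray, amount: float = 0.8) -> np.ndarray:
        # img: grayscale float array in [0, 1].
        detail = img - box_blur(img)          # structure the capture already contains
        return np.clip(img + amount * detail, 0.0, 1.0)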
This is just the predictable latest move from a company (Samsung) that has repeatedly demonstrated an enthusiasm for misrepresenting their products for the sake of marketing. The same company that has been caught multiple times using DSLR photographs as examples of images captured with their phone cameras. And I suspect something similar, actually, is being done with the Milky Way on these phones.