Good insight from Google camera team on the engineering behind Pixel's new camera features. Computational Super Res Zoom that's equivalent to optical 2x zoom.
https://ai.googleblog.com/2018/10/see-better-and-further-with-super-res.html
Thanks, that answered a lot. "Stack and merge"-- so they're taking many short images, aligning them, and then adding them all together.
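In case it helps to see the idea concretely, here is a minimal sketch of "stack and merge" (hypothetical; numpy only; the per-frame shifts are assumed known rather than estimated by a real alignment step, and this is not Google's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# A dim "scene": expected photon counts per pixel for a single short exposure.
scene = rng.uniform(2.0, 10.0, size=(64, 64))

def short_exposure(shift):
    """One noisy frame: the scene shifted by hand shake, plus Poisson shot noise."""
    return rng.poisson(np.roll(scene, shift, axis=(0, 1))).astype(float)

# Capture a burst of 15 short frames with small shifts (known here; a real
# pipeline would estimate them by registering the frames to each other).
shifts = [(int(rng.integers(-2, 3)), int(rng.integers(-2, 3))) for _ in range(15)]
frames = [short_exposure(s) for s in shifts]

# "Stack and merge": undo each shift to align the frames, then average them.
aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)]
merged = np.mean(aligned, axis=0)

# The shot noise in the merged image drops roughly as sqrt(number of frames),
# about a 4x reduction for 15 frames.
single_err = np.sqrt(np.mean((aligned[0] - scene) ** 2))
merged_err = np.sqrt(np.mean((merged - scene) ** 2))
print(f"single-frame RMS error: {single_err:.2f}, merged RMS error: {merged_err:.2f}")
```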
Very impressive, but at some point we need to stop calling these photos. They are going through so much post-processing and modification that they no longer bear resemblance to traditional photos that capture what we are seeing with our eyes. From the photo mode smoothing to the Google feature, it's a post-photo world.
I think shot noise works the other way around. The noise goes up with the number of photons captured-- it's Poisson distributed. SNR in low light images is dominated by thermal noise (and dark currents, etc) while noise in brighter images is dominated by shot noise.
Yep, that is right.
By the way, lots of people upthread are tripping over high ISO creating noise in photographs. High ISO doesn't actually produce noise, but it does reduce the dynamic range of the sensor. The noise in low-light or short-shutter-speed images is actually due to the properties of light itself, photon emission being a statistical process at its core. The problem is called 'photon shot noise' and is essentially an unavoidable property of our universe.
https://en.wikipedia.org/wiki/Shot_noise
The more target photons you are able to capture with a sensor, the more accurate your knowledge of the actual intensity of the target. The fewer photons you capture, the higher the uncertainty (noise).
Once the number of captured photons gets small enough, other sensor issues start coming into play: noise created by heat (Johnson noise: https://en.wikipedia.org/wiki/Johnson–Nyquist_noise) and other artifacts intrinsic to the sensor (bias patterns, lens shading) contribute to the degradation of the image.
Anyway, the high ISO setting allows you to actually sense this smaller number of incident photons, which would otherwise create voltages too low to be detected at lower gain settings.
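To put rough numbers on the shot-noise-versus-sensor-noise point, here is a quick numpy sketch with made-up constants (the 3 e- read noise is an assumption, and the analog gain itself isn't modeled):

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise_e = 3.0       # hypothetical read/thermal noise floor, electrons RMS
pixels = 1_000_000       # simulate a million pixels per exposure level

for mean_photons in (4, 25, 400, 10_000):
    shot = rng.poisson(mean_photons, pixels)        # photon shot noise (Poisson)
    read = rng.normal(0.0, read_noise_e, pixels)    # sensor read/thermal noise (Gaussian)
    signal = shot + read
    print(f"{mean_photons:>6} photons/pixel: "
          f"noise = {signal.std():6.1f} e-, SNR = {mean_photons / signal.std():6.1f}")

# At 4 photons/pixel the fixed read noise is a large share of the total; by
# 10,000 photons/pixel the noise is essentially pure shot noise
# (~sqrt(10,000) = 100 e-) and the SNR scales as sqrt(N).
```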
To me all these photo comparisons look completely falsified. Just looks like they played with the levels and other minor adjustments.
Tell me, since you were there to see what it really looked like to the naked eye, how should the photos look?
I’ll make my own decisions using Photoshop, thank you.
I think shot noise works the other way around. The noise goes up with the number of photons captured-- it's Poisson distributed. SNR in low light images is dominated by thermal noise (and dark currents, etc) while noise in brighter images is dominated by shot noise.
So Apple could technically do the same, right?
They will wait and offer it with next year's phone as an "amazing and expensive pro innovation" lol
The variance is equal to the expected value. As you expect more photons (higher exposure level), you expect more variation from sample to sample (noise).
Absolutely not. You need to capture enough photons to overcome the thermal noise and the sensor artifacts, but once you do that you're shot-noise limited. The more photons you capture, the smaller the noise (uncertainty) is. That's how the universe works. As you integrate more and more photons (either by leaving the shutter open or by averaging a bunch of short images) the SNR increases, not decreases.
Edit: from the standpoint of the Poisson article you linked, what happens is that the standard deviation of the distribution gets smaller and smaller as you integrate more photons. In other words, that bell curve narrows down more and more until the most likely value for the particular pixel is very near the peak of the curve.
Sorry, but I have to call BS where I see it. Unless I'm missing something specific about the lighting in these images (please feel free to correct me if I am), they're ********. I've been taking plenty of low-light shots on my iPhone XS without flash because I never use flash, and they're great.
I'm not saying Google hasn't made some kind of advancement, but it's hard to tell when they're obviously faking these images.
EDIT: I get it, it's The Verge that posted these pictures, but the fact that there's something screwy still stands. I've never taken awful low-light photos like what is in the article.
I know what you mean. When I see most of the photo comparisons, I think to myself that I almost never take pictures as bad as some of the comparison photos I'll see in these articles. And bear in mind, much of the time I'm looking at my photos not on my camera but on my 21.5-inch 4K display.
Now there’s a company that actually knows how to innovate!
Good joke.
The variance is equal to the expected value. As you expect more photons (higher exposure level), you expect more variation from sample to sample (noise).
If your exposure is set such that a group of pixels are all expecting an average of 25 photons each, then among them they'll have a standard deviation (noise) of 5.
If you increase the exposure level so that you expect an average of 2500 photons in each pixel, they'll have a standard deviation (noise) of 50. You have more noise.
If you image a grey card for 1 second (well above the thermal/pattern noise but without clipping), then image the same grey card for 2 seconds (still without clipping), your average value goes up by a factor of 2 and your noise goes up by a factor of sqrt(2).
It's a function of the photon inter-arrival times. It's the same reason that queue length varies more as the number of people going through increases. It's why you need larger, rather than smaller, FIFOs as your data traffic increases.
You're right that the SNR increases as you integrate longer, but it's not because you have less noise, it's because you have more signal. Signal goes up linearly with count, noise goes up as the square root. But, again, it does go up. You are less certain of the true value.
The standard deviation does not get smaller as you integrate more photons. The "bell curve" (it's Poisson, not Gaussian, so it's very un-bell like closer to zero) moves right and gets wider simultaneously the longer you integrate. The most likely value is, by definition, exactly at the peak of the curve.
As your exposure decreases, the noise level decreases with the square root of the photon count until you get close to the thermal/pattern noise floor of the sensor, at which point it eventually stops decreasing at all. As your exposure decreases, your SNR decreases with the square root of the signal until you approach the thermal noise floor, at which point the decline asymptotically approaches linear.
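The grey-card example is easy to check numerically; here is a small numpy sketch (the 2,500 photons-per-second level is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
photons_per_second = 2500     # arbitrary grey-card exposure level, photons/pixel/s
pixels = 1_000_000

for seconds in (1, 2):
    img = rng.poisson(seconds * photons_per_second, pixels)
    mean, noise = img.mean(), img.std()
    print(f"{seconds} s exposure: mean = {mean:7.1f}, "
          f"noise = {noise:5.1f}, SNR = {mean / noise:5.1f}")

# Doubling the exposure doubles the mean, raises the noise by ~sqrt(2)
# (50 -> ~70.7), and also raises the SNR by ~sqrt(2): more noise, but even more signal.
```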
Photos in pitch-black darkness. OK. Kinda cool. Not anti-Android/Google nor pro-Apple by any stretch. But I'll be curious to hear of any outcry for Apple to incorporate a response. Will customers soon demand their phone be able to jump-start their car or walk the dog next?
I don't get it. This looks extremely useful. No, maybe not in pitch darkness, but if the lighting isn't perfect I find iPhones take fairly crappy pictures. Well, at least on my old X, as I haven't been able to put my XS Max to the test yet.
nIghtVISION
At a media event in New York City earlier this month, Google previewed a new low-light camera feature called "Night Sight" that uses machine learning to choose the right colors based on the content of the image. The result is much brighter photos in low-light conditions, without having to use flash.
Google showed a side-by-side comparison of two unedited photos shot in low light with an iPhone XS and its latest Pixel 3 smartphone with Night Sight, and the photo shot on the latter device is much brighter.
[Image: Google's side-by-side comparison of the same low-light scene shot on an iPhone XS and on a Pixel 3 with Night Sight]
Google said Night Sight will be available next month for its Pixel smartphones, but an XDA Developers forum member managed to get the feature to work ahead of time, and The Verge's Vlad Savov tested out the pre-release software on a Pixel 3 XL. The results, pictured below, are simply remarkable.
[Images: three of The Verge's comparison pairs, each shown without and with Night Sight]
Google and Apple are both heavily invested in computational photography. On the latest iPhones, for example, Smart HDR results in photos with more highlight and shadow detail, while Depth Control significantly improves Portrait Mode. But Night Sight takes low-light smartphone photography to a whole new level.
Article Link: Google's Upcoming 'Night Sight' Mode for Pixel Phones Captures Remarkable Low-Light Photos
Now there’s a company that actually knows how to innovate!
I left for the Pixel 3. Case in point:
I’ll admit, I initially misinterpreted the usefulness of that feature. The way the article is set up, it seemed like this was more about taking pictures in total darkness, which I thought was just innovating for the sake of innovating. But seeing improved pictures in low-light situations, I can see a big benefit there.
Just curious, what made you tire of iOS? Went to Android? What did it? There was a time after iOS 7 when I almost left, that’s for sure.
That's actually amazing, let's see if Apple adds something similar.
Mobile phone cameras have a hard time with dark lighting. They cannot see in the dark as well as the human eye. Did you ever notice that the photo you took in low light looks nothing like what your eyes are seeing? Google is just trying to replicate what your eyes are seeing. They aren't adding any light that isn't there. Instead they are trying to make the camera see as much light as the naked eye sees.