To me all these photo comparisons look completely falsified. Just looks like they played with the levels and other minor adjustments.
I’ll make my own decisions using Photoshop, thank you.
 
Here is my night mode compared with normal mode on the Pixel 3. You can snag the metadata for yourself if you'd like.
 

Attachments

  • MVIMG_20181026_001726.jpg
  • IMG_20181026_001740.jpg
Good insight from the Google camera team on the engineering behind the Pixel's new camera features.


Computational Super Res Zoom that's equivalent to optical 2x zoom.

https://ai.googleblog.com/2018/10/see-better-and-further-with-super-res.html
Thanks, that answered a lot. "Stack and merge"-- so they're taking many short images, aligning them, and then adding them all together.

Image stabilization can help reduce blur from camera motion, but long shutters mean blur from subject movement, which is almost impossible to remove. Image stacking of short exposures is used in astrophotography to avoid star trails, but that's also hard to do with a moving camera and complex motion. It works for astrophotography because the whole sky moves together, so it's a simple alignment issue.
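
For anyone curious what "stack and merge" looks like mechanically, here's a minimal sketch in Python/NumPy: estimate one global shift between frames by phase correlation, then average the aligned burst. This is just a toy illustration of the idea, not Google's pipeline, which does tile-based alignment and robust merging to handle local motion.

```python
# Toy "stack and merge": align a burst of short exposures to the first
# frame with one global (dy, dx) shift found by phase correlation, then
# average. Real burst pipelines do tile-based alignment and robust
# merging; this only illustrates the basic idea.
import numpy as np

def estimate_shift(ref, img):
    """Return integer (dy, dx) such that img is roughly ref rolled by (dy, dx)."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = F_img * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the frame back to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stack_and_merge(frames):
    """Align every frame to frames[0] and return the per-pixel mean."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for img in frames[1:]:
        dy, dx = estimate_shift(ref, img.astype(np.float64))
        # Undo the estimated shift (circular boundaries are fine for a sketch).
        acc += np.roll(img.astype(np.float64), shift=(-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```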

I suspect we'll find that NS doesn't do well with moving subjects.

They're also doing some "computational white balance" which I'm assuming is something like this:

https://arxiv.org/abs/1805.01934
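
For context, the classical baseline that kind of paper improves on is something like gray-world white balance: assume the average color of the scene should come out neutral and scale each channel accordingly. A minimal sketch of that baseline (not the paper's learned method):

```python
# Gray-world white balance: assume the scene average should be neutral
# gray, and scale each channel so its mean matches the overall mean.
# This is the classical baseline; the linked paper learns the correction
# from image content instead.
import numpy as np

def gray_world(rgb):
    """rgb: float array of shape (H, W, 3), linear values in [0, 1]."""
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-12)
    return np.clip(rgb * gains, 0.0, 1.0)
```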


Very impressive, but at some point we need to stop calling these photos. They're going through so much post-processing and modification that they no longer bear any resemblance to traditional photos that capture what we see with our eyes. From the photo mode smoothing to the Google feature - it's a post-photo world.

I think what you're saying is true about "portrait mode", where information is being selectively destroyed for artistic effect.

In this case, though, the algorithms are simply pulling more information out of the data that's there. Our eyes do very similar things. Ever notice how much clearer video is at night than still shots taken on the same device? If you look at individual frames of the video, each one is noisy as heck but we don't perceive it that way.

Same is true for our vision at night. When it gets dark our pupils widen (larger aperture), our eyes track the subject (image stabilization), and the chemistry of our rods and cones has a long time constant (low pass filtering, or averaging). We've gotten pretty good at mimicking those in our technology.

There's another perception layer happening, though, that we're not good at mimicking yet. The fundamental noise reduction problem is distinguishing detail from noise. Right now, we either treat it all the same and destroy detail and noise together, or we take a naive approach and find things that look like continuous edges and try to preserve them. If we actually had a template to match against, we could do much better. Our brain does this so much better than any of our traditional algorithms. The paper I link to at the top is a good step in the right direction, I think.
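
To make the "continuous edges" approach concrete, here's a rough sketch of a bilateral-style filter in NumPy: each pixel is averaged with its neighbours, but neighbours whose intensities differ a lot (likely detail or edges) get little weight. It's exactly the kind of naive, template-free method I mean, nothing close to what a brain or a learned model does.

```python
# Naive edge-preserving smoothing (bilateral-style): average each pixel
# with its neighbours, down-weighting neighbours whose intensity differs
# a lot, so flat regions get smoothed while strong edges mostly survive.
import numpy as np

def bilateral_gray(img, radius=3, sigma_space=2.0, sigma_range=0.1):
    """img: 2-D float array with values roughly in [0, 1]."""
    acc = np.zeros_like(img)
    weight_sum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
            # Spatial weight: closer neighbours count more.
            w_space = np.exp(-(dy * dy + dx * dx) / (2 * sigma_space ** 2))
            # Range weight: similar intensities count more.
            w_range = np.exp(-((shifted - img) ** 2) / (2 * sigma_range ** 2))
            w = w_space * w_range
            acc += w * shifted
            weight_sum += w
    return acc / weight_sum
```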

A good photographer uses their brain in post and identifies details to preserve and regions of uniformity where high frequency content can be suppressed.

yep, that is right.

by the way, lots of people upthread are tripping over high ISO creating noise in photographs. high ISO doesn't actually produce noise, but it does reduce the dynamic range of the sensor. the noise in low-light or short-shutter-speed images is actually due to the properties of light itself, photon emission being a statistical process at its core. the problem is called 'photon shot noise' and is essentially an unavoidable property of our universe.

https://en.wikipedia.org/wiki/Shot_noise

the more target photons you are able to capture with a sensor, the more accurate your knowledge of the actual intensity of the target. the fewer photons you capture, the higher the uncertainty (noise)

once the number of target photons is so limited, other sensor issues start coming into play - noise created by heat (Johnson noise: https://en.wikipedia.org/wiki/Johnson–Nyquist_noise) and other artifacts intrinsic to the sensor (bias patterns, lens shading) contribute to degradation of the image.

anyway, the high ISO setting allows you to actually sense these smaller number of incident photons which would otherwise create voltages too low to be detected at lower gain settings.
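
a crude way to see that last point: model a pixel as poisson photon arrivals plus read noise, apply an analog gain before a coarse ADC, and count how many output levels the signal actually spans. toy model, made-up numbers:

```python
# Toy sensor model: Poisson photon arrivals + Gaussian read noise,
# scaled by an analog gain before a 10-bit ADC. At low gain the few
# photoelectrons barely move the ADC (a handful of DN); at high gain
# they become distinguishable levels. Numbers are made up, not any
# real sensor.
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 20          # very dark scene
read_noise_e = 3.0         # electrons rms (simplified: added before the gain)
n_pixels = 100_000

for gain in (0.05, 1.0):   # digital numbers per electron ("ISO")
    electrons = rng.poisson(mean_photons, n_pixels) + rng.normal(0, read_noise_e, n_pixels)
    dn = np.clip(np.round(gain * electrons), 0, 1023)
    print(f"gain={gain:>4}: mean DN={dn.mean():6.2f}, "
          f"distinct levels used={len(np.unique(dn))}")
```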
I think shot noise works the other way around. The noise goes up with the number of photons captured-- it's Poisson distributed. SNR in low light images is dominated by thermal noise (and dark currents, etc) while noise in brighter images is dominated by shot noise.
 
To me all these photo comparisons look completely falsified. Just looks like they played with the levels and other minor adjustments.
I’ll make my own decisions using Photoshop, thank you.
Tell me, since you were there to see what it really looked like to the naked eye, how should the photos look?
 
Now there’s a company that actually knows how to innovate :eek:

Yes, by taking what's been around for years (as an app) and making it part of the default camera app.

As has been pointed out previously, Hydra does just this, and has done so for some time. It's doing a terrific job at very low light on my 7+, I can only imagine what it could do with the fast lens and sensitive sensor of the Xs or Xr.
 
I think shot noise works the other way around. The noise goes up with the number of photons captured-- it's Poisson distributed. SNR in low light images is dominated by thermal noise (and dark currents, etc) while noise in brighter images is dominated by shot noise.

absolutely not. you need to capture enough photons to overcome the thermal noise and the sensor artifacts, but once you do that you're shot-noise limited. the more photons you capture, the smaller the noise (uncertainty) is. that's how the universe works. as you integrate more and more photons (either by leaving the shutter open or by averaging a bunch of short images) the SNR increases, not decreases.

edit: from the standpoint of the poisson article you linked, what happens is that the standard deviation of the distribution gets smaller and smaller as you integrate more photons. in other words that bell curve narrows down more and more until the most likely value for the particular pixel is very near the peak of the curve.
 
I've installed this on my first-gen Pixel XL, and the results are amazing, especially for an older sensor. To those saying they'd prefer to do this in post: you can't. The data's just not there. It's not a long exposure, as has been covered. It's a series of them. You cannot manipulate anything any iPhone can take to get this... yet. Apple will have it next year. Regardless of which apps have done this before, this is a first for a manufacturer's built-in app, and, as mentioned, Apple will follow, which will make me happy, because then my iPhone will have it.

It's much more true-to-life than those who haven't seen the results seem to think, and while it's not exactly what your eyes see, it's closer to that than night photos have been before, and this will enable shots that weren't practical before. It definitely has limits - you need something that doesn't move appreciably for a few seconds. But there are still a lot of shots you can now expect to see that you couldn't otherwise.

Yes, you can do better with a real camera. No, you can't do better with a smartphone.

Give this one to Google. It's good for everyone that each of the elephants in the game challenges the others.
 
absolutely not. you need to capture enough photons to overcome the thermal noise and the sensor artifacts, but once you do that you're shot-noise limited. the more photons you capture, the smaller the noise (uncertainty) is. that's how the universe works. as you integrate more and more photons (either by leaving the shutter open or by averaging a bunch of short images) the SNR increases, not decreases.

edit: from the standpoint of the poisson article you linked, what happens is that the standard deviation of the distribution gets smaller and smaller as you integrate more photons. in other words that bell curve narrows down more and more until the most likely value for the particular pixel is very near the peak of the curve.
The variance is equal to the expected value. As you expect more photons (higher exposure level), you expect more variation from sample to sample (noise).

If your exposure is set such that a group of pixels are all expecting an average of 25 photons each then among them they'll have a standard deviation (noise) of 5.

If you increase the exposure level so that you expect an average of 2500 photons in each pixel, they'll have a standard deviation (noise) of 50. You have more noise.

If you image a grey card for 1 second (well above the thermal/pattern noise but without clipping), then image the same grey card for 2 seconds (still without clipping), your average value goes up by a factor of 2 and your noise goes up by a factor of sqrt(2).

It's a function of the photon inter-arrival times. It's the same reason that queue length varies more as the number of people going through increases. It's why you need larger, rather than smaller, FIFOs as your data traffic increases.

You're right that the SNR increases as you integrate longer, but it's not because you have less noise, it's because you have more signal. Signal goes up linearly with count, noise goes up as the square root. But, again, it does go up. You are less certain of the true value.

The standard deviation does not get smaller as you integrate more photons. The "bell curve" (it's Poisson, not Gaussian, so it's very un-bell like closer to zero) moves right and gets wider simultaneously the longer you integrate. The most likely value is, by definition, exactly at the peak of the curve.

As your exposure decreases, the noise level decreases by the square root of photon count until you get close to the thermal/pattern noise floor of the sensor at which point it eventually stops decreasing at all. As your exposure decreases, your SNR decreases with the square root of signal until you approach the thermal noise floor at which point the decline asymptotically approaches linear.
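
If anyone wants to sanity-check those numbers, a quick Poisson simulation reproduces them: noise grows as the square root of the expected count (25 gives ~5, 2500 gives ~50, and doubling the exposure multiplies the noise by sqrt(2)), while SNR improves by the same factor. Parameters here are arbitrary:

```python
# Check the grey-card example: for Poisson photon counts, the standard
# deviation grows as sqrt(mean), so doubling the exposure doubles the
# signal but only multiplies the noise by sqrt(2). SNR still improves.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 1_000_000

for mean_photons in (25, 2500, 5000):     # 2500 -> 5000 is the "1 s vs 2 s" case
    counts = rng.poisson(mean_photons, n_pixels)
    noise = counts.std()
    print(f"mean={mean_photons:5d}  noise={noise:6.1f}  "
          f"SNR={counts.mean() / noise:6.1f}")
# Expected: noise ~ 5, 50, 70.7 and SNR ~ 5, 50, 70.7
```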
 
Sorry but I have to call BS where I see it. Unless I'm missing something specific about the lighting in these images (please feel free to correct me if I am), they're ********. I've been taking plenty of low light shots on my iPhone XS without flash because I never use flash, and they're great.

I'm not saying Google hasn't made some kind of advancement, but it's hard to tell when they're obviously faking these images.

EDIT: I get that it's The Verge that posted these pictures, but the fact that there's something screwy still stands. I've never taken awful low-light photos like the ones in the article.
I know what you mean. When I see most of these photo comparisons, I think to myself that I almost never take pictures as bad as some of the comparison photos in these articles. And bear in mind, much of the time I'm looking at my photos not on my camera but on my 21.5" 4K display.
 
The variance is equal to the expected value. As you expect more photons (higher exposure level), you expect more variation from sample to sample (noise).

If your exposure is set such that a group of pixels are all expecting an average of 25 photons each then among them they'll have a standard deviation (noise) of 5.

If you increase the exposure level so that you expect an average of 2500 photons in each pixel, they'll have a standard deviation (noise) of 50. You have more noise.

If you image a grey card for 1 second (well above the thermal/pattern noise but without clipping), then image the same grey card for 2 seconds (still without clipping), your average value goes up by a factor of 2 and your noise goes up by a factor of sqrt(2).

It's a function of the photon inter-arrival times. It's the same reason that queue length varies more as the number of people going through increases. It's why you need larger, rather than smaller, FIFOs as your data traffic increases.

You're right that the SNR increases as you integrate longer, but it's not because you have less noise, it's because you have more signal. Signal goes up linearly with count, noise goes up as the square root. But, again, it does go up. You are less certain of the true value.

The standard deviation does not get smaller as you integrate more photons. The "bell curve" (it's Poisson, not Gaussian, so it's very un-bell like closer to zero) moves right and gets wider simultaneously the longer you integrate. The most likely value is, by definition, exactly at the peak of the curve.

As your exposure decreases, the noise level decreases by the square root of photon count until you get close to the thermal/pattern noise floor of the sensor at which point it eventually stops decreasing at all. As your exposure decreases, your SNR decreases with the square root of signal until you approach the thermal noise floor at which point the decline asymptotically approaches linear.

yes you are right, i got this all wrong. noise increases as sqrt of the signal, thus SNR is increased but so is noise. everything else you are saying is also correct.

but we might be talking about two different things - i was thinking of the histogram of an image rather than the statistical properties of a single pixel. meaning that if you image a perfectly uniform grey wall, at short exposure times the histogram starts off with pixel values all over the place. as the SNR of the image increases through longer exposure times, the histogram becomes narrower (and taller) as more and more pixels converge on the correct value. in the limit the histogram becomes a vertical straight line at X=true value and height Y=total #pixels in the image.

regardless what's important is that the SNR is increased with longer integration times. my point was only that people generally blame high ISO for the graininess of images. while increased amplification of the dark signal certainly contributes to this, low SNR caused by photon statistics is just as important - after all, the whole point of increasing the gain is to permit short exposures in low light. without the high ISO setting you'd basically just record nothing as whatever photoelectrons you have only move the ADC by a handful of DN.
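
for what it's worth, the histogram picture is easy to simulate: average K short frames of a uniform grey target and the per-pixel spread (the width of that histogram at a fixed brightness) shrinks as 1/sqrt(K). toy numbers:

```python
# Averaging K short frames of a uniform grey target: the per-pixel mean
# stays at the true value while the spread of the histogram shrinks as
# 1/sqrt(K), which is the sense in which the histogram "narrows" above.
import numpy as np

rng = np.random.default_rng(2)
true_level = 40            # mean photons per pixel per short frame
n_pixels = 200_000

for k_frames in (1, 4, 16, 64):
    frames = rng.poisson(true_level, size=(k_frames, n_pixels))
    averaged = frames.mean(axis=0)
    print(f"K={k_frames:3d}  histogram std={averaged.std():5.2f}  "
          f"(expected {np.sqrt(true_level / k_frames):.2f})")
```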
 
Photos in pitch black darkness. OK. Kinda cool. Not anti-android/google nor pro-Apple by any stretch. But I'll be curious to hear of any outcry for Apple to incorporate a response. Will customers soon demand their phone be able to jumpstart their car or walk the dog next?

I don't get it. This looks extremely useful. No, maybe not in pitch darkness, but if the lighting isn't perfect I find iPhones take fairly crappy pictures. Well, at least on my old X, as I haven't been able to put my Xs Max to the test yet.
 
I don't get it. This looks extremely useful. No, maybe not in pitch darkness, but if the lighting isn't perfect I find iPhones take fairly crappy pictures. Well, at least on my old X, as I haven't been able to put my Xs Max to the test yet.

I’ll admit, I initially misinterpreted the usefulness of that feature. The way the article is set up, it seemed like this was more about taking pictures in total darkness, which I thought was just innovating for its own sake. But seeing improved pictures in low-light situations, I can see a big benefit there.

Just curious, what made you grow tired of iOS? Went to Android? What did it? There was a time after iOS 7 that I almost left, that's for sure.
 
nIghtVISION

nah, too easy...
”we simply call it Liquid Retina Stacking Night Photography, for pro shooters to take amazing pictures, changing your life forever. have you tried to take a pic of your dick.. eh, oh... DOG under the blanket at night and share it with your loving friend? now we made it possible!”
 



At a media event in New York City earlier this month, Google previewed a new low-light camera feature called "Night Sight" that uses machine learning to choose the right colors based on the content of the image. The result is much brighter photos in low-light conditions, without having to use flash.

Google showed a side-by-side comparison of two unedited photos shot in low light with an iPhone XS and its latest Pixel 3 smartphone with Night Sight, and the photo shot on the latter device is much brighter.

pixel-night-sight-800x318.jpg

Google said Night Sight will be available next month for its Pixel smartphones, but an XDA Developers forum member managed to get the feature to work ahead of time, and The Verge's Vlad Savov tested out the pre-release software on a Pixel 3 XL. The results, pictured below, are simply remarkable.

low-light-1.jpg

Without Night Sight

high-light-1.jpg

With Night Sight


low-light-2.jpg

Without Night Sight

high-light-2.jpg

With Night Sight


low-light-3.jpg

Without Night Sight

high-light-3.jpg

With Night Sight

Google and Apple are both heavily invested in computational photography. On the latest iPhones, for example, Smart HDR results in photos with more highlight and shadow detail, while Depth Control significantly improves Portrait Mode. But Night Sight takes low-light smartphone photography to a whole new level.

Article Link: Google's Upcoming 'Night Sight' Mode for Pixel Phones Captures Remarkable Low-Light Photos

That's insane. So so good

Can't wait for Apple to announce this as their own in 2 years in the iPhone XII
 
I think some of the naysayer comments are missing the point here. There are nearly endless scenarios where having a camera with a trick like this could be useful. Kudos to Google. IMHO this is as big of step forward in smartphone cameras as Portrait Mode was.
 
I’ll admit, I initially misinterpreted the usefulness of that feature. The way the article is set up, it seemed like this was more about taking pictures in total darkness, which I thought was just innovating for its own sake. But seeing improved pictures in low-light situations, I can see a big benefit there.

Just curious, what made you grow tired of iOS? Went to Android? What did it? There was a time after iOS 7 that I almost left, that's for sure.
I left for the Pixel 3. Case in point:
 

Attachments

  • 00100lPORTRAIT_00100_BURST20181027112819974_COVER.jpg
It’s either going to be grainy as all hell or it’s going to keep the shutter open forever and basically be useless for the majority of photos people take.
 
That's actually amazing, let's see if Apple adds something similar.


Easy to do, I think. The info has been captured by the camera, with the exposure adjusted and a few other Photoshop tweaks applied. If these photos were handheld it becomes more impressive - I do agree - but if you simply stabilize your camera, it will catch sharper images that can be tweaked like this right now, manually. It's not magic.

Try shooting in "B" (bulb) mode in Camera+ or other apps and tweak later in Photoshop - it's surreal fun.
 
Owning both phones, I actually prefer the low-light photos the Xs Max captures over those of the P3 in more cases. The P3 does a good job of boosting the lows; however, it overdoes it with noisy artifacts at times, especially in the dark sky above a skyline shot.


pixel 3.jpeg


iPhone Xs.jpeg


XS Max_Boost.jpeg


I agree that at first glance (which is all most people will do), the Pixel 3 shots look amazing; however, when you start to pixel peep (no pun intended), there is a LOT of noise in darker areas, and the middle-ground blending zones look blotchy compared to the Xs. I also dislike the oversaturated colors in the shot, which are not true to life. Standing on the roof and looking at both photos, the Xs capture was far closer to what your eyes would see.

I would say the Xs camera system captures shots exactly between the Pixel 3's two photo modes. The 3 without Night Sight is a little dark; the Xs is a bit brighter natively and still produces a clean shot. With Night Sight (which the iPhone lacks), the Pixel 3's image processing is almost overdone with regard to shadow boost and saturation. It feels like Google is doing immediate post-processing to boost shadows and saturation without user intervention. When I boost shadows in Xs Max photos in post-processing, I like the results better than many Pixel 3 Night Sight photos. Sure, there is noise in the Xs shots as well, but not as much, and the highlights (as well as shadows) are not overdone.

Since Night Sight does not work in video mode, I feel that the Xs has a far better camera system overall. The native photo and video captures from that camera in low light are good in both exposure and saturation. Combine that with far less choppy video and better stereo audio recording, and the Xs is a far better camera system for my needs.
 
Mobile phone cameras have a hard time with dark lighting. They cannot see in the dark as well as the human eye. Did you ever notice that the photo you took in low light looks nothing like what your eyes are seeing? Google is just trying to replicate what your eyes are seeing. They aren't adding any light that isn't there. Instead they are trying to make the camera see as much light as the naked eye sees.

I agree, but we’ve had third-party apps for years that can achieve the same look by adjusting ISO and shutter speed. All Google did was implement it in the native camera app. I don't see how Google is doing something so "advanced". I'll give credit where due, but this type of photography isn't new.
 