I wish Sony, Nikon, Canon, Fuji, Panasonic, Pentax, Olympus etc etc etc would start exploring this tech in their mirrorless cameras.

Sony is already incorporating AI for animal eye-detection autofocus in their upcoming cameras. This would be a very cool next step, especially given the ability to control the amount of the effect. Pulling out details in post-processing for just 1-2 shots can sometimes be annoying. It would be great to do it in camera as a separate mode.
 
Well, you can get a USB power brick that can jump start a car, so maybe that is the next step. As for walking the dog, there's an app (several really) for that.

As for low light photography, this is actually an area where phones and even many point-and-shoot digital cameras have struggled for years, and this brings a very usable option, especially in low light situations where you can't use a flash for various reasons.

I installed the test app on my Pixel 2 XL and it works very well, which brings up an interesting point: Google has already said that ALL of the Pixel phones, including the original, will get this new feature. That's something Apple wouldn't do with the iPhone, because they want you to buy a new phone to get the feature (the same applies to Samsung, etc.). Yes, sometimes they add minor new features, but this is a major feature that is getting added.

GREAT response. This has broadened my thinking: perhaps, yes, photos taken in slightly darker-than-normal areas would benefit from being less grainy. I was keying in on the idea of needing to take photos in the dark, which seemed to be quite a stretch.

Great response. Thanks.
 
Sorry but I have to call BS where I see it. Unless I'm missing something specific about the lighting in these images (please feel free to correct me if I am), they're ********. I've been taking plenty of low light shots on my iPhone XS without flash because I never use flash, and they're great.

I'm not saying Google hasn't made some kind of advancement, but it's hard to tell when they're obviously faking these images.
There are hundreds of thousands of users trying Night Sight right now and the examples speak for themselves.

Give credit where credit is due.
 
Totally. It's mostly just oversaturation of the entire photo. You can do it yourself with plenty of other photo apps on iOS.
Or just use the flash, maybe attenuated to prevent blowing out the shot on some occasions. OK, flash on a phone will have limits with landscapes, but hey, like you say, maybe fiddle with exposure time. (The iPhone XS has image stabilisation, so a slightly longer exposure should be fine.)
 
And the exposure time is........apparently not mentioned.
The exposure time really isn't very long, maybe 1/2 second at most, but even that isn't really accurate; that's just how long you need to hold the phone still. The reality is the exposure time is odd, because the exposure is about the same as taking a single photo, but it's then enhanced by combining multiple images, pulling in light and using whatever it uses to calculate things.
 
Depending on how many stops of dynamic range the current iPhones can record, this could easily be implemented in iOS. I could see them allowing the camera app to edit a dark photo to bring up the shadows and detail. Wondering what type of detection Google's program is using to determine how much correction is needed...
 
The exposure time really isn't very long, maybe 1/2 second at most, but even that isn't really accurate; that's just how long you need to hold the phone still. The reality is the exposure time is odd, because the exposure is about the same as taking a single photo, but it's then enhanced by combining multiple images, pulling in light and using whatever it uses to calculate things.

And your subject(s) need to be still as well.

Exposure is light integrated over time. Could be a single 1 second exposure. Or sixty 1/60 sec exposures.
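To put rough numbers on that equivalence, here's a toy sketch (the photon rate is made up, purely for illustration):

```python
# Toy illustration of exposure as light integrated over time (numbers are made up).
photons_per_second = 60_000                    # assumed arrival rate at one pixel

single = 1.0 * photons_per_second              # one 1-second exposure
stacked = sum((1 / 60) * photons_per_second    # sixty 1/60-second exposures
              for _ in range(60))

print(single, stacked)                         # same total light: 60000.0 60000.0
```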
 
Probably done by taking multiple shots with fast shutter speed and then averaging together, re-aligning as necessary. With a slow shutter speed, the movement is integrated and details are lost. But with multiple fast shutter speeds, each image is sharp, and can be intelligently aligned with the other images...

Equivalent of a long exposure, but without the tripod.

ETA: Wouldn't help for subject motion though, so maybe not.
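For what it's worth, here's a minimal sketch of that averaging idea (alignment is skipped and the noise model is simplified; this is not Google's actual pipeline, just the general principle that averaging N frames cuts random noise by roughly the square root of N):

```python
import numpy as np

# Toy model: a dim "true" scene plus per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 20, size=(100, 100))      # low-signal scene

def short_exposure(read_noise=5.0):
    """One fast, noisy frame of the same (static, already-aligned) scene."""
    return scene + rng.normal(0, read_noise, scene.shape)

single = short_exposure()                                          # one frame
stacked = np.mean([short_exposure() for _ in range(15)], axis=0)   # 15-frame average

print(round(np.std(single - scene), 2))    # noise of one frame, ~5.0
print(round(np.std(stacked - scene), 2))   # noise of the stack, ~5/sqrt(15) ≈ 1.3
```

The hard part in practice is aligning the frames against hand shake and rejecting moving subjects, which is why (as noted above) subject motion still breaks it.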
Multiple images with a fast shutter speed = one image with same shutter speed. No .. matter .. how .. well .. aligned.
 
Sorry but I have to call BS where I see it. Unless I'm missing something specific about the lighting in these images (please feel free to correct me if I am), they're ********. I've been taking plenty of low light shots on my iPhone XS without flash because I never use flash, and they're great.

I'm not saying Google hasn't made some kind of advancement, but it's hard to tell when they're obviously faking these images.

Would you like fries with your crow?

Obviously faking? Hop over to Reddit and look at the many examples being shared by Pixel users who got the prerelease APK. It’s not Google providing these images.

The ability to photograph in poor lighting isn’t limited to creative images. Most people will use it for snapshots and reference photos.
 
People have been saying that for some time now, but let me know when smartphones can compete with the glass of an SLR - they can't. Sure, they're good enough to pass for on-demand needs, but they cannot replace an SLR.
I would argue that they can't even replace a quality point-and-shoot camera. Cameras keep getting better and better to stay ahead of smartphones. For instance, my Panasonic point and shoot, which I bought a couple of years ago, has essentially had this same feature.

My camera also has the feature where you can take multiple shots and it will choose the best one, like the Pixel 3 has, but the difference is that on my camera ALL of the shots are full resolution, while on the Pixel 3 only the original shot is and the others are taken at a lower quality.

Yeah, a phone camera can replace a cheap point and shoot, but there are areas where it won't be capable of doing so:
  • Optical zoom lenses on point and shoot cameras will always win out
  • Dedicated cameras are designed to do one thing and do it very well
  • Camera technology develops faster with better features than phones do

An advantage the phone has is that you almost always have it with you, so it works well in situations where you either can't or don't want to lug around a larger camera, or you aren't expecting to want to take photos.
 
Multiple images with a fast shutter speed = one image with same shutter speed. No .. matter .. how .. well .. aligned.

Yes, this is why the multiple shots with a short shutter doesn't make sense. If you have 10 shots at a 0.1 s shutter speed, you don't get more light; each shot will be dark. You don't suddenly get more light this way.

The only way I know of in photography to brighten the shot without blurring, or changing depth of field, would be raising the ISO.
 
I'm not saying Google hasn't made some kind of advancement, but it's hard to tell when they're obviously faking these images.

Why bother to take the time to actually RTFA when you can just post a knee-jerk reaction?

"an XDA Developers forum member managed to get the feature to work ahead of time, and The Verge's Vlad Savov tested out the pre-release software on a Pixel 3 XL. The results, pictured below, are simply remarkable."
 
Would you like fries with your crow?

Obviously faking? Hop over to Reddit and look at the many examples being shared by Pixel users who got the prerelease APK. It’s not Google providing these images.

The ability to photograph in poor lighting isn’t limited to creative images. Most people will use it for snapshots and reference photos.
Exactly, or, like me, for theme parks where flash isn't allowed and the low light settings on phone cameras still don't cut it very well. This may reduce the number of times I take my point and shoot to the local parks; that way I can avoid bag check lines.
 
Google does seem to keep knocking the ball out of the park on the photography front. It is awesome to see well executed great ideas come to life on any platform. I bounce between iOS and Android and they both have plenty of features that put them ahead of the other.
 
Sorry, those filtered photos are neither better nor more realistic. I never understood why non-Apple people are on the Apple site.
Just because it’s not Apple tech doesn’t mean it’s not useful. All these companies steal ideas from each other, and sure, Google has stolen more, but it would be a good option for Apple to steal this one.
 
Traditional camera companies like Canon will be completely obsolete within 5 years.
Well, the saying has always been that the best camera is the one you have with you. I have a relatively small M43 camera that fits in my jacket pocket, but for 95% of the photos that I'm taking day to day my iP8+ is more than capable. With the advancements in AI and phones having multiple lenses, traditional SLRs and mirrorless cameras will slowly become tools for professionals only.
 
You know it's coming next year, lol. Apple will add their fancy name to it.
And they'll also be introducing it to 100 million users instantaneously. What good are AI advancements when Google can't even get this into the hands of more than 3 million users at once?
 
Seems interesting. I wonder if Google will release some sort of app for the iPhone that does this.

Anyway, maybe this is the beginning of predictive photography, where the picture is not a record of what travels through the lens and falls on the sensor/film, but rather the pattern is interpreted by AI to show what it should be. If so, the results will depend crucially on how the AI has been trained. Expect biases based on scenes that are typically found in the US and/or Europe*...

EDIT: *at least at first
 
Unknown, since presumably Google is uploading people's photos to their cloud, applying AI, and then showing the updated image, versus Apple's privacy-first approach where things are done locally on the device. So anything is possible, but the reason Google's engineers can do such amazing things is their choice of features over privacy.

Sorry to burst your bubble, but it turns out that the majority of Google’s AI stunts are processed within the phone.
 
and yet you didn't name one
Speaker imbalance, poor audio/video in recordings, no 4K video at 60fps, YouTube videos not being centered...

And that's just from clicking on the link, but no one wants to be bothered to look for information.
 