Yeah... that's probably the case.
Which is sad... since there are two cameras with two lenses and two sensors.
Sure, we're getting two focal lengths and a faux-blurred background effect.
But I was hoping they could do some magic with improving the image quality by combining two image captures into one.
The issue is registration: you have to align the two photos before you can combine them. Having two different focal lengths makes that a lot harder, and a lot less useful, since the noise doesn't map correctly between the two frames. The end result is that you can make the noise worse rather than better. There are techniques that could be applied here, but there are a lot of bad trade-offs involved, to the point where it doesn't really help in many real low-light photography situations (portraits, scenes with motion, etc.).
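To see why stacking is attractive in the first place (when registration works): averaging N independent captures of the same scene cuts noise by roughly sqrt(N). Here's a minimal sketch of that idea with synthetic frames, assuming perfect alignment and simple Gaussian noise; real multi-camera stacking has to solve the alignment problem first, which is the hard part being discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical flat gray scene, captured twice with independent noise.
scene = np.full((64, 64), 100.0)
frame_a = scene + rng.normal(0.0, 10.0, scene.shape)
frame_b = scene + rng.normal(0.0, 10.0, scene.shape)

# With perfect registration, averaging two independent frames
# reduces the noise standard deviation by a factor of sqrt(2).
stacked = (frame_a + frame_b) / 2.0

print(np.std(frame_a - scene))  # should be close to 10
print(np.std(stacked - scene))  # should be close to 10 / sqrt(2)
```

If the frames are misregistered, the "scene" terms no longer cancel cleanly and you're averaging mismatched pixels, which is how stacking ends up adding artifacts instead of removing noise.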
Same here. Noise (and the intense noise reduction applied to it) is still an issue once you start zooming in on iPhone photos. It'd be great if they could iron some of that out.
The problem there is that you need to address the physics of it to make noise better. There are basically two categories of noise:
1) Shot Noise. The light you are trying to capture isn't perfectly uniform, so you get randomness in your signal that you capture.
2) Sensor Noise. This is erroneous signal generated by the sensor itself. This has been broken down into different categories, especially in astrophotography, where a lot of work needs to be done to weed it out.
The catch here is that shot noise can be a very big part of why your images are so noisy. Shooting faster and using a higher ISO (on cameras where you have that control) drives the noise up, since you are collecting fewer photons, and so the randomness in how many photons strike a particular pixel over X period of time becomes more pronounced. Really, the only way to address it is to capture more photons and reduce that variability. How do you do that? Shoot at a lower ISO, use longer exposure times, and use bigger pixels. Things like BSI sensors in phones are such a big deal because they let the individual pixels get bigger: all the circuitry sits behind the photosites rather than on the surface of the sensor that's also trying to collect light. But we then used that headroom to cram more pixels onto the sensor, negating the benefit.
Not to mention, a lot of the easy stuff on the sensor noise front has already been done, and there are hard physical limits to what you can do about shot noise if you are unwilling to make the sensor itself bigger or put fewer pixels on it. Shot noise is a big reason why cameras with bigger sensors will always pull ahead in image quality over smaller ones, assuming similar generations of technology are used in each to maximize the surface area of the pixel and minimize sensor noise for both.