ADDENDUM:

This article from March 2015 about the Olympus E-M5 Mark II mirrorless camera discusses what I’m describing. That camera uses a 16MP sensor to achieve 40MP photos: its Sensor Shift technology moves the sensor in half-pixel increments across a total of 8 shots, which it then assembles into a final, true 40-megapixel image. I’d bet Apple’s next iteration of its proprietary A[#] SoC will be up to the task!
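For anyone curious what that compositing step looks like in principle, here is a minimal sketch (plain NumPy, and only a toy, not Olympus's actual pipeline): each half-pixel-shifted frame is dropped onto a 2x-finer grid and overlapping samples are averaged. It ignores demosaicing, noise, and any motion between shots.

```python
# Toy pixel-shift compositing: place each half-pixel-shifted frame onto a
# 2x-finer grid and average wherever samples overlap. Illustrative only;
# real high-res modes also use full-pixel shifts to gather full RGB per photosite.
import numpy as np

def merge_pixel_shift(frames, offsets, scale=2):
    """frames: list of HxW arrays; offsets: matching (dy, dx) shifts in
    source-pixel units (e.g. 0.0 or 0.5). Returns a (scale*H) x (scale*W) image."""
    h, w = frames[0].shape
    acc = np.zeros((scale * h, scale * w))
    hits = np.zeros_like(acc)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for frame, (dy, dx) in zip(frames, offsets):
        # Each source pixel lands at a sub-pixel position on the fine grid.
        yi = np.clip(np.round((yy + dy) * scale).astype(int), 0, scale * h - 1)
        xi = np.clip(np.round((xx + dx) * scale).astype(int), 0, scale * w - 1)
        np.add.at(acc, (yi, xi), frame)
        np.add.at(hits, (yi, xi), 1)
    return acc / np.maximum(hits, 1)  # never-sampled fine-grid cells stay 0

# Four synthetic frames shifted by half a pixel in each direction.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(merge_pixel_shift([base] * 4, offsets).shape)  # (8, 8)
```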

But I have no idea how this would work for video, or whether it needs to—unless Apple wants to offer 8K video on an iPhone(!) like the RED® Monstro 8K VV, which uses a 35.4-megapixel sensor! (I'd settle for 6K, if for no other reason than "future-proofing" my video footage for posterity.)

Of course the problem with this (which I mentioned way up in this thread) is that conventional sensor-shifting requires that neither the subject nor the camera move during the multiple exposures. Apple would have to do some massive A.I.-fu to make it work practically for the average photo, where both the phone and the subject move quite a bit during those exposures.

Not impossible, but very very hard.
 

Thanks. Yeah, I went back through and re-read them. gg. It’s hard because I don’t know much about Sensor Shift technology—like how the 2015 Olympus E-M5 Mark II achieves 40MP images using only one 16MP sensor. From what you’ve described, even in that camera it would seem useful only for tripod-mounted shots of landscapes, not for bringing the camera to a marathon and expecting to snap sharp 40MP images of the runners.

That’s what makes me wonder (because I know so little) what two identical sensors + lenses could do. Could the two sensors each capture an image, shifted ½ pixel from the other sensor’s, at precisely the same instant (to sidestep moving subjects, camera shake, etc.), and yield higher-megapixel images than either sensor is capable of individually?
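As a rough illustration of that two-sensor idea, here is a toy sketch assuming an idealized pair whose sampling grids are offset by exactly half a pixel horizontally and that somehow see the identical scene with zero parallax (real side-by-side cameras do not): the two captures would simply interleave into a wider sampling grid.

```python
# Hypothetical two-sensor interleave, assuming perfectly co-aligned optics with
# a pure half-pixel horizontal offset and zero parallax (an idealization; real
# dual cameras sit apart and need heavy alignment work).
import numpy as np

def interleave_half_pixel(img_a, img_b):
    """img_a, img_b: HxW arrays sampled half a pixel apart horizontally.
    Returns an H x 2W array with columns A0, B0, A1, B1, ..."""
    h, w = img_a.shape
    out = np.empty((h, 2 * w), dtype=img_a.dtype)
    out[:, 0::2] = img_a
    out[:, 1::2] = img_b
    return out

a = np.arange(12.0).reshape(3, 4)
b = a + 0.5                      # stand-in for the half-pixel-shifted view
print(interleave_half_pixel(a, b).shape)  # (3, 8)
```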

And of course, the obvious point is that the whole complex matter—at least as it relates to image resolution—could be resolved if Apple simply used higher-MP sensors: 14MP? 16MP?

That's assuming the lens elements Apple has spent years perfecting for 12MP sensors can be re-engineered for new, higher-than-12MP sensors without too much time and agony. (Which still doesn't account for the hardware processing that comes into play, the software algorithms, modeling, AI, ML, etc.) But…we're coming up on 5 years at 12MP!

¯\_(ツ)_/¯
 
Yes, you could use two sensors (even with non-identical focal lengths) to mathematically enhance the resolution, though you’d have to account for parallax and still use a fair amount of AI to resolve it. You could reduce the amount of calculation required by using fancy lenses to essentially give each sensor the same view (shifted by a fraction of a pixel), etc., but then you miss out on other uses of multiple sensors/lenses (e.g., detecting depth). The AI technique would therefore instead try to discern how to shift what one sensor is seeing by a slight angle. Not foolproof.

Alternatively, you can use one sensor, shift it through multiple exposures, and compensate by using machine learning to identify objects and how they may be moving relative to the sensor between exposures (including rotations, etc.), then “unmove” them in subsequent frames to fill in details.
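A bare-bones version of that align-then-merge step might look like the sketch below, assuming only a global translation between frames (real pipelines also have to handle rotation, rolling shutter, parallax, and per-object motion, which is where the heavy ML comes in). It uses scikit-image's phase_cross_correlation and SciPy's ndimage.shift.

```python
# Minimal "align then average" for a handheld burst, assuming only a global
# translation between frames. Real computational photography goes far beyond
# this (rotation, rolling shutter, moving subjects).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_and_average(frames):
    """frames: list of HxW float arrays. Aligns each to frames[0] by an
    estimated sub-pixel translation, then averages to reduce noise."""
    reference = frames[0]
    aligned = [reference]
    for frame in frames[1:]:
        # Shift (in pixels) needed to register `frame` against `reference`.
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        aligned.append(nd_shift(frame, offset))
    return np.mean(aligned, axis=0)

# Demo: a synthetic frame shifted by (2, -3) pixels aligns back to the original.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
print(align_and_average([ref, nd_shift(ref, (2.0, -3.0))]).shape)  # (64, 64)
```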

All of it is complicated, but theoretically possible, especially when you have additional sensors available to help resolve 3D, to take continuous exposures to track motion, etc.
 

If anyone can do it, it's Apple—

I've used the iPhone's Panorama photo capability since its first incarnation, and by now its algorithmic, seamless "stitching" capability has improved markedly and continues to boggle my mind.

Good info throughout, BTW. Thanks. gg.

BTW, are there still three subpixels per pixel in the CMOS sensors used in iPhones? Or does "Sensor Shift" involve one full pixel instead of fractions of a pixel?

Plus, I've been puzzling over how Sensor Shifting in the Olympus E-M5 Mark II (and later Marks) works to get true 40MP images from a 16MP sensor.

Olympus says it involves 8 shots automatically composited into one.

I count 9 though:

1.) Center + shifts: 2.) West, 3.) East, 4.) North, 5.) South, 6.) NW, 7.) SW, 8.) NE, 9.) SE

(Unless there's no "Center" shot, just the eight shifted positions. That would equal 8.)
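Another way the count can come out to 8 is if the unshifted starting position is itself exposure #1, so only seven moves follow. Purely as an illustration of that counting, here is a hypothetical pattern; Olympus does not publish its movement sequence in these terms, so treat these offsets as a guess, not the real thing.

```python
# Hypothetical 8-exposure pattern (dy, dx) in pixels, offered only to show how
# the count can be 8 when the starting position is counted as shot 1. Not the
# actual, unpublished Olympus sequence.
pattern = [
    (0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0),   # full-pixel square
    (0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5),   # same square, offset by half a pixel
]
print(len(pattern))  # 8 exposures, no separate "center" shot beyond the first
```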

I've also been looking at info about "Quad Array" sensors (2 Green diagonally + R + B in a four-subpixel square) that better lend themselves to Sensor Shift, plus info about orthogonal and "non-conventional" subpixel layouts. (Interesting to read, but all very confusing—I'll leave it to the pros…)

I was also thinking about motion blur and camera shake: don't HDR iPhone photos composite multiple images per "shutter click"?

I've never had motion blur or camera shake issues with HDR, but maybe I've been holding the phone real steadily whenever I've shot in HDR. (IDK.)

AND! Photoshop and Lightroom users have discovered a way to create double resolution iPhone 11 photos—but it requires camera shake to work!

¯\_(ツ)_/¯
 

HDR doesn’t necessarily require multiple images in the traditional sense, because iPhones have an electronic, not a mechanical, shutter. So instead of taking multiple separate images, the pixel data can be read out multiple times during one image. It’s one of the advantages that electronic shutters have (of course they also have disadvantages, like the jello effect, but that’s another story).

So, for example, if your exposure is 1/125th of a second, you read out the pixel data at 1/250th, and 1/175th, and 1/125th (or whatever the algorithm is) instead of taking two or three separate photos. No blur problem since instead of taking 1/30th of a second for three photos, you fit all three “exposures” into one exposure.
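To make that concrete, here is a toy model of the idea (my illustration under simplified assumptions, not Apple's actual pipeline): the accumulating pixel values are sampled at several points within the single exposure, each sample is scaled to a common brightness, and unclipped samples are preferred in the highlights.

```python
# Toy model of multi-readout HDR inside one exposure: sample the accumulating
# pixel values at several fractions of the full exposure, normalize each sample,
# and keep unclipped data for the highlights. Conceptual sketch only.
import numpy as np

FULL_WELL = 1.0  # normalized sensor saturation level

def blend_readouts(readouts, fractions):
    """readouts: list of HxW arrays captured at the given fractions of the full
    exposure (e.g. 0.25, 0.5, 1.0), each clipped at FULL_WELL."""
    result = None
    for frame, frac in zip(readouts, fractions):
        scaled = frame / frac                # normalize to full-exposure brightness
        usable = frame < 0.99 * FULL_WELL    # readout not (nearly) saturated
        if result is None:
            result = scaled
        else:
            # Later (longer) readouts are less noisy; use them wherever unclipped.
            result = np.where(usable, scaled, result)
    return result

# Scene with a bright patch that clips in the full exposure but not at 1/4.
scene = np.full((4, 4), 0.3)
scene[0, 0] = 2.0  # "overexposed" highlight
fractions = [0.25, 0.5, 1.0]
readouts = [np.clip(scene * f, 0, FULL_WELL) for f in fractions]
print(blend_readouts(readouts, fractions)[0, 0])  # recovers 2.0 instead of clipping at 1.0
```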
 

As I've conceded from the start, "I know not whereof I speak" on Sensor Shift, pixel/subpixel shift, getting higher-megapixel photos than the sensor itself provides, etc.

I'm just a very curious person, so kindly bear with me.

Referring to your last post: if HDR photos are achieved not by capturing multiple images in succession (which takes more time, however brief) but because "pixel data can be read out multiple times during one image," then can multiple Sensor Shift (pixel- or subpixel-shift) "readings" be taken at the same instant, with absolutely zero latency between readings?

Might this yield higher megapixel photos where motion blur and camera shake are not issues for the quality of the picture (beyond the expected, sometimes even desired motion blur that can occur on any camera)?

BTW, the MacRumors "Sensor Shift" article concedes, "according to a paywalled report today from hit-or-miss Taiwanese industry publication DigiTimes."

DigiTimes has "gotten it wrong" on multiple occasions, as with the DigiTimes-sourced September 5, 2017, MacRumors article, “Apple Takes Early Step Towards iPhones With 'Above 12-Megapixel' Rear Cameras.”

Maybe it just has yet to "come to pass," or maybe DigiTimes got it wrong. (Had bad information.)

Bottom line, though, I'd really like it if Apple would "graduate" the iPhone from 12MP sensors, which it's been using in every iPhone generation since the 6s—nearly five years ago(!)

My suspicion is that Apple has put in YEARS of work on the multi-element lens system, the hardware digital processing and synthesis that happens behind the scenes, the GPU/SoC layer, and the algorithmic software, AI, and ML that all serve to amaze—all engineered specifically for 12MP sensors. And that bumping up to a 16MP or 14MP sensor might be disruptive and require a total do-over of all this "beyond the sensor" hardware and software engineering built up over many years—engineering which, admittedly, has achieved stunning results with each new generation of iPhone, iPhones that all still have 12MP sensors. (I hope my suspicion is wrong.)

(Off topic: I read a white paper about advances in CMOS sensors, newer-generation sensors with "non-conventional" pixel/subpixel arrays, and orthogonal and geometric rotations during "capture" that allow video cameras to do many things, including "seeing" or discovering microscopic particulates and toxins that not only bear telltale shapes but also move in patterns associated only with those respective particulates and toxins. In the not-too-distant future, will we be able to aim our iPhone's camera at food to get a reading on its purity? Test air quality in the home/workplace?)

16MP or even 14MP would make me "happy," as digital zooming with a 12MP sensor can lead to terrible results that Apple should be embarrassed by. Cropping photos or video can also lead to terrible results. The worst example, though, is trying to create a portrait of one person from a group shot. Apple doesn't "show off" these types of edited iPhone photos at its Events.
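The arithmetic behind that frustration is straightforward: digital zoom is just a crop, so the number of real captured pixels falls with the square of the zoom factor before anything is upscaled back to the output size.

```python
# Effective resolution after digital zoom (a crop): pixels drop with zoom^2.
def effective_megapixels(sensor_mp, zoom_factor):
    return sensor_mp / (zoom_factor ** 2)

for zoom in (1.0, 1.5, 2.0, 3.0):
    print(f"{zoom:.1f}x digital zoom from a 12 MP sensor -> "
          f"{effective_megapixels(12, zoom):.1f} MP actually captured")
# 2.0x leaves 3.0 MP; 3.0x leaves about 1.3 MP, before any upscaling back to 12 MP.
```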

I'm using FiLMiC Pro, which has a digital zoom slider that turns red past a certain point to indicate unacceptable image degradation. It doesn't "stay in the green" for long at all; its color warnings effectively recommend only the most minimal digital zooming.

The iPhone takes absolutely stunning photos when no digital zooming has been applied. Apple shows off these stunning photos at Apple Events, always at full 12MP resolution. The iPhone 11's Deep Fusion photos are breathtaking—but they are still 12MP photos. (AFAIK.)

I was recently sent some iPhone 11 video of an elementary school children's stage performance. The parents had "good seats," but they insisted on using digital zoom to a fare-thee-well to "live crop" the video down to their own child only.

I was presented with lots of blocky, blurry, shaky, grainy footage of their child, and I couldn't help but be reminded of early flip phone video.

Let's see a higher-than-12MP sensor in an iPhone SOON!
 



Apple's high-end 6.1-inch and 6.7-inch iPhones in 2020 will adopt sensor-shift image stabilization technology, according to a paywalled report today from hit-or-miss Taiwanese industry publication DigiTimes.


While details are slim, sensor-shift technology could bring image stabilization to the ultra-wide lens on high-end 2020 iPhones.

iPhone 11 Pro models feature optical image stabilization for both photo and video, but only when using the wide-angle or telephoto lenses. Sensor-shift technology could change this, as the stabilization would apply to the camera sensor itself and not be dependent on any specific lens.

Sensor-shifting image stabilization could also result in better shots with attachable lens accessories like the OlloClip.

The report backs rumors that the high-end 6.1-inch and 6.7-inch iPhones will each sport a triple-lens rear camera system with time-of-flight 3D sensing. Largan Precision is said to be the primary supplier of the lenses, fulfilling 80 percent of orders, with Genius Electric Optical picking up the remaining 20 percent.

Taiwan-based ALPS will supply motors for the sensor-shifting stabilization, and Sony will offer CMOS image sensors, the report adds.

Article Link: Sensor-Shift Technology Could Bring Image Stabilization to Ultra-Wide Lens on 2020 iPhones
Shot a video on my new iPhone 12 Pro. While zooming in and out, the camera switches between lenses, and there does not seem to be any kind of stabilization. See attached video. If you look at the leaves at the edge of the right side, you can see the slight shift when the video moves from one camera to another. Also, there is so much noise in the video. I am shooting at 1080p at 60fps. Is there an issue with my phone, or am I missing a setting?

 
The iPhone 12 Pro only has regular OIS, no sensor-shift OIS. Buy the iPhone 12 Pro Max if you need sensor-shift stabilization (on its Wide camera).

The tech specs on Apple.com clearly state as much:
  • Dual optical image stabilization (Wide and Telephoto)
  • Sensor-shift optical image stabilization (iPhone 12 Pro Max Wide)
 