Will this fix the oversaturated greens/yellows/oranges in my iPhone photos from the last few years? 🤔
 
Hope they add a fake photo and video detection system to it.
We're going to need it in the future with all this AI stuff.
 
If you read the article, it says it takes up less space than the current sensor...

Stacked sensors are actually bigger by volume. They move the logic components from beside the sensor to underneath it, making the overall module thicker while still requiring some logic/connective components around the edges.

It would be like a postage stamp changing to the dimensions of a quarter and claiming that it takes up less space. Does it take up less two-dimensional area? Yes. Does it add the drawbacks of being thicker and requiring additional, more complex cooling components? Also yes.

I don't think I've ever heard anyone complain about the dynamic range of current phone cameras. This technology is really only going to be useful if you're making the sensor much larger (up to Micro Four Thirds), where the dynamic range also comes with shallower depth of field, because the people who will care about the dynamic range are going to want other benefits like DoF, better lenses, etc.

Apple could release a point-and-shoot zoom camera running iOS, like the Android-based Samsung Galaxy Camera that came out years ago, and they would sell tens of millions of units.
 
I don't think I've ever heard anyone complain about the dynamic range of current phone cameras. This technology is really only going to be useful if you're making the sensor much larger (up to Micro Four Thirds), where the dynamic range also comes with shallower depth of field, because the people who will care about the dynamic range are going to want other benefits like DoF, better lenses, etc.
Significantly better dynamic range means less artificial-looking HDR processing. And that is something that people who don't know what dynamic range means will appreciate.
 
I'd like to see better camera sensors in the Vision Pro passthrough cameras. They aced the rock-steady image, but it's a bit like going from Oz to Kansas when you put the headset on.
 
I'd happily keep the current dynamic range if they just fixed the aggressive post-processing.
 


Apple has filed a patent for a new type of image sensor that could give future iPhones and other Apple devices the ability to capture photos and videos with dynamic range levels approaching that of the human eye.

[Image: iPhone 16 Pro rear cameras]

The patent, titled "Image Sensor With Stacked Pixels Having High Dynamic Range And Low Noise," was first spotted by Y.M.Cinema Magazine and describes an advanced sensor architecture that combines stacked silicon, multiple levels of light capture, and on-chip noise suppression mechanisms to reach up to 20 stops of dynamic range.

For comparison, the dynamic range of the human eye is estimated to be around 20 to 30 stops, depending on how the pupil adjusts and how light is processed over time. Most smartphone cameras today capture between 10 and 13 stops. If Apple's proposed sensor reaches its potential, it would not only surpass current iPhones but also outperform many professional cinema cameras, such as the ARRI ALEXA 35.
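
For a rough sense of what those numbers mean: each stop doubles the brightest-to-darkest contrast a sensor can record, so stops convert directly into a contrast ratio and a decibel figure. A quick back-of-the-envelope sketch in Python (my own illustration, not from the patent):

```python
import math

def stops_to_ratio(stops: float) -> float:
    """Each stop doubles the brightest-to-darkest ratio that can be captured."""
    return 2.0 ** stops

def stops_to_db(stops: float) -> float:
    """Dynamic range in decibels: 20 * log10(contrast ratio)."""
    return 20.0 * math.log10(stops_to_ratio(stops))

for stops in (10, 13, 20):
    print(f"{stops} stops ~ {stops_to_ratio(stops):>9,.0f}:1 ~ {stops_to_db(stops):.0f} dB")

# 10 stops ~     1,024:1 ~  60 dB (low end of today's phones)
# 13 stops ~     8,192:1 ~  78 dB (high end of today's phones)
# 20 stops ~ 1,048,576:1 ~ 120 dB (the patent's claimed ceiling)
```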

The patent outlines a stacked sensor design made up of two layers. The top layer, called the sensor die, contains the parts that capture light. The layer underneath, the logic die, handles processing, including noise reduction and exposure control.

Currently, Apple uses sensors made by Sony across the iPhone lineup. Those sensors also use a two-layer design, but Apple's version includes several original features and takes up less space.

One of the most important parts of the sensor design is a system called a Lateral Overflow Integration Capacitor (LOFIC). This allows each pixel in the sensor to store different amounts of light depending on how bright the scene is, all in the same image. With this, the sensor can handle extremely wide lighting differences, such as a person standing in front of a bright window, without losing detail in the shadows or highlights.
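
To make the LOFIC idea concrete, here is a toy numerical model (my own simplification with made-up capacities, not the patent's circuit): charge up to the photodiode's full-well capacity is collected normally, and anything beyond it spills laterally into an integration capacitor instead of clipping, so a single exposure can hold both shadows and extreme highlights.

```python
def lofic_pixel_response(photons: float,
                         full_well: float = 10_000.0,
                         overflow_capacity: float = 1_000_000.0) -> float:
    """Toy model of a LOFIC pixel; capacities are illustrative, not Apple's.

    Charge up to full_well accumulates on the photodiode; the excess
    overflows into the lateral capacitor instead of being lost, so very
    bright values stay distinguishable rather than clipping to white.
    """
    diode_charge = min(photons, full_well)
    overflow_charge = min(max(photons - full_well, 0.0), overflow_capacity)
    return diode_charge + overflow_charge  # combined readout covers both ranges

# A conventional pixel would clip everything above 10,000 to the same value;
# the overflow path keeps the bright window and the shadowed face distinct.
for photons in (5_000, 10_000, 500_000):
    print(f"{photons:>9,} photons -> {lofic_pixel_response(photons):,.0f}")
```

In a real sensor the two charge packets would be read out at different gains and merged with calibration data; the point here is only the clipping-versus-overflow contrast.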

Another part of the design focuses on reducing image noise and grain. Each pixel has its own built-in memory circuit that measures and cancels out heat-related electronic noise in real time. This is done on the chip itself, before the image is saved or edited by software.
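
This reads like a per-pixel take on correlated double sampling, the long-standing technique for cancelling reset (kTC) noise: the pixel's reset level is sampled and stored, then subtracted from the exposed signal, so the random thermal offset common to both samples drops out. A minimal sketch of that principle, assuming the patent's per-pixel memory circuit plays the role of the stored reset sample:

```python
import random

def read_pixel(true_signal: float, ktc_sigma: float = 50.0) -> float:
    """Correlated double sampling: the same random reset noise rides on both
    samples, so the subtraction cancels it before the value leaves the chip."""
    reset_noise = random.gauss(0.0, ktc_sigma)  # thermal noise frozen at reset
    reset_sample = reset_noise                  # stored by the in-pixel memory
    signal_sample = true_signal + reset_noise   # same offset on the exposure
    return signal_sample - reset_sample         # offset cancels exactly

print(read_pixel(1234.0))  # -> 1234.0, regardless of the random reset offset
```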

Patent filings cannot be taken as evidence of Apple's immediate plans, but they do indicate areas of active research and interest for the company, as well as what it is considering developing for future devices.

Article Link: Apple Researching Groundbreaking Image Sensor Tech to Achieve Dynamic Range on Par With Human Eye
Well, sounds like a given for the next Blackmagic immersive camera :)))
 
How is this different from existing stacked sensors?
It's in the article.

"The patent outlines a stacked sensor design made up of two layers. The top layer, called the sensor die, contains the parts that capture light. The layer underneath, the logic die, handles processing, including noise reduction and exposure control.

Currently, Apple uses sensors made by Sony across the iPhone lineup. Those sensors also use a two-layer design, but Apple's version includes several original features and takes up less space.

One of the most important parts of the sensor design is a system called a Lateral Overflow Integration Capacitor (LOFIC). This allows each pixel in the sensor to store different amounts of light depending on how bright the scene is, all in the same image. With this, the sensor can handle extremely wide lighting differences, such as a person standing in front of a bright window, without losing detail in the shadows or highlights.

Another part of the design focuses on reducing image noise and grain. Each pixel has its own built-in memory circuit that measures and cancels out heat-related electronic noise in real time. This is done on the chip itself, before the image is saved or edited by software."
 


I wonder. I just wonder... could a prototype/model have been used on the film F1, which had shots that got me moving my head with the G's and feeling the bumps and impacts?
 
...
Some folks are going to read this thinking that Apple is somehow going to outperform the legion of sensor companies that have been on the market forever, as if they just don't know what they're missing and Apple is going to point them in the right direction. RIIIGHT.
...
A lot of traditional camera companies are leaving the consumer camera market or pruning down their offerings because everybody is using mobile phones instead. The best camera is the one you use, and if it's a mobile phone, the incremental improvements to small sensors help...
 
It's just a patent. There are probably tons of similar patents. Since Apple does not manufacture any sensors, it's most likely never going to be used anyway.
 
It's just a patent. There are probably tons of similar patents. Since Apple does not manufacture any sensors, it's most likely never going to be used anyway.
Apple doesn't have to manufacture sensors itself for its new technology to be used by whoever manufactures the camera modules for it. For example, Apple designed and patented the tetraprism lens used in the Pro Max despite not manufacturing that camera module.
 
This would be truly incredible. Huge fan of the camera advancements year over year.

Now if we could get full-frame RAW video recording. 🤞
All the advancements, including this one, are mostly just AI gimmicks and filters. Behind all the software processing is the same old tiny mobile lens. It'll never be on the same level as a full-frame camera.
 
All the advancements, including this one, are mostly just AI gimmicks and filters. Behind all the software processing is the same old tiny mobile lens. It'll never be on the same level as a full-frame camera.

Along these lines, I'd love the ability to get more advanced with the tweaks and toggles in the stock Camera app instead of having to resort to third-party apps.

I'd really like to dial the over-processing way back.
 
Coming soon to human bionics: the iEye
It should really be the iEye Captain, as shortly afterwards it'll start issuing orders to you, and if you don't carry them out it'll shock you into submission.
 