Hmm, I'd like to know what high-res photos these are, because this has a whiff of prior art about it. Guess we'll find out, because the patent office doesn't bother to.

Anyway, I can hear all the iPhone fans going on a rampage because their cameras will stick out the back of their iPhones by 0.2mm to allow for the OIS!
 
When Apple implements fingerprint tech in the iPhone (not the first mobile phone with this tech) and Samsung comes out with it, every Apple fanboy calls Samsung a copycat. Let's see if those same people call Apple a copycat (Nokia made the first mobile phone with OIS) when Apple uses OIS on the iPhone. Double standard much?

Apple didn't invent fingerprint technology, and Samsung didn't copy it. What Apple did was bring the first fingerprint sensor to a phone that works nearly all the time.
What Samsung does is try to mimic the iPhone and profit from its popularity (it used to be much worse - now they're on a "let's throw spaghetti at the wall and see what sticks" approach).

And by the way, Canon patented a form of OIS 30-40 years ago. It would be best if you read up on how patents work in general (and in certain countries like the US).


Just read this the other day. I'll look for info on Nokia's PureView tech later. Just curious about the differences.

http://www.engadget.com/2014/03/19/oppo-find-7-50mp-camera/

I haven't read the patent exactly, but I think the main difference is this:
To build a high-res picture out of several small images, you need pictures of the same scene that are minimally different (if you take five pixel-for-pixel identical images, there is no way to make a bigger picture out of them except by upscaling).
The Oppo has no OIS, and the idea is that your hand always shakes a bit. This way you get minimally different pictures that are then put together into one high-res picture. The problem will probably be that the shaking is sometimes worse, and in low light especially it will produce bad images.

I think the Apple idea is that the OIS does two things:
1. it counters the shaking of the hand (as every OIS does)
2. it moves the sensor and/or optics (like the natural hand shake on the Oppo, but in a calculated way)

If I'm right, this would mean the Apple approach would work with the phone on a tripod or sitting somewhere without any movement, while in theory the Oppo approach shouldn't work at all in that case.

EDIT: AFAIK the Nokia approach is the other way round. They have huge sensors (which is awesome, because it's probably the easiest way to improve image quality) with a huge number of megapixels (40MP on a 2/3" sensor with 1.1µm pixels, I think). The images are then downsampled.
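To make the sub-pixel idea above concrete, here's a minimal numpy sketch of my own (a toy illustration, not anything from the patent): four exposures of the same scene, each offset by half a sensor pixel, get interleaved into one image with twice the resolution in each direction.

```python
import numpy as np

def capture(scene, dy, dx, factor=2):
    """Simulate one low-res exposure of `scene`, shifted by (dy, dx)
    high-res pixels (= half a sensor pixel here), then downsampled."""
    shifted = np.roll(scene, shift=(-dy, -dx), axis=(0, 1))
    h, w = shifted.shape
    # Box-filter downsample: each sensor pixel averages a factor x factor
    # block of high-res samples of the scene.
    return shifted.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical scene at 2x the sensor resolution.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))

# Four exposures, each offset by half a sensor pixel - this is the part a
# controlled OIS shift (or, on the Oppo, random hand shake) would provide.
frames = {(dy, dx): capture(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}

# Interleave the low-res frames: each one fills in the sub-pixel
# positions that its particular shift exposed.
estimate = np.zeros_like(scene)
for (dy, dx), frame in frames.items():
    estimate[dy::2, dx::2] = frame

print(frames[(0, 0)].shape, estimate.shape)  # (64, 64) (128, 128)
```

Real pipelines would of course align and deblur the frames properly; the point here is just that shifted samples carry genuinely new information, which identical samples don't.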
 
Honestly, does anybody think that resolution is the problem with cell phone cameras?

Noise is the problem! When you zoom into a picture, noise becomes a problem long before the pixel size does. Taking multiple pictures in the same amount of time reduces the exposure time per shot (just like shrinking pixels does) and therefore increases noise.

This is no silver bullet.
 
Honestly, does anybody think that resolution is the problem with cell phone cameras?

Noise is the problem! When you zoom into a picture, noise becomes a problem long before the pixel size does. Taking multiple pictures in the same amount of time reduces the exposure time per shot (just like shrinking pixels does) and therefore increases noise.

This is no silver bullet.

I think the idea behind this is that they could use bigger pixels (as they already did with the 5s) and still get high-res pictures. I think this technology would probably benefit HTC's UltraPixel approach the most.

Anyway: OIS, a bigger image sensor, and better optics would be welcome - but I don't think the "always thinner!" approach is compatible with that :/
 
Mr Ahn "I read on Macrumours they are going to outdo us with lower megapixel camera. Don't make me send you lackies up North. Copy this NOW."

"Uh, sir, this is the telcon where you were telling the Judge you don't copy Apple"

Mr. Ahn, "How DARE you accuse me of saying to copy Apple! We have Patents in the Legal Field as well. I'm sure you will not be comfortable through any alleged litigation."

*click*

Mr. Ahn, "Hello? Hello?"
 
Get the basics sorted first

This is all well and good, but get the basics sorted first - like stopping dust from getting into the lens!
 
Jumping Spiders in Tech Form

Cool, this is actually similar to how jumping spiders work: by moving their retinas they can generate extremely accurate visual representations of their surroundings.
http://www.evolutionswitness.com/?p=368

Glad that we're creating electronic technology inspired by evolutionary biological technology. :)
 
Cool, this is actually similar to how jumping spiders work: by moving their retinas they can generate extremely accurate visual representations of their surroundings.
http://www.evolutionswitness.com/?p=368

Glad that we're creating electronic technology inspired by evolutionary biological technology. :)

That's not similar to this technology: the jumping spider has a high-resolution eye with a very narrow field. Their "eye movement" is required to scan the environment and build a broader image from many very narrow images.

Basically, the jumping spider's eye works like the "panorama photo" feature: you scan the panorama with the iPhone, which snaps multiple "narrow" photos and stitches them together into a broader one. The photo is broader, but the resolution is no higher than that of the "narrow" originals.

That's actually how even our "human" eyes work: we have only a very narrow "high resolution" area at the center of the retina, and we "scan" the environment, patching together a "broad" high-resolution perception.

The technology mentioned in the article is like snapping multiple "narrow" photos without "scanning" the environment, and building another "narrow" photo with much higher resolution. The photo will not be "broader" than the originals, but it will be more detailed.
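A quick way to see that distinction in code - a toy numpy sketch of my own with made-up arrays, not anything from the patent: stitching widens the frame, while interleaving densifies it.

```python
import numpy as np

# Hypothetical "narrow" photos, 4x6 pixels each.
left  = np.zeros((4, 6))
right = np.ones((4, 6))

# Spider-eye / panorama style: scan sideways, stitch side by side.
# Wider field of view, same pixel density.
panorama = np.hstack([left, right])       # shape (4, 12)

# Super-resolution style: two frames of the SAME view, the second one
# shifted by half a pixel, interleaved onto a denser grid.
frame_a = np.zeros((4, 6))
frame_b = np.ones((4, 6))                 # stand-in for the shifted exposure
detailed = np.empty((4, 12))
detailed[:, 0::2] = frame_a
detailed[:, 1::2] = frame_b               # same view, double the sampling

print(panorama.shape, detailed.shape)     # both (4, 12), very different meaning
```

Both results have the same pixel count, but the panorama covers twice the scene at the old detail level, while the interleaved image covers the old scene at twice the detail.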
 
The Oppo has no OIS, and the idea is that your hand always shakes a bit. This way you get minimally different pictures that are then put together into one high-res picture. The problem will probably be that the shaking is sometimes worse, and in low light especially it will produce bad images.

I think the Apple idea is that the OIS does two things:
1. it counters the shaking of the hand (as every OIS does)
2. it moves the sensor and/or optics (like the natural hand shake on the Oppo, but in a calculated way)

If I'm right, this would mean the Apple approach would work with the phone on a tripod or sitting somewhere without any movement, while in theory the Oppo approach shouldn't work at all in that case.

EDIT: AFAIK the Nokia approach is the other way round. They have huge sensors (which is awesome, because it's probably the easiest way to improve image quality) with a huge number of megapixels (40MP on a 2/3" sensor with 1.1µm pixels, I think). The images are then downsampled.

That's pretty much it

The Oppo compensates for the lack of OIS by taking 10 pictures, picking the best 4, then using computational photography to stitch them together.

Apple's patent uses OIS and takes only 4 shots, then computational photography to stitch.

The 1020 has the full 40MP, so it doesn't need to stitch anything.

And all of them except the Nokia 1020 suck in low light, especially when flash is needed. Pretty much any phone that uses an LED flash is horrible, because the pulse is too long, guarantees motion blur, and can only travel a meter and a half before dying.
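For illustration, here's one plausible way to do the "shoot 10, keep the sharpest 4" selection step - my guess at the idea, not Oppo's actual pipeline - scoring frames by the variance of their Laplacian, a common blur metric:

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Variance of the Laplacian - a common focus/blur score:
    blurry frames have weak edges and therefore low variance."""
    return laplace(frame.astype(float)).var()

def pick_sharpest(burst, keep=4):
    """Rank burst frames by sharpness and keep the best `keep`."""
    return sorted(burst, key=sharpness, reverse=True)[:keep]

# Hypothetical burst: 10 random stand-ins for real exposures.
rng = np.random.default_rng(1)
burst = [rng.random((480, 640)) for _ in range(10)]

best_four = pick_sharpest(burst)   # these would then be merged/stitched
```

The appeal of selecting before merging is that one badly shaken frame doesn't contaminate the final composite.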
 
Unfortunately, instead of significantly increasing the sensor size and the number of pixels, it seems Apple wants to go the Oppo Find 7(a) way: using more-than-questionable post-processing like this to emulate more pixels.

This, as the Oppo Find 7 and 7a have shown, doesn't really work in practice. Just check out the test images at http://www.gsmarena.com/oppo_find_7a-review-1073p8.php or at http://www.allaboutsymbian.com/features/item/19736_Camera_head_to_head_Lumia_1020.php . The former in particular shows that the artificial 50 Mpixel images are actually worse than the native 13 Mpixel ones.

While I do know Apple may still come up with an algorithm vastly superior to Oppo's, I'm still pretty skeptical. Let's not forget that, upon announcement, Apple heavily advertised the software-only image stabilizer post-processor in the iPhone 5s, which, as independent testers like DPReview have shown, simply isn't as effective as hardware solutions. (See the last section at http://connect.dpreview.com/post/7518611407/apple-iphone5s-smartphone-camera-review?page=4 )

All in all, don't expect miracles from Apple. Highly detailed images are possible in only one way: by putting a 40+ Mpixel, large (at least 1/1.5", as in the Nokia Lumia 1020) sensor in the iPhone. That, however, would add significant thickness (bumping it to at least 10-11mm) to the iPhone and, consequently, will never happen in the "let's-get-our-phones-as-thin-as-possible" Apple world. Software / post-processing gimmicks will not work.

----------

This is basically an Oppo Find 7 with OIS

Just don't forget that the Oppo Find 7 only delivers in theory. In practice (see the reviews I've linked above), its 50 Mpixel mode is useless and in no way recommended.

----------

Really cool idea.

No, it isn't. Not even Apple can beat the laws of physics. And, knowing how bad their most-advertised multiframe "stabilization" is (see the DPReview link above), I'm pretty sure they won't be able to come up with something really cool in the future either.

It simply isn't possible to properly interpolate ("guess") missing pixels. There's only one way of providing significantly more detailed images: using significantly higher-megapixel sensors (and quality lenses, of course); that is, going the Nokia 808 / 1020 way. But that would mean significantly thicker and heavier phones. Software post-processing (which is basically what this is all about) won't help.

----------

What makes Apples tech different than what's already implemented in Nokia and Sony phones? This sounds exactly like what they do.

Nope, this ("let's interpolate the input of missing pixels by interpolating, based on the stabilizer's data") has nothing to do with

- simply stabilizing optically (Lumia 1020, 920, 925, 2013 HTC One's, LG G2)

- delivering a handset with a truly 40+ Mpixel sensor (Lumia 1020, Nokia 808).

Currently, only the Oppo Find 7 and 7a do what this is all about - and it just doesn't deliver. Again, interpolation rarely works - or, in the Oppo's case, not at all.

----------

I guess Apple has invented photography. Good job.

Probably the trailing /sarcasm is missing from your comment? ;)

----------

please link us to specific cameras that had this exact feature (composite photo-stitching using OIS and an onboard processor).

Oppo Find 7a. Under-delivering.

----------

When Apple implements fingerprint tech in the iPhone (not the first mobile phone with this tech) and Samsung comes out with it, every Apple fanboy calls Samsung a copycat. Let's see if those same people call Apple a copycat (Nokia made the first mobile phone with OIS) when Apple uses OIS on the iPhone. Double standard much?

You're wrong - Nokia "only" implemented (proper) OIS in many of their phones. "Simple" OIS has nothing to do with pixel interpolation - which is what this article (and the Oppo Find 7(a)'s useless 50 Mpixel interpolated mode) is all about.

----------

This doesn't work for anything that moves. Just like image stabilization.

Exactly. This is one of the major problems with the tech.

Nevertheless, the interpolation in the Oppo Find 7a doesn't work with static objects either.

----------

Honestly, does anybody think that resolution is the problem with cell phone cameras?

The higher, the better, assuming the individual pixels don't get too small (which requires bumping up the sensor size - hence the huge, 1/1.2" sensor in the 808). Ever seen the actual, pixel-level detail of a Nokia 808 image?

Taking multiple pictures in the same amount of time reduces the exposure time per shot (just like shrinking pixels does) and therefore increases noise.

Wrong. Ever heard of temporal noise suppression? It has been used very widely in many, many Sony and Canon cameras for 4-5 years (in JPEG mode only, and only with static objects).
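The math behind temporal noise suppression is easy to demo: averaging N frames of a static scene cuts random noise by roughly sqrt(N). A minimal sketch with synthetic data (obviously not any camera's real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)   # static, flat gray test scene
sigma = 0.10                       # per-frame noise level (made-up)

def noisy_frame():
    """One short exposure: the scene plus independent Gaussian noise."""
    return scene + rng.normal(0.0, sigma, scene.shape)

single  = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(round(single.std(), 3))    # ~0.100
print(round(stacked.std(), 3))   # ~0.025 - noise down by sqrt(16) = 4x
```

This is why splitting one exposure into several shorter ones doesn't have to cost you noise-wise: each frame is noisier, but stacking wins most of it back (for static subjects, anyway).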

----------

What kind of person comes to MacRumors to post that?

A real "Apple enthusiast" maybe? ;)
 
This is as dumb a post as saying "Future computers may be faster". No duh they'll improve. They have to - Nokia already has a 41MP OIS camera in a phone, while the iPhone is saddled with an 8MP non-OIS camera or something.
 
Any luck they'll be able to get my camera to take a picture at a concert without the lead singer looking like he has no face?
 
While I do know Apple may still come up with an algorithm vastly superior to that of Oppo, I'm still pretty skeptical.

As I already explained in this post, Apple's approach is not new, and it's different from Oppo's. In Oppo's case the camera "simply" takes many photos and "merges" the 4 best. In the Hasselblad and Apple approach, the camera takes multiple photos "shifted" by a controlled distance with sub-pixel precision. This actually allows the sensor to collect more information and to build a higher-resolution image.
 
Oppo Find 7a.

except that's an unreleased product (May 29 in China), and this patent was filed years earlier, in 2012.

so either they in fact do things differently, or Oppo will have a problem on its hands :)

----------

Any luck they'll be able to get my camera to take a picture at a concert without the lead singer looking like he has no face?

low light and motion create blur. fast lenses can help bring in more light, but of course that requires bigger glass, typically not seen in cellphone form factors.

----------

This is as dumb a post as saying "Future computers may be faster". No duh they'll improve. They have to - Nokia already has a 41MP OIS camera in a phone, while the iPhone is saddled with an 8MP non-OIS camera or something.

sounds like the Nokia 808 oversamples at 41MP and then in the end delivers a 5MP image?

http://www.cnet.com/news/the-secret-behind-nokias-41-megapixel-camera-phone/

by itself tho, comparing MPs and saying the iPhone is poorer because it's "only" 8MP compared to another phone with a higher MP count (ex: 12) isn't a good metric. a higher number does not equate to "better pictures", and can in fact mean crummier pictures if the manufacturer simply crams more & smaller picture elements into the same-sized sensor, which creates noise. you want bigger, light-absorbing pixels. the size & quality of the pixels is more important than the number of pixels. old news. a bigger sensor is good too.
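To put numbers on that, here's a back-of-the-envelope pixel-pitch calculation. The sensor dimensions are approximate published figures, so treat the results as rough:

```python
import math

# Approximate active-area width/height in mm, and pixel count.
sensors = {
    'iPhone 5s (1/3", 8 MP)':     (4.80, 3.60, 8e6),
    'Lumia 1020 (1/1.5", 41 MP)': (8.80, 6.60, 41e6),
    'Nokia 808 (1/1.2", 41 MP)':  (10.67, 8.00, 41e6),
}

for name, (w, h, mp) in sensors.items():
    # Pixel pitch ~ sqrt(sensor area / pixel count), converted mm -> um.
    pitch_um = math.sqrt(w * h / mp) * 1000.0
    print(f"{name}: ~{pitch_um:.2f} um pixels")

# Rough result: ~1.5 um, ~1.2 um, ~1.4 um respectively. The 808 keeps its
# 41 MP pixels nearly as big as the iPhone's 8 MP ones only because its
# sensor is about 5x larger in area - hence the thickness penalty.
```

Same formula, different conclusion for a 20 MP sensor squeezed into a small 1/2.3"-class package: the pitch drops well below a micron-and-a-half, and noise goes up accordingly.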
 
Any luck they'll be able to get my camera to take a picture at a concert without the lead singer looking like he has no face?

If you mean that the faces are over-exposed: the iPhone doesn't measure exposure with a spot-metering approach but uses center-area metering, which incorporates a lot of the dark background. This is why shots end up over-exposed, pushing faces and other lit, bright surfaces well over the saturation limit (a rough sketch of the two metering modes follows after the list below).

There are several solutions to this:

- if you have iOS 7, just tap-and-hold a bright surface on the preview screen, preferably at the same distance (so that your faces won't be out of focus). The exposure (and, at the same time, the focus) will be set accordingly; hopefully the exposure will be turned down.

- if you don't have iOS 7 and/or want separate focus and exposure settings,

a, use a camera client capable of exposure / focus locking - almost all App Store camera apps can do this. I've published tons of articles on them here in the MacRumors forums; see for example https://forums.macrumors.com/threads/1621351/ (section "2.1.1.2 Third-party apps and separate focus / exposure indicators")

b, if you jailbreak, use CameraTweak. I recommend this the most, as you can use the standard Camera interface with its built-in goodies like sweep pano and (semi-)HDR - plus additional controls otherwise only available in third-party clients. See https://forums.macrumors.com/threads/1708305/ for more info.
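Here's the promised sketch of the two metering modes - a toy numpy version of my own, not Apple's actual algorithm. A small bright face on a dark stage meters very differently under spot vs. center-weighted averaging:

```python
import numpy as np

def spot_meter(luma, cy, cx, radius=10):
    """Mean luminance in a small window around the tapped point."""
    return luma[cy - radius:cy + radius, cx - radius:cx + radius].mean()

def center_weighted_meter(luma):
    """Mean luminance with a Gaussian weight peaking at the frame centre."""
    h, w = luma.shape
    y, x = np.mgrid[0:h, 0:w]
    weight = np.exp(-(((y - h / 2) / (h / 3)) ** 2
                      + ((x - w / 2) / (w / 3)) ** 2))
    return float((luma * weight).sum() / weight.sum())

# Hypothetical concert frame: dark stage, one brightly lit face.
frame = np.full((480, 640), 0.05)
frame[100:140, 280:320] = 0.90

print(spot_meter(frame, 120, 300))    # ~0.90: exposes for the face
print(center_weighted_meter(frame))   # ~0.06: mostly dark -> overexposure
```

The center-weighted reading says "dark scene, raise the exposure", which is exactly what blows out the face; the spot reading on the face itself would turn the exposure down.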

----------

except that's an unreleased product (May 29 in China), and this patent was filed years earlier, in 2012.

so either they in fact do things differently, or Oppo will have a problem on its hands :)

Of course I know it's a different technology. I just wanted to point out that, in the smartphone world, somebody has already implemented something similar in their handsets - and it just didn't deliver.

While I'm certainly aware that Apple's controlled sub-pixel shifting tech is inherently superior in theory, the practical results remain to be seen. Given that Apple's last "super-duper multi-frame EIS" tech in the 5s didn't really deliver and is more of a gimmick, I don't have high hopes.

low light and motion create blur. fast lenses can help bring in more light, but of course that requires bigger glass, typically not seen in cellphone form factors.

I think he was referring to overexposed (and, consequently, burnt-out) faces - a very common problem when shooting people on stage. (This is why many camcorders / cameras have always had "stage" modes, which tell the camera to seriously turn down the exposure and/or set it for the brightest surfaces in the frame.)

----------

by itself tho, comparing MPs and saying the iPhone is poorer because it's "only" 8MP compared to another phone with a higher MP count (ex: 12) isn't a good metric. a higher number does not equate to "better pictures", and can in fact mean crummier pictures if the manufacturer simply crams more & smaller picture elements into the same-sized sensor, which creates noise. you want bigger, light-absorbing pixels. the size & quality of the pixels is more important than the number of pixels. old news. a bigger sensor is good too.

Yup - when increasing the number of pixels, decent manufacturers need to increase the sensor size too, so that the individual pixels don't get smaller. Too bad this results in a major thickness increase (18mm with the 1/1.2" Nokia 808 and 11mm with the 1/1.5" Nokia Lumia 1020); this is why most manufacturers don't really increase the sensor size.

This is why, for example, the 20 Mpixel Sony Z1 produces such awful images.
 
Cool, this is actually similar to how jumping spiders work: by moving their retinas they can generate extremely accurate visual representations of their surroundings.
http://www.evolutionswitness.com/?p=368

Glad that we're creating electronic technology inspired by evolutionary biological technology. :)
If that's an evolutionary advancement, why don't we have it?
 
sounds like the Nokia 808 oversamples at 41MP and then in the end delivers a 5MP image?

http://www.cnet.com/news/the-secret-behind-nokias-41-megapixel-camera-phone/
That's a dumb word for it; I don't know why they don't use a better term. It combines actual pixels into a smaller number of larger ones for various output sizes; it sounds like 5MP is the default. To me, "oversample" means doing something special in the first process (like oversampling in CD decoding), but receiving standard data from the actual photosites isn't "over" in my book.
 
LOL. So if someone copies Apple, it's a blatant ripoff. If Apple copies someone else, it's 'done right'. Good logic.

Fanboys fail to see that big companies copying each other and improving on each other's tech is the best thing we consumers can ask for, as long as it's not a pure copy of a valid patent that the other company has spent millions developing.

And no, I don't consider stuff like "slide to unlock" to be valid patents, but sadly the US patent system does.
 
That's a dumb word for it; I don't know why they don't use a better term. It combines actual pixels into a smaller number of larger ones for various output sizes; it sounds like 5MP is the default. To me, "oversample" means doing something special in the first process (like oversampling in CD decoding), but receiving standard data from the actual photosites isn't "over" in my book.

Well, actually, "oversampling" is the right term in this case too. As with digital audio oversampling, digital image oversampling means the target multimedia content will be of higher quality (here, more detailed) than without oversampling.
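For what it's worth, the binning/oversampling step itself is simple to sketch - a toy numpy version under my own assumptions, not Nokia's actual pipeline. Combining blocks of noisy photosites into one output pixel cuts the noise by the square root of the block size:

```python
import numpy as np

def bin_pixels(raw, factor):
    """Average factor x factor blocks of photosites into one output pixel -
    roughly what PureView-style oversampling/binning amounts to."""
    h = (raw.shape[0] // factor) * factor
    w = (raw.shape[1] // factor) * factor
    raw = raw[:h, :w]   # crop so the grid divides evenly
    return raw.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Tiny stand-in for a noisy 41 MP capture: flat gray plus sensor noise.
raw = 0.5 + rng.normal(0.0, 0.10, (512, 512))

out = bin_pixels(raw, 3)     # ~41 MP -> ~5 MP is roughly 3x3 binning
print(round(raw.std(), 3))   # ~0.100
print(round(out.std(), 3))   # ~0.033: down by sqrt(9) = 3x
```

So whatever you call it, the point of the 41MP-to-5MP step is exactly this noise (and detail-per-pixel) win, not the headline megapixel count.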
 
I don't know anything about cameras, so I have a question: if this relies on taking several pictures rapidly, obviously each one has a short exposure time. There's a downside to that, isn't there?
 
I don't know anything about cameras, so I have a question: if this relies on taking several pictures rapidly, obviously each one has a short exposure time. There's a downside to that, isn't there?

In good lighting, where the noise reduction wouldn't smear away all the detail, there is a point in trying to shoot as detailed an image as possible. There (and only there!), this sensor-shifting method, if and when implemented properly(!), could work.

That is, the high shutter speed isn't a problem - unless, again, you expect to take detailed shots in low light, which is in itself contradictory, given that Apple applies strong noise reduction above ISO 400.
 