Then you probably don't know how to use a camera properly.
The iPhone 4 camera is only slightly better than a modern low-end (~$100) ultra-compact point-and-shoot camera.
That's not really his point.
Most of us don't have pockets to accommodate a bulky camera everywhere we go (even the smallest compacts are inconvenient to carry around in addition to wallet, keys, phone etc.), and the real triumph of the iPhone 4/iPhone 4S is that the included camera is good enough that the convenience of device convergence now outweighs the benefit of carrying a separate dedicated camera.
I'm sure mdelvecchio is perfectly capable of using a camera, but now that phone cameras can compete with most compacts, the decision is whether he wants to drag that expensive SLR out to the park, or to a bar etc., on the off-chance that he might want to take a photo...or whether to save the bulky dedicated camera for special occasions like holidays, weddings and baby photos, and do the vast majority of his picture-taking with his phone.
Call me sad, but I actually skimmed the Lytro CEO's Stanford dissertation last night and found something particularly intriguing. The technology relies on ray-tracing, and as such does not require the optical zoom element in order to capture the information needed for post-capture editing of focus (although zoom is included in this first model). To use his words:
"The light field sensor comprising the microlens array and the photosensor can be constructed as a completely passive unit if desired"
The camera effectively focuses on the "infinity point" at all times, negating the need for a conventional focusing mechanism. So to my mind, in these cameras the zoom is not for focusing but for determining what resides in the frame. From what I can see in the engineering diagrams for the prototype units and this first release, there is NO reason why the iPhone's existing microlens array couldn't be used, and I see no reason why the processing couldn't take place on the main CPU. From the diagrams on the site, it appears that almost half the length of the camera is taken up by the optical zoom element, while the Light Field Engine takes up most of the rest of its size. All you would have to do is incorporate the already very thin light field sensor behind the iPhone's microlens array and we're talking about this technology being viable TODAY.
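Just to make "editing focus in post" concrete, here's a rough NumPy sketch of the shift-and-add refocusing idea from the dissertation. The (U, V, X, Y) sub-aperture layout, the alpha parameter and the crude integer shifts are all my own simplifying assumptions for illustration; the actual Light Field Engine presumably does something far more sophisticated.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add (a sketch of the idea,
    not Lytro's real pipeline).

    light_field: 4D array of shape (U, V, X, Y) -- one sub-aperture
                 image per (u, v) direction sample (hypothetical layout).
    alpha:       refocus parameter; 1.0 keeps the captured focal plane,
                 other values move the virtual focal plane.
    """
    U, V, X, Y = light_field.shape
    out = np.zeros((X, Y))
    shift_scale = 1.0 - 1.0 / alpha
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its
            # position in the aperture, then accumulate.
            du = (u - (U - 1) / 2.0) * shift_scale
            dv = (v - (V - 1) / 2.0) * shift_scale
            shifted = np.roll(light_field[u, v],
                              shift=(int(round(du)), int(round(dv))),
                              axis=(0, 1))
            out += shifted
    return out / (U * V)
```

The point is simply that once the directional samples are captured, "focus" becomes a sum the software can redo at any alpha after the fact, so none of it needs moving optics.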
Where the problem lies is in the technology's dependence on directional resolution (which comes at the expense of spatial resolution). In order to maximise the effect of focus-editing in post, you need high directional resolution. Essentially, light-field photography trades final image pixels for directional samples: you give up spatial resolution to get the advertised depth-of-field effects. The dissertation talks about 4x6 being the most common photo size today and argues that anything above 2MP is essentially wasted, and all of the theory he discusses is built around a final target resolution of roughly 2MP (although this was written in 2006).
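To put rough numbers on that trade-off (these are my own illustrative assumptions, not official specs for the prototype or the shipping camera):

```python
# Total sensor pixels get split between spatial resolution (final image
# size) and directional resolution (how far you can push refocusing).
sensor_pixels = 16_000_000       # assume a 16 MP sensor
pixels_per_microlens = 14 * 14   # assume 14x14 directional samples

final_image_pixels = sensor_pixels // pixels_per_microlens
print(final_image_pixels)        # ~81,000 px, i.e. well under 0.1 MP
```

With numbers like these, hitting the ~2MP final images the dissertation targets means either far fewer directional samples or a dramatically larger sensor, which is exactly why this doesn't threaten SLRs yet.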
I don't think this tech in its current form is designed to replace SLR cameras. Photographers will not want to give up pixels to gain the flexibility of refocusing in post. I think it's more about bringing some of the freedoms and features of an SLR to a smaller device, for those point-and-shoot moments where the person taking the photo didn't quite get it right at the time. It's for snapshots rather than professional photography, and I for one like the idea of having a light-field photo as a toggled option in future Apple devices for those moments when you CAN sacrifice a little resolution for the sake of ending up with a usable final image.
Apologies for the length of the post.
