nagromme said:
Interesting. I suppose for lots of little "cameras" to assemble one big image in a clean way, they'd all have to be converged on a single focal distance though? Sounds like it would lack flexibility, but still intriguing!
A trivial implementation, where each camera records one segment of the overall image and the segments are combined mosaic-style, would require this.
But if they all aim in the same direction (perpendicular to the surface), you'll get lots of overlapping images, all capturing the object from different perspectives. The amount of DSP and software work to put this together would be massive, but it should work.
If you've ever used Photoshop to stitch images together into a panorama, you've done this on a small scale. While you'd never want to manually stitch together thousands of images, there's no problem with an automated procedure doing this, especially if all the images are taken by an array of sensors with precisely known dimensions and spacings.
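Just to make the geometry concrete, here's a rough Python sketch of the "dumb" version of that stitch, assuming each lenslet hands you a small grayscale tile at a known offset and you simply average wherever the tiles overlap. The tile size, pitch, and grid dimensions are all made up for illustration:

import numpy as np

# Hypothetical numbers: each lenslet yields a 64x64 tile, and neighbouring
# lenslets see the scene shifted by 48 px, so adjacent tiles overlap by 16 px.
TILE = 64          # tile size in pixels (assumption)
PITCH = 48         # offset between neighbouring tiles in pixels (assumption)
GRID = (40, 30)    # lenslets across and down the panel (assumption)

def stitch(tiles):
    """Blend a grid of overlapping tiles into one image by averaging
    every output pixel over all the tiles that cover it.
    `tiles` maps (col, row) -> 64x64 numpy array."""
    h = PITCH * (GRID[1] - 1) + TILE
    w = PITCH * (GRID[0] - 1) + TILE
    acc = np.zeros((h, w), dtype=np.float64)
    cover = np.zeros((h, w), dtype=np.float64)
    for (col, row), tile in tiles.items():
        y, x = row * PITCH, col * PITCH
        acc[y:y+TILE, x:x+TILE] += tile
        cover[y:y+TILE, x:x+TILE] += 1.0
    return acc / np.maximum(cover, 1.0)

The real thing would obviously need per-lenslet calibration and parallax correction on top of this, but the bookkeeping itself is trivial once the spacings are known.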
What's more, since there are different perspectives involved, it might even be possible to construct a 3D image of what it's aiming at. After all, two lenses (your eyes) spaced a few inches apart give you depth perception, so it would stand to reason that a few thousand lenses spread out over the area of a 12-30" LCD panel should be able to do the same.
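The depth part is just ordinary stereo triangulation applied to pairs of lenslets. A quick sketch of the standard pinhole relation (depth = focal length x baseline / disparity), with made-up numbers for the focal length and lenslet spacing:

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation: depth = f * B / d.
    focal_px     - focal length expressed in pixels
    baseline_m   - distance between the two lenslets, in metres
    disparity_px - how far the object shifts between the two views, in pixels
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical case: two lenslets 10 cm apart on the panel, a focal length
# of 500 px, and an object that shifts 25 px between their two views.
print(depth_from_disparity(500, 0.10, 25))   # -> 2.0 metres away

With thousands of lenslet pairs at many different baselines across the panel, you'd get a lot of redundant disparity measurements to average, which should make the depth estimate fairly robust.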
As for focusing, I don't think it's a concern. As the aperture shrinks, depth of field increases; that's why pinhole cameras render just about everything in focus. If the lenslets are small enough to squeeze between the pixels of an LCD panel, their apertures will be tiny too, so they should be able to keep just about anything in acceptably sharp focus without any focusing mechanism.
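You can put rough numbers on that with the standard hyperfocal-distance formula, H = f^2 / (N * c) + f; focusing at H keeps everything from H/2 out to infinity acceptably sharp. The lenslet specs below are pure guesses:

def hyperfocal_m(focal_mm, f_number, coc_mm=0.005):
    """Hyperfocal distance in metres: H = f^2 / (N * c) + f."""
    h_mm = focal_mm**2 / (f_number * coc_mm) + focal_mm
    return h_mm / 1000.0

# Hypothetical lenslet: 1 mm focal length at f/8, with a 0.005 mm
# circle of confusion for a tiny sensor.
H = hyperfocal_m(1.0, 8)
print(H, H / 2)   # ~0.026 m and ~0.013 m: sharp from ~1.3 cm to infinity

Even with generous assumptions, a millimetre-scale lens ends up sharp from a couple of centimetres out to infinity, so fixed-focus lenslets would be fine.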
Of course, what Apple is actually referring to is anybody's guess, but I don't see any technological reason why they couldn't do what I'm describing.