Originally posted by Abstract
Um... if IBM has made a display with a higher resolution than the human eye, then humans would never be able to see this "improved" resolution. Any image that appears on such a display will only look as good as the human eye can perceive it. Basically, any image, whether or not its resolution is higher than that of the human eye, can only be seen as well as the human eye is capable of seeing it. Bad eyesight = poor display image, no matter what the resolution of the display happens to be.
Same with the audio capabilities.
Also, we will always need more processing power. We just don't know it yet.
Hmm, actually, not true. Assume, for one instant, that all human eyes are exactly the same (your argument fails immediately anyway once you take into account that the resolving power of the human retina varies widely from one individual to another).
You see, the human eye does not use a square- or rectangular-pixel "grid" of photosensors to capture its view of the world around it. The rods and cones of the retina will never align perfectly with the pixel boundaries of the display, so what any one rod or cone sees will be either entirely one generated pixel or a blend of two or more generated pixels. Now suppose you want the eye to perceive a "black, white, black" picket-fence pattern at a spatial frequency equal to the rods' linear density on the retina. Going down just to the resolution of the photoreceptors (roughly 120 million rods, plus the cones packed most densely at the center, the "max resolution" area of the eye) is insufficient: each misaligned rod averages part of a black stripe with part of a white one, and the result is "gray, gray, gray". One would in fact have to display the picket fence at a much higher resolution (meaning the "black" stripes, or the "white" stripes, can be drawn significantly thinner than a rod) to give the eye enough data to see that this really is "black, white, black". In the real world, the eye picks up the cue of a thin black line because the rods covering that line read slightly "grayer" than the surrounding all-white rods. This is a somewhat contrived example, but I think you can see what I'm getting at.
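If it helps, here's a quick one-dimensional back-of-the-envelope simulation of that picket-fence argument. It's only a sketch in Python/NumPy; the numbers (100 sub-samples per rod, an 8-rod "retina", a half-rod misalignment) and the function names are all made up for illustration, not a model of a real eye:

import numpy as np

# Minimal 1-D sketch: the "display" is an array of fine sub-samples, each
# "rod" averages whatever light lands on it, and the rods are deliberately
# misaligned with the display's pixel grid.

SUB = 100          # simulation sub-samples per rod width (arbitrary)
N_RODS = 8

def rod_responses(display, offset_frac=0.5):
    """Average over each rod's receptive field, with rod boundaries
    shifted by offset_frac of a rod width relative to the pixels."""
    shifted = np.roll(display, int(offset_frac * SUB))
    return shifted.reshape(N_RODS, SUB).mean(axis=1).round(2)

# (a) Black/white fence at exactly the rod pitch, drawn on a display whose
#     pixel pitch equals the rod pitch: every rod straddles a boundary and
#     reads about 0.5, i.e. "gray, gray, gray".
fence = np.repeat([0.0, 1.0] * (N_RODS // 2), SUB)
print(rod_responses(fence))        # [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]

# (b) Same fence pitch, but the black stripes are drawn only 1/10 of a rod
#     wide, which takes a display roughly 10x finer than the rods.  Each
#     rod now reads about 0.9, clearly distinguishable from 0.5: the
#     "slightly grayer" cue that tells the eye the thin dark lines exist.
thin = np.ones(N_RODS * SUB)
for k in range(N_RODS):
    thin[k * SUB : k * SUB + SUB // 10] = 0.0
print(rod_responses(thin))         # [0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9]

The point being: a display locked to the rod pitch simply cannot express case (b) at all.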
Another, more tangible way of looking at it is the analogy of a scanner versus a printer. Imagine you have a scanner set to scan at precisely 100 dpi (and the scanner engine really scans only at this resolution, rather than scanning higher and then applying logic to resolve a cleaner 100 dpi image), and a printer that prints at exactly 100 dpi (true gray tones instead of dithering patterns). You print out a highly detailed line drawing at 100 dpi, then scan that printout back in. Is it still precisely black and white, or are there numerous "gray" areas? For the most part you will end up with a slightly blurred, grayed drawing, with few if any true-black lines. Print out the scanned image and scan it again: even more blurring. Every generation introduces more blurring, highlighting the fact that no generation is an exact depiction of the one before it.
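The same thing in miniature (again just a sketch with invented numbers: both "devices" run at the same pitch, and I've simply assumed the scanner grid sits half a dot off from the printed dots):

import numpy as np

# 1-D model of the print/scan loop: printer and scanner both work at the
# same pitch, but the scanner's sample grid is offset by half a dot, so
# every scanned cell averages two neighbouring printed dots.

def scan(printed, offset=0.5):
    """Each scanner cell reads a weighted average of the printed dot it
    mostly covers and the next one over."""
    return (1 - offset) * printed + offset * np.roll(printed, -1)

page = np.ones(20)
page[10] = 0.0                     # one crisp black line on a white page

for gen in range(1, 4):
    page = scan(page)              # print the scan, scan the print, ...
    print(f"gen {gen}:", page[8:14].round(2))
# gen 1: [1.  0.5 0.5 1.  1.  1. ]
# ...each pass smears the line wider and lightens it a little more;
# no generation is an exact copy of the previous one.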
Now, with that scanner, you have no way of combating the blurring. The eye, however, sees in "grains" of varying sizes and shapes, and can take multiple "pictures" of an object from slightly shifted viewpoints, and so it can distinguish between "gray" and "black thinner than my resolution", much as a scanner that scans at 1200 dpi and then downsamples, using path-finding logic, can come up with an almost identical black-and-white image of the original.
With the human eye, one perceives exactly the displayed image, pixel grid and all, when the display pixels are much larger than the rods and cones at the center of the retina (i.e., many rods/cones land on each pixel). The eye can be fooled into not seeing pixel boundaries at all when the image is displayed at a resolution significantly greater than that of the rods and cones at the center of the retina. The eye "wants" to be fooled: the visual cortex tries to make what it sees (square pixels with thin dark lines around them) match what millions of years of evolution tell it it should be seeing (a continuous image), so even a fairly low-resolution image will be "resolved" into a photo-like image by the mind. That said, it is better still if one doesn't have to rely on one's own eyes "fooling" him, and can instead "see" the image without effort.
So, to the point: even an image at nominally higher-than-retinal "resolution" can still register on the retina as a synthesized image, forcing the cortex to apply its extraordinary processing capabilities before the brain sees a "real" image. If one is to convey the highest possible density of information to the retina, the display array must have a resolution significantly greater than that of the target area of the retina. Defeating the post-processing done by the cortex is another matter entirely, of course.