To the people saying Apple going to 1280x960 or 1920x1440 isn't a good idea, I found this article particularly insightful...
Some wager that the upcoming iPad 2 will pixel-double both axes, similar to what the iPhone 4 did relative to its predecessor, while others believe it will keep the resolution of the current generation.
Doubling both axes is a formidable technical challenge and would require a unique, likely expensive display. Continuing with the current resolution would represent a significant competitive disadvantage: as people acclimate to high-density smartphones such as the iPhone 4, the iPad's low density is really starting to stand out.
Few believe it will do anything in between. It won't, the common wisdom goes, go to, say, 1920 x 1440 or 1280 x 960, or any other fractional improvement short of an outright doubling or quadrupling. The logic is that pixel scaling issues eliminate the possibility of such a half measure.
This harkens back to discussions that occurred over 20 years ago.
It should be an embarrassment that such a discussion is occurring in 2011.
In the TiPb article linked above, the author leads off with a slur towards Android, saying:

"Either iPad 2 will have a standard 1024×768 display or a doubled 2048×1536 Retina Display, or developers and users will be in for the type of frustration usually ascribed to Android."
That makes for an odd, if not outright ignorant, statement: I can't recall ever reading anyone complain about Android's density-independent pixels, or its awareness and accommodation of a wide variety of display profiles. That's a problem it has solved very well, and a large ecosystem of display sizes and resolutions exists in remarkable harmony.
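The idea behind the density-independent pixel is simple enough to capture in a few lines: layouts are specified in dp, and the framework converts to physical pixels using the screen's density, with 160 dpi as the baseline. The little function below is purely illustrative (it isn't Android's actual API, and it's written in Swift only to keep this post's examples in one language), but the arithmetic is the whole trick:

    // 1 dp == 1 px on a 160 dpi ("mdpi") screen; everything scales from there.
    func pixels(fromDp dp: Double, screenDpi: Double) -> Double {
        return dp * (screenDpi / 160.0)
    }

    // A 48 dp touch target keeps roughly the same physical size everywhere:
    print(pixels(fromDp: 48, screenDpi: 160))   // 48.0 px on a medium-density screen
    print(pixels(fromDp: 48, screenDpi: 240))   // 72.0 px on a high-density screen
    print(pixels(fromDp: 48, screenDpi: 320))   // 96.0 px on an extra-high-density screen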
Consumers like being able to choose between devices from 3″ up to 15″+ with a wide variety of densities. Choice is good.
Because of course the DPI issue has long been solved. Otherwise you would be lamenting that your 72 dpi word processor isn't compatible with your 300 dpi printer: everything prints out all tiny-like. Is that the case?
Vector fonts with pixel-independent abstractions have been around for a long time (in TrueType and PostScript form), with Apple as one of the primary inventors. Most GUI frameworks, including iOS, have the ability to scale UI elements to virtually any resolution and pixel density with ease.
That is an ancient problem, long solved.
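On iOS the same separation exists between points and pixels: applications lay out in points, and the system maps those to pixels through the screen's scale factor. The snippet below is a minimal sketch in modern Swift (the 2011-era equivalent would have been Objective-C); nothing in it cares whether the scale factor is 1.0, 2.0, or something in between.

    import UIKit

    // Layout is done in points; the screen's scale factor decides how many
    // pixels sit behind each point. A 2x Retina panel reports scale == 2.0;
    // a hypothetical 1.5x panel would simply report 1.5.
    let scale = UIScreen.main.scale

    // A 100 x 44 point button occupies the same logical space on every
    // device; only the backing pixel count changes with density.
    let button = UIButton(frame: CGRect(x: 20, y: 20, width: 100, height: 44))
    let backingPixels = CGSize(width: 100 * scale, height: 44 * scale)
    print("scale \(scale): backing store is \(backingPixels)")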
But what about icons? What about bitmap graphic artifacts?
In an ideal world icons would come in vector graphic form. That isn't the case on Android (the platform doesn't support SVG, including in the browser, which is a huge deficiency), but it is still shocking that Apple, which usually takes the lead on such innovations, doesn't use them for iOS, as was widely speculated to be a given before the iPhone OS was first released.
With a vector graphic the rendered image is always perfect for the target, ideally with hints that suppress decorations at very low sizes.
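To make that concrete, here is a rough sketch of rendering a trivially simple vector "icon" (just a rounded-rectangle path standing in for real artwork) at whatever density the target happens to use. UIGraphicsImageRenderer is a modern stand-in for what would have been UIGraphicsBeginImageContextWithOptions in 2011; the point is that the path is resolution-independent, so 1x, 1.5x, and 2x outputs are all equally crisp.

    import UIKit

    // Render a resolution-independent path at an arbitrary scale factor.
    func renderIcon(pointSize: CGFloat, scale: CGFloat) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale                          // 1.0, 1.5, 2.0 -- anything works
        let size = CGSize(width: pointSize, height: pointSize)
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            let rect = CGRect(origin: .zero, size: size).insetBy(dx: 2, dy: 2)
            let path = UIBezierPath(roundedRect: rect, cornerRadius: pointSize / 5)
            UIColor.blue.setFill()
            path.fill()                               // rasterized at the exact target density
        }
    }

    let at1x   = renderIcon(pointSize: 57, scale: 1.0)   // 57 x 57 pixels
    let at1_5x = renderIcon(pointSize: 57, scale: 1.5)   // fractional scale, still crisp
    let at2x   = renderIcon(pointSize: 57, scale: 2.0)   // 114 x 114 pixels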
Even with bitmap graphics, however, while it's easy to contrive ridiculous examples demonstrating the worst of scaling, the reality is that text should always be generated by the UI from vector fonts, perfect for the target, and bitmap graphics are usually just supplementary decorations, for which scaling up or down by partial multiples is often perfectly adequate.
For your consideration, below are some iOS icons (owned by Apple, used here under fair use) at their original pixel size, and then scaled to 125% and 150%. Scaling was done using Sinc (Lanczos3), which is a good algorithm to use when scaling up and you want to maintain fine detail.
The horrors! Just to be clear (as it's hard to imagine what the larger images would look like when shown in the same physical space), we're comparing this to simple pixel doubling, which would look like the following (cropped to avoid exceeding most readers' screen bounds).
There is no universe where a straight pixel-doubled image looks better than an interpolated image, unless you have fine detail in the image (like text) which shouldn't be in the image to begin with.
Not only do they still look great, but remember that on a denser display the physical size of each pixel would also decrease proportionally, so those marginal artifacts would be rendered completely irrelevant. Reading some of the blog entries on scaling you would think you'd end up with some sort of blob.
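For anyone who would rather reproduce the comparison than trust my screenshots, the same experiment is easy to run with Core Image, which ships a Lanczos resampler as the CILanczosScaleTransform filter. The sketch below is illustrative (the function name and the 1.25 factor are mine, matching the 125% case above); the nearest-neighbour branch stands in for straight pixel doubling.

    import CoreGraphics
    import CoreImage

    // Scale a CIImage either with Lanczos resampling or with
    // nearest-neighbour sampling (the equivalent of blocky pixel doubling).
    func scaled(_ image: CIImage, by factor: CGFloat, smooth: Bool) -> CIImage {
        if smooth {
            let lanczos = CIFilter(name: "CILanczosScaleTransform")!
            lanczos.setValue(image, forKey: kCIInputImageKey)
            lanczos.setValue(factor, forKey: kCIInputScaleKey)
            lanczos.setValue(1.0, forKey: kCIInputAspectRatioKey)
            return lanczos.outputImage!
        } else {
            // Every source pixel becomes an unfiltered block at the new size.
            return image.samplingNearest()
                        .transformed(by: CGAffineTransform(scaleX: factor, y: factor))
        }
    }

    // e.g. the 125% case from the icons above:
    // let smoothResult = scaled(icon, by: 1.25, smooth: true)
    // let blockyResult = scaled(icon, by: 1.25, smooth: false)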
Not to mention that most iPad apps would be fixed up to handle the new display shortly after the SDK was released...