How would that be different than if iPad apps included standard and 1.25 scaled graphics and iOS "handled" the rest? It's the same thing. Two sets of graphics; the OS handles scaling of everything else.

And "everything else" is usually vector graphics. Apple's UI widgets are vector graphics or images generated in code with CoreGraphics so they're generated at the right resolution.
 
This is not exactly correct. Let's say I'm a developer (which I am) and one of my buttons has a 1 pixel wide border in the 1x (non-retina) version, which I create as a vector-based graphic in Photoshop. When I go to make the iPhone 4 version of this button at 2x, I can simply scale my vectors by 2 and my Retina display button will have a 2 pixel wide border around it as it should. However, if the scale factor is not a whole number, say 1.25, then I have to decide how to treat the border of the button. Should it become 2 pixels? Should it stay 1 pixel? Should I interpolate somehow? Photoshop can handle sub-pixel units, but getting everything to look right and line up correctly in the scaled-up version requires at least some extra tedious work. This extra pixel tweaking can really add up in an app with a lot of custom graphics.
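
To put rough numbers on it (this is just an illustration of the arithmetic, not code from any real project):

// A 1 px element scaled by a whole vs. a fractional factor
CGFloat borderWidth = 1.0;                         // 1 px in the 1x artwork
CGFloat factors[] = { 2.0, 1.25 };
for (int i = 0; i < 2; i++) {
    CGFloat scaled = borderWidth * factors[i];
    BOOL pixelExact = (scaled == floor(scaled));   // 2.0 px is clean, 1.25 px is not
    NSLog(@"%.2fx scale: border becomes %.2f px (%@)",
          factors[i], scaled, pixelExact ? @"exact" : @"round or antialias");
}

Whatever you pick for the 1.25 px case changes the look, and you get to make that call for every asset in the app.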

My first question is why are you generating your buttons in Photoshop? Do it in your app with CoreGraphics and they'll look great at any size/resolution. CoreGraphics has support for shading, masking, gradients, etc.

UI elements should be done in code, not in Photoshop, unless you actually want a real world image to be part of your element. Gradients, shadows, highlight masks, etc. can all be done in code and thus generated at the appropriate resolution when needed.
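
To give an idea, a bare-bones drawRect: is all it takes; the colors and layout here are placeholders, not anyone's actual button:

// A vertical gradient drawn in code, so it's rasterized at whatever scale the device needs.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGFloat colors[8] = { 0.35, 0.75, 0.35, 1.0,    // top
                          0.10, 0.45, 0.10, 1.0 };  // bottom
    CGGradientRef gradient = CGGradientCreateWithColorComponents(space, colors, NULL, 2);
    CGContextDrawLinearGradient(ctx, gradient,
                                CGPointMake(CGRectGetMidX(rect), CGRectGetMinY(rect)),
                                CGPointMake(CGRectGetMidX(rect), CGRectGetMaxY(rect)), 0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(space);
}

No standard file, no @2x file; the context's scale takes care of it.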
 
blah, blah, blah

EGADS!!!!! You've cracked it! You, and you alone of all the world's inhabitants (including those that Apple pays a buttload of money to figure such things out), have it all scoped out!!!!!

Now if we could only get you to focus your amazing intellect on that pesky World Peace issue!
 
It doesn't work the same way. Let me use a really bizarre but obvious example (quoted for easier reading):

It does work the same way, because if those were 1x1 pixel images as opposed to controls, you'd be faced with the same problem (regardless of how ridiculous that scenario is).

I'm not talking about pixel doubling, that's what happens on the iPad with an iPhone app and it looks terrible.

Yes, you are when you say that the resolution can only be increased by integers like with the iPhone 4 because they look terrible otherwise. Pixel doubling is what is used for non-retina iPhone apps when they are used on the iPhone 4.
 
You can write (code) with a brush or with a trowel. Brushed apps require the least amount of effort (little or none) to adapt. Apps authored with a trowel are like bricks, sturdy and fairly reliable but a pain to modify.

There are (I think) some applications that are served best as brickwork, but if you can get a nicely brushed app (if you can tell) instead, do that.
 
My first question is why are you generating your buttons in Photoshop? Do it in your app with CoreGraphics and they'll look great at any size/resolution. CoreGraphics has support for shading, masking, gradients, etc.

UI elements should be done in code, not in Photoshop, unless you actually want a real world image to be part of your element. Gradients, shadows, highlight masks, etc. can all be done in code and thus generated at the appropriate resolution when needed.

It doesn't matter what you use to make the graphics, my point about scaling still applies. That said, I guarantee you that 99% of all professional UI designers use Photoshop, and very infrequently are custom interface elements done in code. I do agree that doing them in code would be ideal in many cases, but practically speaking it is rarely done.

By the way, you owe me $50.

Hah, I knew your name sounded familiar! Check's in the mail :D
 
It does work the same way, because if those were 1x1 pixel images as opposed to controls, you'd be faced with the same problem (regardless of how ridiculous that scenario is).
No, it's the exact same scenario regardless of whether it's an image or a control. If it's an image, it has to be drawn either into the UIView or onto a UIImageView; either way you have the same issues.
Yes, you are when you say that the resolution can only be increased by integers like with the iPhone 4 because they look terrible otherwise. Pixel doubling is what is used for non-retina iPhone apps when they are used on the iPhone 4.
No, I'm not.

There isn't such a thing as a "non-retina" iPhone app; iOS 4 automatically scales apps on the iPhone 4 to make use of the retina display. It's only the images that need updating, and until they are, the app otherwise looks identical on the 3GS and the iPhone 4.
 
It does work the same way, because if those were 1x1 pixel images as opposed to controls, you'd be faced with the same problem (regardless of how ridiculous that scenario is).



Yes, you are when you say that the resolution can only be increased by integers like with the iPhone 4 because they look terrible otherwise. Pixel doubling is what is used for non-retina iPhone apps when they are used on the iPhone 4.

Thank you for contributing to this thread and correcting people Eso. There's so much misinformation posted in here that it's making my head spin.
 
You can write (code) with a brush or with a trowel. Brushed apps require the least amount of effort (little or none) to adapt. Apps authored with a trowel are like bricks, sturdy and fairly reliable but a pain to modify.

There are (I think) some applications that are served best as brickwork, but if you can get a nicely brushed app (if you can tell) instead, do that.

I'm sorry and I mean no offence, but your whole post comes off as "doing it with Photoshop is easier because I say so." I find doing it in code is easier. It takes more initial effort, yes, but afterwards, modification is quick and easy.

For example, let's say you make custom green glossy gradient buttons in Photoshop. I do it in code. It takes me longer at first, I admit.

But then I want a button that is red glossy gradient. You have to go back to your original in Photoshop, change the colour, and resave two images (regular and @2x) and add two files to your project.

I simply call myCoolButton.tintColor = [UIColor redColor]; and I'm done.

You take longer to change, and your app gets bigger because you have to store more images in the project.

Let's say you want a larger button on one of your screens, again, back to Photoshop, two more files. I simply set a new frame on my button (something you have to do anyway) and it draws perfectly.
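
Roughly speaking, the button boils down to something like this (GlossyButton and its tintColor property are my own names here, not stock UIKit):

#import <UIKit/UIKit.h>

// A code-drawn glossy button: the fill comes from a tint colour, so a new colour
// or a new frame just means a redraw, never new image files.
@interface GlossyButton : UIControl {
    UIColor *tintColor;
}
@property (nonatomic, retain) UIColor *tintColor;
@end

@implementation GlossyButton
@synthesize tintColor;

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect bounds = self.bounds;
    CGContextSetFillColorWithColor(ctx, self.tintColor.CGColor);
    CGContextFillRect(ctx, bounds);                               // base colour
    CGContextSetFillColorWithColor(ctx,
        [UIColor colorWithWhite:1.0 alpha:0.25].CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, bounds.size.width,
                                      bounds.size.height / 2.0)); // simple top-half gloss
}
@end

Change the tint (and call setNeedsDisplay) or set a new frame, and the same few lines redraw at whatever size and scale the device wants.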
 
It doesn't matter what you use to make the graphics, my point about scaling still applies. That said, I guarantee you that 99% of all professional UI designers use Photoshop, and very infrequently are custom interface elements done in code. I do agree that doing them in code would be ideal in many cases, but practically speaking it is rarely done.

I don't buy your 99% figure, but I get your point, just as you agree doing it in code is ideal. But since it's ideal, my question is, why not do it in code and reap the rewards? It's more effort up front, but easy to modify from then on.

FYI, I have made elements in an illustration app (I prefer Stone Create to Photoshop for this work), but only due to a time constraint. I then went back and did it in code.
 
I'm sorry and I mean no offence, but your whole post comes off as "doing it with Photoshop is easier because I say so." I find doing it in code is easier. It takes more initial effort, yes, but afterwards, modification is quick and easy.

Then my metaphor came out backwards. "Using a brush" was intended to mean maximizing abstraction and flexibility, the opposite of hard-coding and bit-mapped nib instance graphics. My apologies.
 
OP makes sense. Everyone that isn't understanding him does not make sense.

I always felt the 4x resolution would be a nightmare for developers trying to find images to fit it.
 
OP makes sense. Everyone that isn't understanding him does not make sense.

I always felt the 4x resolution would be a nightmare for developers trying to find images to fit it.

Noise, you mean. OP makes noise, not much else.

Sydde, the only person posting noise here is you.

I made relevant points and there is an active discussion going on. If you do not have anything more to contribute than worthless noise, the exit is down here V
 
1280x960 makes a lot of sense. I'm very surprised that HP/Palm's new tablet doesn't adopt such a resolution.
 
1280x960 makes a lot of sense. I'm very surprised that HP/Palm's new tablet doesn't adopt such a resolution.

Guys... Apple won't up the resolution because it would affect the cost too drastically for no reason. A resolution bump is pointless in more ways than one.

What isn't pointless, however, is a change in display quality / touchscreen thickness: think contrast ratio, blacker blacks, whiter whites, more colorful colors, and less power for a better-quality image. Making a thinner touchscreen would cut the price by 40% (which is currently the most cost-effective way to cut prices, right after SSD costs).

Let's think: a perfect low-cost iPad would probably consist of the following:
- Better quality display (higher contrast)
- Thinner touchscreen
- Slightly improved battery, or balanced battery with the improved hardware
- Smarter, faster dual CPU / GPU: To save battery while not in use, and be more efficient while in heavy use
- More memory, 1GB max: Any more than that really is not needed for a tablet, and will waste money
- A front facing camera only: A rear facing camera is not needed, you do not need to take pictures with a 10" device

My guess is that the specs above would keep the cost for a 16GB version very low, say about the same as the current price, or even $10 higher (which would be very impressive, given the powerful specs)

Actually, now that I think about it... Wouldn't it be so cool if Apple did, for the first time, a PPP event!!

What PPP is, is an extremely low profit margin that drives sales to go 20x what they usually would, and this means that they only charge a penny over what it takes to manufacture the device.

Think about it: Apple could sell a 64GB iPad for ONLY $309 (manufacturing cost is somewhere around the $300 mark)

That would be pretty sick xD
 
What Apple really needs is resolution independence à la WebOS and the new fragments implementation in Honeycomb.
 
You can write (code) with a brush or with a trowel. Brushed apps require the least amount of effort (little or none) to adapt. Apps authored with a trowel are like bricks, sturdy and fairly reliable but a pain to modify.

There are (I think) some applications that are served best as brickwork, but if you can get a nicely brushed app (if you can tell) instead, do that.
Oh the analogies some people come out with...you can just tell they read it somewhere else and are trying to pass it on as though they came up with the [ridiculous] concept.

iPad 2 does not need such a high resolution, fact. Will it get it? Probably, but far more important is the actual quality of the screen itself, which does not have a 1:1 relationship with resolution.
 
Pixel doubling is what is used for non-retina iPhone apps when they are used on the iPhone 4.
That's true for apps using the standard UI elements which are technically "retina apps", just lacking high-res bitmap graphics. Most games are upscaled with a bilinear filter, though.
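
If I remember right, that's because GL-backed views keep a 1.0 scale factor unless the developer opts in, so an un-updated game renders at 480x320 and gets filtered up to the 960x640 panel. The opt-in is roughly this (glView and context are placeholder names):

// Ask for the full-resolution backing store, then re-create the colour
// renderbuffer (while it's bound) so it picks up the layer's new pixel size.
if ([glView respondsToSelector:@selector(contentScaleFactor)]) {
    glView.contentScaleFactor = [[UIScreen mainScreen] scale];   // 2.0 on iPhone 4
}
[context renderbufferStorage:GL_RENDERBUFFER
                fromDrawable:(CAEAGLLayer *)glView.layer];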
 
- A front facing camera only: A rear facing camera is not needed, you do not need to take pictures with a 10" device
Actually, it could be useful. I'd much rather take a snap (or video) with my iPad and see it on that large screen than on my iPhone, and if I had both with me I'd lean towards the iPad, not only for that reason but also because I wouldn't have to transfer the image or video over to my iPad afterwards; it would already be there. Unfortunately, it looks like we'll get a cheap rear-facing camera, so in that case I wouldn't.

On another note, having a rear-facing camera opens the iPad up to all sorts of augmented reality apps, which allows a whole other category of app to be designed.

That's true for apps using the standard UI elements which are technically "retina apps", just lacking high-res bitmap graphics. Most games are upscaled with a bilinear filter, though.
Pixel doubling is the wrong term; that's what happens with an iPhone app on an iPad. When an iPhone app is drawn on an iPhone 4 it is drawn at a much higher resolution and is thus "retina", with the exception of graphics yet to be updated, which simply look the same as they do on the 3GS, unlike pixel doubling, which looks terrible, period.
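
Purely as an illustration of the numbers (nothing an app has to do itself), the point/pixel split looks like this:

CGFloat scale = [[UIScreen mainScreen] scale];   // 1.0 on a 3GS, 2.0 on an iPhone 4
CGRect bounds = [[UIScreen mainScreen] bounds];  // 320 x 480 points on both
NSLog(@"%.0f x %.0f points -> %.0f x %.0f pixels",
      bounds.size.width, bounds.size.height,
      bounds.size.width * scale, bounds.size.height * scale);

Layout happens in points either way; the scale factor just decides how many pixels each point gets at render time.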
 
1280x960 would be the perfect resolution bump up for the iPad 2.

It won't be too big or expensive an upgrade and it would be easy for the software to upconvert to that resolution.
 
A Retina Display resolution of ~2500x1500 on a 9.7-inch screen sounds just a tad unviable, to be honest, if they want to price it at $499, especially since even their $3000 MBPs fail to offer that.

And a 1280x960 resolution (1.25x the current resolution at the same 4:3 aspect ratio; it allows pixel-perfect playback of 720p content and lets two iPhone 4 apps (640x960) run side by side). That rumor seems a lot more feasible to me for the iPad 3.
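
Spelling out the arithmetic in that parenthetical: 1024 x 1.25 = 1280 and 768 x 1.25 = 960, so the 4:3 shape is preserved; 720p content is 1280 x 720, so it maps pixel-for-pixel across the width and just gets letterboxed in the remaining 240 rows; and two portrait iPhone 4 apps are 640 + 640 = 1280 wide, each with the full 960-pixel height.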
 
While two iPhone apps running side by side may sound good "in theory" I just don't see how it would work in reality.

First, would apps have to "opt-in" to be able to run side by side? Obviously there is no point to run two landscape iPhone games side by side, unless you have two kids playing two separate games at once on either side of the iPad.

Second, with Universal apps, would there be an option for the user to downgrade their interface to the iPhone version to run it side by side? Most iPad universal apps have more functionality than their iPhone counterparts so it doesn't seem like a good tradeoff.

Third, what is the point? I guess you could check twitter and facebook simultaneously, but there would have to be new APIs and devs would have to implement them if you want the two side-by-side apps to interact with each other. Like say dragging an image from one to the other.

And finally it seems like it would DISCOURAGE developers from creating universal versions of their apps, because the side by side mode is available for "free".

I can think of a few cases where it would be useful but overall I think it would be too confusing in both implementation and in use. (Example: what does the home button do when 2 apps are open? close both of them? How do you switch just one app out or close one of them? What about using the multitasking tray? do both apps show up? How do you open the second app to begin with?).

I think there are just too many new interface requirements that would be too confusing for most users, which kind of defeats the simplicity of the iPad. It is an interesting concept but I don't think Apple would use it. Plus, marketing it would be really difficult - "Our new Magical iPad can run 2 iPhone Apps at the same time!" It kind of seems like a downgrade when most people want better apps, not just the ones they have on their phone. Stuff like iWork, GarageBand, and iMovie.
 
To the people saying Apple going to 1280x960 or 1920x1440 isn't a good idea, I found this article particularly insightful...


Apple's Embarrassing Predicament

Some wager that the upcoming iPad 2 will pixel-double both axes, similar to what the iPhone 4 did relative to its predecessor, while others believe that it will keep the resolution of the current generation.

Doubling both axes is a formidable technical challenge and would be a unique, likely expensive display. Continuing with the current resolution would represent a significant competitive disadvantage. As people acclimate to high density smartphones, such as the iPhone 4, the iPad's low density is really starting to stand out.

Few believe it will do anything in between. It won’t, the common wisdom goes, go to, say, 1920 x 1440 or 1280 x 960, or any other fractional improvement less than an outright doubling or quadrupling. The logic is that pixel scaling issues eliminate the possibility of such a half measure.

This harkens back to discussions that occurred over 20 years ago.

It should be an embarrassment that such a discussion is occurring in 2011.

In the TiPb article linked above the author leads off with a slur towards Android, saying “Either iPad 2 will have a standard 1024×768 display or a doubled 2048×1536 Retina Display, or developers and users will be in for the type of frustration usually ascribed to Android.”

That makes for an odd, if not outright ignorant, statement: I can’t recall ever reading anyone complain about the density independent pixel of Android, or its awareness and accommodation of a wide variety of profiles. That’s a problem that it has solved very well, and a large ecosystem of sizes and resolutions of displays exist in remarkable harmony.

Consumers like being able to choose between 3” – 15”+ devices with a wide variety of densities. Choice is good.

Because of course the DPI issue has long been solved. Otherwise you would be lamenting that your 72dpi word processor isn’t compatible with your 300dpi printer: “Everything prints out all tiny-like”. Is that the case?

Vector fonts with pixel independent abstractions have been around for a long time (in TrueType and Postscript form), with Apple as one of the primary inventors. Most GUI frameworks, including iOS, have the ability to scale UI rudiments to virtually any resolution and pixel density with ease.

That is an ancient problem, long solved.

But what about icons? What about bitmap graphic artifacts?

In an ideal world icons would come in vector graphic form. That isn’t the case on Android (the platform doesn’t support SVG, including in the browser, which is a huge deficiency), but it is still shocking that Apple, which usually takes the lead on such innovations, doesn’t use them for iOS, as had been widely speculated as a given before the iPhone OS was first released.

With a vector graphic the rendered image is always perfect for the target, ideally with hints that suppress decorations at very low sizes.

Even with bitmap graphics, however, while it’s easy to contrive ridiculous examples to demonstrate the worst of scaling, the reality is that text should always be generated by the UI from vector fonts, perfect for the target, and bitmap graphics are usually just supplementary decorations, for which scaling up or down by partial multiples is often perfectly adequate.

For your consideration below are some iOS icons (used for fair use purposes but owned by Apple) at their original pixel size, and then scaled to 125% and 150%. Scaling was done using Sinc (Lanczos3), which is a good algorithm to use when scaling up and you want to maintain fine detail.

[Images: iOS icons at their original pixel size, scaled to 125%, and scaled to 150%]

The horrors! Just to be clear (as it's hard to imagine what the larger images would look like when shown in the same physical space), we're comparing this to simply pixel-doubling, which would look like the following (cropped to avoid exceeding most readers' screen bounds).

[Image: the same icons pixel-doubled (cropped)]


There is no universe where a straight pixel-doubled image looks better than an interpolated image, unless you have fine detail in the image (like text) which shouldn't be in the image to begin with.

Not only do they still look great, but remember that in such a case the actual viewed sizes would also decrease proportionally, so the marginal artifacts would be rendered completely irrelevant. Reading some of the blog entries on scaling you would think you’d end up with some sort of blob.

Not to mention that most iPad apps would be fixed up to handle the new platform shortly after the SDK was released...
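
For anyone who wants to try the article's comparison on-device rather than in an image editor, Core Graphics will do an interpolated upscale; this is just my own sketch, and "icon.png" is a made-up asset name:

UIImage *icon = [UIImage imageNamed:@"icon.png"];
CGSize target = CGSizeMake(icon.size.width * 1.25, icon.size.height * 1.25);
UIGraphicsBeginImageContextWithOptions(target, NO, 1.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(),
                                 kCGInterpolationHigh);   // vs. kCGInterpolationNone for the pixel-repeat look
[icon drawInRect:CGRectMake(0, 0, target.width, target.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();   // the interpolated result
UIGraphicsEndImageContext();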
 