In other words, you answered one bad generalisation with another. How did that add to anyone's learning? Anyway, the context of the discussion is UI graphics — things like window controls, not logos on billboards. In this context, the other commenter's generalisation was more correct than yours.
It may have been an imperfect generalization, but everybody here knows what he meant, including you. There's just no need to litter the discussion with qualifiers of qualifiers.

When you're talking about window controls that need to be 4x4 or 512x512, you may as well be talking about logos on billboards.

Yes, a 4x4 bitmap is going to be smaller than most simple SVG graphics. Maybe even 32x32 bitmaps or whatever. But the bigger you need that graphic to appear, the more likely an SVG containing the same information will reproduce better and take up less disk space, and the performance difference is most likely negligible (especially if you have a good processor and programmers bright enough to take advantage of any available GPU — EDIT: and a good illustrator!).

And since we ARE talking about retina monitors here, it should kind of go without saying that we're talking about large graphics, not solely 16x16 four-color icons.
 
In other words, you answered one bad generalisation with another. How did that add to anyone's learning? Anyway, the context of the discussion is UI graphics — things like window controls, not logos on billboards. In this context, the other commenter's generalisation was more correct than yours.

I don't have a problem with anyone asking questions. I have a problem with people spreading misinformation that, when proved wrong, they later back away from and try to shift the goalposts.

I never backed away from anything. I merely clarified my position on vectors: at tiny sizes, bitmaps are generally smaller in file size than vectors, something we both agree on. If I had left my post at 'no, they are smaller', then that would have been spreading misinformation.

However, my point stands, and has been answered somewhat: as higher and higher resolutions require ever larger bitmaps, why not look into vectors?
 
Every time there is a thread about DPI or resolution independence, there are so many wrong statements. Why don't you read up on the topic of digital images and try to understand it with a minimum of logic?

But instead of writing down again what has already been written by those who understand the subject, I will just sit back and enjoy the feeling that I will have no trouble finding new software-engineering projects, as more and more people obviously have no clue about the basics of digital imaging and will cry out for help.
 
As of today, there is only one commercially available display with a higher resolution than 2560x1600, and it's quite expensive: $36,000.

I wouldn't expect "retina monitors" anytime soon.

Except LG has already shown a 27" 166 DPI 3840x2160 screen that would be perfect in a next-generation iMac (third from the left):
http://flatpanelshd.com/pictures/lgsid2011-1l.jpg

Also consider the strong LG/Apple partnership on displays. Think of where 2560x1600 panels first appeared (Apple Cinema Display), and where 27" 2560x1440 screens first appeared (iMac).

It is all falling into place for a high-DPI 27" 3840x2160 screen to appear first in a next-gen iMac within two years.

I expect 2012 will be Apple's year of Retina: Retina iPhone, Retina iPad, and Retina iMac.
 
I'm cracking up that you guys are being all anti-vector about this, when every word you read in OS X is being treated as vector, and rendered in a way that is much more complicated than anything we're talking about doing with an all-vector UI system.

The hardest thing to deal with in vector is text, because it requires more than "looking good." It requires actually looking "perfect."

Do you guys even know how sub-pixel rendering works? Do you know how much processing power is required to process 4 pt text on screen to look smooth and clear? You just click a box, and poof, you've tripled the amount of processing required for the OS to display a character on screen. And even an older Mac doesn't miss a beat.

And yet, even with the equivalent of 3x horizontal resolution being used for sub-pixel text rendering in every modern OS, we can't figure out how to do vector UI elements, too?

Imagine the entire OS interface running with sub-pixel rendering...every box, every line, every arrow or bullet or drop shadow or icon...

It would effectively triple the horizontal resolution of every LCD monitor right now, without even needing any kind of "retina" display resolution. Yes, there is a big caveat that comes with that (it certainly works best with black or white elements in contrast), but look at the examples of things that are NOT high-res in Lion...black arrows. A grayscale apple icon.

Both of those things would look much better if they were being rendered 2 or 4x their current size, and then resampled, even (especially) on a 72 dpi monitor using sub-pixel rendering. But since they are bitmaps and already being displayed at 100%, there is nothing for the sub-pixel rendering engine to infer. Even if you wanted to just display icons with a higher horizontal resolution, you could use bitmaps that were 2x the size and render them this way. For all I know, Apple is doing this in the Dock and in the Finder. I haven't investigated it. But it's clear they aren't doing it in the menu bar, and they aren't doing it in their own UI elements in various places throughout the OS, and they haven't been making much progress since this was introduced 6 years ago in 10.4.
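
The sub-pixel trick described here can be sketched in a few lines: render a scanline at 3x the horizontal resolution in grayscale, then map each run of three samples onto the red, green, and blue stripes of one physical pixel. This is a deliberately minimal sketch (the function name is made up, it assumes an RGB stripe panel, and it skips the anti-fringing filter real engines like ClearType apply across neighboring subpixels):

```python
def subpixel_downsample(row3x):
    """Map a grayscale scanline rendered at 3x horizontal resolution onto
    RGB subpixels: sample i drives red, i+1 green, i+2 blue.
    Hypothetical helper; real engines also filter to suppress color fringes."""
    assert len(row3x) % 3 == 0
    return [(row3x[i], row3x[i + 1], row3x[i + 2])
            for i in range(0, len(row3x), 3)]

# A stroke one-third of a pixel wide lands on a single subpixel instead of
# lighting (or skipping) a whole pixel:
row = [255, 255, 255, 255, 0, 255, 255, 255, 255]
print(subpixel_downsample(row))
# [(255, 255, 255), (255, 0, 255), (255, 255, 255)]
```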
 
I have a 27" iMac driving a 30" display, and at roughly 2.8' away, I really think the improvement would be a diminishing return, hardly noticeable. I haven't crunched the DPI-to-average-viewing-distance numbers, but I think for normal computing (graphics/photo editing notwithstanding) it's not worth the GPU strain.

To a certain extent I agree, but it does depend on how close you sit and how good your vision is. I made a table of DPI values and the distance limits where the pixels blend together, for both normal 20/20 and "perfect" human vision, based on a previous article about iPhone 4 retina limits. Note that you are using your display somewhat close to the point of diminishing returns, so you likely see pixels from time to time (I sit further back and use a 94 DPI display, and experience the same). But for those with better vision, or who sit closer, or both, there is room for improvement. Even you would see some nice improvement from the 166 DPI display that I think Apple will go with (see my previous post).

Code:
DPI    Limit 20/20 (in)    Limit Perfect (in)
94         36.6           61.0
100        34.4           57.3
109        31.5           52.6
132        26.0           43.4
150        22.9           38.2
166        20.7           34.5
200        17.2           28.7
250        13.8           22.9
300        11.5           19.1
326        10.5           17.6

I think the likely 166 DPI display will essentially be a Retina desktop for the vast majority of people. Once Apple transitions to this, they really don't have to worry much about resolution independence again, as it will be near perfection.
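
The distances in the table above follow from basic trigonometry: a pixel 1/DPI inches wide stops being resolvable once it subtends less than the eye's angular resolution, roughly 1 arcminute for 20/20 vision and about 0.6 arcminutes for "perfect" vision. A quick sketch (the function name is mine) that reproduces the table:

```python
import math

def retina_limit_inches(dpi, arcminutes=1.0):
    """Viewing distance beyond which a single pixel subtends less than the
    given angular resolution (1.0' is roughly 20/20 vision, 0.6' 'perfect')."""
    return (1.0 / dpi) / math.tan(math.radians(arcminutes / 60.0))

for dpi in (94, 109, 166, 326):
    print(f"{dpi} DPI: 20/20 limit {retina_limit_inches(dpi):.1f}\", "
          f"perfect {retina_limit_inches(dpi, 0.6):.1f}\"")
```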
 
Except LG has already shown a 27" 166 DPI 3840x2160 screen that would be perfect in a next-generation iMac (third from the left):
http://flatpanelshd.com/pictures/lgsid2011-1l.jpg

Also consider the strong LG/Apple partnership on displays. Think of where 2560x1600 panels first appeared (Apple Cinema Display), and where 27" 2560x1440 screens first appeared (iMac).

It is all falling into place for a high-DPI 27" 3840x2160 screen to appear first in a next-gen iMac within two years.

I expect 2012 will be Apple's year of Retina: Retina iPhone, Retina iPad, and Retina iMac.

Indication doesn't really mean anything. The picture you linked shows no clue of the schedule or price. It could be this year, or 2015. It could cost $1000 when it comes out, or $30,000. The fact is that the current displays with that resolution cost more than 15 high-end iMacs. The price would have to drop by ~97% to be suitable for iMac. That is quite a big drop in one year and I don't see it happening. Thus I doubt we will see "retina displays" in Macs anytime soon.
 
I think that this is just to support new 3rd party displays myself.
 
Indication doesn't really mean anything. The picture you linked shows no clue of the schedule or price. It could be this year, or 2015. It could cost $1000 when it comes out, or $30,000. The fact is that the current displays with that resolution cost more than 15 high-end iMacs. The price would have to drop by ~97% to be suitable for iMac. That is quite a big drop in one year and I don't see it happening. Thus I doubt we will see "retina displays" in Macs anytime soon.

You could have said as much about the 30" ACD before it appeared. How much did we see/know about 2560x1600 screens before Apple released the 30" ACD? Nothing. Yet it was a large leap, with about double the pixels of anything else on the market. This new 27" would again double that. Perfectly feasible now.

Given the history between Apple and LG, that 27" version is almost certainly an Apple request. There will be a premium, but it will be more like a few hundred dollars, not thousands. There is nothing really challenging about building a 166 DPI display in 2011.

A 27" 3840x2160 screen will give the iMac a huge boost in mindshare and is a perfect fit with everything before. 2012 or 2013 at the latest IMO.

Sure, it is guesswork, but it is based on many clues from both Apple and LG, and the strong relationship between the companies.
 
Apple Logo 16x16:
SVG: 16 vector points. 3 color swatches. 1 gradient. 95 KB (uncompressed SVG format).
PNG: 256 pixels. 32 bits per pixel (8 bits each for R, G, B, and alpha). 78 KB (compressed format, 82 KB uncompressed)

Apple Logo 32x32:
SVG: (same file)
PNG: 1024 pixels. still 78 KB (compressed format, 94 KB uncompressed)

Apple Logo 128x128:
SVG (same file): 95 KB
PNG: 16,384 pixels. 92 KB (compressed format, 156 KB uncompressed)

Apple Logo 512x512:
SVG (same file again): 95 KB
PNG: ~262K pixels. 127 KB (compressed format, 945 KB uncompressed)

Apple Logo 1024x1024:
SVG (same file again): 95 KB
PNG: ~1 million pixels. 168 KB (compressed format, over 3.4 MB uncompressed)

I guess I don't see the advantage in using pixel formats for simple UI elements at any resolution.
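
For what it's worth, the quadratic growth in those bitmap numbers is easy to reproduce: uncompressed 32-bit RGBA is simply width x height x 4 bytes, while the vector file stays the same size at every scale. A rough sketch (raw pixel bytes only; the "uncompressed" figures above evidently include format overhead, so these won't match exactly, but the trend is the point):

```python
def raw_rgba_kb(size):
    """Raw pixel bytes for a size x size image at 32 bits per pixel, in KB
    (pixel data only; real PNG files add headers and compression)."""
    return size * size * 4 / 1024

for size in (16, 32, 128, 512, 1024):
    print(f"{size}x{size}: {raw_rgba_kb(size):7,.0f} KB raw vs. one fixed-size SVG")
```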
 
I guess I don't see the advantage in using pixel formats for simple UI elements at any resolution.

This really only applies to items that are very simple. Eventually you can make them large enough that the pixel version requires more space.

But if you do anything complex, then you're practically doing a 3D mesh with textures and rendering: using more space, using more computational resources, and requiring a whole different type of artist and a lot more time/money to create assets.

Apple looks to be going with the simple pixel-doubling approach; once this is accomplished, they really won't have to worry about DPI independence again, as they will have reached a level that makes further advancement unnecessary.

There will never be an issue with DPI independence going forward on the iPhone, as it is already overkill for the majority of the population. It will be similar for the iPad if they double the resolution, and similarly for iMacs if they bring 166 DPI screens.
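
For anyone unclear on what "pixel doubling" means mechanically: each source pixel simply becomes a 2x2 block, so 1x artwork keeps its physical size on a 2x display with no resampling blur. A toy sketch:

```python
def pixel_double(img):
    """Nearest-neighbor 2x upscale: every source pixel becomes a 2x2 block.
    img is a list of rows, each row a list of pixel values."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in (0, 1)]  # double horizontally
        out.append(doubled)
        out.append(list(doubled))                   # and vertically
    return out

print(pixel_double([[1, 2],
                    [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```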
 
These excavated tidbits of information are always the best: they let us glimpse through a narrow window into the future of Apple's products.

What it does mean is that Apple is planning higher-res monitors. I'd guess they'll be out sometime next year. Hope they don't go bigger than 27". I own a 30" and while it works fine, I find it too cumbersome. I'd rather have the same resolution on a 15" or 20" screen.
 
Using Scalable Vector Graphics for Icons

I'm quite sure vectors can technically be used for icons; there is no technical challenge in this aspect. Vector graphics are normally smaller in size than their bitmap counterparts, and with respect to HiDPI, it just makes sense.

But the problem is also the processing power needed to actually rasterize these vectors. If you have tons of icons and all of them are in vector form, it will pose a substantial load on the GPU and/or CPU, and mind you, icons are not the only payload for the GPU and CPU.

Next, bitmaps have all along been the de facto standard for icons. I highly suspect the reason is this: in general, an artist can express more freely and more accurately using bitmaps versus vectors. Bitmaps can produce more fanciful gradients and effects that vectors have a hard time mimicking. Not that it is impossible (CorelDRAW does have such functionality), but if your graphic is that complicated, you end up using more vectors to represent the same bitmap information; you might just end up using rectangle vectors to describe pixels. Surely in such cases, rasters make more sense.

Next is hinting. When fonts get really small, glyphs, which are basically vector-based, don't render well. Hence there are font hints embedded in fonts that describe how to render each glyph at different sizes. This makes large fonts look nice and tiny fonts look clear enough for the eye to read. I believe hinting is not that well defined for ordinary vector graphics, should they be used for icons, which are normally very small buttons.

Hence, all in all, bitmaps give you faster speed, probably better use of processing load, more predictable memory consumption, more accurate graphical representation, etc.



You're not a software developer, are you?
Scaling is not the problem. Scaling and not looking like ****, that is the problem.
 
Next is hinting. When fonts get really small, glyphs, which are basically vector-based, don't render well. Hence there are font hints embedded in fonts that describe how to render each glyph at different sizes. This makes large fonts look nice and tiny fonts look clear enough for the eye to read. I believe hinting is not that well defined for ordinary vector graphics, should they be used for icons, which are normally very small buttons.
Font hinting is only needed for low resolutions. At high resolutions (such as for a 215+ ppi monitor), it is completely unnecessary.
 
Have we reached the point where our eyes can no longer notice the differences yet?

See the table in my post (#81) above.

The needed DPI depends on viewing distance and eyesight.

I made a table of various DPI values and the distance in inches where you are near the limits of normal vision (20/20, 1 arcminute resolution) and "perfect" vision (0.6 arcminute resolution).

The best consumer desktop display is currently 109 DPI (27" iMac). If you have normal (20/20) vision and sit about 3 feet away, you likely won't see much benefit from higher DPI, but if you sit closer or have better vision, then more DPI will help.
 
It's not an either/or proposition. Vectors and bitmaps have different strengths. Either can scale effectively when used in the appropriate place.
Photorealistic graphics.
Use a bitmap. Photos tend to scale very well and result in few visible artifacts.
Pixel-perfect designs.
Pixel-perfect at RD resolutions is visually meaningless. Depending on the design, a scaled bitmap may be the best choice.
Processing power.
Arguable. Both can be handled very efficiently by modern machines.
File size.
Arguable. When rendered at RD resolutions, typical UI elements are close enough in size as to have little-to-no impact.
Textures.
See Photorealistic graphics. Also consider resolution-independent procedural techniques.
Effects rendering.
Artistic choice. Some effects may make more sense as bitmaps, others may look better as vectors.
Development time.
I don't want my computing experience to stagnate for the sake of easy development. Take the easy way out and use bitmaps, but expect people to complain once resolutions change.

Scalable UIs have value in many areas, from device cost to visual impairment. We should be focusing on how to accomplish that as a goal instead of making excuses for why it can't be done.
 
Maybe I'm just nostalgic for the 80s and 90s, back when vector graphics were a possibility. Now that we have much more powerful CPUs and GPUs, it seems that vector graphics have become way too hard to handle... :rolleyes:

For the core UI of an OS, yes they're still too costly.

No OSs in the 80s or 90s used vector graphics for UIs. None. Some games in the 70s/80s used vectors, like Asteroids (1979). Since then, for anything with multiple colors, it's all been bitmaps.


And games have been using vector graphics since the 80s and 90s, at least, games made with a little known technology called Flash.

Flash is about the only games tech that's used vector graphics, and it's slow as poop. I'd not want any OS UI rendered completely via Flash. With bitmaps you can very quickly blit a whole block of pixels to the screen in an instant. With vector graphics you need to loop through all the points in the graphic, calculate the boundaries of shapes, fill the shapes, AA the edges/lines, etc., and once you add bezier curves (solving parametric equations in real time is the opposite of fast), gradients, and multiple layers of alpha transparency, suddenly it gets really damn expensive compared to just blitting a block of pixels. There's no way to speed up vectors (yet) using the GPU either; it all has to be rendered on the CPU. GPGPU computing still hasn't reached a point where it could be used for something like the core UI of an OS.
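
To put a concrete face on "solving parametric equations in real time": evaluating just one point on a cubic bezier via De Casteljau's algorithm takes six linear interpolations, and a smooth on-screen curve needs many such evaluations per frame, versus a bitmap blit that is essentially one block memory copy. A sketch (scalar control values for brevity):

```python
def cubic_bezier_point(p0, p1, p2, p3, t):
    """One point on a cubic bezier via De Casteljau's algorithm:
    repeated linear interpolation between control values."""
    a = p0 + (p1 - p0) * t
    b = p1 + (p2 - p1) * t
    c = p2 + (p3 - p2) * t
    d = a + (b - a) * t
    e = b + (c - b) * t
    return d + (e - d) * t

# Midpoint of an "ease" curve with control values 0, 0, 1, 1:
print(cubic_bezier_point(0.0, 0.0, 1.0, 1.0, 0.5))  # 0.5
```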

Now, there's Scaleform, which is a games UI tech built on Flash, but it's still not really vector based. I used Scaleform on a handful of cross-platform PC/360/PS3 games when I used to work for Sega, and it converts vector graphics into 3D vectors/bitmaps to be rendered by a GPU. It's still mostly bitmaps, though: Scaleform converts 95% of the Flash 2D vector content into bitmaps rendered as full-screen quads by the GPU, and it's not really resolution independent, as it's mostly bitmaps in the end and just relies on the GPU's linear bitmap filtering to scale the content.

Or the above Sierra game, Conquest of Camelot ;) In fact, isn't that screenshot from a game that uses vector graphics ? (Plants vs Zombies ?).

Neither Conquest of Camelot nor Plants vs Zombies uses real-time vector graphics! They might have used vector illustration programs to draw the graphics, but those graphics were then converted into bitmap assets to be used in the game. I've used PopCap's game engine before (you can download it and use it yourself here: http://sourceforge.net/projects/popcapframework/). Mostly just to tinker and see how it works, but I'd bet my left nut PvZ is all bitmaps rendered to quads using a GPU.

Think man, think.

Thinking is nice, but without any research or critical application you're just living in the clouds.

Now, I'm not saying a vector based UI would be a bad thing full stop. If done right it would be really awesome. However, it would be a pretty big undertaking from a software development perspective and I can't see it happening for the core UI of any OS until GPGPU can be leveraged to render it. In the meantime hybrid 2D bitmaps with GPU acceleration mixed in here and there is probably the smartest choice.
 
There's no way to speed up vectors (yet) using the GPU either, it all has to be rendered on the CPU - GPGPU computing still hasn't reached a point that it could be used for something like the core UI of an OS.
You haven't heard of OpenVG? http://www.khronos.org/openvg/ It's implemented in a lot of chips and devices, including the A4 and A5 chips used in iPads, iPod Touches, etc.

Here's an article about hardware-accelerated SVG. You'll notice that, somehow, Internet Explorer does the best job of it. Which says a lot, really, but if Microsoft can do it well, surely anybody else can. And I mean ANYBODY else. http://blogs.msdn.com/b/ie/archive/...vg-across-browsers-with-santa-s-workshop.aspx (And obviously, a UI isn't going to have nearly this many transformations running simultaneously, at least most of the time.)

And, seriously, Flash should not be used as an example of anything, especially good programming.
 
Thanks mate. I nearly gave up! :D

I'm with you too. With vectors, true resolution independence is possible. The problem with Mac desktops is that we have lots of different monitor sizes and resolutions; "HiDPI" does nothing for that. If I'm working in a word processor or a design app, I want the ruler to show an inch that is really an inch, and all UI elements drawn at the "right" size in native resolution no matter what display I have. Retina display is fine for iOS devices, where we know the sizes of the screens, but not for Macs.

If the app designer wants his button to be 72 points (real points, not the arbitrary points in iOS), then he should be able to demand it and it should be that way on all Macs.

And if the user wants to magnify a little bit, (s)he should be able to without having to suffer through scaling issues or have to scroll the screen.

arn posted a picture of plants vs. zombies - well, nothing in that couldn't be made with vectors quite easily. I'll bet the original artwork was drawn with vectors.

I know some of you are thinking "what about the performance hit from drawing everything all the time?" Apps can cache their graphics. There's nothing stopping Apple implementing a "graphics resource" where the developer supplies the Quartz code to draw the graphic, and the app automatically caches a bitmap version. Any time you magnify or change your screen, it could trigger the app to re-render the bitmap cache for that resolution, either on the fly on a background thread or at the next launch. The OS could then use the new cache to render the graphic.
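
That caching scheme could look something like the sketch below. All names here are hypothetical; in a real implementation the draw function would be the developer-supplied Quartz drawing code, and the cache key would include the display scale so a magnification change triggers a fresh rasterization:

```python
class VectorAssetCache:
    """Sketch of the proposal above: rasterize vector drawing code once
    per (asset, scale) and reuse the bitmap until invalidated."""

    def __init__(self):
        self._bitmaps = {}

    def bitmap(self, name, draw_fn, scale):
        key = (name, scale)
        if key not in self._bitmaps:        # rasterize on first use only
            self._bitmaps[key] = draw_fn(scale)
        return self._bitmaps[key]

    def invalidate(self):                   # e.g. the display scale changed
        self._bitmaps.clear()

cache = VectorAssetCache()
calls = []

def draw_button(scale):                     # stand-in for real drawing code
    calls.append(scale)
    return f"button-bitmap@{scale}x"

cache.bitmap("button", draw_button, 2.0)
cache.bitmap("button", draw_button, 2.0)    # second request hits the cache
print(calls)  # [2.0] - the vector code ran only once
```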
 
This really only applies items that are very simple. Eventually you can make them large enough that the pixel drawing requires more space.

But if you do anything complex, then you practically doing 3d mesh with textures and rendering. Both using more space, using more computation resources and requiring a whole different type of artist and a lot more time/money to create assets.

You can cache bitmaps that are automatically generated for your screen to solve computation issues. And yes, better graphics require better artists. But that's true today. Not necessarily more money/time though, not for most graphic assets anyway. Heck a lot of bitmap assets you see today are rendered from illustrator or another vector app.
 
No, I'm a graphic designer who uses both Illustrator and Photoshop. Admittedly I don't know how to implement the code for the UI, but isn't it the fact that a fixed-size image is being scaled that makes it look "like ****"?

If it was a vector it would scale and not look like '****'.

Actually, no. If it was a vector, it would scale down to mush and up to a flat cartoon.

Go look at something, anything, which is delivered in various sizes and scales. When things look "right" and identifiable at both small scale and large scale, it is because details are removed for the smaller scales and added for the larger scales. That's not something a computer can do as well as a half-decent graphics designer.

This is why Apple uses multiple graphics at set sizes. It doesn't explain why those graphics at set sizes are not vectors (which, I agree, would be an improvement, albeit at runtime performance cost), but it definitely explains why vector graphics alone do not solve the scaling problem.
 
You can cache bitmaps that are automatically generated for your screen to solve computation issues. And yes, better graphics require better artists. But that's true today. Not necessarily more money/time though, not for most graphic assets anyway. Heck a lot of bitmap assets you see today are rendered from illustrator or another vector app.

It isn't necessarily better art. It is more complex. It has to be constructed, closer to 3D modeling than to simply being drawn or photographed.

All this argument about vectors, when it looks pretty clear that Apple is going to choose the simple, practical route they took for the iPhone and simply double.

This is a win because we can get a working solution as soon as the monitor is ready, instead of fiddling around trying to design the "perfect" solution that never gets implemented.

In the end, I would much rather they go the iPhone 4 route and just get it done, and actually be using a high DPI monitor next year, instead of waiting 5 years for a vector overhaul to the OS when, in the end, the results for me (the end user) would be much the same on a high DPI monitor.
 
That's a joke, right?

One of the joys of vector graphics is smaller file sizes as standard as opposed to bitmap images.

To be fair, if you are essentially drawing a bunch of pixels on the screen (i.e., all your "vectors" are one pixel long at a particular size, and there is one vector starting/ending in each "pixel" of the drawing), the vector drawing could be larger than the bitmap, because the vector drawing stores the start/end coordinates for each line as well as the color or color index (and likely additional details about it), while the bitmap just stores the color or color index.

In the specific case here, where you have a 9x9 monochrome bitmap, the bitmap can be held in 11 bytes. You'd have a hard time beating that in a non-proprietary vector graphics format (e.g., using a word-length coordinate system, you would end up with 12 bytes just to define the three coordinate pairs). You could easily beat it using proprietary encoding (you could, say, define a vector file with half-byte indices, which would allow for 16 coordinate values in each direction, assume that each coordinate is one point in a polygon, internally color that polygon, and end up with a 3-byte size for this graphic), but that type of thing would need to be supported by OS X itself to take advantage of resolution independence (if the app code translates from that to a bitmap, the OS still just gets a bitmap).
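
The 11-byte figure checks out: 9x9 = 81 bits at one bit per pixel, and ceil(81/8) = 11 bytes. A sketch of the packing (helper names are mine):

```python
import math

def packed_1bit_size(width, height):
    """Bytes needed for a monochrome bitmap stored at one bit per pixel."""
    return math.ceil(width * height / 8)

def pack_bits(pixels):
    """Pack a flat list of 0/1 pixels into bytes, most significant bit
    first, zero-padding the final partial byte."""
    out = bytearray()
    for i in range(0, len(pixels), 8):
        chunk = pixels[i:i + 8]
        byte = 0
        for bit in chunk:
            byte = (byte << 1) | bit
        byte <<= 8 - len(chunk)             # pad a short final chunk
        out.append(byte)
    return bytes(out)

print(packed_1bit_size(9, 9))               # 11
print(len(pack_bits([1] * 81)))             # 11
```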

All that said, in general a vector graphic will be much smaller than the corresponding bitmap, for meaningfully large and "flat" graphics.
 