Vision and touch are in fact tied together: "eye-hand coordination" goes far beyond merely coordinating action. It also entails Hebbian adaptation: the ability to "feel" a touch sensation *before* you have actually touched something. Seeing a texture anticipates feeling it; this predictive ability of the imagination and subconscious is evolution's solution to the problem of lag: by "feeling" something before you actually touch it, you can withdraw from something harmful before it's too late. (Think of touching a hot stove.)
All sensations of texture take place fully in the mind. Sensory data from the fingers (touch) is one source that causes the mind to perceive a texture, but visual data from the eyes can also trigger it, as can memories. Just as you can imagine touching a leaf and remember the feeling of it, you can also see a picture of a leaf and reach out to touch it; even if it's behind glass, the mere act of reaching can cue your mind to "feel" the leaf more realistically than if you closed your eyes and simply tried to imagine it.
On a certain level, you don't need to actually feel "real" (physical, mechanical) haptic feedback to have haptic feedback, because to quote Morpheus, "Your mind makes it real." You just need realistic, familiar textures and realistic-looking skeuomorphic design, and boom, you have haptic feedback -- probably better haptic feedback than if you actually had whatever crappy haptic feedback we'll likely see whenever the physical kind finally comes out. On a subconscious level, your brain anticipates feeling the textures it sees; then it anticipates that feeling even if the neural impulses don't actually come from the fingers.
Steve Jobs understood this. He famously said regarding OS X's Aqua interface, "One of the design goals was when you saw it you wanted to lick it." Steve understood there is a subconscious element to things and that a good UI ought to leverage all the processing power not only of the computer, but of the user's subconscious mind.
After all, Neo, what's the difference between something that tastes like chicken, and "real" chicken?
...
So why did Apple abandon this approach in iOS 7? I would argue that they were forced to; it was not entirely by choice. They realized that in order to migrate iOS to a variety of screen sizes, they could not keep kicking the pixel can down the road.
The only legitimate problem with skeuomorphic design is that it's highly reliant upon bitmaps. We saw with the iPad mini that the UI simply shrank, something Jobs wanted to avoid because the original iPad UI was designed to be just the right physical size for actual fingers to use. However, some people have better manual dexterity and close-up eyesight than others, and Apple sells plenty of iPad minis as a result. I don't see a problem there.
But on the iPhone, Apple knows it can't stay competitive by simply lowering the screen DPI and making everything bigger. They want a UI that scales properly across devices while its elements remain the same physical size.
That's why iOS 7 got rid of so much UI chrome: Apple wants the UI to be based solely on vector elements (like fonts and lines) that scale perfectly when the screen size changes. That's also why they are pushing developers to adopt Auto Layout constraints, a set of rules that defines where each element gets drawn according to its distance from other elements or from the screen boundaries, instead of a fixed position on a coordinate grid directly mapped to pixels.
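To make that concrete, here's a minimal Swift sketch of what constraint-based layout looks like (the views and values are just illustrative): the label's position is expressed as relationships to its container rather than as pixel coordinates, so it lands in the right place at any screen size.

```swift
import UIKit

// Pin a label 20 points from the top of its container and center it
// horizontally. No fixed coordinates appear anywhere; the layout
// engine solves these rules for whatever size the screen happens to be.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))

let label = UILabel()
label.text = "Hello"
label.translatesAutoresizingMaskIntoConstraints = false
container.addSubview(label)

container.addConstraints([
    // label.top = container.top + 20
    NSLayoutConstraint(item: label, attribute: .top, relatedBy: .equal,
                       toItem: container, attribute: .top,
                       multiplier: 1, constant: 20),
    // label.centerX = container.centerX
    NSLayoutConstraint(item: label, attribute: .centerX, relatedBy: .equal,
                       toItem: container, attribute: .centerX,
                       multiplier: 1, constant: 0)
])
```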
I've been in favor of a fully vector-based UI for OS X and iOS for many years, since in a world of varying screen sizes and varying screen DPIs it would be the only way to have a truly resolution-independent, WYSIWYG UI.
However, even though Apple said several years ago that they would move OS X to a vector-based UI, they never did, because it would have been too much to ask everyone to convert their UIs to vector-based ones. Every app would have to be redone. It's a monumental undertaking, and Apple did not have a way to force developers to do it. Besides, on a Mac you can always change the screen resolution if you want UI elements to look bigger, or use the zoom feature. And since Mac apps run inside their own windows, it doesn't matter much if a window gets smaller.
On iOS, however, Apple does have the ability to force developers' hands. That's good, because it means they can require the use of APIs like Auto Layout and TextKit to make apps that scale to differing screen sizes. The task becomes manageable once you stop worrying about scaling textures and design all your icons as vectors. You can take a desktop-publishing-style approach to design and achieve a nice look through means other than skeuomorphism.
...
But have we lost something important in the move away from skeuomorphism? I'd argue that perhaps we have. How can we get back there and strike more of a balance between resolution independence and psychohaptic feedback?
One idea would be to use dynamically generated textures based on fractals or other procedural math, or to leverage OpenGL to render bitmapped textures in a more convincing way, using bump mapping and dynamic light sources linked to the accelerometer.
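As a rough illustration of the first idea (this is entirely my own toy sketch, not any existing iOS API), here is a fractal texture built from hash-based value noise. Because the output is pure math, it can be regenerated crisply at any resolution:

```swift
import Foundation

// Deterministic pseudo-random value for each lattice point.
func hashNoise(_ x: Int, _ y: Int) -> Double {
    var n = UInt32(bitPattern: Int32(truncatingIfNeeded: x &+ y &* 57))
    n = (n << 13) ^ n
    let m = (n &* (n &* n &* 15731 &+ 789221) &+ 1376312589) & 0x7fffffff
    return Double(m) / Double(0x7fffffff)      // 0.0 ... 1.0
}

// Bilinearly interpolate between the four surrounding lattice points.
func valueNoise(_ x: Double, _ y: Double) -> Double {
    let xi = Int(floor(x)), yi = Int(floor(y))
    let fx = x - floor(x), fy = y - floor(y)
    let a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi)
    let c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1)
    let top = a + (b - a) * fx
    let bottom = c + (d - c) * fx
    return top + (bottom - top) * fy
}

// Sum several octaves of noise for a paper-like fractal grain.
func fbm(_ x: Double, _ y: Double, octaves: Int = 4) -> Double {
    var total = 0.0, amplitude = 0.5, frequency = 1.0
    for _ in 0..<octaves {
        total += valueNoise(x * frequency, y * frequency) * amplitude
        frequency *= 2
        amplitude *= 0.5
    }
    return total
}

// Fill a grayscale pixel buffer at whatever size the screen requires.
func renderTexture(width: Int, height: Int) -> [UInt8] {
    var pixels = [UInt8](repeating: 0, count: width * height)
    for y in 0..<height {
        for x in 0..<width {
            let v = fbm(Double(x) / 32.0, Double(y) / 32.0)
            pixels[y * width + x] = UInt8(min(max(v, 0), 1) * 255)
        }
    }
    return pixels
}
```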
This is where I think we are ultimately headed, probably not as soon as iOS 8, but somewhere down the road.
Check out the new Frax app for iOS to get a sense of what kind of real-time textures are possible:
http://www.pocketmeta.com/frax-hd-ipad-beautiful-strange-ingenious-5940/
The issue right now, of course, is battery life: processor-expensive textures everywhere will kill it. However, if the textures are not updated in real time but instead rendered once, when the screen size is determined, and then cached, the cost becomes negligible.
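A render-once-and-cache scheme could be as simple as this sketch (again purely illustrative, reusing the renderTexture function from above): the expensive procedural render runs a single time per screen size, and every subsequent request is served from memory.

```swift
import Foundation

struct TextureSize: Hashable {
    let width: Int
    let height: Int
}

final class TextureCache {
    private var cache: [TextureSize: [UInt8]] = [:]

    func texture(for size: TextureSize) -> [UInt8] {
        if let cached = cache[size] {
            return cached                    // cheap path: no re-render
        }
        let rendered = renderTexture(width: size.width, height: size.height)
        cache[size] = rendered               // pay the rendering cost once
        return rendered
    }
}
```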
Apple has a lot of groundwork to do before it can realize something like resolution-independent skeuomorphism in a battery-efficient way through system APIs, with fractal textures, OpenGL, and so on. However, I would not rule out the possibility, nor would I assume that they moved away from skeuomorphism entirely by choice.
Because if you look closely, skeuomorphism is still present in certain places in iOS 7: the frosted, translucent glass of Control Center and Notification Center; the light paper texture of the Notes app. It survives precisely in things that can scale independently of resolution, as one would predict if I'm right: they haven't completely eschewed skeuomorphism but rather been forced away from it in order to migrate to a variety of screen sizes.
I don't mean to suggest Jony Ive fails to understand the value of psychohaptic feedback; rather, I think they wanted to make a bold move and push the envelope in a new direction to freshen things up. That's not to say the pendulum couldn't swing back toward implementing psychohaptic feedback via skeuomorphism in places where it really does help the interface feel more interactive and draw the user in.
I'd like to hear all your thoughts on the matter.