I think he was referring to the fact that you "just do" with an iPhone.
For example: on a computer, in Mail, to delete a message you select it and then press Delete, or use the Delete menu command. On an iPhone you just swipe your finger across the message. You have effectively selected and acted upon it in a single gesture. OK, so you still have to confirm that you want to delete it (which I wish they'd let me turn off!), but you get the point.
Resizing an image is the best example, as you say. On a computer, you would select it and then drag a handle. On an iPhone you just "pinch" in one motion. There's no real notion of "selecting" things on an iPhone, which is why they haven't gotten cut and paste working, something that has irked me on a few occasions. For example, when I wanted to email someone an address from Google Maps, I couldn't find any way to do it!
There are dialogs in the iPhone OS, but they are confirmations and choices. There is a definite usage-paradigm shift here. I'm not sure how well it translates to a larger device like a computer yet, but if I had to pick anyone to figure it out, I'd pick Apple.
be well
t
I think you hit the nail on the head... The challenge with copy-paste is that the iPhone's implementation of it MUST fit within the fold of Apple's own usability standards. It can't be as clunky as it is on the iPhone's contemporaries. It has to be something much easier... as easy as pinching/stretching, in both concept and execution.
There could be a few different approaches in testing right now.
One could be a two-finger swipe to highlight, but that presents visibility problems... which could be overcome with a magnifier above the selected text... but already you can see this needing a few workarounds just to make it work.
Another could be a copy-paste button that, when pressed, switches the function of dragging to highlighting text... but again, we're moving away from ease of use here.
Designing a device to be idiotically easy doesn't mean making it easy for idiots. It means making the technology transparent so the user can focus on creating and executing (doing) rather than configuring, selecting, copying, pasting, etc. Making things easier to do increases one's productivity... getting more things accomplished in the same span of time.
So what about a time delay that works like the double-click speed threshold on a mouse... I know this sounds convoluted, but the concept is simpler than the explanation, so bear with me. On a mouse, two fast clicks register as a double-click, a separate function from two slow clicks, which register as two single clicks. Now, the iPhone already uses double-tap to zoom in and out... but what if a copy function sensed a double-tap in two different places? I.e., you tap once at the beginning of the text you want to copy, and tap again at the end of it. You could set the double-tap speed threshold to your liking, AND, let's say, if you need some accuracy, you hold down on the second tap and the magnifier appears, allowing better precision at the end of your highlight. Again, the trick to this is making the execution minimal and fluid. Tap ......... tap.
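If you wanted to prototype just the timing logic of that two-tap selection, it might look something like this Python sketch (every name here is made up for illustration; the iPhone exposes no such API, and the 0.75-second threshold is an arbitrary placeholder standing in for the user-tunable setting):

```python
import time

class TwoTapSelector:
    """Sketch of the proposed two-tap selection: a first tap marks
    the start, a second tap within `threshold` seconds marks the end.
    A slow second tap is treated as a fresh first tap instead,
    just like a slow second click on a mouse."""

    def __init__(self, threshold=0.75):
        self.threshold = threshold  # user-tunable, like double-click speed
        self.start = None           # (position, timestamp) of the first tap
        self.selection = None       # (start_pos, end_pos) once completed

    def tap(self, position, timestamp=None):
        now = time.monotonic() if timestamp is None else timestamp
        if self.start is not None and now - self.start[1] <= self.threshold:
            # Second tap arrived in time: complete the selection.
            a = self.start[0]
            self.selection = (min(a, position), max(a, position))
            self.start = None
        else:
            # Too slow (or the very first tap): restart the selection.
            self.start = (position, now)
            self.selection = None
        return self.selection
```

The hold-for-magnifier refinement would layer on top of this: a long press on the second tap just delays the final `tap()` call until the finger lifts.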
OK, now what? We've highlighted... but what about copy? I like the idea of a contextual pull-down menu, but it still wastes the potential of multitouch. What about a gesture for copy... tracing a "c" on the screen. And then maybe a gesture like a "v" for paste, and "x" for cut. Two reasons this might work...
1. Everyone is already familiar with Control- or Command-"c" and "v"; whether you use Windows or Mac, the letters are the same.
2. "c" looks like a circle... like you're saying "this is what I want to copy." "v" looks like the insertion mark you write when proofreading a paper to show where you want to insert text... which is probably why "v" was picked as the universal paste shortcut key in the first place.
Simplicity isn't just about being dopey enough that a moron can use it... It's also about mnemonics... being simple enough to remember by relating to things you already know how to do.
With some intelligent software filtering (the iPhone already filters out unintended gestures and recognizes intended ones within the context of what you're doing; e.g., "c" and "v" should do nothing when no text is selected), you could have some really cool gesturing capabilities that extend your productivity well past what you can do with a clunky stylus and mediocre handwriting recognition.
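As a rough illustration of that kind of filtering, here's a toy Python sketch: a crude heuristic that tells a "v" stroke from a "c" stroke, plus a context gate that ignores copy/cut when nothing is selected. Everything here is hypothetical; a real recognizer would be far more sophisticated than these two geometric checks.

```python
def classify_stroke(points):
    """Toy heuristic classifier for the two gestures above.
    `points` is a list of (x, y) touch samples, y increasing downward.
    A "v" dips to its lowest point mid-stroke and comes back up;
    a "c" sweeps to its leftmost point mid-stroke and comes back right."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    bottom = ys.index(max(ys))  # index of the lowest sample
    left = xs.index(min(xs))    # index of the leftmost sample
    if 0 < bottom < len(points) - 1 and abs(ys[-1] - ys[0]) < (max(ys) - min(ys)) / 2:
        return "v"
    if 0 < left < len(points) - 1 and abs(xs[-1] - xs[0]) < (max(xs) - min(xs)) / 2:
        return "c"
    return None  # unrecognized stroke: do nothing


def handle_gesture(gesture, selected_text, clipboard):
    """Context gate: "c"/"x" are ignored unless text is selected,
    "v" is ignored unless the clipboard holds something to paste.
    Returns (new_clipboard, action)."""
    if gesture in ("c", "x") and not selected_text:
        return clipboard, None          # nothing to copy or cut
    if gesture == "c":
        return selected_text, None      # copy: just fill the clipboard
    if gesture == "x":
        return selected_text, "delete-selection"
    if gesture == "v" and clipboard:
        return clipboard, "insert:" + clipboard
    return clipboard, None
```

The point of the gate is exactly the filtering argued for above: the same stroke is meaningful or inert depending on what you're doing, so stray gestures cost you nothing.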
Speech recognition is another possibility, but you have to have an option for when talking aloud to your PDA or phone isn't possible (e.g., during a meeting).