> The rest of the industry is sprinting ahead… but where?

What’s called “computer use” will be a big trend. Instead of operating apps directly, you will be telling the AI what you want to have done within the apps available. For example, “create an album named $name in Photos and add all pictures from yesterday’s hike to it”. Or “list all my MacRumors comments from the last five years relating to $topic, with links to each comment”. Or “initiate a return of the monitor stand I ordered last week on Amazon and send me a copy of the QR code by email”. The point is that apps won’t have to provide AI functionality or special hooks themselves, but that the AI will be capable of using apps (including web browsers) in the same way a human can, and you’ll be able to instruct it to perform tasks in that way.
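To make the idea concrete, here is a minimal sketch of what such an agent loop could look like. Everything in it is hypothetical (Screenshot, UIAction, AgentModel and run are placeholder names invented for illustration, not any real Apple or vendor API); the point is only the shape of the loop: look at the screen, ask the model for the next step, perform it, repeat.

```swift
import Foundation

// Hypothetical types: placeholders for whatever a real "computer use" stack would provide.
struct Screenshot { let pngData: Data }

enum UIAction {
    case click(x: Int, y: Int)
    case type(String)
    case done(summary: String)
}

protocol AgentModel {
    // The model sees the user's goal, the current screen and what it has done so far,
    // and proposes the next UI action.
    func nextAction(goal: String, screen: Screenshot, history: [UIAction]) async throws -> UIAction
}

// Drive an app the way a human would: look, act, repeat, until the model says it is done.
func run(goal: String,
         model: AgentModel,
         capture: () -> Screenshot,
         perform: (UIAction) -> Void,
         maxSteps: Int = 50) async throws -> String {
    var history: [UIAction] = []
    for _ in 0..<maxSteps {
        let action = try await model.nextAction(goal: goal, screen: capture(), history: history)
        if case .done(let summary) = action { return summary }
        perform(action)          // e.g. click the "New Album" button, type the name, ...
        history.append(action)
    }
    return "Gave up after \(maxSteps) steps"
}
```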
> So what you are saying is that I should scale back my watch party menu. Instead of prime rib and loaded baked potatoes I guess I will just go with miniature pizza bites and potato chips.

Eat a full lamb including the wool.
> What’s called “computer use” will be a big trend. Instead of operating apps directly, you will be telling the AI what you want to have done within the apps available. For example, “create an album named $name in Photos and add all pictures from yesterday’s hike to it”. Or “list all my MacRumors comments from the last five years relating to $topic, with links to each comment”. Or “initiate a return of the monitor stand I ordered last week on Amazon and send me a copy of the QR code by email”. The point is that apps won’t have to provide AI functionality or special hooks themselves, but that the AI will be capable of using apps (including web browsers) in the same way a human can, and you’ll be able to instruct it to perform tasks in that way.

App Intents, Apple calls this. AI should be a truth-seeking, knowledge-answering, task-doing set of software tools. And it should reward human creators of content and data wherever possible, rather than wholesale stealing their data and their jobs and then lying about it. For task-doing it should basically be accessibility on steroids: ask in everyday language and the system does it.
> According to you

Yes indeed. Expressing your own opinions is the general idea of a forum like this.
> For those of us not in the development scene, I’d like to ask: what does a rich text editor add to SwiftUI? Will it make things easier for developers? Will we, as customers and users, notice anything in the apps we use?

Developers building an app with rich-text editing functionality may be able to build it faster. Theoretically, if enough developers use it, it could also make rich-text editing more consistent across apps. The direct user benefit is minor overall. It’s more that SwiftUI is still a bit restricted in its feature set compared to what is possible outside of SwiftUI, and this is one point where they are improving it.
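For a concrete sense of what that could look like for developers: a rough sketch, assuming the rumoured support amounts to TextEditor accepting an AttributedString binding so that bold, italics, links and so on live in the text value itself. The exact API here is an assumption made for illustration, not a confirmed interface.

```swift
import SwiftUI

struct NotesEditor: View {
    // Styling (bold, links, etc.) is carried by the AttributedString itself,
    // so the app no longer needs to wrap UITextView/NSTextView by hand.
    @State private var text = AttributedString("Meeting notes")

    var body: some View {
        TextEditor(text: $text)   // assumed rich-text-capable initializer
            .padding()
    }
}
```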
> App Intents, Apple calls this.

No, App Intents are a separate interface that has to be provided by the app, and the app can decide which functions it is offering that way (including “no functions at all”, which is the default). “Computer use”, on the other hand, means that an AI can use apps without the apps having to provide anything for it. The AI will click buttons and look at the app UI just like a human user does.
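For reference, providing such a function through Apple’s AppIntents framework looks roughly like this; the intent and parameter names are made up for illustration, and the album-creation body is left as a stub.

```swift
import AppIntents

// The app explicitly declares what the system (Siri, Shortcuts, Apple Intelligence)
// may invoke. Anything the developer does not declare simply is not exposed.
struct CreateAlbumIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Album"

    @Parameter(title: "Album Name")
    var name: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // App-specific work would go here, e.g. creating the album in the app's own store.
        return .result(dialog: "Created the album \(name).")
    }
}
```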
> People hate change for the sake of change, fixing something that is not broken, like the difference between the Photos app from iOS 17 and older vs the one on iOS 18.

Skeuomorphic software makes the interface more tactile. It can go too far, like the time the Podcasts app turned into a reel-to-reel tape deck for no good reason, but in many cases it makes software more satisfying. Things like physics-based pull-to-refresh and appropriately textured, tastefully selected materials make software nicer to touch, swipe, look at and live with. Ive went for the ultra-minimal software take, but it largely just took away the tangibility and the satisfying elements of the software without gaining much.
I only started owning an iPhone in 2016, but I do prefer the iOS 6-era skeuomorphic look.
> Yes indeed. Expressing your own opinions is the general idea of a forum like this.

Your opinion?
> No, App Intents are a separate interface that has to be provided by the app, and the app can decide which functions it is offering that way (including “no functions at all”, which is the default). “Computer use”, on the other hand, means that an AI can use apps without the apps having to provide anything for it. The AI will click buttons and look at the app UI just like a human user does.

There are risks, so it would need to be done with a great deal of thought and security, but yes, this is an ideal use for voice instruction of computing devices big and small, which could be branded under the “AI” umbrella.
I quite like the idea of speaking questions and having the AI tool talk back to me, especially if it means I can ask a quick question without leaving my existing UI.

Obviously Gurman has really good sources, and he’s valuable for it, but his analyses are usually not great from my point of view.
“The company will be doubling down on the decades old touch-screen and point-and-click interface paradigms while the rest of the industry has moved on to AI.”
No one has moved away from the “old” paradigm. AI is very interesting, but the overwhelming majority of customers still use a visual UI. I think that’s because, even if LLMs are impressive, chatbots are not the best way to interact with a phone for the majority of tasks — traditional UIs are faster and provide a better feedback loop.
I’m much more interested in this year’s WWDC.
> There are risks, so it would need to be done with a great deal of thought and security, but yes, this is an ideal use for voice instruction of computing devices big and small, which could be branded under the “AI” umbrella.

Yes, an app permissions system for AI access would be needed, along with some way for the AI to use an app in the background so that the user can do other things in parallel.
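Nothing like this exists today, but as a purely hypothetical sketch of the permission idea: a per-app grant that the system records the first time an agent tries to drive an app, analogous to today’s privacy prompts. All names below are invented.

```swift
import Foundation

// Entirely hypothetical, invented for illustration; not a real framework.
enum AgentAccess {
    case denied
    case foregroundOnly    // the agent may act only while the app is visible
    case background        // the agent may drive the app while the user does something else
}

struct AgentPermissionStore {
    private var grants: [String: AgentAccess] = [:]   // keyed by bundle identifier

    // The system (never the agent itself) shows the prompt and records the answer.
    mutating func access(for bundleID: String,
                         askUser: (String) -> AgentAccess) -> AgentAccess {
        if let existing = grants[bundleID] { return existing }
        let decision = askUser(bundleID)
        grants[bundleID] = decision
        return decision
    }
}
```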
> What’s called “computer use” will be a big trend. Instead of operating apps directly, you will be telling the AI what you want to have done within the apps available. For example, “create an album named $name in Photos and add all pictures from yesterday’s hike to it”. Or “list all my MacRumors comments from the last five years relating to $topic, with links to each comment”. Or “initiate a return of the monitor stand I ordered last week on Amazon and send me a copy of the QR code by email”. The point is that apps won’t have to provide AI functionality or special hooks themselves, but that the AI will be capable of using apps (including web browsers) in the same way a human can, and you’ll be able to instruct it to perform tasks in that way.

The first one, I can see it, and it’s feasible. However, the problem (and this is the case with many other tasks) is that usually I want to add just some of them, avoid duplicated ones, exclude receipt photos, etc. And sure, you could potentially also do it with a prompt, but there’s a point where a visual UI is faster and provides better feedback.
> The first one, I can see it, and it’s feasible. However, the problem (and this is the case with many other tasks) is that usually I want to add just some of them, avoid duplicated ones, exclude receipt photos, etc. And sure, you could potentially also do it with a prompt, but there’s a point where a visual UI is faster and provides better feedback.

It’s not supposed to replace all human computer use, not even most of it, but just the tasks where you go through a sequence of predetermined steps that don’t really require your attention. It’s for use cases where you know what you want to have done in advance, and/or can make minor remaining adjustments after the fact (such as in the photo album example). It also enables tasks that otherwise would be just too tedious, as in the MacRumors comments example.
The second one sounds a bit more complicated (not even possible with that promised rabbit AI agent), but I’ll focus on the third one. The Amazon return process is relatively straightforward, but it has some steps that are better presented in a visual way. Like “do you want to add more items?”: it’s much easier if I see those previous purchases, together with some photos, and I can quickly select some checkboxes, instead of having to do it via a chatbot interface. Same with return options: I can quickly see the cost of each option on the left side, and the total amount on the right side. I can also see a map with drop-off locations, etc.
People already lost their damn minds over the Photos app looking different. Imagine a whole new design language.
No hardware does make it a bit more boring for many. I still have some outside hope for an Apple TV refresh.
> Yes, an app permissions system for AI access would be needed, along with some way for the AI to use an app in the background so that the user can do other things in parallel.

I think there’s still a place for Shortcuts, as they don’t require talking to your computer. A shortcut in the Dock or menu bar, a click away, saves the user from having to describe what they need.
The thing is, this approach is much more flexible and powerful than apps having to provide specific actions, as with iOS Shortcuts.
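For contrast, this is roughly what the Shortcuts-style approach requires of a developer today: each action and each invocation phrase is declared up front via the AppShortcuts API (reusing the illustrative CreateAlbumIntent from the earlier sketch), which is exactly the rigidity a general “computer use” agent would avoid.

```swift
import AppIntents

struct PhotoAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // Only the phrases listed here work with Siri/Shortcuts; anything the
        // developer did not anticipate is simply not available.
        AppShortcut(
            intent: CreateAlbumIntent(),
            phrases: ["Create an album in \(.applicationName)"],
            shortTitle: "Create Album",
            systemImageName: "photo.on.rectangle"
        )
    }
}
```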