The rest of the industry is sprinting ahead… but where?
What’s called “computer use” will be a big trend. Instead of operating apps directly, you will be telling the AI what you want to have done within the apps available. For example, “create an album named $name in Photos and add all pictures from yesterday’s hike to it”. Or “list all my MacRumors comments from the last five years relating to $topic, with links to each comment”. Or “initiate a return of the monitor stand I ordered last week on Amazon and send me a copy of the QR code by email”. The point is that apps won’t have to provide AI functionality or special hooks themselves, but that the AI will be capable of using apps (including web browsers) in the same way a human can, and you’ll be able to instruct it to perform tasks in that way.
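To make that concrete, here is a rough sketch of the loop a “computer use” agent runs. Everything here is hypothetical (UIObservation, UIAction, AgentModel and run are invented names, not part of any real SDK); the point is just the shape: observe the screen, let the model pick one human-style action, perform it, repeat:

```swift
import Foundation

// Hypothetical "computer use" agent loop. None of these types exist
// in any Apple SDK; the names are made up for illustration.

/// What the agent "sees": a screenshot plus the accessibility tree.
struct UIObservation {
    let screenshot: Data
    let accessibilityTree: String
}

/// The primitive actions a human could also perform.
enum UIAction {
    case click(x: Int, y: Int)
    case type(String)
    case scroll(dy: Int)
    case done(summary: String)
}

/// Stand-in for whatever model decides the next action.
protocol AgentModel {
    func nextAction(goal: String, observation: UIObservation) -> UIAction
}

/// Core loop: observe the UI, ask the model for one action,
/// perform it, and repeat until the model says it is finished.
func run(goal: String, model: AgentModel, maxSteps: Int = 50,
         observe: () -> UIObservation,
         perform: (UIAction) -> Void) {
    for _ in 0..<maxSteps {
        let action = model.nextAction(goal: goal, observation: observe())
        if case .done(let summary) = action {
            print("Finished: \(summary)")
            return
        }
        perform(action)
    }
    print("Gave up after \(maxSteps) steps")
}
```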
 
What’s called “computer use” will be a big trend. Instead of operating apps directly, you will be telling the AI what you want to have done within the apps available. […]
App Intents, Apple calls this. AI should be a truth-seeking, knowledge-answering, task-doing set of software tools. And it should reward the human creators of content and data wherever possible, rather than wholesale stealing their data and their jobs and lying about it. For task-doing, it should basically be accessibility on steroids: ask in everyday language and the system does it.
 
For those of us not in the development scene, I’d like to ask: what does a rich text editor add to SwiftUI? Will it make things easier for developers? Will we, as customers and users, notice anything about it in the apps we use?
Developers building an app with rich-text editing functionality may be able to build it faster. Theoretically, if enough developers use it, it could make rich-text editing more consistent across apps. Only minor user benefit overall. It’s more that SwiftUI is still a bit restricted in its feature set compared to what is possible in AppKit and UIKit, so this is one area where they are improving it.
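For the developers reading: a native rich-text editor would presumably mean TextEditor accepting an AttributedString binding instead of a plain String. A minimal sketch, assuming such an overload ships (pure speculation until WWDC):

```swift
import SwiftUI

// Hypothetical sketch: what a native SwiftUI rich-text editor might
// look like if Apple backs TextEditor with AttributedString.
// Today's shipping TextEditor only takes a plain String binding.
struct NotesView: View {
    @State private var text = AttributedString("Hello, rich text!")

    var body: some View {
        TextEditor(text: $text)   // assumed AttributedString overload
            .font(.body)
            .padding()
    }
}
```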
 
Most sane people are not looking forward to a visual "re-design" - change for the sake of change is always a bad idea.

I'd be fine with it if Cupertino let users keep the old interface. But when you shove it down everyone's throats, it better be phenomenal, because if it adds even an ounce more friction to the user experience, it should be scrapped.
 
App Intents, Apple calls this.
No, App Intents are a separate interface that has to be provided by the app, and the app can decide which functions it is offering that way (including “no functions at all”, which is the default). “Computer use”, on the other hand, means that an AI can use apps without the apps having to provide anything for it. The AI will click buttons and look at the app UI just like a human user does.
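To illustrate the difference: an App Intent only exists because a developer explicitly wrote one. A minimal example using the real AppIntents framework (the intent itself is a made-up example, and the actual Photos logic is omitted):

```swift
import AppIntents

// A hypothetical album-creation intent. The app has to declare this
// explicitly; without it, the system sees no App Intents at all.
struct CreateAlbumIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Album"

    @Parameter(title: "Album Name")
    var name: String

    func perform() async throws -> some IntentResult {
        // App-specific logic to create the album would go here.
        return .result()
    }
}
```

“Computer use” needs none of this: the agent operates the app’s visible UI directly, so unmodified apps are automatable too.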
 
People hate change for the sake of change, fixing something that isn’t broken: compare the Photos app in iOS 17 and earlier with the one in iOS 18.

I only started owning an iPhone in 2016, but I do prefer the iOS 6, skeuomorphic look.
Skeuomorphic software makes the interface more tactile. It can go too far, like the period when the Podcasts app became an old reel-to-reel tape for no good reason, but in many cases it makes software more satisfying. Things like physics-based pull-to-refresh and appropriately textured, tastefully selected materials make software nicer to touch, swipe, look at and live with. Ive went for the ultra-minimal software take, but it largely just took away the tangibility and the satisfying elements of the software without gaining much.
 
I'm not interested in artificial intelligence at all. As far as I'm concerned, it's much more interesting to modify the interfaces on the various platforms.
 
No, App Intents are a separate interface that has to be provided by the app, and the app can decide which functions it is offering that way (including “no functions at all”, which is the default). […]
There are risks, so it would need to be done with a great deal of thought and security, but yes, this is an ideal use for voice instruction on computing devices big and small, which could be branded under the “AI” umbrella.
 
Obviously Gurman has really good sources, and he’s valuable for that, but his analysis is usually not great from my point of view.

“The company will be doubling down on the decades old touch-screen and point-and-click interface paradigms while the rest of the industry has moved on to AI.”

No one has moved away from the “old” paradigm. AI is very interesting, but the overwhelming majority of customers still use a visual UI. I think that’s because, even if LLMs are impressive, chatbots are not the best way to interact with a phone for the majority of tasks — traditional UIs are faster and provide a better feedback loop.

I’m much more interested in this year’s WWDC.
I quite like the idea of speaking questions and having the AI tool talk back to me, especially if it means I can ask a quick question without leaving my existing UI.

Those Google Gemini ads look very impressive. I wish I could do that with Siri.
 
There are risks, so it would need to be done with a great deal of thought and security […]
Yes, an app permissions system for AI access would be needed. Also some way for the AI to use an app in the background, so that the user can do other stuff in parallel.

The thing is, this approach is much more flexible and powerful than apps having to provide specific actions, as they do for iOS Shortcuts.
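Hypothetically, the permission model might look something like this sketch. Nothing like this exists in any Apple SDK; AgentAccess and AgentPermissions are invented names, just to show the shape of per-app grants with a deny-by-default fallback:

```swift
import Foundation

// Hypothetical per-app permission model for AI agent access.
enum AgentAccess {
    case denied        // agent may not touch the app
    case readOnly      // agent may observe but not act
    case foreground    // agent may act while the user watches
    case background    // agent may act in a background session
}

struct AgentPermissions {
    private var grants: [String: AgentAccess] = [:]   // keyed by bundle ID

    mutating func grant(_ access: AgentAccess, to bundleID: String) {
        grants[bundleID] = access
    }

    func access(for bundleID: String) -> AgentAccess {
        grants[bundleID] ?? .denied   // default: no access at all
    }
}
```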
 
What’s called “computer use” will be a big trend. […] For example, “create an album named $name in Photos and add all pictures from yesterday’s hike to it”. Or “list all my MacRumors comments from the last five years relating to $topic, with links to each comment”. Or “initiate a return of the monitor stand I ordered last week on Amazon and send me a copy of the QR code by email”. […]
The first one, I can see it, and it’s feasible. However, the problem (and this is the case with many other tasks) is that usually I want to add just some of them, avoid duplicated ones, exclude receipt photos, etc. And sure, you could potentially also do it with a prompt, but there’s a point where a visual UI is faster and provides better feedback.

The second one sounds a bit more complicated (not even possible with that promised rabbit AI agent :)), but I’ll focus on the third one. The Amazon return process is relatively straightforward, but it has some steps that are better presented in a visual way. Like “do you want to add more items?”: it’s much easier if I see those previous purchases, together with some photos, and I can quickly select some checkboxes, instead of having to do it via a chatbot interface. Same with return options: I can quickly see the cost of each option on the left side, and the total amount on the right side. I can also see a map with drop-off locations, etc.

So my point is that, even if everything worked well all the time when it comes to processing intents (which is a big IF), the complexity of those processes is usually better handled with a visual UI than with a chatbot.
 
The first one, I can see it, and it’s feasible. However, the problem (and this is the case with many other tasks) is that usually I want to add just some of them, avoid duplicated ones, exclude receipt photos, etc. […]
It’s not supposed to replace all human computer use, not even most of it, but just the tasks where you go through a sequence of predetermined steps that don’t really require your attention. It’s for use cases where you know what you want to have done in advance, and/or can make minor remaining adjustments after the fact (such as in the photo album example). It also enables tasks that otherwise would be just too tedious, as in the MacRumors comments example.

It’s also a replacement for stuff of the kind that Siri is supposed to be able to do, like for example anything that can be done in Settings, except that no one has to program those specific actions as Siri actions. Instead it’s all available automatically just by virtue of existing in the UI. Users aren’t limited anymore to the use cases that the Siri designers have come up with. Instead, everything available to the user for interactive use is also automatically available for automation via AI.
 
And here I thought Apple wouldn’t show any concepts, designs or ideas this year (I call those product lies), and that it had learned and would be showing real products again.
 
The power of marketing, and of people buying into it: when ML was the state of the marketing art (and it reflected reality much better), Apple was at the top of the heap. But that wasn’t “sexy” enough, so the industry remapped it all onto “AI”, and suddenly Apple is behind.
 
No hardware does make it a bit more boring for many. I still have some outside hope for an Apple TV refresh.

This is my number 1 want. That, and maybe some significant improvements to Apple Music. The Apple TV hasn’t been updated for years, and I think they need to bring some new things to it. I like the idea of separating out the Apple TV+ app but still having an app act as a hub for your What’s Next queue, and maybe an announcement that Netflix is finally on board.
 
[…] The thing is, this approach is much more flexible and powerful than apps having to provide specific actions, as they do for iOS Shortcuts.
I think there’s still a place for Shortcuts, as they don’t require talking to your computer. A shortcut in the Dock or menu bar is only a click away and saves the user from having to describe what they need.
 