
MacRumors

macrumors bot
Original poster


Apple has shared its first "real-world example" of Image Playground, the upcoming Apple Intelligence feature that generates cartoon-like illustrations based on a text prompt. The picture was apparently made by Apple's senior VP of software engineering Craig Federighi for his wife, in honor of his dog Bailey's recent birthday.

[Image: image-playground-bailey.jpg]

The picture shows a cute dog wearing a party hat and smiling behind a birthday cake. Apple shared the picture with Wired, which as a matter of policy adds a watermark to all AI-generated images that appear in its publication. Pictures made with Image Playground also include EXIF metadata indicating they were generated with AI, similar to images edited with Apple's new Clean Up tool in the Photos app in iOS 18.1.
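For readers who want to poke at that metadata themselves, below is a minimal sketch that dumps whatever EXIF tags a file carries and flags values that look like AI-generation markers. It uses the Pillow library; the article does not specify which tag or value Apple actually writes, so the keyword check and the filename are assumptions, not Apple's documented schema.

```python
# Minimal sketch: list an image's EXIF tags and flag anything that looks
# like an AI-generation marker. Requires Pillow (pip install pillow).
# NOTE: the exact tag Image Playground writes is not specified in the
# article; the keyword check below is an assumption for illustration.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        text = str(value)
        marker = ""
        if any(k in text.lower() for k in ("generat", "synthetic")):
            marker = "  <-- possible AI marker"
        print(f"{name}: {text}{marker}")

dump_exif("image-playground-bailey.jpg")  # hypothetical local filename
```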

Apple describes Image Playground as a "fun" feature that can produce original images in seconds right within apps. Users can create an entirely new image based on a description, suggested concepts, or a person from their Photos library. From there, users can adjust the style and make changes to match a Messages thread, a Freeform board, or a slide in Keynote, for example.

According to Bloomberg's Mark Gurman, Apple plans to make the Image Playground feature for generating images and the Genmoji feature for generating custom emoji available in iOS 18.2, which will likely be released in December.
Apple plans to introduce the first Apple Intelligence features with the release of iOS 18.1, which is expected to launch in October. Among the new capabilities are writing tools for generating and summarizing text, as well as a feature that provides concise summaries of notifications. The Messages app will gain the ability to suggest replies, while a new function will allow users to record and transcribe phone calls. Lastly, the Photos app will benefit from the aforementioned "Clean Up" tool, which is designed to swiftly remove unwanted objects from images.

Article Link: Apple Shares First Example of Image Playground in Action, and It's Based on Craig Federighi's Dog
 
That’s what they said about emojis. People hating on this need to get a life. If you don’t like it, don’t use it. But don’t hate on it. Unless, of course, that’s why you are here. Feel free to thumbs-down this so we know who you are.

That emojis are for old people is what I learned, as a 36-year-old, from the next generation.
 
Damn, you guys have no idea how technology works, do you? There is a lot going on here.

1. AI-generated images can be much more realistic, but Apple seems to have chosen this approach to find a middle ground between looking good and being obviously generative AI. That's one of the biggest issues with genAI these days: where do you draw the line? You can see images on Twitter created with Flux.ai, and some of them look too realistic. Very dangerous, for obvious reasons.
2. We are still in the early stages of genAI. What, it's been like 2-3 years since SD 1.5 came out? Things are developing rapidly. Video generation is already good.

Soon, you'll be able to ask a question and AI will respond with a video, using proper clips to explain exactly what's happening. So don't be so dismissive of early-stage iterations.
 
That’s what they said about emojis. People hating on this need to get a life. If you don’t like it, don’t use it. But don’t hate on it. Unless, of course, that’s why you are here. Feel free to thumbs-down this so we know who you are.
Why should anyone "get a life"? This is genuinely unimpressive, especially when you look at the large version of the image in the article. The candles are a mess.
 
Why aren’t they developing the iPhone to be the most powerful pocket computer you can purchase at an affordable price, something you can truly hook up to a computer monitor and use as a makeshift laptop?
Or develop the hell out of some shatter-proof OLED screens?
Or even try to move into the hologram era or something?
Where is the true innovation?
 
Damn, you guys have no idea how technology works, do you? There is a lot going on here.

1. AI-generated images can be much more realistic, but Apple seems to have chosen this approach to find a middle ground between looking good and being obviously generative AI. That's one of the biggest issues with genAI these days: where do you draw the line? You can see images on Twitter created with Flux.ai, and some of them look too realistic. Very dangerous, for obvious reasons.
2. We are still in the early stages of genAI. What, it's been like 2-3 years since SD 1.5 came out? Things are developing rapidly. Video generation is already good.

Soon, you'll be able to ask a question and AI will respond with a video, using proper clips to explain exactly what's happening. So don't be so dismissive of early-stage iterations.

I'm very familiar with this technology. The issue isn't so much that the image is cartoonish in style. It's that the image lacks fine detail, almost as if it needed a higher step count, and the candles are an indistinguishable mishmash. It also looks like they used a "bokeh" term in the prompt, which is a cheap way to avoid having to generate a finely detailed background. (See the sketch below for what those knobs correspond to in open pipelines.)
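To make that jargon concrete: in open diffusion pipelines, "step count" and prompt wording are explicit parameters. The sketch below uses Stable Diffusion 1.5 via the Hugging Face diffusers library, which is unrelated to Apple's model and is shown only to illustrate what those knobs control; the prompt text and values are made up.

```python
# Illustrative only: Apple's Image Playground pipeline is not public.
# This uses Stable Diffusion 1.5 via Hugging Face diffusers to show what
# "step count" (num_inference_steps) and prompt terms like "bokeh" control.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("cartoon illustration of a smiling dog in a party hat "
          "behind a birthday cake, bokeh background")  # hypothetical prompt

# More steps generally yields finer detail at the cost of generation time;
# the "bokeh" term pushes the model toward a blurred, low-detail background.
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("bailey_birthday.png")
```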
 