
MacRumors

macrumors bot
Original poster


At its annual MAX conference, Adobe today provided insight into its plans for generative AI technology in Photoshop, Illustrator, and other popular design apps.

adobe-generative-fill-photoshop.jpg

Adobe has iterated on Firefly, its AI image generation model, with the Firefly Image 2 Model. Improved Text to Image capabilities can be used on the Firefly web app, and a Generative Match option lets users generate content in user-specified styles. Photo Settings allows for photography-style image adjustments, and Prompt Guidance helps users refine their prompts for better end results.

As for quality, Firefly Image 2 renders better skin, hair, eyes, hands, and body structure, plus it offers better colors and improved dynamic range. The generative AI model powers Generative Fill in Adobe Photoshop, allowing users to add, expand, and remove content with text prompts.

There's now a Firefly Vector Model, which Adobe says is the world's first generative AI model for creating vector graphics. Integrated into Adobe Illustrator, Firefly Vector Model can be used to create all manner of vector graphics, including logos, website designs, and product packaging.

According to Adobe, the Firefly Vector Model can create "human quality" vector and pattern outputs with a Text to Vector graphic feature. Generated graphics are organized into groups and fully editable, with options to match an artboard's existing style.

Adobe Express also has a new Firefly Design Model for generating templates for social media posts and marketing assets. The Firefly Design Model can generate templates in popular aspect ratios that are editable in Express.

More information about Adobe's new AI features can be found on the Adobe website.

Article Link: Adobe Highlights New Generative AI Features for Photoshop and More
 
This is at least as big a revolution in graphic arts as we saw with the dawn of desktop publishing.

In the ’70s, a graphic designer mostly used pen and ink and scissors and tape, with photographic techniques to create halftone images in the more sophisticated jobs. Unless you were a really super big company, you had zero creative involvement in preparing mass printed materials that required equipment more sophisticated than a typewriter and a mimeograph machine. Instead, you hired a company to do it for you, and explained what you wanted to their graphic designer.

Then, with the advent of the Mac and the LaserWriter, you could create camera-ready artwork yourself. Get your company’s logo digitized a single time, buy a computer and printer, and your secretary could suddenly be your graphic designer. (Granted, secretaries typically had zero training in graphic design and their work was usually correspondingly amateurish … but companies didn’t care.)

Now you don’t even need to learn the mechanics of Photoshop and Illustrator. And it’s not just simple black-and-white text-heavy graphic design … it’s full-color imaginative art at least good enough for pulp fiction covers of the ’70s.

Are the results as good as a really good human illustrator and designer? Mostly not — but, just as companies in the ’80s didn’t care that their secretaries weren’t the best graphic designers in the world, they’re not going to care that they can still get marginally better quality at the expense of the time and money to hire a dedicated professional.

And, just as even the best grandmasters can’t begin to understand the best computer chess players, it won’t be all that long before the computers are unquestionably better commercial artists than humans, no matter how you slice it.

… and it’s not just graphic arts, of course. Ultimately, all professionals will find their jobs similarly at risk.

I don’t know how this all ends, but it certainly doesn’t end with humans going to the office five days a week to earn a salary.

b&
 
I'll just ask a random stranger wearing a backpack if I can take a picture of them walking in the street. I already have a phone in my pocket, and it's gonna look really, really real!
 
I'm starting to see that generative AI imaging will evolve over time sort of like HDR photography did. First it will be painfully obvious and full of artifacts, then it'll get good enough for mass adoption. And eventually it will be mastered to the point of ubiquity and everyday use.
 
Having demoed the beta of this, the most compelling usage is adding or subtracting items, modifying backgrounds, or expanding the canvas. You can highlight an area and ask it to “add flowers” or highlight the field around a subject and ask it “create a grassy field with purple flowers”. It provides multiple results so you don’t have to rerun it 10 times. It also does a very respectable job of common tasks like clearing facial blemishes without making the subject look “photoshopped”.

I consider the quality so good that old school skills will be a thing of the past for most users. The computer is doing all the hard work…
 
Yea, plus one for the naysayers here. It's cool and all, but as someone who charges hourly for Photoshop work and has already lost clients to AI, this is all kind of a downer to me. It's also very hollow and without soul, but I know I sound old.
Yeah, there's no doubt jobs are going to get wiped out because of AI. Stuff like this has also happened in the past whenever new tech came out (letterpress printing, etc.). Going to have to adapt and combine different fields before it gets too late. It also doesn't help that many execs just want it done and don't focus much on quality.
 
Tried this with the image Adobe puts in front of you to try it out. It looked like absolute crap, even at low resolution. Zoom in at all on a “person” it generates and it’s just a horror movie.
 
This is at least as big a revolution in graphic arts as we saw with the dawn of desktop publishing. …

This is all the same nonsense we heard in the late 90s when they said CGI was going to replace all actors.

We even got a satirical movie about it, S1M0NE.

What happened? Nothing. CGI became better and better but didn't replace actors. It was only used for VFX.

Stop getting baited by bad journalism and tech bros.

They will always present glossy demos, but they won't show you all the demonstrations that look terrible, which is most of them.

They won't tell you how much electricity and water machine learning needs at scale, because that's really bad PR, and it also tells you why they won't be able to deliver at scale.

What we will have at scale, and are already seeing, is lots of bad "art" and lots of AI spam. That won't change. Social media companies need that spam because they want any kind of content.
 
I'll just ask a random stranger wearing a backpack if I can take a picture of them walking in the street. I already have a phone in my pocket, and it's gonna look really, really real!
Yeah, but what if that stranger is not a young Japanese female…

A great many generative image programs are used to generate countless images of highly sexualized Japanese women, often in some anime style.
 
I feel like we should always talk about Adobe as we would Cosby or some other popular person or thing that turned evil. I say this as someone who spent $4,600+ on Adobe products over the years, between Adobe Suite purchases for Mac and Windows and updates/upgrades, only to be forced into paying monthly and into the heavy Cloud option without any say or choice. Same as with GoodNotes, I would never trust them again.
 
I usually have to go through 10 options before getting anything remotely usable, if ever, and that's just for my web ads. Anything I design for large-format print simply cannot come from generative AI; it's extremely low resolution. If I try to add extra grass to an outdoor scene from my 47MP camera, it's gonna look like Nintendo graphics next to it.
 
I believe that if these AI tools become more sophisticated, they will indeed bring convenience to people. However, at present, they cannot entirely replace the need for designers to further process and adjust the pictures. But what if this technology continues to advance? I might be overly concerned, but I still think more people could lose their jobs or see their income reduced.
 
Yea, plus one for the naysayers here. It's cool and all, but as someone who charges hourly for Photoshop work and has already lost clients to AI, this is all kind of a downer to me. It's also very hollow and without soul, but I know I sound old.
The only surprise AI has hit me with is how fast I've gone from wowed to bored with the images it produces. Like low-budget cartoons, they have no character and convey no artist's personality whatsoever. Bleh.
 
Having demoed the beta of this, the most compelling usage is adding or subtracting items, modifying backgrounds, or expanding the canvas. You can highlight an area and ask it to “add flowers” or highlight the field around a subject and ask it “create a grassy field with purple flowers”. It provides multiple results so you don’t have to rerun it 10 times. It also does a very respectable job of common tasks like clearing facial blemishes without making the subject look “photoshopped”.

I consider the quality so good that old school skills will be a thing of the past for most users. The computer is doing all the hard work…
For me, "doing all the hard work", yes; creative work, no …
 
Photoshop is eventually going to lose the dozens of buttons and panels and there will just be a simple input box, maybe a preset panel with a few sliders and a place to save favorites.

It’s funny, one of the reasons I switched to a design degree in 2009 was that I thought it would be difficult for AI to replace an artist. Turns out it’s one of the easiest things for it to replace. Glad I pivoted to development since then, but even those days are limited. I know because AI has already made my job a lot easier.

My next move is to pivot to being the architect at the company in charge of AI deployments. That is not a role that will be replaced anytime soon. Companies will ultimately want a human in charge of the AI—at least until I can reach my early retirement plan.

I’ve been saying this for years, though: What is coming is a bigger shift than the Industrial Revolution and the Internet Revolution combined. This will fundamentally change work forever and we will have to build entirely new systems of laws and regulations to tackle this. Put aside the tropes of rogue AI destroying the world. Even under complete control, AI is insanely powerful. It will eat our economy if left unchecked. Greedy capitalists only interested in cashing out as quickly as possible will use this to destroy everything in their path and I mean that wholeheartedly without restraint.

Even if we do manage to get UBI passed, people with no purpose in life will eventually become depressed and commit suicide. We all need something to drive us. Some of us will be okay. I will be okay. I have many passions in life, including the art of photography, painting, and woodworking. Many will not be so lucky. There are many tough problems to solve in a very short period of time, maybe even shorter than we think once AI starts accelerating its own development. I don't see our ability to cope keeping pace with the rate of change, especially amongst more conservatively minded people. This will lead to great conflict, for we are but flesh and blood compared to this god of a new age.
 


At its annual MAX conference, Adobe today provided insight into its plans for generative AI technology in Photoshop, Illustrator, and other popular design apps. …
Why do I think that AI does not stand for Artistic Innovation? … Abysmal Invention, maybe?
 