What’s interesting about DALL-E’s integration is that I believe ChatGPT creates the actual prompt, so what I like to do is set the stage for ChatGPT and give it a brief historical context of the aesthetic I’m going for.
So for this I went into detail about Apple’s design aesthetic, the products the image should be influenced by, and the typefaces Apple typically uses. Then when it comes up with two responses, I’ll either tell it to focus on one or the other and explain what I specifically like about the design choices it made, or I’ll click the little button underneath to have it regenerate two more. Typography is always a challenge, but it’s getting better.
But yeah, it can be quite a fight, and if you go a few prompts deep you can lose control. I think it helps that I have a background in graphic design, photography, and art history, and have provided art direction to a team of designers over the years. So I know how to talk about this stuff, and I’ve noticed that it responds well to this approach, since it’s similar to what it has absorbed through its training data. Having ChatGPT convert all of this into DALL-E-optimized prompts helps a TON and is a very interesting development.
But there is still a ton of work to do. I would never turn in this image as final work, but it can really help knock out a concept quickly and get everyone on the same page. I imagine this will also be amazing for creating video storyboards on the fly, which are more about having a ton of concepts and visualizing framing, planning out lenses, and other such gear.