[Music] [Mark Heaps] So good morning, everybody. Thank you for joining us so early this morning.
Truly, troopers, I appreciate you guys.
Okay, so this session is called Work Happy, Not Harder. This is actually a theme that I've done with Adobe now. We were doing the math last night. I think we're pretty sure this is my 16th year speaking at MAX. So it's been a while.
I appreciate that. Just a quick question. Have any of you ever been to any of my sessions before? Okay. A few people. Cool. Cool. We rarely do this on the East Coast, so it's nice to see so many new people coming to sessions. Here's the nuts and bolts of why I have this theme.
I hate getting stressed out, especially doing production work. And so you've heard the old term work smarter, not harder. And I think smarter doesn't necessarily mean you're going to be less stressed, right? So I think of working happy as how do I find efficiencies? How do I find ways to automate things? How do I find ways to do things in my workflow that don't freak me out? And even better if you can let the computer do the majority of the work. My goal is always, and I have a member of my team here today, Amanda, who hears me say it all the time: hey, 10-80-10. I want you to work the first 10%. I want the rest of it to be 80% done by systems, automation, templating, whatever you can. And then the last 10% should be the polish that you put into it, right? It's hard to get to that goal, but that is the goal. So we're going to talk today a lot about just some techniques to be more efficient. A lot of it now, because of AI, will get into prompting techniques. But if you're really into production and you're curious about this, Adobe has put almost all of my Work Happy, Not Harder series on the Adobe MAX YouTube channel. I have playlists on there for Illustrator, InDesign, Acrobat, Photoshop, you name it. So feel free to check that out.
I guess I didn't say. I'm also the Chief Tech Evangelist and VP of brand at an AI company called Groq, not Elon's Grok. We've been around a lot longer than that.
And I'll talk a little bit more about that later. But for the last four years or so, I've been really deep in the AI world, where we build compute for AI and AI applications at its core level. Before that, I was also at Google, where I worked on the Google Book Scanning Project and the AI systems for Google Voice. And then I also worked at Apple before that in the Lava Lounge, which is the design group production house at Apple. So I've been doing this a really, really long time, probably about 30 years now.
I do get very anxious on stage. I don't know if you've had the experience of having a lot of people stare at you while you stand on stage. So one of my techniques to help me get over that, if you give me the opportunity, I just like to talk about my family very quickly. It helps relax me, especially as a dad. So I'll just introduce them real quick. On the top left is my son and daughter. That is my now 13-year-old, 6 foot 1 son. So I'm going bankrupt trying to keep him fed at this point. And then that is his sister, Fiona, next to him. They are my everything. Fiona's amazing. As we speak literally this exact moment, she's taking off on a flight over to Holland to go compete at AAU level for Texas State in Taekwondo. And so she has five fights tomorrow. And then if she wins three of those, she goes on to Romania to fight again. So she is my little badass, and I can attest to how strong she is. I have held boards for her. She's broken the boards, gone all the way through them, and cracked three of my ribs. So I don't say no to her much when she's like, "Can I try driving the truck?" I'm like, "Whatever. Just take it. Please don't hurt me." And that picture below, she's in blue. That's her landing the back of her heel to another opponent's head in a tournament. So she's amazing. And then lastly, my two dogs, my wife up there as well. That's Willow, our Goldendoodle, doesn't realize that she's 76 pounds. And then my dog down below, we call him a liar. That's Dax. And Dax we were told was going to be a corgi dachshund. Turns out during the pandemic, we got bored and decided to have one of those DNA breed tests done. The one breed of dog I said I would never own is a Chihuahua. Turns out he is a Chihuahua beagle. So which is why he's looking like that at you right now. So thank you for that. It helps me relax.
I did just come in from another event where I was speaking on AI. This was an event called World Summit AI in Amsterdam, which was pretty crazy. They had me on stage talking about what's happening with some of the models and processing for those things. So if anybody's interested in AI conversations, there's a lot of videos of me talking about that as well online. But they didn't tell me when they said, "Hey, can you come do the Keynote for this?" that they had 54,000 people registered for the event. So, try telling people about your Taekwondo badass daughter when English is not the native language and you've got tens of thousands of people there. It's pretty wild.
All right. So let's dive in. My first love always has been, since the '80s, Adobe products. So let's get into Photoshop and some other applications. If we have time at the end, I'll show you some of the AI stuff that we're also working on separately, but I want to make sure that you guys are getting what you need out of Photoshop. So the first thing that I will tell you guys: I do have a handout for this session. I'll give you guys the URL for that at the end so that you can download the zip file with everything. And then they do record these sessions as well, so never fear, you're going to get whatever it is you need. I will also say that this is considered an intermediate to advanced session. So if you hear me say something like blend mode and you're like, "What's a blend mode?" seek me out afterwards and I'll happily fill in the blanks for you, okay? But we are going to go at it. All right. So the first thing that I want to mention is prompting. There's been a lot of confusion about prompting over the last several years as we're all ramping up in this world for AI apps. Adobe finally actually published a guide. It's a prompt structure guide for Firefly. And the first thing I'll tell you is if you're new to this...
With prompting, there is no one size fits all. Because each model is trained a certain way and it's been guided a certain way, there are prompt standards that differ for each model. So if you're jumping online being like, I downloaded the prompt guide for DALL-E or Stable Diffusion or MidJourney or whatever it is, they will work, but you're not going to get the same thing out of each of them, okay? It's good to learn prompt engineering, learn the best practices, but they're not going to be exactly consistent between each other. This is really helpful. So they just published this. This is in the zip download that you guys will get at the end, okay? But what I really want to talk about just briefly is what we're seeing here in the breakdown of the prompt guidance. So a couple of key things that you should know about prompting, and I don't have-- I don't think I have a laser. Maybe I have a laser? Oh, I do have a laser. Laser. Okay. So first thing that you should know is we've been told for a long time, use a minimum of three words, but don't go too far with your prompts. If they're too long, the model can get confused, you can have competing language. Well, this is actually a really good example of how you should think about prompting. It's about the right scale for Firefly. So a couple of key things. Do you notice that they color code the prompts starting in red and going to green? The reason for this is prompts actually have weight emphasis. So whatever you put early in a prompt is going to get more emphasis in what's generated than something that's later. So I see some people writing prompts, because it's natural language processing, thinking they can just write like a story, right, or a normal sentence. Now, to recognize how important this is: I grew up as a child in Louisiana and Mississippi, where everybody I knew was talking like, "this man, we're going to school, everybody doing my thing, man." And I have British parents, which meant when I came home, we were having tea, very formally sitting down with each other. So I live somewhere in between. Depending on how you structure your sentence changes the emphasis of the prompt, right? So that's why it's red at the start. Early on, more emphasis. And that's why they start with photo of a man.
That tells the model exactly what you should get first. Now the next key thing, and this was sort of revealing to me: you'll notice that, while it's not grammatically correct, they use semicolons to break out sections. And then there are commas for core details between those breakout components, right? That actually shifts it into something like an ordered list of how the prompt interprets the information. So that is the formal way, and it's the first time I've seen Adobe say this: how they divide the sections of a prompt is using semicolons. You can use other techniques, but this is what they're suggesting. Now in this doc, they give a ton of examples about this, but we're going to break it down. What I also like to do with my own work is build prompt templates, okay? So I'm going to show you something that I learned from a voice AI company that's working on NPCs with video game characters and how they actually build out their templates.
So let me get to that. Okay.
And this is another way that you can build a prompt guide. Now I'm using colons and then I'm ending my line with semicolons. But what you're seeing here is I'm breaking out a template for when I want to generate an image. So you can see that I've got the scene in here, modern restaurant, what's required in the scene, table counter surface, type of photography, product photography, hyper realistic, so on and so forth.
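To sketch that out, the template reads something like this. The first three lines are straight from what's on screen; the last two are illustrative placeholders I'm adding in the same pattern:

```
scene: modern restaurant;
required in scene: table, counter surface;
type of photography: product photography, hyper realistic;
lighting: warm ambient light, soft reflections;
camera angle: eye level, shallow depth of field;
```

Each line is a blank you fill in, and the whole block pastes into the prompt field as one prompt.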
I keep a ton of templates like this so that when I'm doing some work that I'm generating, it falls into a certain category. I just fill in the blanks like it's a Mad Lib, right? Now people don't talk about doing it this way with the line breaks because the interface in the Adobe tools is always like a Google search bar. So we think, like, "Oh, we should just do this really short little thing." You don't have to, you can copy and paste this into that area. So I'm going to do that now.
Let's copy that, and we'll go over to Photoshop.
Actually, we'll open an image first.
Okay, so for example, let's take two of these guys.
And we're opening. We're opening. Excellent. Okay. So here are some simple product photographs, right? So with that template that I have as my prompt, I can go in here now and I'm just going to go ahead and quickly remove background. This was a nice thing that they just added to the Photoshop beta. So we've had Remove Background for a while, that's not new. But as soon as you Remove Background now, there's a new button that says Generate Background. You guys can see that? And this is basically like Generative Fill, but historically, the technique we've been taught for the last year or so is, oh, you remove background, you select everything, you contract that, you invert the selection, now you use Generative Fill to fill something behind it. It does all that without you needing to do those steps when you choose this Generate Background button. And there's our prompt entry field. So what I'm going to do here is actually paste in that text. Now we can't see all of it because it's broken out across multiple lines, but it is all there. And so now I hit Generate, and we pray to the AI overlords that we get something that's not nightmare fuel. I, for one, welcome my robot AI overlords.
Okay, so now think about the prompt that you just saw. The requirements are all there, right? We've got the restaurant, we've got the counter, we've got the reflective surface, we've got the lighting, we've got some of the texture in there in each of these.
And no matter what, and I could have written camera angle in there too, it will understand all of that. I can do that again and again...
With any of these. Now the cool thing is once I have that template, I could just change some of those key terms in the template. So if you're an art director or senior production artist and you're working with your team and you're like, "Man, we've got this efficiency where AI will generate things, but we're losing efficiency because we're constantly battling consistency," right? And that seems to be the real world I observe with a lot of designers and production artists now. Prompt templates will help with that. So now, in my Properties, you can see there's actually the prompt template that came in. And again, I would argue that those templates are giving me pretty consistent results between the two.
But I could go in here and modify this. So for scene, instead of modern restaurant, I'll say sporting event...
And we just change that-- And we wait like it's a filter in Photoshop in 1999.
There you go. So I still got the counter. I still got the lighting. I still got the reflection, but it changed the environment for me, right? So that's the goal here is to figure out how do we use these templates in a way. Now I have had some weird things like yesterday when I did the demo, it was a tennis court, but then there was like a football being thrown. And so I was just like, all right, you don't understand sports, but this is pretty good.
And you can try other things like complementary colors, analogous colors, use your design language that you hated studying in school, right? Yeah.
So the question was, if you want to put more emphasis on the style of photography: the earlier you put it in the prompt, the more you're going to get. Just remember there's a tax here. So if you said photo of man, but you put the style of photography before that, it might not give you that subject as the focal point, the photo of the man. It might be a photo of a scene in that style, and a man will be in there somewhere, right? So that's the balance you've got to battle with.
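To make that emphasis tax concrete, here's a quick illustrative pair; the wording is mine, not from Adobe's guide:

```
photo of a man; 35mm film photography, grainy, high contrast;
35mm film photography, grainy, high contrast; photo of a man;
```

Same words, but the first version leans toward the man as the focal point, while the second leans toward the film look, with a man somewhere in the scene.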
All right. Is this making sense to everybody? - Yeah. - Helpful? - Yeah. - Okay. Cool. Cool. I know it's early. Try to get you guys into some of the good stuff. Okay. So let's look at the next thing.
So again, you'll get that template, you'll also get the PDF from Adobe about this. I'm not going to make you hunt for it on the website because it's a nightmare to find. Okay, so let's talk about expanding, contracting, and some of these techniques, inside of Photoshop using Generative Fill. Okay, so one of the things that I see pretty consistently is when people are doing demos...
With these Generative Fill techniques, they're showing how to widen the canvas, right? And then like, oh, I want to make it wider so I can push the person onto the thirds, and that's great. We do that a lot. But I want to challenge more on what happens if you need to contract. So, for example, one of the things that our team has to do a lot, we have a marketing manager who says, "I need a social media graphic and I need it in an hour." And if you're going from this beautiful wide shot and you want to go to a square for Instagram, it's kind of a pain in the butt. Now we could crop it to a square, that's fine. But this image, I would say that the attractive part of it is obviously the subject, but also this beautiful sunset on the horizon, right? I want to put those together and they're a little out of the square aspect ratio or they'll be tight. So in this instance, it actually works really well to just go in here and make your selection around this...
And I'll say that's the one that I want to keep.
And I know this is going to hurt some of the old schoolers in here. I identify as an even older schooler, but we're going to do destructive editing. And after Adobe spent two decades ramming nondestructive, nondestructive, nondestructive down our throats, now we're like, "We've got AI. Destroy everything. It's fine." So what I'm going to do here, and you could put this on a new layer, I'll do that because I know it would offend some people if I didn't, but ultimately it doesn't matter. You can just move this over and bring it closer to the subject. And now I know that I'm getting into that square aspect ratio to be able to fit that in, right? But what do I do about this mess? Yeah. We would have previously done clone stamping or some kind of repair technology, right? The healing brush, etcetera. But now, what you can do is, let's start with the first technique that I like to use a lot. And it's a selection technique that I don't see a lot of people using, and they haven't for a while. It's been in Photoshop forever. But if you take a selection, an active selection of that layer, right, so just Command click on the layer, that selects all active pixels. I'm going to go Select, Modify, Border, okay? Now if you haven't used Border before, whatever number you put in here, it's going to take that number and go half that number outward and half that number inward. It splits on the line and makes a border, right? So let's just try like 30, okay? Okay. So it went 15 out, 15 in. Not quite big enough for me. I'm going to go a little bit more than that.
Let's go Select, Modify, Border again and let's use, let's say, 60. Let's see what that does. Okay. That's pretty good. Now, to repair seams like this, and we'll do this again later, we go to Generative Fill and we actually don't enter anything. And I see people trying to type prompts like "repair this." It's not actually intelligent, guys. That's why it's called artificial intelligence. It's faking it till it makes it. It is using probabilities, it's built on tokenizing; there's a whole bunch of stuff I could talk about there. But, ultimately, Adobe has done a really good thing where if you leave it blank and just hit Generate, it will actually interpret the entire frame with an emphasis on where you want to repair. It actually does treat it like that point about what goes early in the prompt; the tokenizing of the selected area effectively becomes the start of the prompt. But you can see there it's done a little bit of a blend for us, right? So I can go in here, click through these. Now, we know where the cut was, but someone's scrolling on Instagram, probably not going to notice. Yeah?
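If you ever want to script that Border step, here's a minimal ExtendScript sketch. It assumes a selection is already active (the Command-click on the layer), and Generative Fill itself isn't exposed to classic ExtendScript, so that part stays manual:

```javascript
// Sketch: the Select > Modify > Border step from this demo.
// Assumes a selection is already active (Command-click the layer
// thumbnail first). Generative Fill is not scriptable from the
// classic DOM, so run that part by hand afterwards.
var doc = app.activeDocument;
// Border splits its width across the selection edge: 60 px total
// becomes roughly 30 px outward and 30 px inward.
doc.selection.selectBorder(new UnitValue(60, "px"));
```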
But if you don't want to use that technique, which is fine, I won't be offended. You also have the ability to just go in now with your selection tools, grab the selection brush and just paint over this. And this is kind of nice because you can get some variance in the mask. That's really the only advantage here.
If my mouse will catch up with me.
And it'll do the render for you. I will tell you this though, do you see how I got really close to the cuff there? Don't do that. Either go all the way in or stay away. That's the way to think about making selections for, like, Gen Fill. So in this case, I'm probably just going to go right over the sleeve, make sure it grabs it. That way, I don't get any weird artifacts, and then we'll do Generative Fill again. Generate.
Now, if you're old school and you've done like quick masking, you could fake this into a quick mask and you can experiment with blurring some parts of the mask and not other parts of the mask. That'll give you some really interesting results as well. But I find for the most part, with the speed that we all have to work at these days, this works really, really well.
All right. We're almost there.
They've really got to write jokes in the tip section or something because I'm tired of reading the same tips. Just give me something funny, Adobe. So there you go. So that's a nice result, right? And so finishing that out, I like that little glow right there. Just hit my crop tool, go in there, hit square, line this person up, and I've got a nice post and they would never know the difference. Yeah. We're going to get a little bit more into this, but what about some different expand and contract in other areas? Where does this get even more playful? Okay. So let's look at an image like this.
Photographers, I love you. I'm one of you. I identify as a shutter clicker, but the designers are like, "Yeah, yeah, yeah. You want to take a picture of the people looking at the chasm in the Grand Canyon. I wanted you to put them far to the side so I could fit text in the middle of the design." So herein lies the dichotomy of photography to design.
This technique works really, really well, and it's basically making vertical strips, okay? So let me show you what I mean. Because there's some consistency throughout the image, the models work really well when you take something like this. Notice I'm not trying to get on his foot. I'm going to split that. All right. Now I'm going to offend some people by doing this destructively. I'm going to just grab these two and go, "You are in the way," and move them over there.
I know I've made a big hole. If only we had technology that might be able to repair this. All right. I'm going to come in a little bit here. I don't want a big stripe of white down there. All right. I'm going to take this guy, and then what I'm going to do is actually float move him over.
This is looking like he found something more interesting off to the left. I'm not happy with that, so I'm going to free transform it and I'm just going to say flip horizontal. All right, now, we look like we're at the same party, right? And I'm going to apply that. Okay. So now we've got what looks like one of those bad iMovie transition effects with the shutters.
But look, we can start repairing all this. So now I'm just going to go down and select this part. And so that it's a little bit more consistent, I'm going to do these together. I'm going to take this guy.
Why does it keep doing that to me? I think I have overscroll on. Yep. Overscroll is on. Okay. Zoom out, Mark. That will make it easier.
Okay.
I can probably touch his heel, but I'm very scared of it.
And then we're going to go over here and grab all of this.
I'll go pretty healthy into that. So we're just moving these strips around. So we'll hit Generate Fill again. Generate.
And then it will recompose for us.
And we're thinking and we're thinking and we're thinking. I need cat memes or something. Adobe, give me something. All right. So it's recomposed there. Let's check out our options.
That's pretty good. Now I got a new guy, nightmare fuel.
Random. Yeah. Don't want that guy. So I could repair that with Generative Fill, patch him out. But I think for the most part, this is a good start, right? Now, any areas that are maybe feeling like a glitch... For some reason this little strip of cloud is offending me, as if clouds don't do random things, but I might go in and repair that and a few other parts. But now the designers actually have this nice negative space where they can put in their copy, right? So just think of these strips as a good way of working, because a lot of your image data will have the variance you need to tokenize for the model through that strip. So you can do that vertically or you can do that horizontally. When you start getting into amorphous shapes, it'll do a good job with that, but again, you'll find you're doing a lot more repair work, okay? All right. Let's keep it going.
Oh, yeah. We'll do that one later. Just making sure. Oh, this was the last one I was going to show you guys with this. Yeah. So we'll do an amorphous shape one.
When you do a shape repair where there's some kind of narrative related to it, that helps. So I found this on Adobe Stock. It should be titled Here's a Man Photographing Nothing.
Awesome. I paid for that. So this is one of those situations where I want to make it more interesting. There's this hole in the ground. I'm assuming that's what he's taking a picture of. So I'm going to go in here and use the tip they give you in the structure PDF: if you draw something kind of to the shape of what you want, it will render that. I will tell you, if you are too specific with that shape, it will give you something that is very reflective of that silhouette. So an example of this: the other day, I drew a shape that was kind of like a wine glass and I ended up getting this weird bucket that showed up. It was very weird. So don't be too specific, but I do want to get something up here. So I'm just going to do this.
Okay. And then in the Generative Fill area, I'm going to type something that's a bit of a narrative. So the first thing is, what do I want to emphasize first? Right, so I'm going to say, shooting water, right, there's my first part, coming up and out of a lava tube hole in the rocks, okay? So rocks, less emphasis, shooting water, more emphasis. Yeah? Generate on this.
Please don't be a bucket. Please don't be a bucket. Please don't be a bucket. Do you feel like you're playing a game every day? Now you're, like, "Please, please, please, please, please, please. Got you." All right. Not too bad. Okay. I like that one the most. That one looks the most real. But now this isn't an exciting image because he looks like he's scared of it. So same technique, right? Grab him, pull this over, get in there, son. Yeah! Right? And now I go in here, get a little marquee, do a little dance, make a little pixel, get down tonight. Oh, wait. You guys are all here. Sorry. All right. That happens in my head sometimes.
We'll just repair this part.
And then, sure, yeah, I could add a mountain goat and I could do some other stuff. But I know I'm repeating some of the styling here, but I really want these to land. So in that one, you saw the emphasis of the prompt technique, the strip technique, the empty Generative Fill technique, etcetera. So there we go. We have a pretty good image, right? And now he doesn't look like he's scared of the big hole in the ground where nothing was firing out of it.
Very strange. All right.
Let's keep going.
All right. We're going to close those out. Okay. Are any of you dealing with headshots for your organizations and portraits of people? Okay. A number of people. Excellent. I mean, you just did a bunch of these for us. Okay. So let's talk about that real quick.
Let's close this one out. I found this photo of this lady and it just makes me happy.
Now she could be laughing or someone could have just kicked her in the toe, but I still think it's great. So, those of you that are comfortable with actions should know that prompting can be recorded in actions. And we're going to do a lot of that from here on out for the rest of the session, okay? But if you need to do a mass batch of something, this becomes very, very helpful because you can automate it and go back to that 10-80-10 rule. So, in essence, what I have in here is a bunch of headshots...
And then I've got a style reference graphic that I want for the backdrop, right? So this is just a quick graffiti texture shot. So when you use references for your generation and you complement it with good prompting, you can get really good results of consistency while not having it look like you copy and pasted the same background. And we've all done this. Buy a really big graphic for a background and then you just move it slightly with each headshot so it looks slightly varied. I'll blur this one, skew it. But what I've done in here, I'm not going to make you guys watch me do the remove background and all that stuff again. I think we get it now. But in our Actions panel, I actually have an action in here that explains the entire breakdown. So it removes background, it makes a new layer, it moves the current layer, it selects the forward layer, it hides the current layer. Now why do I hide? I don't want the photo of her to influence what it generates, right? I could end up with a graffiti wall with a mural of a redhead in the background. I don't want that. So I'm going to hide that layer. Then it goes back to the bottom layer, it sets a selection to all, and then it does Generative Fill. And this is the part that you should see. There's my prompt. It records it.
And so I can use that as a template, etcetera. And then I have a reference image and you guys have probably all seen the reference capability. I have a reference image that I showed you earlier. So let's actually run this.
Start here, I'm going to hit play.
And we're going to do this one more time after this as a batch, okay? So that's not the image that I gave it. It's actually generating options of that graffiti background, but they're all really consistent. And that's because the prompt is something as simple as graffiti, grunge, textured background. So you can see where I put the emphasis in the prompt, right? I didn't say "background made with grunge and graffiti." Yeah. Question? Oh, so when you go in and do reference images-- The question was, how do you tell it where your image is stored? That's a really great question. Thank you. One of the things-- Let me actually go into the Action, I'll show you. In the part here, it says Generative Fill. When I click on this, it'll actually bring up the prompt. It's a totally different interface. So you can modify it in the action, which is another good reason to have templates, but right here there's a button that shows the reference image, and you go from there, right? Now here's the warning to all of you, and you're probably already ahead of me. Just like a graphic for a web page, if you move the reference image into a different folder, it will give you an error and say, I can't find it, okay? So just know that, right? So if it was me, I would just keep a folder hidden somewhere that's just references and I would just keep them all there, for that project at least, right? Great question. Thank you.
[Woman] Do you need to put a prompt if you have the reference image? Do you-- So the question was, do you need to put a prompt if you have the reference image? Technically, no. And it will emulate it, but you have to remember that it's not intelligent. It doesn't look at that and know that it's a graffiti wall. When a model is trained, it's tokenizing a pattern of converting pixel values into tokens. And what happens is through its training, it looks for any tokens that have a similar pattern to them. So it uses probability to say, "Hey, I identified this on conversion. This is kind of similar in my training. Should I make something like that?" The moment you add a layer of instruction over that that says grunge, texture, graffiti, categorically, it has some tokens that it's been trained on that have those labels. And so between those two together, you have a higher probability of getting what you want. So do you need to? No. Do I feel more confident when I do? Yes. Yeah. Now where this is going to get rad, I'm going to do a little future telling for you. I hope Adobe, at some point in the near future, is going to add computer vision models that, basically, look at the reference and then it writes a description in the background and then generates a suggested prompt from that. And then from there, that would become your prompt library. I don't know, but I suspect we'll see that probably in the next year maximum, right? And I have a demo of that on Groq actually that already works. So if you don't know what that really leads to, sorry, I'm geeking out for a second because it's a great question, it's going to look at your layer thumbnails and write a name for your layer for you so you don't get in trouble for not naming your layers anymore.
Yeah. Okay. I do that now using one of the scripts that I have from Groq, which is really nice. All right. So, sorry, jump ahead. So now what I want to do is I'm going to open up, let's revert this image back.
Brain, work. Brain, catch up. Okay.
Yeah. Squirrel there for a second. I'm like, squirrel. All right. Let's just do four of them because I don't want to waste too much time processing, but I do want to make the point here. Two, three, four, five. Let's close this one out. Okay. So if you're not familiar with batching, you could go File, Automate, Batch. Now you can do it with a directory of images, you can do it with images that are open, doesn't matter.
There are also script automations that you can use. I'm just going to use the old school batch method. And then you can see in here I've got my action set, 2024 art backgrounds. And then in here, I've got art background with reference. So this one doesn't have the reference. This one has it in there. I've told it where it's going to save, that's a folder, and then I need to log errors to file. Your batch will not work if you do not set Log Errors to File. And I'm just going to press OK and pray to the demo gods this works.
It was taking about a minute last night when I was running this, so we'll see how we do. But the goal here, again, go back to the 10-80-10 rule, this could be running on a massive batch of images while I'm in the Zoom meeting or while I'm in some other activity. So now if you get a little time management for production throughout your day, you could say, "Oh, I know I want to batch these 30 headshots." I'm going to do that while I do this other thing, right? Just think a little bit about your project management, your time management.
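As a side note, if you'd rather drive the same thing from a script than the Batch dialog, a rough ExtendScript sketch might look like this. The action and set names match this demo; the file handling is my own assumption:

```javascript
// Sketch of a script-driven batch, as an alternative to
// File > Automate > Batch. Action and set names are from this demo;
// swap in your own. Generative Fill actions need you signed in and
// online, so run this with a live connection.
var input = Folder.selectDialog("Pick the folder of headshots");
var files = input.getFiles("*.jpg");
for (var i = 0; i < files.length; i++) {
    var doc = app.open(files[i]);
    app.doAction("art background with reference", "2024 art backgrounds");
    // Save a PNG copy next to the original, then close without saving.
    var base = doc.name.replace(/\.[^.]+$/, "");
    doc.saveAs(new File(input + "/" + base + ".png"),
               new PNGSaveOptions(), true);
    doc.close(SaveOptions.DONOTSAVECHANGES);
}
```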
All right. It's running through. The reason I'm doing a batch, not just to show the saved time but I want you to see that it's really not using that image as the background, right? And thankfully, the model is terrible at writing words, so you will never have a risk of it writing some inappropriate word in the background. Because if you think about it, these things generate what they're trained on. Imagine saying, you're trained on all the graffiti libraries that we've ever given you. Please don't use bad language.
All right. So now if we go through.
So that's all been automated, right? And now I could go through and go, I don't like that one. Don't like that one. That one's pretty cool. It's got less red in it. Let me use that one. And then this becomes very fast. We can use an export automation, save all these out as PNGs. Project's done, right? Now if I want to automate adding cast shadows, or do I want to blur the background? Again, you can build all that into your action, right? But again, if you want to change anything, it's pretty fast to do that in the template when it gets saved for the action. So your action library now could turn into, think of it as, a prompt button library, right? And you could just have things you commonly do as buttons in there, and that becomes very helpful.
All right. Let's look at our next thing.
The weird thing with you guys all wearing the headsets is I'm sure you can feel yourselves breathing and hear things in there, but you're so silent. It's amazing. I'm like-- All right. Let's look at another quick thing.
So this is a quick piece that I did, and I just want to show how far away you can get from the origin of something as a reference when you make selections. So this image, I'll just quickly show a few things in here as a technique, but it started as this.
- [Woman] Very cool. - Right? Yeah.
That is not the same lady.
So the prompt in here that you'll see was, where's my properties? Young woman wearing a black leather jacket.
And I will say I got very interesting results with this, but you'll see something here, the reference image. The reference image is literally just the leather jacket. So why did I do that? Because it knows what a young woman looks like. So what I don't want is for there to be such a variety of women with leather jackets; I want it to focus on, as a reference, getting the right leather jacket, right? And so that's really key with that. And when you choose something like that, there's also this button that says Remove Background, and that's helpful as well. It'll strip the background pixels out of the reference. All right.
Oh, yeah. [Woman] So, one thing that I noticed that all you [inaudible] previously... they're all gonna have a filter, they'd have a problem... Oh, yeah. [Woman] So if I was to say... to American women, would I be able to generate that? Yes. Yes. Yeah. The ethnicity thing gets a little slippery in some areas because you can't say Vietnamese, for example. Like, it'll struggle with some of that. They are getting better at it, which is why we keep seeing the models get updated. But, yeah, you can experiment with certain labels. I've seen things like African American, Black American, and then you go, "What does an American look like?" And so that gets into, like, some interesting things. So the question was, could you get it to be a bit more diverse in what it generates, right? That's where bringing that prompt early in the emphasis will really help. Yeah. I will say, again, it doesn't know that it's a person, so if you said a person with dark colored skin, you'll have a higher likelihood of success than if you said an African American. So think of it as, like, what it can convert to tokens. It can convert simple descriptives, and simple descriptives are things like color, shape, shade, etcetera, right? It understands that better.
All right. So let's just look through here. So I gave her that prompt.
It gave me her.
And then I realized I didn't like this background anymore.
And so there's a new filter, I'm just going to quickly show you guys this, that gave me a series of backgrounds.
And in this particular instance, I think the prompt I used for this was fashion retail store background, bright natural light, etcetera, etcetera, angled, all that jazz. You'll find this useful if you need to generate a lot of backgrounds. There's a new neural filter that exists.
You have to download it to install it...
And it's called Backdrop Creator. And when you go in here, you can simply enter a prompt if I turned it on.
And I love this. From Behance now, it's pulling popular prompts. So you could click on one of these.
And so here's Cyberpunk. I'll say create and it adds that to the prompt over here and it'll generate backgrounds. So what's cool is you can generate a lot of backgrounds from this. This is called a temperature slider.
Temperature, basically, means how rigid do you want to be or how much variance do you want to have, right? So if you have high variety, you're not going to see them trying to be too consistent with the general architecture and geometry in what's generated. So I could keep generating these, but what's cool is at the bottom, it says, do you want to export these to a layer group? And that's what you saw in this version. So I got two-- I threw away the third one. I've got two in here that I really liked. It exported them to this group, and now I could mix with those in my composition to see if I like something, or I could put the two of them side by side, do that overlap technique, and do Generative Fill and make it fuse the two environments together. And that can be really cool. Yeah. [Man] I have a question about consistency. - Yeah. - [Man] What prompts would I see... Yeah.
Yeah. So the question was if you write a prompt and you keep generating using the same prompt, would you eventually get the same results? That really comes down to temperature, and Adobe doesn't give us access to that in Firefly like you would in an LLM. So my short answer would be no. But if you had control of seed keys and temperatures and things like that, you could make it very rigid. Theoretically, you'd get about 95% consistency. So this is the big problem for GenAI. People say, well, I like that. I want to keep that. But in all the other generators, we don't have Photoshop. So Photoshop allows us to mask something out, then keep generating and then allow that to overlap on it. So it's a little bit of manual work, but that's the best way to get the consistency. Prompt templates help a little bit, but there's still some 10% labor on our side.
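For anyone curious what temperature actually does under the hood, here's a conceptual JavaScript sketch. This is the general mechanism in generative models, not Adobe's actual Firefly code:

```javascript
// Conceptual sketch only: scores get divided by the temperature
// before being turned into probabilities, so a low temperature makes
// the top option dominate (rigid) and a high temperature flattens
// the odds (more variance).
function sampleWithTemperature(scores, temperature) {
    var weights = [];
    var sum = 0;
    for (var i = 0; i < scores.length; i++) {
        weights[i] = Math.exp(scores[i] / temperature);
        sum += weights[i];
    }
    var r = Math.random() * sum;
    for (var j = 0; j < weights.length; j++) {
        r -= weights[j];
        if (r <= 0) return j; // index of the chosen option
    }
    return weights.length - 1;
}
```

At a temperature near zero the top-scoring option wins almost every time; crank it up and the choices flatten out, which is exactly the variance you see in that Backdrop Creator slider.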
All right, so the one last thing I want to show in here. Actually, two last things I want to show in here. One: so it generated her. I've got her over the background. Cool. But one of the things, remember I showed you guys that bordering technique earlier? Yeah, I want you to think about it in a different way now. So here, I've got her picture. I've got her again here. You notice her edges change? Okay, we're going to talk about that. So what I did is I made a selection of her, I did Select, Modify, Border and split it, and we all know when we mask people out, they can look pretty badly cut out, right? Selections have gotten a lot better at masking that over the years, but still. But here's the part that we don't think about. Not, oh, do the hairs look cut out or not? But thinking about the environment it's generating for. So I generated this with that new background, and I want you to see this edge.
This edge, okay? There's a giant light source behind her. She's wearing a shiny black leather jacket, but she looks cut out. So when I took a selection of the edge, split it and I said, generate for this, watch what it did along this edge. Let me just zoom back a little bit. Turn it on. Do you see it put highlights around the edge of the jacket? Look right there. Now that looks like a bad cutout because there's white pixels, except you realize it diffuses, comes back and if I scroll down, it goes away at the shadow line.
So it understood that there was a giant light source behind her on that side. So this is one of those ways that I can get them to blend really well into the scene when I'm compositing them. And then after this, the usual stuff, right? Like, again, look at the highlight on the right.
And then her hair even gets some of the highlights.
So again, good way to comp in if you're not really proficient at finishing and retouching.
But her image has a darker black in it than the background; the contrast isn't the same. This is where we get into our traditional techniques: get into Levels and identify, I want to match the background. So when I look at the background graphic and I use Levels to evaluate the histogram, which is this chart, on the background layer, I would see that it doesn't have a true black. Now if you haven't played with this before, I have tons of videos online for free of me explaining digital color theory and color mapping and tone and luminosity mapping that explain everything about histograms and levels and curves, okay? But what you should know is just this: if I look at a layer like this, the backdrop, and I bring up Levels and I zoom in over here, do you see there's this gap? That tells me that there are no pixels in the low frequency of the image. So that means the first shadow, oops, exists somewhere over here, and I can find it by holding Option. If I drag that in, I can see which pixels go dark, okay? So I want to choose a number there that shows me the darkest value. It's about 24-ish. So that means on her, because I want her to match it, I went into her image and changed the input to output, and you can see I've got it set to 18, which means I'm in that range of where the darkest value should be. That's how I get the shadows on her jacket to be no darker than the ones in the background, right? It's a little bit more of a retouching thing, but it's those little details that matter.
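If you wanted to script that shadow-matching step, a hedged ExtendScript sketch could read the histogram and set the output black point for you. One caveat: the classic DOM only exposes the composite histogram, so hide the subject layer first if you want the backdrop alone:

```javascript
// Sketch: find the backdrop's darkest populated level, then lift the
// subject layer's output black point to match. doc.histogram is the
// composite-channel histogram, so hide other layers first to read
// just the backdrop.
var doc = app.activeDocument;
var hist = doc.histogram; // 256 intensity buckets, index 0 = black
var darkest = 0;
while (darkest < 255 && hist[darkest] === 0) {
    darkest++;
}
// adjustLevels(inputLow, inputHigh, gamma, outputLow, outputHigh);
// assumes the subject layer is the active layer.
doc.activeLayer.adjustLevels(0, 255, 1.0, darkest, 255);
```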
And then, lastly, this is a trick that I've done for years, if you're into color grading at all: using a solid color layer at the top. Let me set this back to normal and I'll explain it. This is one of my favorite techniques for just bringing a scene together. It's a pretty simple formula, so I'll show you the formula first. 50, 100, 200. This is the blue that I always use, okay? It's just double, double: 50 doubles to 100, which doubles to 200. But there's a blend mode in Photoshop, Exclusion, where unless you're making cool Polish posters, you're like, "Why would I ever use that?" But what it's actually doing is taking whatever color you choose in the color fill layer and applying it to everything from 50% down to the dark, zero-frequency tonal range of your image, the shadows. So what we've done is we've made the shadows cold. And then everything from 50% up, it complements with a warm complementary color; that's where we get this gold color. So now I take something like that and I just reduce it down to anything-- Usually I find anything below 10% works, but there, watch when I turn it on and off.
Do you see it just kind of pulls it together? All the highlights get a little warmer, all the shadows get a little colder. And so this is one of those techniques you'll see a lot of retouchers using. But my golden rule in this, 50, 100, 200 exclusion, low percentage. That works really well.
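And that 50, 100, 200 exclusion trick scripts nicely too. One caveat on this sketch: the classic DOM can't create a Solid Color fill layer, so it fills a normal pixel layer instead, same look, just less editable:

```javascript
// Sketch: the 50/100/200 exclusion color grade as a pixel layer.
// artLayers.add() puts the new layer at the top of the stack.
var doc = app.activeDocument;
var grade = doc.artLayers.add();
grade.name = "50-100-200 grade";
var blue = new SolidColor();
blue.rgb.red = 50;
blue.rgb.green = 100;
blue.rgb.blue = 200;
doc.selection.selectAll();
doc.selection.fill(blue);
doc.selection.deselect();
grade.blendMode = BlendMode.EXCLUSION;
grade.opacity = 8; // "anything below 10%" per the talk
```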
All right. Are you guys still hanging with me? You guys surviving so far? It's very early to get into this stuff.
All right.
Let's talk about building a scene. Okay. Let's start with this one.
So here's a technique that I haven't seen anybody using.
We're all pretty good now at using Gen Fill to amend the scene, but something dawned on me on the way here. There is a metric bleep ton of content on the web about Photoshop techniques, and all these models are getting trained on data from the web.
Why don't I use the word Photoshop in my prompt? It's literally a verb we've all been using. "Photoshop it. Just Photoshop that." And I wondered, will that do something? And it turns out it will. So I'm going to play it first, and I want you guys to see it.
So up here, I've got an action I've built that's called evening and rain.
Now generating rain is an interesting idea, so let's just play it. Let me zoom out a little bit so you guys can see it better. Is that okay up there? Yeah. Cool. Hit play.
I was thinking about this on the flight out here. I'm like, "Wait. I've written probably 500 articles on Photoshop. Those must have been used in training at some point." So that turned it from day to night and added rain to the scene, right? Now here's what it did. I went old school. I thought about Photoshop techniques. How did we used to do this back in the day? We used to put a gray layer in, render clouds, Filter, Noise, Add Noise, motion blur on it, we'd warp the motion blur a little bit to create some variance, we'd set our blend mode to Screen, and we'd have something that looks like bad digital rain, right? And the challenge in that was variance.
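For reference, that old recipe as a quick ExtendScript sketch; the noise and blur amounts are my guesses at typical values, and I'm skipping the clouds step for brevity:

```javascript
// Sketch: the old-school digital rain recipe. Black layer, monochrome
// noise, motion blur, then Screen mode so only the streaks show.
var doc = app.activeDocument;
var rain = doc.artLayers.add();
rain.name = "old-school rain";
var black = new SolidColor();
black.rgb.red = 0;
black.rgb.green = 0;
black.rgb.blue = 0;
doc.selection.selectAll();
doc.selection.fill(black);
doc.selection.deselect();
rain.applyAddNoise(25, NoiseDistribution.GAUSSIAN, true); // monochromatic
rain.applyMotionBlur(75, 30); // angle in degrees, distance in pixels
rain.blendMode = BlendMode.SCREEN;
```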
But here is what it actually produced. That's better than I can get out of Photoshop filters.
But my prompt for this was actually...
Black and white rain texture for Photoshop compositing.
And so it made the texture and all I had to do was set the blend mode on that layer to screen and I get different scenes of rain.
So now, if I generate 20 of these, I can make an animated sequence, I can do all kinds of stuff. Now, I've got some color grading over this. I'm using a color lookup table, some other things, but I want you to see this change briefly. I'm going to turn these off for a second and we got daytime rain now, but watch this. I'm going to change the white rain texture...
To snowfall, and say Generate.
And we've all spent years making these textures on black layers to composite these into scenes.
Isn't that cool? It dawned on me, like, "Oh, I can prompt for things that would complement techniques I already use in Photoshop," right? So that's pretty cool to me. I can get scenes like this that are quite believable. And then, again, using an adjustment layer like, if you've never seen it, Color Lookup. It's a really old image adjustment that has a terrible interface, but one of the pull-down options at the top here for your LUT, your color lookup table, is Night From Day.
So by building that into the action, first, it does the prompt, then it adds this layer.
I've got that set, so it's a nice dark day scene and I've got a little saturation in here just to make an adjustment, just desaturate a bit. But you can now do this for a number of techniques. I could go in here-- The question was, could you prompt it to make snow on the ground? You could. Yeah. You could do selections of areas on the ground and then say piles of snow and it would generate that on there. And if you wanted to, you could use a save selection as a reference map for it to map that. So the short answer is yes. You might have to play with it a bit for your mapping, but yeah. I would probably do that now at this stage, after the snowfall because each image is going to be slightly different.
But here's another version of this.
Let's look at this one.
So I did a different version of this. It's a cool scene, but I wanted something like a lighting texture.
Just something that adds a little extra something to the image. I feel like I'm a chef there. I'm like, just a little extra. Just drop some salt in there.
So what's one of the common things we all bought plugins for? Lens flares and solar flares, right? But down here, it gave me a bunch of these. Do you notice it grabbed the truck? It will do that. It'll actually look for some geometry that it's trying to reflect off of. So I like this one. It's a little bit more subtle. And then I just added a curve to it to give it some color and then a warming photo filter.
So now I can just get a little bit more ambient environment. And then again, it's just a black layer set to screen so that it's masked over. Yeah? So now you're probably thinking, "Oh, my God. What textures did I do 10 years ago that I can suddenly bring back into my arsenal of techniques, right, and have it generate some of those for you." And no need to buy that $8 plugin.
All right. Let's look at-- Was that that one? Yeah, that's that one. Cool. Oh, and then same thing here. Let's talk about this briefly.
So this guy doesn't exist. He's not real. He's never existed.
You'll notice that when it generated, I was using that template they provided in the guide, photo of a young man looking straight ahead, dressed business casual, etcetera, etcetera. And we got a bunch of different images here.
That guy's name is definitely Chad Stone. And if there's a Chad here, I apologize. It's not your fault. Your parents named you. I will buy you lunch if you're named Chad.
I'm named Mark. It's like a dog with a lisp. It's just-- All right. So what I did with this one is I made a rectangular selection, you can see here, and I used that prompt to generate him. Then I had this huge gap. I selected the gap and I generated a downtown Chicago styled architecture and cityscape as a background. So that's this side, and there's variations of that.
Notice it recreated his arm? Now back to the lady's question earlier. This is one of the things that's really troubling. I did a selection of this, and I overlapped him, knowing that I should do that. And I'll just be frank, it gave him white guy hands. And that's a problem because when it's generating somebody and then you're modifying them, it's not looking at the subject and saying, I know the ethnicity of this person. So I ended up getting him with white guy hands. I was like, "Oh, that's a problem." So I had to go back and think about some of those problems. So that's another area when you're doing micro generations, watch out for that stuff. That becomes a problem. And then from here is that technique again. I've got my lens flare in here and just a little bit of coloring on that flare. So you can make scenes really fast. So this poses the question that depending on the sort of work that you're doing pretty regularly, think about things that you would generate or need to buy constantly in your work. I'll give you an example.
In my Actions library, you'll see I have one that's called generic person male.
He's basic, right? But I've got a prompt in here that has some instruction, set selection, Generative Fill content, and then the prompt. But I'm generating stuff like this all the time. So I'm just going to hit play on that...
And you'll see after it's done what that actual prompt looks like. It's the one from the Adobe guide, but, again, you could change that template. So, it says photo of a young man looking straight ahead, dressed business casual, looking professional, happy and optimistic, walking on a sidewalk in downtown New York City during a sunny day. So, New York, less emphasis later in the prompt, meaning I'm not trying to get a landmark, but still that vibe. What's more important? Photo of young man. His attire is more important to me than the icons in the background, etcetera. So here's my different options.
Don't know what's going on with that watch, bro, but-- Yeah, right? So not too bad. But I'm constantly generating things like this. In the same ilk, I could have another library like that. So what are the descriptors of things that you're constantly looking for on Stock? Here's generic person female.
Yeah. They'll stay with-- So the question was, do the variations stay with the layer, yes. But what's really cool about that is if you wanted to use both of those people, you could simply duplicate the layer, all variations will travel with it, and then you can select between them.
So that's pretty good.
I don't think she goes to the coffee shop.
That's more believable, right? So in that one, that template is modified. It says in downtown Seattle, coffee shop during a winter afternoon, so on and so forth. And you'd be amazed how much things like telling it a season will change the outfit the person's wearing. So you could say, wearing a wool coat, or you could just say, in a winter scene or a winter season, and that will influence it. So again, you just have to expand how you think about prompting and what it generates.
So it's a good question: if you put something into the prompt, would that influence the level of diversity you get? I suppose. Like if you said New York versus, I don't know what you would say, but I grew up on a very small island in England. My parents moved there when I was eight. I can tell you they all look like me, right? It's 7 miles long by 1 mile across. So I wouldn't put the name of that town in and expect to get diversity. But I suspect when you use major metro cities, that probably dilutes the specificity of what it would generate. So it's another way of experimenting with it, for sure. [Man] Can you specify, like, full length shot versus, like, actual? Yeah. So a great question. The question was, can you specify the type of shot? So, yeah, by default, because of how it's trained, it'll look at something like this. Remember, Adobe trained most of this on Stock. So what you're seeing is, think about the patterns you've seen in Adobe Stock, that's what you're going to get. So if you frame it and say, like, full frame shot, you'll get that. I mean, we can try it real quick. Let's see. Where would I put that? So looking away from camera, let's go there, right after this. So let's say full frame photo. And if I wanted to influence this, how would I make sure I get a full frame shot? I would probably say full frame photo of her walking, and then I would separate that.
So let's see. I don't know.
Yeah, so the question is, if you have a specific brand and a style, can you share your prompts? That's why I build text files of my prompt libraries, the ones that I like, and then reference images of what you've already used that's brand approved. Tell the team: this prompt with this reference, this prompt with this reference.
Yeah, yeah, you can save ATN files and send them across. The only thing is, if you save an ATN file from your Actions palette and pass that across, it won't send the reference with it, but the prompt will go with it. So you just have to have both of those. So there you go. We got a full frame shot, right? And in each of them, it gave us that. But had I not said walking, my probability level would go down. So that's the point I'm really trying to get across today: you've got to think about the words you use and what influences it, right? A director on a movie set will tell you, "I want to see them walking." Well, every photographer or cameraman knows, "Oh, I need to back up." If I just say full frame shot, I'll probably get it, but by adding walking I'm increasing the probability, right? All right. We are at time. One last thing I'll add for you guys, and let's just do it with this. I'm just going to tell you, if you've used ChatGPT or something like Groq, we have a free service that's chat.groq.com. If you want to use LLMs and different models, you could do things like this.
"Build me a one-day itinerary for a visit to Miami starting at 9am to 9pm and include meal suggestions and activities." And it writes that for me, and it's really fast. So people can sign up and use that today for free and try different models out, but you can also use this to write code. So that's the last thing I'll show you is I wrote a script.
Let's see if we can get there. File, Scripts, Browse, the one that does resizing. So I asked the LLM, write me a script-- Let's see. Scripts, Resize to 1,000 Pixels Wide.
So I just said, "Write me a script in JavaScript for Photoshop that will resize an image to 1,000 pixels wide." It wrote that JavaScript for me. I saved it out. And when I run it now, it resizes to 1,000 pixels and I don't have to go through much of the interface like Image Size or Canvas Size.
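For the curious, the whole thing likely boils down to a few lines like this; this is my reconstruction of such a script, not the exact file from the demo:

```javascript
// Sketch: resize the active image to 1,000 px wide, with the height
// scaled proportionally.
var doc = app.activeDocument;
var ratio = doc.height.as("px") / doc.width.as("px");
doc.resizeImage(
    new UnitValue(1000, "px"),
    new UnitValue(Math.round(1000 * ratio), "px"),
    null,                  // keep the current resolution
    ResampleMethod.BICUBIC
);
```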
So start thinking a little bit more broadly about how you can use some of these tools. It will actually allow you to do some automation really well when you're editing. All right, with that, I'm going to show you guys that link that you need. That's the link so you can get your download. That has the prompting guide in it and then the notes for today's session. And with that, I will hang out afterwards in case people have more questions. Otherwise, did you guys learn something? Was it helpful? Okay. Cool.
[Music]