Creativity Super Session: Video and AI

[Music] # Not forget him # # Ooh # # Now it's time for breaking the silence # # Can you see me # # Electricity around you, girl # # Everybody in the joint like a Taipei # # Loosen up # # And let all of your inhibitions go # # Stomping thunder elephant, elephant # # Bending all the elements, elements # # Zebra blazer elegant, elegant # # Make that energy move # # Get em jumping on the elephant # # Stomping thunder elephant, elephant # # Bending all the elements, elements # # Zebra blazer elegant, elegant # # Ooh # # Elephant # # Elements # # Elegant # # Make that energy # # Elephant bending # # Elegant... # [Music]

[Woman] Good afternoon. Please welcome Adobe's Director of Product Marketing, Meagan Keane.

[Meagan Keane] Hi, everybody. Thank you so much for being here. I recognize that late in the afternoon on Day 2, it's a lot. I hope you're not here to take a nap, because we have a lot of exciting stuff to tell you, and I hope you're here to get amped up before Sneaks. So as mentioned, my name is Meagan Keane. I'm Director of Product Marketing leading business strategy for the Pro Video tools at Adobe. I've been on the leadership team for Pro Video for almost 13 years, and for the last 10, we've actually been a leader in bringing machine learning and artificial intelligence into the video creation process. Creatives are at the heart of everything that we do here at Adobe, and as we consider our strategies around AI and, more recently, generative AI, our intent is to empower creatives to spark inspiration and to keep you doing what you do best: telling amazing stories. Before getting into the technology side of video creation, I actually spent almost a decade as a documentary filmmaker. And my story isn't actually unique on the Pro Video team at Adobe. There are numerous people across our marketing, product management, and engineering teams who come from the professional video world. We are editors, we are motion designers, we are colorists, we are musicians, we are creatives. I feel like this is an important point to make because our empathy for our users doesn't just come from processing market research or analyzing industry trends. Our understanding of your motivations and your pain points comes because we have sat in that seat.

Pain points are one of the main things we're addressing when we think about AI. AI has the ability to simplify, and even remove, many of the most time-consuming, laborious parts of video creation: the parts that you did not get into this to deal with.

And we recognize that with AI, what we're really looking to do is help you be more creative.

So yesterday, you heard the keynote reference two different categories of how we think about AI at Adobe: assistive AI and generative AI.

Now as I mentioned, we have had AI features in Premiere Pro and After Effects for nearly a decade. Some of your favorite features, Remix, Auto Reframe, Auto Color, Roto Brush 3.0, Speech to Text, Text-Based Editing, are all assistive AI features. Our intent with the development of these features was really to remove workflow tedium, to take out the most time-consuming parts of your workflow. Now on the other side is generative AI. Generative AI is creating something new. It's net new pixels, net new waveforms. But what's interesting is, on the Pro Video team, when we're thinking about how we're bringing generative AI into the Pro Video tools, our motivation is the same. We are thinking about how we can implement generative AI into Premiere Pro and After Effects to make sure that we are keeping you in the creative zone, that we are removing the parts of your workflow that are the most annoying, that are the most time-consuming, and that ultimately take you out of your creative flow.

Now we recognize that what's at stake is incredibly high. We understand why many in the video community have a lot of emotions about what's going on with AI. There's excitement, there's skepticism, there's even fear, and it's totally valid. And we understand the responsibility that we're taking on here. The rapid pace of advancement in these innovations also comes with a lot of responsibility for us to get it right. And that's why we've spent over a year deeply, deeply engaging with the video community. We've hosted community events all over the world, from LA to San Francisco, New York, London, Berlin, Munich, Amsterdam, even right here in Miami just before MAX started. I personally had the honor of going to Los Angeles at the end of August and sitting down one-on-one with editors across feature film, episodic television, documentaries, commercials, and social content to talk about what they're thinking about AI. What are the conversations in the video community between peers? What are the questions? What are the concerns? And more than anything, this community engagement has been about listening. We understand that there's a lot at stake here, and we need to understand where you want us to be focusing as we forge this path forward. So we've learned a lot from these conversations. What's risen to the top is that when you think about integrating generative AI features into your workflows, you want to maintain the innately creative parts of storytelling. You want to leverage AI to take on the labor, to take on the parts that you never really wanted to do in the first place.

You see the power of AI to communicate creative intent to your collaborators. And when you think about AI as an assistant, the best thing that assistant can do is to keep you in the creative zone.

What that means is that the way we implement these generative AI features into the products needs to respect muscle memory. They need to respect the workflows that you have already established. They need to keep you in your creative flow, not break it.

And finally, we've learned that Adobe's responsible, transparent, creator-first approach to GenAI is important to you, and is important to the trust of the industry: likely the people that you work for, and to ensuring that you trust these tools in the hands of your teams. And we don't take that lightly. What you're going to see today is a lot of how these generative AI tools aren't just features; really, they're forming workflows. And with that, I would like to introduce you to my colleague and friend, Kylee Peña. [Music]

[Kylee Peña] All right. Hi, y'all. You having a good MAX? Yeah. Thank you for choosing to spend this time of day at this point in the conference with us instead of taking a power nap before Sneaks and before Bash. I know we're up against T-Pain, that's a tough one. But so, yes, my name is Kylee, and as Meagan said, I used to be an editor too. And we aren't just people who have been in the chair, we've been in the chair at 3am, trying to solve a problem. I have been woken up at 4am to a workflow call, trying to solve something because the dailies have got to get done, y'all. So I'm just really excited to be part of the Premiere Pro team, because I've used Premiere Pro since I was literally a teenager, and to be able to bring a lot of these AI tools into our product responsibly is a huge privilege. And I want to show you a few of my favorites in Premiere Pro today.

So I'm going to go ahead and hop over into the demo. I want to start with assistive AI, the things that are already in Premiere Pro right now that help you reduce some of the tedium, do the things you don't really want to do. I'm going to show you a few of my favorites, starting with the new audio workflow updates. So I'm just going to make my timeline a little bigger so you can see it here. With audio category tagging, when you drop audio clips into the sequence, they are automatically tagged as dialogue, sound effects, music, or ambiance. You can see I've got these nifty little badges here that tell me exactly what I'm looking at, but most of all, when I click on these, the Essential Sound panel opens up over here, and I have the tools that are most relevant for that sound type. So if I'm working on dialogue, I have dialogue-related stuff. If I'm working with music, I've got music-related stuff. So it makes it a little bit faster to get to what I need right away, and it also gives me a clue for what I might want to edit, because I'm a video editor, I'm not an audio editor. I'm not naturally good at it; therefore, I don't like it.

And speaking of audio, Enhance Speech. I can't see everyone, but clap if you've used Enhance Speech. Yeah. So if you haven't-- Okay, this is magical. Enhance Speech magically improves dialogue recordings so they can sound like they were recorded in a professional studio, which sounds almost too good to be true. So here is a circumstance where I'm going to go ahead and click on this clip. Have you ever been in a circumstance where your lav mic died on set, or the production dumped everything, and they're like, "Oh, we didn't put the mic on her." Good luck, fix it in post. Let me play so you can hear what that sounds like. [Woman] A freelancer is insanely difficult. It's really-- It's crazy. I feel so grateful and lucky, and I've been-- Ah, I can make that work, or I can come over here to Enhance Speech, and I'm going to go ahead and turn that on, and it'll go ahead and enhance it.
This is all happening on device, so you don't have to worry about being connected to the Internet for this one. It's happening locally, and I have this mix amount slider, so I'm just going to put it right in the middle. A freelancer is insanely difficult. It's really-- It's crazy. I feel so grateful and lucky, and I've been doing this for a decade, but just, knowing your worth and like-- Not bad for one click, right? Yeah. Yeah.
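An aside on what that mix amount slider is doing: conceptually, it's a wet/dry blend between the processed signal and the original recording. Here's a minimal sketch in Python with NumPy, assuming mono float audio arrays; `enhanced` stands in for whatever the speech model returns, and none of this is Adobe's implementation:

```python
import numpy as np

def wet_dry_mix(original: np.ndarray, enhanced: np.ndarray, mix: float) -> np.ndarray:
    """Blend an enhanced dialogue track with the raw recording.

    mix=0.0 keeps the raw lav recording, mix=1.0 is fully enhanced,
    and mix=0.5 matches putting the slider 'right in the middle'.
    """
    mix = float(np.clip(mix, 0.0, 1.0))
    n = min(len(original), len(enhanced))  # guard against length drift
    return (1.0 - mix) * original[:n] + mix * enhanced[:n]

# Example: 48 kHz mono clips as float32 arrays in [-1.0, 1.0]
# out = wet_dry_mix(raw_interview, enhanced_interview, mix=0.5)
```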

Easier audio deserves applause. Yes. Okay, and you might notice also I have a music clip down here as well. So this clip is obviously shorter than my music, and I actually really like doing music editing. I like the problem solving, but it takes a lot of time, because you're in there, you're trying to find the right musical phrases, you're in the sub-frames, you're doing crossfades, suddenly you're just in it way too deep, and then the producer inevitably comes and is like, we have some notes, and then it's all got to change, and there you go again. So I'm going to turn this down a bit, and I'm going to unmute this. And so all I have to do is come over here into my toolbar, and hidden under the Ripple Edit Tool is the Remix Tool. Do you all know about Remix? Yeah. So all I have to do is click and drag, and it'll analyze it, and before it can even tell you what it's going to do, it does a thing. It finds where the edits should be, and so I'm going to solo this track so you can hear the music. Right here, where it's got these squiggly lines, those are where it's decided the edits should go. So see if you can tell. I don't know if you'll be able to tell. [Music] You can even keep finessing it too. I can drag it out again. I can keep doing it. [Music] Remix. Awesome. Really quick audio changes.

So that's enough audio. Let's go to Text-Based Editing. Text-Based Editing, as a former unscripted editor, is another one of my very, very, very favorite, favorite tools of all time. It would have been life-changing, so I'm glad you all can have it, because I couldn't have it. Text-Based Editing makes editing video as simple as copying and pasting text, and we make it even easier to use. Come up here to the Workspaces, change to Text-Based Editing, and it puts the Text panel over here with a big transcript. When you import clips, you can set it to automatically transcribe if you want to. So I have done that here, and I already have my speakers labeled, so I know exactly who's talking. And this transcript live-updates to my sequence. You can see here. [Woman] Because I was a 400-meter hurdler, and I knew that I was the best. So as I work here, I can scroll through, I can search. If somebody says, I think we mentioned California during the interview, I can search for that. Instead of having to listen to everything first, I can start assembling everything like a Word document, just cutting, pasting, rearranging, searching. If the producer says, "Oh, I know we talked about Utah." I'll type in Utah; it's not in there. I don't have to spend two minutes listening really closely, like, "I'm going to get in trouble with this producer if I don't find it." You can just go back and say, "No, it's not in there." And then they say, "Oh, yeah. We were talking about that at lunch. Never mind." Waste of an afternoon? Gone. And the best thing is, as you edit over here-- So I'm going to highlight this section because I'd like to move it up. I can hit Cut. You can also use keyboard shortcuts. I'm going to do Cut. I can come back all the way up here, I'm going to click over here, and I can do Command-V, or Paste. You can see over here it's rearranged. So I can start really just coming in here, deleting whole things. I can also remove all the pauses, I can remove all of the filler words like um and ah, and when I'm done with this piece, I can also instantly create captions that I can then create styles for.
[Woman] So typically, what I would normally do is get up with the cross-country run-- Which is really great for social media and accessibility too. So that's Text-Based Editing.
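What makes this possible is that every word in the transcript carries timecodes, so a text operation maps directly onto a timeline operation. A minimal sketch of that idea in Python; the `Word` data shape here is illustrative, not Premiere Pro's internal format:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the source clip
    end: float

def find_mentions(transcript: list[Word], term: str) -> list[tuple[float, float]]:
    """Search the transcript instead of scrubbing the audio: return the
    (start, end) times of every word matching `term`."""
    term = term.lower().strip()
    return [(w.start, w.end) for w in transcript
            if w.text.lower().strip(".,!?") == term]

def cut_by_text(transcript: list[Word], first: int, last: int) -> tuple[float, float]:
    """Selecting words first..last in the text panel is just an
    in/out range on the media."""
    return transcript[first].start, transcript[last].end

# "Did we talk about Utah?" becomes a two-second check:
# find_mentions(words, "Utah") == []  ->  "No, it's not in there."
```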

And let's see. I'm going to switch back to my Essentials workspace, so I've got a little bit of everything. One more assistive tool I really like is Auto Reframe. So how many of you have to edit 16x9, and then-- Oh, we need one for Instagram, we need one for TikTok, so now 9x16. You have to do that? Anyone have the social deliverables? Yeah. Okay. So you could do it all manually, and it's not a lot of fun. Or you can come up to Sequence and go to Auto Reframe Sequence, and I'm going to choose 9x16 here and just hit Create. It created a new sequence, and it automatically and intelligently reframed everything to keep the action in the middle of the shot. So you can see here...
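Behind a reframe like this is, conceptually, a moving crop window: for each frame, center a 9:16 window on the action, smooth the path, and clamp it to the frame edges. A minimal sketch of that crop math, assuming some tracker supplies per-frame subject centers (`centers_x` is a hypothetical input; this is not Adobe's Auto Reframe algorithm):

```python
import numpy as np

def reframe_9x16(centers_x, src_w=1920, src_h=1080, offset=0.0):
    """Per-frame crop-left positions for a 9:16 crop of a 16:9 frame.

    centers_x: per-frame x coordinate of the subject, from any tracker
               (hypothetical input; Premiere's analysis isn't exposed).
    offset:    manual nudge in pixels, like the Effect Controls offset
               used when the automatic result isn't quite right.
    """
    crop_w = int(src_h * 9 / 16)                      # 607 px wide at 1080p
    x = np.asarray(centers_x, dtype=float) - crop_w / 2 + offset
    kernel = np.ones(15) / 15                         # box-smooth the path
    x = np.convolve(x, kernel, mode="same")           # so the crop doesn't jitter
    return np.clip(x, 0, src_w - crop_w).astype(int)  # keep the window in frame

# lefts = reframe_9x16(tracked_centers, offset=-40)    # nudge 40 px left
# vertical_frame = frame[:, left : left + crop_w, :]   # per-frame slice
```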

And if there are circumstances, like this shot, where it doesn't quite get it right, I'll go ahead and click on that shot, and I just come to the Effect Controls here, and I can just do an offset to put it right where it needs to go. So there's no additional keyframing and fiddling. I just move some things around, and then I have my TikTok deliverable just like that. So that is a round-robin lightning round of most of my favorite assistive AI tools, but let's talk about what's new. Let's talk about the fresh stuff. Let's talk about generative AI with the Firefly Video Model inside of Premiere Pro.

So the first way we're bringing generative AI into video editing is with a tool called Generative Extend. It's over here in the toolbar, it's currently in the Premiere Pro Beta, and it's really, really simple to use. It's literally clicking and dragging. Generative Extend allows you to add just a few more frames when you need them: to hit that beat in the music, to hold a moment that's emotional. It also works on audio. You can drag out to have a little bit of room tone. So does production shoot room tone for you? Because they don't seem to like me very much, and they don't really give me the room tone. I have to create it from little bits and pieces. And room tone is helpful to smooth out your audio edits.

So what you can do here is-- Okay. Here's a circumstance that I ran into all the time when I worked in corporate. Everyone wants this sound bite and then to fade up or fade into the logo, so I'll show you right here. I'm going to mute the audio 'cause it is bad on purpose, and you'll see why. Okay. I'm going to make this bigger too. So boom. So you can see here what's happened is he says something, and he pauses, and that's all I want. But then it fades out, and he keeps talking. So my options are: I could move the fade earlier, I could shorten it, I could do a hard cut, or I could use Generative Extend and change what happened, rewrite history a little bit. So I'm going to come here to the Selection Tool, and in the timeline, I'm going to look and see-- So he stops talking right here, and I'm just going to make that my new edit point. And Generative Extend, click and then drag from here. Generative Extend does require an internet connection, because generative AI is hard, y'all, and it needs to go to the internet; the models are too big, the compute is too much right now, but things are changing fast. A year ago, if you said we were doing this, I'd be like, "Okay. Sure. Yeah." So what it's doing is it's uploading a little piece of your material, it is generating an extension based on that, and then it's pulling it back down and making a new clip. And I'm going to jump over here into one that I already did, and I do encourage you to go and try this, because you have to see it to believe it. So I'm going to mute this audio again. Okay, so you can see what's happened here. I'll play it once full screen, and then I'll play it once so you can see the AI-generated label.

Nice, right? Yeah. So-- Clap, clap, clap.

So he stops talking, and then it fades in. And then I have everything after that, so it works so well. You can see right here, AI-generated. It's very clearly marked in your timeline, so you know exactly what was AI-generated. So what about audio? I'm going to go back to this one and show you why I turned the audio off. So this has some very, very loud room tone, just so it's very clear what's going on here. [Man] We're in California, so hydration is key. - Compared to this-- - We have cramping. She started taking Dryp, and it did amazing for her. We're in California, so-- Okay. Trying to blend those together is going to be tough. Even if I, like, noise-reduce that one, it's going to be really hard, because the resonance of the room, everything about it, is going to be different. So one of the things that audio mixers taught me to do is to bring the room tone from the worst clip into the whole piece, and then mix it and reduce it together to balance it out. So I can do that right here. All I have to do is click and drag from the edge all the way back over here, and I'm going to jump over to where I've already done it, and you can hear what I mean. So I have clicked and dragged, and this is all AI-generated. [Man] This product called Dryp, it's amazing. I mean, Janae in particular-- Let me solo that so you can hear it.
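The mixer trick in play here, looping the worst clip's room tone under the entire piece at low gain so every cut sits on the same ambience bed, is simple to express. A minimal sketch with NumPy, assuming mono float arrays at the same sample rate (a real mix would also crossfade the loop seams):

```python
import numpy as np

def add_room_tone_bed(dialogue: np.ndarray, room_tone: np.ndarray,
                      gain_db: float = -18.0) -> np.ndarray:
    """Loop a room-tone sample under the full dialogue track at reduced
    gain, so the ambience stays constant across every edit point."""
    gain = 10 ** (gain_db / 20)                        # dB to linear
    reps = int(np.ceil(len(dialogue) / len(room_tone)))
    bed = np.tile(room_tone, reps)[: len(dialogue)]    # loop to full length
    return np.clip(dialogue + gain * bed, -1.0, 1.0)   # avoid hard clipping

# mixed = add_room_tone_bed(full_piece, worst_clip_tone, gain_db=-20)
```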

Obviously, that's insane, but imagine it's real room tone that you're using, and there it is, and I pulled it back from the very edge of the clip, right at the edge of the dialogue. I didn't have a drop of this room tone, but it works. It's an amazing, amazing tool. So what about other types of audio? What about sound effects? Yeah, it works on sound effects. So on this timeline, you can see there is a video that's shorter than the audio. So when I'm playing this back, when you see the video go black, the sound that's continuing is AI-generated with Generative Extend.

I know, right? Really cool. Here's another one.

And then one more.

So it works on all types of audio. So really, go in there, try it, push it, let us know how it goes. And we know, look, when you look at it, it's not always seamless. We know there's a little bit of gamma shift here and there, there's flickering, it's not perfect. There are limitations in beta: 1080p, 720p, up to 30 frames a second, you get it back in H.264, there are so many things. We're working on it, and we need your feedback as you're working. You can right-click on this AI-generated label here, and you can generate again. It'll save all of your generations as you go, so you can always go back, but you can also rate whether it's a good output or a bad output. And you're going to tell us right here, so that we build it the way you tell us to build it. That's really, really helpful for us, because right now it's amazing, but it needs to be even more amazing to fit into a pro workflow. So speaking of that, let's look at this other piece I have. What about narrative? I think that generative AI has a really big role to play when you're doing all kinds of storytelling, so I'm just going to play this whole piece right here for you, so you can see what we're working with. It's about 45 seconds long.

[Woman] Day 409. Detected unusual readings today.

We think it warrants further investigation.

We'll update with findings soon.

All right. So you can see the vibe I'm going for. You can see I've got some opportunities in there to improve my cut as I'm working on it. Let's start first with Generative Extend, since we've been on that. So there are a couple of things here that are bothering me as an editor. One is the tone of this piece: it's really weird and disjointed and Lynchian, and I want it to be like a disconnect between these astronauts, but in this clip right here, he turns and he nods. It's creating a sense of comfort between them, but I want him to turn and just stand there and stare like a weirdo. So all I have to do is come right to the edge before he nods, and then that's my new trim point. I'll go ahead and go back from here. Generative Extend, drag it out. And then the other thing that really is bothering me, it might have bothered you too: her eye lines are shifting. I'm like, "Is this really the best take?" Yeah, that's the best take. So she's looking back and forth. I'm not digging that.

So I'm just going to come to-- Oop, right before she moves again, right here. That's my new edit point, and Generative Extend. Again, I can do 2 seconds of video, 10 seconds of audio, so this is plenty. I only need a few frames here and there. So we're going to let that cook, and I'm going to switch over to talking about the Firefly Video Model that we have on firefly.com. We unveiled Text-to-Video and Image-to-Video, and these are not in Premiere Pro, but that doesn't mean that I can't go and play and bring that stuff into Premiere Pro, because it's really handy for some of these things. You'll also notice that this generation screen is backwards, because that shot has a flop effect on it, so it's generating the extension underneath the effects, and then it'll apply the effects back on. So hot tip for you.

Okay, so here's one. This is really killing the vibe of my offline edit, when I have people trying to imagine, is this timing working? Is the tone working? This is going to be a shot that I'm going to get back from VFX, you know? And I don't really-- I could put stock in there, but I don't know. And so it just is really hard to imagine. It's like, "Woah, this is really spooky. Is that shot working?" No. I have no idea. But I can go to Firefly Text-to-Video and generate a moonscape as a temporary thing while I wait for my VFX company to get that shot back to me. In Firefly, you have all kinds of settings here for camera controls and motion, and we have some prompting guidelines on our Adobe HelpX site too, because it really is all about getting specific and detailed and putting in exactly what you want: putting in camera controls, and characters, and things like that, writing these really long, beautiful prompts and creating art from those. So you can see I have already baked a few here, and you can see the prompt is sort of like: sweeping low cinematic drone shot flying over a bleak mountainous snowy alien landscape on an overcast day. That one's fine, it's cool. And then I can keep going from here. I can also use the seed to iterate on something I already like without having to start from scratch, but I'm going to go into this one and show you what I have generated that I liked.

That's better, right? Yeah. It's easier to imagine how this edit is going to work.
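The prompting guidance mentioned a moment ago (camera move, subject, setting, all spelled out) can be treated like a template. A toy helper that assembles prompts in that shape; the field names here are my own invention for illustration, not a Firefly API:

```python
def build_video_prompt(camera: str, subject: str, setting: str,
                       lighting: str = "", style: str = "") -> str:
    """Compose a detailed text-to-video prompt from shot descriptors,
    in the spirit of the moonscape example from the talk."""
    parts = [camera, subject, setting, lighting, style]
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_video_prompt(
    camera="sweeping low cinematic drone shot",
    subject="flying over a bleak mountainous snowy alien landscape",
    setting="on an overcast day",
)
# -> "sweeping low cinematic drone shot, flying over a bleak mountainous
#     snowy alien landscape, on an overcast day"
```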

Yeah. Yeah. Yeah. Yeah. I love this stuff. It's so fun. Okay, so I think my extensions are done. Yeah. Okay. So check this out.

Nice, right? Yeah. That's pretty cool. So another use case that I really love is being able to generate atmospheric elements and comp them in. So I've already got a few over here, and there's this snow shot. The prompt is like, heavy realistic snow on a solid background. Not really quite what I'm going for, it's a little Star Wars. This one-- Yeah, they're a little too similar. I ended up liking this one quite a lot. This one looks a little real, so all I have to do is drag that into my timeline, and it is 720p, because the Firefly Video Model is 720p right now, but that'll change. And I'm just going to come to my new Properties panel and hit Fit, go to Effect Controls, go down to the Blend Mode, and hit Screen, and I'm just going to Option-drag this guy over and trim that up. And now I have added a little bit more depth to a shot, because we couldn't afford snow on set. You know how hard that stuff is to clean up? It gets everywhere. And now we've got a little more depth to our shot. [Woman] Day 409. Detected unusual readings today. Which is really cool.

And one other use case I love is Image-to-Video. You can use your still images, things that you already shot, and bring life to them, but I want to use this as a way to indicate to the director that I really think it's worth shooting a pickup shot. Like, this shot right here-- [Woman] We'll update with findings soon. The whole vibe of the story is that something sabotage-y is happening. It's spooky, but I don't get that from him flipping a switch. I wish on set he had unplugged the cable, and the expense and time of going and resetting up that shot, I think, is worth it, but it's hard for me to get that across to the director without showing the visual. So all I have to do here is go to Export Frame-- export a JPEG, and then I'll come back over to Firefly here, upload that, and then I can just prompt it from there. And then I can write in a prompt about an astronaut hand coming in. I'm going to jump to my pre-baked cake, because we're running out of time, and go over here. And so just to remind you what it looked like before: [Woman] We'll update with findings soon.

And this is the one I have mocked up to show the director, to convince them that we need to reshoot this. [Woman] We'll update with findings soon.

I know. The first time I did this, I was like, "I have to tell somebody. This is so cool." There are little issues with this shot, and I could have iterated on it further; there are some circumstances where this would work better than others. In my case, I want to use it to convey creative intent, so that I can convince people that I'm right, that my edit will be better, that the whole story will be better if we do this reshoot. And of course, this works for other VFX stuff too; another instance is the snow globe shot at the end. The director and I can spend time iterating a little bit with ideas in Firefly, and then give the finished prompts or the finished outputs to our VFX company so they have a better starting point. So there's less iteration up front, fewer versions back and forth, before we hone in on the idea. So I'll leave that as a little surprise at the end. But let me play this whole finished piece for you right here, so you can see that what I've got going is, I think, a little bit better with Firefly and everything in there.

Day 409. Detected unusual readings today.

We think it warrants further investigation.

We'll update with findings soon.

Little scene-elsewhere action at the end there. Yeah. So that's better, right? And it didn't take me a lot of extra time. Yeah. I'm really excited about bringing generative AI in and intersecting it with that assistive AI workflow, because I think it really empowers you to tell stories faster, better, more efficiently, and to convey the creativity in your head a little bit easier. And we want to do that responsibly too. You can export from Premiere Pro with Content Credentials inserted as well; you'll find that on the Export tab. And we're all about this responsible innovation. It's really important, it's part of the conversation, and a lot of that comes from our community engagement. We could not have made Generative Extend happen without all the community feedback we've gotten, and we really, really need more. So please, when you go try Generative Extend, rate your extensions. Let us know. I'm going to read every single one, I'll tell you that. So please go check out the booth, come visit, go see video in our booth. And right now, you can sign up on firefly.com for the waitlist to access Text-to-Video and Image-to-Video. If you have a Creative Cloud subscription, you can download the Premiere Pro Beta right now, try Generative Extend, and let us know what you think. I cannot wait to hear what you do with it. Thank you. [Music]

[Meagan Keane] I mean, I will say, that is the power of having storytellers as part of your marketing team, because I love the way that Kylee puts those stories together. What's so exciting for the Adobe Video teams, however, is that when we put in new features, we slave over them. We have so much intent for what we think they can unlock for customers, and then we put those features into your hands, and all sorts of new use cases come out. We get so much more from what you all actually do once the features are in your hands. And for that reason, we have invited the very talented guys from Versus Creative Studios here today to show you how they're already implementing generative AI and assistive AI features across our tools into their workflows. So with that, I would love to introduce you to Justin Barnes from Versus Creative Studios. [Music]

[Justin Barnes] Hey. Hello. Hello. Hello. Everyone can hear me? Great. Hello. Excited to be here. My name is Justin Barnes. I'm the Executive Creative Director. Joining us momentarily will be Brian Sanford, who is the Head of Post Production, and we are from the creative studio Versus.

[Music] Okay.

That Miami edit. Okay, so AI in real-world production. Versus is a very busy studio; we're working on around 40 projects per month. So AI, for us, needs to be production ready, and it needs to work to deliver for our clients. Not just experimentation, but actually making real work that just goes out.

So let's talk about a case study for that. I'll just go over the creative brief overview really quickly. We're looking to create a 10-second animated end-tag for this natural fruit energy drink. We're going to use this on the back of commercials, and we're going to use this for social media. So, a couple of caveats: we want to feature this product in a natural environment, and that natural environment depends on the flavor. So an apple would be in an apple orchard. A melon would be in a melon patch. A melon patch? And we want to build one case study for it on the apple, and then just move on and replicate that across all the different flavors of the drink.

So a pretty simple brief, but how are we going to pull this off? Well, you can see here from the storyboards, we've got this 10-second board where we're going to start on this apple, and the camera's going to boom down, tilt down, move down through this apple orchard forest, and then land on this tree stump with the product on there and some sliced apples around it. So we have these two hero moments, right? We have this apple hanging in a tree, really beautiful, and then we move through the forest, and then we land on this tree stump, our second hero moment. So how are we going to pull this off, and how are we going to use AI to get this done?

So, just a quick look at the workflow we're going to walk through here. We're going to create the environment in AI: we're going to generate images, and we're going to use those images to generate video, and these are going to be our hero moments. Remember, the apple in the tree and the tree stump. We're going to fill in everything in between, again using AI and Photoshop. We're going to make our product in 3D, composite, render, and put it all together in After Effects.

So let's get started. First, we started just generating images, right? I think the prompt here was, an apple hanging from a tree in the sunlight. And we just started generating more and more images to see what we were getting, see what we were really liking. We were tweaking the prompts here and there, getting different things. What we were really liking were frames 04 and 05; I think they were really hitting home for us. We like the vibe, we like the feel, but they're not perfect, right? So very simply, especially with the stump one, we need to make room for our can. So of course, just jumping into Photoshop, removing this really quickly, not even prompting anything, and it just goes away. We've all done this, it's amazing. Cloning that out would be an absolute nightmare, but this is just very simple. So now we've got these two clean hero images that we were using.

So now it's time to start bringing them into video. We just wanted to bring some life to it, have some leaves blowing in the background, just a little bit of normal movement, to make it feel alive. So you can come and see here the prompts that we were doing. I think the prompt was, subtle movement in the plants; that's all the prompt, and then using these images to drive that. And now you can see here, it was looking good until this hand just comes out of nowhere, unprompted, comes in with a glove on, right? I don't know. AI's going to AI.

How can we solve this? Well, we can, of course, just grab this clip and bring it into Premiere and just use the Gen Extend tool, right? We would just cut it right before, Gen Extend it out there, and we do get this really great hero clip moment. I think these are sped up a little bit. So it works and it's cool.

But we could do this, or we could just keep prompting, and that's exactly what we did. So we were playing with stop motion on the left, and you can see frames 04 and 05; we're like, "Hey, add a little bit more water on the apple," and it starts going wild. Like frame 07, where a corporate handshake comes out of nowhere.

It's so wild. Well, that'll be the director's cut. But we were really liking 02 and 03. They were feeling really nice, just a little too much camera movement for what we wanted. So we were playing around a little bit more and really fell in love with 08 and 09, on the bottom, that you can see up there. Those were working really, really well. So now we have our two hero clips, now as video. Amazing. So just to recap, right? We're starting up at this apple, and now we need to build out this environment to get all the way down to the tree stump where we're eventually going to place our product. So that's exactly what we did. We brought this into Photoshop, made some stills, built a 1920x8000 comp, and just went in there and started generating things as fast as we could to fill that in. And we weren't being very precious with this; we didn't need to be. Because when we bring this into After Effects, the speed at which we're going to move from one video to the other, the motion blur, all of that, it's not really going to matter, and if we did need more detail, we'd go back in there. So you can see here on the left the actual comp that we ended up using. So again, at the top is generated video, we move through our Photoshop environment, and we land on generated video as well. And then, bringing that into After Effects, you can see the effect here.
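The move itself is simple once the tall still exists: slide a 1920x1080 window down the 1920x8000 comp with easing, so the camera ramps out of the first hero shot and settles onto the stump. A minimal sketch of that crop animation (the easing choice is mine, not the actual After Effects setup):

```python
import numpy as np

def boom_down_crops(comp_h=8000, frame_h=1080, frames=240):
    """Per-frame top offsets for a vertical camera move through a tall comp.

    Smoothstep easing ramps the virtual camera out of the apple hero shot,
    flies it through the Photoshop-generated environment, and settles it
    onto the tree stump."""
    t = np.linspace(0.0, 1.0, frames)
    ease = t * t * (3.0 - 2.0 * t)                # smoothstep: slow-fast-slow
    return (ease * (comp_h - frame_h)).astype(int)

# for y in boom_down_crops():
#     frame = tall_environment[y : y + 1080, :, :]  # crop, then add motion blur
```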

Very convincing. Looks good. We were really, really happy with this. I'll play it through a few times.

Thank you. Thank you. So great, now it's time to put the can in there. No problem.

We did this in Cinema 4D. We just built the can, we lit the scene, we placed it at the right angle, did all of that. You can easily do this in After Effects now with the 3D tools, so you can just stay in After Effects the entire time. We had Cinema 4D, we're paying for it, we might as well use it. So we only needed to render out a frame, because we were just going to composite it into that last shot with the tree stump. So that's exactly what we did. You can see a very simple composite here, just putting it in there. It looks good.

Right. It works. However, we wanted to bring this to life with more drama, something more atmospheric; we wanted to create a better vibe for this. This is clean, it looks good, it could work, but what we ended up doing is jumping back into generative video, and we started creating composite-ready assets. So here you can see slowly dripping water drops on a black background. This, we would composite over the can to add some sweat coming down the can.

Then we started creating this whole library of composite assets: sun rays coming in, particles and dust catching sunlight, some smoke if we wanted it more atmospheric. We built this really big library of composite assets, all on black, because we were going to bring them into After Effects and layer them all together using Screen mode and Overlay, you know, all the tricks (there's a quick sketch of those blend formulas below). So that's exactly what we did. So we bring the can in here, and you can see us adding all of those different effects, putting it all together, and bringing it to life in a cooler, more atmospheric way. And we did it with the apple as well, adding some more drips on there, all from this composite library that we have, and I'll play that through a little bit more. It looks a lot better, right, than just clean? So you can see the final result here.

And now we've got this really beautiful end-tag, and we can cut this into social, do a few different things, add some more clever copy, and it's very convincing. Add the audio, add some VO, add the call to action, and this is ready for prime time, so we're good.
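Those blend-mode tricks are worth spelling out: under Screen, anything generated on pure black composites in without a matte, and Overlay pushes contrast where the layers interact. A minimal sketch of both standard formulas plus the layering pass, with frames as float arrays in [0, 1] (these are the textbook formulas, not After Effects internals):

```python
import numpy as np

def screen_blend(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Screen: 1 - (1 - base) * (1 - layer). Pure black (0.0) in the layer
    leaves the base untouched, so black-background assets need no matte."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

def overlay_blend(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Overlay: multiply in the shadows, screen in the highlights."""
    low = 2.0 * base * layer
    high = 1.0 - 2.0 * (1.0 - base) * (1.0 - layer)
    return np.where(base < 0.5, low, high)

def stack_elements(base, elements, modes):
    """Layer a library of composite-ready assets (sweat drips, sun rays,
    dust, smoke) over the shot, one blend mode per element."""
    out = base
    for element, mode in zip(elements, modes):
        out = mode(out, element)
    return np.clip(out, 0.0, 1.0)

# comp = stack_elements(can_shot, [drips, sun_rays, dust],
#                       [screen_blend, screen_blend, overlay_blend])
```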

So this became our workflow for all the different flavors. So when it came time to do the melon one, it was that exact same process, right? We would generate some images, we would use those images to generate some video, we'd fill out this environment, all in Photoshop. This time we would be moving from left to right, but it's that same idea. We built composite assets again, we used some from the apple, and then you can see the result.

And then it had this little shake at the end, like something was about to burst out of the watermelon, which we kept, and I loved it, but we'll probably have to go in and fix that. The client's not going to be happy about that.

So pretty wild. For one artist to be able to do this and not need to jump into full 3D and build out all of these environments is absolutely incredible. I mean, this would have taken three days to build that environment in 3D, and we'd have had to do it for every single flavor. We've got this workflow down now to around a half day for one artist to make one of these. And it's extremely powerful, especially not needing to jump into Cinema 4D; you can just have at it and make it happen. So again, just very interesting ways that we're thinking about and using AI and all of these new tools to bring things together, cut and paste, and make all these things happen.

Thank you.

Okay. So, excited to bring on Brian Sanford, who heads up all of the post production here at Versus, to talk a little bit about how we use AI to really influence the productions where we're going out and doing bigger work. Brian, I'll let you take it. Thank you, everybody. [Music] [Brian Sanford] Nice work, bud.

Hi. As Justin mentioned, my name is Brian Sanford. I'm the Post Production Director at Versus, and I've been an editor for almost 20 years. Editors play a huge role in storytelling, and nowhere is that greater than in documentary filmmaking: taking hundreds of hours of interviews, shoots, reshoots, B-roll, and archival, and trying to tell a great story. So today, I want to talk about how AI tools are influencing our workflows on a film that we have in development right now called The Abductee. It's about a man who claims to have experienced the most alien abductions ever.

So let's take an overview of our workflow here. We're going to do some story exploration in the edit using AI tools, and those tools are going to allow us to edit and prep our footage more quickly and put together a quick audio edit. But then we're going to use these other generative AI tools to audition and ideate visual solutions. We're going to use the previs that we create to then inform real production decisions, giving the editor more creative control.

So let's jump into it. Let's start creating a scene, right? So here you'll see that we took a three-hour interview with the investigators for The Abductee and built a scene to introduce this character, the most abducted man. We used Speech-to-Text for really quick transcription. Then we moved into Text-Based Editing for really quick search and editing to land on our final audio edit. And we were able to go from that three-hour interview to that quick audio edit in about 90 minutes.

So let me play you back this audio edit.

[Man] Heard about this guy who claimed to be the most abducted man, and his name was Stan Romanek. He talked about his first sighting of a UFO in the Denver area. He talked about three aliens showing up at his door one night and they wound up taking him up. He brought along a video, in fact, of an alien peeking in his window. You know, my first impressions of him were very unusual, seemed like he was hiding something.

Okay. So we all know what that is. That's our Frankenstein cut, right? We've all landed on that as documentary editors. It tells a really good, tight story, but the visuals need to be developed. So this is where image and video generation are going to really change the way documentary editors work. In the past, you'd take an audio cut like this, send it off to the animation team, put a big graphic in it that says, really cool animation here, and just walk away. But now I'm going to take control of this, and I'm going to help tell this story. And so we auditioned multiple approaches until we found one that we thought worked, and I'll show you how we did that here.

So first, we thought, "Hey, let's do a B-roll shoot. Let's capture some dramatic footage, very Lynchian, mysterious." So we used reference imagery and prompts to develop a style, and then re-referenced that imagery to create different angles and shot compositions, things that we could use to test it out in a board-o-matic or something. But looking at the images, they didn't really feel quite right. They felt a little too sleepy for a story that's out of this world.

So we tried to use animation to tell the story, right? On the left there, you see a woodcut look, green and yellow, alien colors. We also tried one that was a little bit more cartoony. Both really fun, all GenAI, but a little too playful for this story, right? We wanted to go a little more dramatic. I wanted to go bigger with it.

So we started exploring this more visual effects focused approach. This was really cool. We decided we would create this highly stylized, dreamlike world for the film, played with shadowy figures, blurry UFOs, even treated some of the archival footage that we had of our subject. It felt really good for a story that was laden with mystery and conspiracy and the unknown. So next, since we were liking these images, we brought them into Gen Video.

Okay. So now you can see, essentially, the dailies that I'm building for myself in this cut. I'm directing these scenes with camera motion controls. I'm controlling how the subjects are moving in frame. But you'll see, in the top right there, we do have a little problem. Not so good up there. So let me show you again a little bit about how we're going to use Gen Extend to fix a shot like that, but also think about it in the context of a generative video workflow. I just thought this was amazing while we were doing this. Obviously, you've seen us use Gen Extend throughout this presentation, cleaning up a clip that wasn't working. But as you all start to work with generative video, you're going to realize that you run into these frustrations of funky generations and hallucinations, and this tool is going to become part of how you overcome that, just looking at the A/B between the two.

Now let's say you even have a clip that is great, and it's just five seconds long. You could also use Gen Extend to overcome that and make the clip a little bit longer. This is going to become a critical tool for editors as they're working with generative video. So let's see where we landed with the cut.

And we heard about this guy who claimed to be the most abducted man. He talked about his first sighting of a UFO in the Denver area. He talked about three aliens showing up at his door one night. They wound up taking him up. He brought along a video, in fact, of an alien peeking in his window. My first impressions of him were very unusual. Seemed like he was hiding something.

Okay. Pretty cool.

So this previs doesn't just become a guide, it becomes a blueprint for everyone on the job. The director is going to know how to capture the footage. Our VFX team understands the intent that you're bringing to this as an editor.

This allows a transfer of craft and experience in a way that's just so much different from how we're used to working. We're talking about going from that little slug of graphics to something like this. This is absolutely game-changing, and it allows me to put more of my creativity into the piece. And most importantly, we also don't waste any time going down those other paths that we looked at, right? Very valuable.

So what's the takeaway here, from both what I just showed you and what Justin put together today? There is no one-click solution to make great work. AI is an extremely powerful tool when it's in the hands of experienced and talented people like everyone here in this room, and I really can't wait to see everything that you all make with it.

I want to thank all of you for being here today, and everyone streaming at home, and thank Adobe for really giving us access to these amazing tools that are completely changing the way we're going to be working at Versus, and the way that we're all going to be working over the next few years. Thank you.

[Music] [Meagan Keane] All right. Thank you so much, Brian and Justin. That was awesome. And thank you all for being here. As Kylee mentioned, please go and download the Beta of Premiere Pro. Use Generative Extend; we want to hear your feedback. And the waitlist is open, so if you are interested in accessing the new Firefly Video Model, get on that waitlist. We are going to be rolling new participants in at a very rapid pace, so I encourage you to sign up. And with that, we are not going to keep you from getting something to eat and a cup of coffee so you can get to Sneaks. Enjoy the rest of the evening. The Bash is amazing, so do not miss it. Thank you all for coming.

[Music]


About the Session

Discover how AI is revolutionizing video creation by automating tedious tasks, inspiring creative ideation, and helping creators craft more captivating content. Join top video professionals Meagan Keane and Kylee Peña from Adobe, along with Brian Sanford and Justin Barnes from sought-after creative production studio Versus, who are known for their visually rich and original work that has helped define today’s most influential brands, to see how AI is used by leading creatives to produce award-winning work.

You’ll learn how AI features in Adobe Premiere Pro and After Effects can increase productivity and enhance creativity, including:

  • Innovations in generative AI to tell new stories and reimagine existing ones
  • Text-Based Editing that auto-generates transcripts and makes video editing as easy as cutting and pasting text
  • Enhance Speech to improve dialogue recordings
  • Essential Sound to speed up audio edits

Technical Level: General Audience

Category: Inspiration

Track: Video, Audio, and Motion

Audience: Motion Designer, Post-Production Professional
