My deepfake videos - "fixing" movies with de-aging FX and younger characters

After watching the clips Bloop posted of his work, I applaud his efforts, as his craftsmanship and skill are clearly evident. As an academic exercise he has shown what is possible and what VFX techniques filmmakers might use on future movies.

I will say I would personally find it odd to have a studio replacing the face of the actress who played young Black Widow with young Scarlett Johansson, or replacing the face of young Willy Wonka with young Gene Wilder, as these are films already released and are already in the public space.

It would be hard for me to imagine any movie already released and seen by millions of people having its effects or characters digitally altered and a 'new official version' being released replacing the original version of the movie - even if the updated effects actually look better and make the movie better and more accurate overall.

I mean, if anyone ever did this to my favorite movie of all time, Star Wars, I know I would be upset. I saw Star Wars in movie theaters in 1977, and that is the only version of the movie I want to see - exactly as I first saw it when it was released. As groundbreaking and exceptional as the special effects were, they were not perfect, but that only adds to the charm and really shows the creativity of the filmmaker and what was possible in films at the time. I can't imagine anyone going back in and changing anything to 'improve' Star Wars from how it was originally released. Its imperfections make it perfect.
 
By 2066, you will be able to do your own TOS.

Having the TOS characters speak stardates that make sense would be good.

In Star Trek, I would splice the beginning of "The Alternative Factor" with Hux and Starkiller Base…the energy pulse opening a rift that shakes the Enterprise, with Star Destroyers coming through as an invasion, as was thought to be the case early in the episode.
 
The thing is, having an intellectual property free-for-all was NORMAL for most of human history.

The Bible is not all the relevant stuff that was written about those "characters" back in the day. It's a cherry-picked collection of texts that the church decided to canonize. There were plenty of other writings circulating in the ancient world, some of which have the characters acting in ways that don't fit our ideas of them at all.

Would it ruin our copyrighted famous works to let other creators mess with them? Yes.

And no. It would also improve them. Let's not pretend that any of our favorite stuff is flawless or immune to aging.



It's a matter of time before some kind of fan-edit reworking of a show (movie, TV, etc) hits the internet that is deemed flat-out superior to the original.

There are plenty of "I personally like it better" versions of things right now. But eventually there will be a case of "Everyone likes it better and nobody even watches the original." This will provoke some heated debates about ownership & morality in art.
 
Indeed...sometimes the goal is not "I transformed it because I could" but rather "I transformed it to make it better"...or worse, since you cannot please everybody ;)
 
I wasn't sure if this belongs anywhere on this site, but I figured it falls under "Entertainment" as much as anything else. I've been making deepfake videos, mainly focusing on fixing things like deaging in older films using current deepfake tech.

My most recent video, though, is "fixing" the new 2023 movie, Wonka, by replacing Timothée Chalamet's face with Gene Wilder, from the original Willy Wonka and the Chocolate Factory. I wish I could change his voice, too, but that'll have to wait:


It may seem like it's easy to swap faces nowadays, but in order to do it relatively convincingly, there's still a lot of work involved. For example, in addition to the hours of prep work, the days (even weeks) of running the deepfake software to replace the source face, and finally the finishing edits, I had to recreate Chalamet's face in all the shots where his hand goes in front of his face, because the deepfake tech isn't quite good enough to swap a face when too much of the original face is obscured. There are masking tools that I tried to use - which worked well for the "hover-chocs" and the teacup he eats - but they were inadequate for masking his hand.
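For anyone curious what that masking step amounts to conceptually: the swapped face gets alpha-blended onto the frame through a mask, and anything occluding the face - like a hand - has to be zeroed out of the mask so the original pixels show through. This is not DeepFaceLab's actual code, just a minimal numpy sketch with a made-up `composite_face` helper:

```python
import numpy as np

def composite_face(frame, swapped, mask):
    """Alpha-blend a swapped face patch onto a frame.

    frame, swapped: float arrays of shape (H, W, 3), values in [0, 1].
    mask: float array (H, W); 1.0 where the swapped face should show,
          0.0 where the original frame (e.g. an occluding hand) must stay.
    """
    alpha = mask[..., None]  # broadcast the mask over the color channels
    return alpha * swapped + (1.0 - alpha) * frame

# Toy 2x2 "frame": original footage is gray, the swapped face is white.
frame = np.full((2, 2, 3), 0.5)
swapped = np.ones((2, 2, 3))
# Mask out the bottom-right pixel, as if a hand covered that part of the face.
mask = np.array([[1.0, 1.0],
                 [1.0, 0.0]])

out = composite_face(frame, swapped, mask)
print(out[0, 0, 0])  # 1.0 -> swapped face shows here
print(out[1, 1, 0])  # 0.5 -> original frame preserved under the "hand"
```

When the mask is wrong (say, the hand isn't cut out), the swapped face gets painted over the occluder, which is why those shots end up needing frame-by-frame fixes.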

I also have to do a lot of single-frame editing after the deepfake conversion is finished (both in Photoshop and in After Effects) to try to make the video seamless.

Anyway, I hope you like it.
What sort of computer specs do you need to do these? I have problems just running DaVinci Resolve on my laptop :lol:
 
Thank you for the compliments! And I do agree that changing movies after they've been released, especially swapping actors' faces for different actors', is not the goal. My thought was more about showing where the tech can lead, so that we might see it used in future projects rather than retconning existing works. But I obviously only have access to existing properties.
For Wonka, I wouldn't suggest having Gene Wilder's face swapped in for Chalamet's - or any other actor - for an entire film, but it afforded me the opportunity to create a fantasy where Wilder, who for many people "is" Willy Wonka, reprises the role. Obviously, it's still not and would never be Wilder, as the performance is Chalamet's.

Same thing for Black Widow. I realize it takes something away from Ever Anderson's performance to replace her face (though her underlying performance is still the basis for the face replacement), but my thinking was that she didn't resemble young Scarlett Johansson enough to fully allow for the "suspension of disbelief" that she's supposed to be the same person as the older Natasha Romanoff. I realize that if this were to be used in the future - replacing young actors' faces so that they look like young versions of the older actors in the film - it could be considered bad for actors, since they wouldn't have their own faces shown. But you could argue that any role requiring prosthetics also obscures an actor's face. And I'm sure there are cases where the "best" child actor didn't get a role because they didn't resemble the older actor enough, so using digital makeup could allow for better performances by better child actors, rather than having to cast someone who offers a compromise between acting ability and appearance.

It's all a slippery slope, though. Eliminating prosthetics means less work for practical makeup artists. Actors may have gotten extra money, or simply gotten roles, because of their willingness to undergo extensive prosthetics - Doug Jones, for example, has made a career of that. And there are the rights of the actors whose faces are being used to replace other actors', not to mention the rights to use the footage for the replacement.
 
I have a desktop PC, though it's not the newest or most powerful. It's an HP Omen 30L, which has an AMD Ryzen 5 5600G processor at 3.9 GHz, 16 GB of RAM, and an AMD Radeon RX 6600 XT card with 8 GB of GDDR6. I don't have anything overclocked or undervolted or anything; it's all running "stock."
Using Deepfacelab, you can adjust the settings to fit your specs, so it's possible to train deepfakes on lower-spec equipment, but it will take longer. Even so, you still run into limits. I'm topped out at a resolution of around 320 x 320 pixels for my replacement images, which is okay for a lot of the faces I replace, but if you try to replace a shot of a full-screen face, the lower resolution is noticeable.

Here are examples of what I mean, from my Wonka trained images that I chose not to use in my final edit (I also didn't do any cleanup on these, since I didn't plan on using them):

[Attached image: 01265.png]

This is the merged output on a fairly large face from 4K footage (3840 x 2160, including the letterboxing). Trying to blow up a 320 x 320 face to overlay onto the 4K footage looks terrible. I have to pick and choose which footage I use, though I've been trying some workarounds.
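A toy example of why that blow-up looks terrible: naive upscaling just makes the existing pixels bigger, it can't invent detail that was never in the 320 x 320 output. (`upscale_nearest` is a made-up illustration, not part of any deepfake tool.)

```python
import numpy as np

def upscale_nearest(patch, factor):
    # Nearest-neighbor upscale: each source pixel becomes a factor x factor block.
    return np.repeat(np.repeat(patch, factor, axis=0), factor, axis=1)

face = np.arange(4.0).reshape(2, 2)  # tiny stand-in for a 320x320 face crop
big = upscale_nearest(face, 3)       # e.g. blowing the crop up for a 4K composite

print(big.shape)            # (6, 6) -> more pixels...
print(len(np.unique(big)))  # ...but still only 4 distinct values: no new detail
```

Fancier interpolation (bicubic, Lanczos) smooths the blocks but has the same fundamental problem, which is why large on-screen faces expose the resolution limit.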

For this next image, I tried to use EbSynth - which can be used to do some deepfake-style de-aging on its own - to replace the low-resolution output produced by Deepfacelab with an image I edited in Photoshop using a high-res frame from Willy Wonka and the Chocolate Factory.

[Attached image: 1704149168975.png]

Unfortunately I don't have the original, unedited merged output to show you, as I deleted files to save space after I completed my work (deepfaking also requires a lot of hard drive space). Anyway, it looked better, but it still doesn't look quite right, since the AI of Deepfacelab does a better job of recreating the lighting conditions of the face it replaces. Also, sometimes single-frame images can look pretty good, but seeing the full sequence in motion reveals flaws. I end up doing a lot of editing in Photoshop and Adobe After Effects to try to get things looking as seamless as possible.

As I mentioned, EbSynth can also be helpful, but it can require a lot of editing and trial and error. EbSynth works best when the shot is static and the person's face doesn't move much within the frame. Any panning by the camera, or movement of the person's head, blinking, or mouths opening and closing, and EbSynth can't understand what it's supposed to replace. But it's a lot faster than Deepfacelab and can be helpful for cleaning up DFL's output, or if you just need to edit a quick shot.

There are a lot of deepfake utilities that run better on Nvidia hardware, so if you were looking at upgrading, I'd go with one of the newer RTX 40-series cards if you can afford it. I think there used to be an option to run Deepfacelab or Faceswap on Google Colab, so that the GPU processing is done in the cloud, but I don't know if that's still available.
 
Have you tried using Topaz to upscale?
 
I haven't used Topaz, but I've tried other upscalers, like Cupscale. Deepfacelab has a built-in upscaler you can apply to your output, but it's limited, and every upscaler has its limits. I've also used Remini, which can produce very good results, but as far as I can tell it basically recreates faces from bits and pieces of higher-res pictures, so it isn't always accurate. And since it's built for still images rather than video, you have to run it on every frame, which can lead to slightly different results per frame that don't look good in motion.
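One general-purpose trick for that kind of frame-to-frame shimmer (not something Remini itself does, as far as I know) is to temporally smooth the per-frame results, for example with a simple exponential moving average. A pure-Python sketch, with a made-up `smooth_frames` helper:

```python
def smooth_frames(values, alpha=0.5):
    """Exponential moving average across per-frame results to damp flicker.

    alpha close to 1.0 trusts each new frame more (less smoothing);
    alpha close to 0.0 leans on history more (more smoothing, more lag).
    """
    out, prev = [], None
    for v in values:
        prev = v if prev is None else alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

# A per-frame upscaler returning slightly different brightness each frame:
raw = [0.50, 0.70, 0.50, 0.70, 0.50]
smoothed = smooth_frames(raw)
# Frame-to-frame jumps shrink, reducing the visible shimmer in motion.
```

In practice you'd smooth per-pixel values or face parameters rather than a single number, and smoothing always trades flicker for a little temporal lag, so it can smear fast motion if pushed too far.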

The other issue is that your results are only as good as your source images. So if some of your source images are lower res or a little blurry, you get bad results. Unfortunately, sometimes I have to use lower quality source images in order to cover all the angles. If you need to replace a shot of a person where the camera is at an odd angle, and you only have low quality source images from that angle, you're kind of stuck: if you don't use the low quality images, the AI won't be able to train on that angle, leading to bad results; if you do use them, at least the AI has something to train from, but the results won't be great. It's the old "garbage in, garbage out" rule of computing.
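If you want to cull the blurriest source images automatically, a standard computer-vision trick (not anything specific to Deepfacelab) is the variance of the Laplacian: sharp images have strong edge responses, defocused ones don't. A rough numpy sketch, with a made-up `sharpness` helper and an arbitrary threshold:

```python
import numpy as np

# 3x3 Laplacian kernel: responds to edges, zero on flat regions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpness(img):
    """Variance of the Laplacian response over a grayscale image.

    Higher values mean more edge detail; blurry or defocused
    images score low because their edges are smeared out.
    """
    h, w = img.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):          # "valid" convolution, done by shifting slices
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return resp.var()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))     # noisy stand-in for a detailed face crop
blurry = np.full((32, 32), 0.5)  # flat stand-in for a badly defocused crop

keep = [name for name, img in [("sharp", sharp), ("blurry", blurry)]
        if sharpness(img) > 0.01]
print(keep)  # ['sharp']
```

The 0.01 threshold is arbitrary and would need tuning per source; in practice you'd score the extracted face crops and keep the sharpest few per camera angle, rather than applying one global cutoff.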
 
I want a deepfake sequel to THE MALTESE FALCON where Zeros hit San Francisco and all the players besides Wilmer are tracking down the Russian. Gutman and company escape in the attack.
 
There is a multitude of actresses in Hollywood. I'm sure that a couple dozen would have been better than the model used. I have personally met a non-actress who would be a good double for Young. They exist.

The problem is that every art technique needs to have a driving master before it is adequately respected.

So far, the current system still uses the standard of "good enough" rather than striving for excellence.

Laziness and lack of vision are the problem.
 
Since my post got resurrected, it reminded me that I had uploaded another video a while back. I de-aged both Picard and Data from the 1st episode of Picard, Season 1, in the dream scene of Data painting at Picard's vineyard. I've seen other deepfakes of this scene online, but they only did Data. I know it's a dream sequence, so Picard can be whatever age, but my thinking was that since he and Data are both wearing the Next Generation versions of the Starfleet uniforms, they should both look closer to how they looked from the series.

I wasn't 100% happy with the results for Data. The footage presented some problems, such as him turning his head, which is tougher for deepfake software to handle. Plus, I had to try to blend his de-aged face with his older head and neck (and the wonky wig they put on him). But I had been working on trying to get Data "right" for a while, and figured I might as well release it anyway. I re-did the color grading to remove a lot of the yellow tone of the scene, and I also tried to get Data's skin tone looking better.

 