How to DeepFake

There are currently two ways to create your own deepfake video. One is the DeepFaceLab tool and the other is the OpenFaceSwap GUI. Tutorials for both methods follow below:

DeepFaceLab

Downloading the software

We will use DeepFaceLab to create the deepfakes. Another tool, FaceSwap, is also available and will get a separate tutorial.

  • Download DeepFaceLab
    • Make sure to pick the right build for your GPU. If you don’t have a GPU, use the CLSSE (OpenCL) build
    • The download folder also contains some pre-compiled face-sets. Go ahead and download one of them to get started quickly (otherwise you will have to build your own face-set from videos/images)
  • The downloaded .exe will extract and install the program to the location of your choosing.
    • A workspace folder will be created. This is the folder where all the action will happen.

Extracting faces from source video

  • Name the source video data_src and place it in the \workspace folder.
    • Most formats that ffmpeg supports will work
  • Run 2) extract images from video data_src
    • Use PNG (better quality)
    • Choose an FPS of 10 or less that yields at least 2000 images (4k-6k is ideal); see the sketch after this list for one way to pick it
  • Run 4) data_src extract faces S3FD best GPU
    • Extracted faces saved to data_src\aligned.
  • Run 4.2.2) data_src sort by similar histogram
    • Groups similar detected faces together
  • Run 4.1) data_src check result
    • Delete faces that are the wrong person, very blurry, cut off, upside down or sideways, or obstructed
  • Run 4.2.other) data_src util add landmarks debug images
    • New images with a _debug suffix are created in data_src\aligned, which let you see the detected facial landmarks
    • Look for faces where landmarks are misaligned and delete the _debug and original images for those
    • Once you’re done, delete all _debug images by using the search bar to filter for _debug
  • Run 4.2.6) data_src sort by final
    • Choose a target image number around 90% of your total faces
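
For reference, the sketch below shows roughly how you might pick an extraction FPS and dump the frames yourself. It is a minimal illustration, assuming ffprobe and ffmpeg are on your PATH; the file locations and helper names (clip_duration_seconds, pick_fps) are made up for this example and are not part of DeepFaceLab.

```python
import subprocess

def clip_duration_seconds(path):
    """Read the clip duration using ffprobe (assumed to be on PATH)."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ])
    return float(out.decode().strip())

def pick_fps(duration, target_images=4000, max_fps=10):
    """Pick an FPS <= max_fps that lands near the target image count."""
    return min(max_fps, max(1, round(target_images / duration)))

duration = clip_duration_seconds(r"workspace\data_src.mp4")
fps = pick_fps(duration)
print(f"{duration:.0f}s clip -> extract at {fps} FPS (~{int(duration * fps)} images)")

# Dump PNG frames, much like "2) extract images from video data_src".
# The output folder (workspace\data_src) must already exist.
subprocess.run([
    "ffmpeg", "-i", r"workspace\data_src.mp4",
    "-vf", f"fps={fps}",
    r"workspace\data_src\%05d.png",
], check=True)
```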

Extracting faces from destination video

You may choose to extract from either (1) the final video clip you want, or (2) a version cut down to only the face you want to swap. If you choose 1, you may have to spend more time cleaning the extracted faces. If you choose 2, you will have to edit the swapped footage (and audio) back into the final video afterward.

  • Name your final video data_dst and put it in the \workspace folder
  • Run 3.2) extract PNG from video data_dst FULL FPS
  • Run 5) data_dst extract faces S3FD best GPU
  • Run 5.2) data_dst sort by similar histogram
    • Groups similar detected faces together (see the sketch after this list for the idea)
  • Run 5.1) data_dst check results
    • Delete all faces that are not the target face to swap, or are the target face but upside down or sideways. Every face that you leave in will be swapped in the final video.
  • Run 5.1) data_dst check results debug
    • Delete any faces that are not correctly aligned or missing alignment, paying special attention to the jawline. We will manually align these frames in the next step.
  • Run 5) data_dst extract faces MANUAL RE-EXTRACT DELETED RESULTS DEBUG
    • We run this step to manually align frames that we deleted in the last step. The manually aligned faces will be automatically extracted and used for converting. You must manually align frames you want converted (swapped) even if it’s a lot of work. If you fail to do so, your swap will use the original face for those frames.
    • Manual alignment instructions:
      • For each face, move your cursor around until it aligns correctly onto the face
      • If it’s not aligning, use the mouse scroll wheel / zoom to change the size of the boxes
      • When alignment is correct, hit enter
      • Go back and forth between frames with the , and . keys. If you don’t want to align a frame, just skip it with .
      • Left mouse click locks/unlocks the landmarks. You can lock an alignment either by clicking or by hitting Enter.
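
The histogram sort (used above for data_src as well) orders face crops so that visually similar ones sit next to each other, which makes bad detections easy to spot and delete in bulk. Below is a minimal sketch of the idea using OpenCV; the real DeepFaceLab sort is more sophisticated, and the path here is illustrative.

```python
import glob

import cv2

def face_histogram(path):
    """Normalized grayscale histogram of a face crop, for similarity comparison."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist).flatten()

faces = sorted(glob.glob(r"workspace\data_dst\aligned\*.jpg"))
hists = {f: face_histogram(f) for f in faces}

# Greedy ordering: repeatedly append the remaining face whose histogram
# correlates best with the previous one, so similar faces end up adjacent.
order = [faces.pop(0)]
while faces:
    prev = hists[order[-1]]
    nearest = max(faces, key=lambda f: cv2.compareHist(prev, hists[f], cv2.HISTCMP_CORREL))
    faces.remove(nearest)
    order.append(nearest)

# Renaming files to their new index would reproduce the "sorted" layout.
for i, f in enumerate(order):
    print(i, f)
```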

Training

Run 6) train SAEHD

| Setting | Value | Notes |
| --- | --- | --- |
| iterations | 100000 | Or until previews are sharp, with eye and teeth details. |
| resolution | 128 | Increasing resolution requires a significant VRAM increase. |
| face_type | f | |
| learn_mask | y | |
| optimizer_mode | 2 or 3 | Modes 2/3 place work on the GPU and system memory. On an 8 GB card, mode 3 can most likely still do 160-resolution fakes with a small batch size. |
| architecture | df | |
| ae_dims | 512 | Reduce (e.g. to 256) if you have less GPU memory. |
| ed_ch_dims | 21 | Reduce if you have less GPU memory. |
| random_warp | y | |
| true_face | n | |
| face_style_power | 0 | Can enable if you want to morph src more toward dst, but disable after 15k iterations. |
| bg_style_power | 10 | Turn off at 15k iterations. Style options consume ~30% more VRAM, so adjust batch size accordingly. |
| color_transfer | varies | Try all modes in the interactive converter. |
| clipgrad | n | |
| batch_size | 8 | Go higher if you don’t run out of memory. |
| sort_by_yaw | n | Not unless you have very few src faces. |
| random_flip | y | |

Optional: History timelapse

Before converting, you can make a timelapse of the preview history (if you saved it during training). Only do this if you are comfortable with ffmpeg; adjust -r (the frame rate, i.e. timelapse speed) and the %05d0.jpg input pattern to match your history files.

> cd \workspace\model\SAEHD_history
> ffmpeg -r 120 -f image2 -s 1280x720 -i %05d0.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p history.mp4

Convert

  • Run 7) convert SAEHD

Use the interactive converter and memorize the shortcut keys; it will speed up the process a lot.

| Setting | Value | Notes |
| --- | --- | --- |
| interactive_converter | y | Definitely use the interactive converter, since you can try out all the different settings before converting all the frames. |
| mode | overlay | |
| mask_mode | learned | |
| erode_modifier | 0-50 | If the src face is bleeding outside the edge of the dst face, increase this to “erode” away the src face at the edges (see the sketch below this table). |
| blur_modifier | 10-200 | The more similar the faces, the lower you can set erode and blur and still get great results. |
| motion_blur | 0 | |
| color_transfer | ebs | Try all of them; you can even use different ones for different scenes/lighting. |
| sharpen_mode | box | |
| sharpen_amount | 1-3 | |
| super_resolution | RankSRGAN | Enhances detail, especially around the eyes. |
| color_degrade_power | n | |
| export_alpha_mask | n | Outputs transparent PNGs for use in post-production tools, if you need that. |
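
To build intuition for erode_modifier and blur_modifier, the sketch below shows what eroding and feathering do to a swap mask before blending. It is a minimal illustration with OpenCV; the file names and kernel sizes are made up for the example and are not DeepFaceLab’s exact math.

```python
import cv2
import numpy as np

# A white-on-black mask marking the src face region, plus the converted src
# face already warped onto the dst frame (all three images the same size).
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
src = cv2.imread("src_face.png").astype(np.float32)
dst = cv2.imread("dst_frame.png").astype(np.float32)

# erode_modifier: shrink the mask so the src face stops bleeding
# past the edge of the dst face.
erode_px = 10
mask = cv2.erode(mask, np.ones((erode_px, erode_px), np.uint8))

# blur_modifier: feather the mask edge so the seam blends smoothly.
blur_px = 21  # GaussianBlur kernel size must be odd
mask = cv2.GaussianBlur(mask, (blur_px, blur_px), 0)

# Alpha-composite the src face over the dst frame using the soft mask.
alpha = (mask.astype(np.float32) / 255.0)[..., None]
out = alpha * src + (1.0 - alpha) * dst
cv2.imwrite("blended.png", out.astype(np.uint8))
```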
  • While conversion is running, you can preview the finished frames in the data_dst\merged folder to make sure they look correct. If they don’t, just close the convert window, delete \merged, and start the conversion again.
  • Run 8) converted to mp4
    • A bitrate of 3-8 is sufficient for most videos; see the sketch below for roughly what this step does
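
For reference, this final step is essentially an ffmpeg encode of the merged frames plus the original data_dst audio. A minimal sketch, assuming ffmpeg is on your PATH and 25 FPS footage; the actual script’s flags may differ.

```python
import subprocess

fps = 25          # match the frame rate of data_dst
bitrate = "5M"    # within the 3-8 range suggested above

subprocess.run([
    "ffmpeg",
    "-r", str(fps),
    "-i", r"workspace\data_dst\merged\%05d.png",  # converted frames
    "-i", r"workspace\data_dst.mp4",              # take audio from the original
    "-map", "0:v", "-map", "1:a?",                # video from frames; audio if present
    "-c:v", "libx264", "-b:v", bitrate,
    "-pix_fmt", "yuv420p",
    "result.mp4",
], check=True)
```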

_______________________________________________________________________________________________________

OpenFaceSwap GUI

We identified OpenFaceSwap as the best deepfake tool available at the time of writing (07/05/2018). OpenFaceSwap is a GUI layer over a high-performing Python script; it is free, open source, and well liked by the community at deepfakes.club. The following is a quick tutorial on how to make your first deepfake using OpenFaceSwap.

Installing OpenFaceSwap

  • Download OpenFaceSwap v9.0 (Mega.nz) (Link)
    • Choose an install directory that is NOT in Program Files (to avoid possible user-permission errors). The default is “C:\OpenFaceSwap”.
  • Download and install Microsoft Visual Studio Redistributable 2015 (Link)
  • Download and install CUDA 9.0, NOT 9.1 (Link)
  • Download and install cuDNN 7.05 for CUDA 9.0, NOT 7.1 (Link, requires email registration)
  • Download and install Latest NVIDIA graphics card drivers (Link)

Using OpenFaceSwap (Making the Deepfake)

1. Click Video A to specify the video where you want to swap the face (i.e., the base video you want the new face in).
2. Click Images A to extract all of the frames from Video A. When done, hit any key at the prompt to continue. If you like, you can press the magnifying glass icon next to the directory name to inspect the results.
3. Click Faces A to extract all of the faces from Images A. Ideally, your video only has one face present. You may wish to inspect your results and remove any erroneous face extractions.
4. Repeat the above three steps with Video B, which is the face that you will insert.
(If you have a set of images instead of a video, you can instead skip ahead to the Images B text box. Click the folder icon to select the directory that has all of your images. Then, proceed to click the Faces B button as before.)
5. Click Model. When you hover over the button, the necessary input folders are highlighted; in this case, those are the folders for Faces A and Faces B. Training your model will take many hours; a good stopping point is when the printed loss values fall below 0.02. Also check the previews for the quality of the face swap.
6. When ready, press the Enter key to stop training.
7. Click Swaps to apply your model to turn face A into face B. If you hover over the button, you will see that you need input folders for Faces A and Model. When this is finished, you may wish to inspect your results as before.
8. Finally, click Movie to generate the movie file. It will be named as shown in the text box and placed within your OpenFaceSwap installation directory. Click the magnifying glass to open the folder and play the movie file.

When you are done, you may wish to click the Trash icon and empty your default folders. If you want to delete your model files, you can also do that by checking the appropriate box.

Advanced Options

Click the gear icon next to each command to see a number of options.

Not all command line options are available from the GUI. You can enter custom commands by checking the “Custom” box. You may wish to highlight and copy the original commands first and then edit them.

You can save and load all of your settings, including your custom commands, using the icons in the upper left corner.

The GUI shell runs on Python backends, or “engines”. The default engine in the installation is an exact copy of the most recent faceswap GitHub repository. To load the experimental or low-memory faceswap packages, edit the openfaceswapconfig.txt file to point to the appropriate paths. This normally only involves inserting “_exp” or “_lowmem” in the appropriate paths.

Note that you can mix and match different extraction and conversion scripts from different packages in the engine configuration file, although there could be unforeseen compatibility issues.

Some notes on the engines:

  • DFaker only works in the experimental engine.
  • The Original model uses loss balancing in the experimental engine with a minimum training ratio of 1:4 (see the code).
  • The LowMem model in the low memory engine should work for 2GB graphics cards. The extraction uses face_recognition instead of face-alignment, so the results will be slightly different. This can be useful if you are having errors with one extraction module. Note that the alignments.json files in the experimental engine have a slightly different format.

The portable WinPython package is a complete, independent, no-install platform. If you wish to use the Python package, run “WinPython Command Prompt.exe” from the “python” directory. This will set up the proper environment and let you use commands such as “python faceswap.py” from the CLI.