Most recent cameras have raised the bar when it comes to frame rates; these days it's common to have access to a camera that shoots 120fps in Full HD. There is, however, a set of limitations to deal with – thermal, buffer and card issues – key factors that make true super-slow-motion cameras extremely expensive.
But again, the advancement of technology seems to be on the content creators' side, as Nvidia has developed a software algorithm that applies super slow motion to any kind of footage, all through the power of machine learning. Here's how you can pull this off on your own.
Before we delve into the details, keep in mind that this tutorial requires using some development libraries and a command-line interface, not user-friendly software where you can just press a button or type a value in a box. If you are not comfortable with that, you should probably pass.
That being said, it's not that difficult either, as Gerald walks us through the process step by step. In deep learning, you basically write an algorithm and then train it to recognize certain patterns, repeating the process to refine the results.
In this case, the software analyzes the movement in a clip and slows it down, creating interpolated sub-frames so that playback stays smooth. To get the job done, you'll want an Nvidia card. It's not a strict prerequisite, since the software will run on the CPU alone, but keep in mind that a 17-second clip took 12 minutes to convert with CUDA and 6 hours without it.
With that out of the way, the first step of the workflow is to get a Python distribution. In the video, Gerald suggests working with Anaconda, a popular cross-platform Python distribution. Besides that, you'll need to download the actual software package – the repository containing the slow-motion code. Once the download is complete, unzip the contents and set up a folder accordingly. Give it a simple name with no spaces, as that will help later in the command line.
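As a minimal sketch, that folder setup from a terminal could look like this ("slomo" is just a placeholder name of my choosing – use whatever you like, as long as it has no spaces):

```shell
# Create a project folder with a short, space-free name.
# "slomo" is a placeholder -- any simple name works.
mkdir -p slomo
```

On Windows you can just as well create the folder from Explorer; the important part is avoiding spaces in the name.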
At this stage, it's completely fine to work with a pre-trained model, as shown in the video. Download it and put it in the main folder you chose before. Meanwhile, you should download the FFmpeg converter and unzip it to a folder with a simple name and a short path. Afterward, launch Anaconda and set it up. Through this website, you'll find the correct string to input so that the prompt is configured correctly.
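For illustration only – the exact install string depends on your OS and CUDA version, so copy the one generated for your setup. The sketch below assumes a dedicated conda environment with a CUDA-enabled PyTorch build (the framework this kind of deep-learning code typically runs on); the environment name and version numbers are my own placeholders:

```shell
# Assumed example -- replace the install line with the string
# generated for your own OS / CUDA combination.
conda create -n slomo python=3.7
conda activate slomo
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```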
As a next step, create two folders inside the main directory you chose – an input folder and an output folder, respectively. Now, in Anaconda, point the prompt at that folder with the cd command.
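A minimal sketch of those commands, assuming the main folder is named "slomo" (a placeholder of mine):

```shell
# Create the input and output folders inside the project directory
mkdir -p slomo/input slomo/output
# Point the prompt at the project folder
cd slomo
```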
Simply open a text editor and prepare the string you'll input in the prompt. You'll need to customize the filenames and directories, as well as a couple of parameters. The sf value, for instance, is the factor by which you want the footage slowed down (going from 25p to 100p, that would be 4). The fps value sets the frame rate of the final output clip.
The batch size parameter is not mandatory, so it may be better to leave it at its default. If you have a very beefy computer, you can go for a bigger size, like 2 or 3, which will cut down the wait time.
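Putting it all together, the command string could look something like the sketch below. This assumes the avinashpaliwal/Super-SloMo repository and its video_to_slomo.py script; the flag names, the clip and checkpoint filenames, and the FFmpeg path are all assumptions on my part, so check the README of the repository you downloaded for the exact syntax of your version:

```shell
# Illustrative only: flag names follow the video_to_slomo.py script of
# the avinashpaliwal/Super-SloMo repository and may differ elsewhere.
#   --sf 4          slow-down factor (e.g. 25p -> 100p)
#   --fps 25        frame rate of the final output clip
#   --batch_size 1  optional; raise to 2 or 3 only on a beefy GPU
python video_to_slomo.py --ffmpeg_dir /path/to/ffmpeg/bin \
    --video input/clip.mp4 --checkpoint SuperSloMo.ckpt \
    --sf 4 --fps 25 --batch_size 1 --output output/clip_slomo.mkv
```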
So, once you've started the conversion, go grab a coffee – it will take a while. The result should be seamless, especially with simple movements and no overly unpredictable or erratic changes taking place.
If, instead, you have fast-moving objects, messy backgrounds and foregrounds, or talent moving randomly, you'll see parts of the image covered in strange glitches resembling a watery effect – the result of the algorithm failing to correctly interpret the actual movement.
Overall, this is an interesting alternative solution that certainly needs to improve, but it's a clear sign of where this kind of software is headed. Needless to say, this workflow would have been impossible to pull off on a home workstation even a few years back, so it's definitely worth a try now.