Cinematography: Fake Slow Motion Better with NVIDIA’s Artificial Intelligence

Discussion in 'ENGLISH' started by Jakub Han, 21/6/18.

Views: 417

  1. Jakub Han (Guest)

    A research team from NVIDIA has presented a new deep-learning-based system capable of creating decent-looking fake slow motion purely through post-processing. They claim their software can generate the in-between frames more convincingly than any existing solution.

    Example of the Resulting Slow Motion Footage. Source: NVIDIA


    The idea of creating “fake” slow motion in post production is not new. Computers can artificially generate frames in between the existing ones and thereby increase the resulting framerate of the footage. Most NLEs do this through simple frame blending (Final Cut Pro X still seems to handle it better than Premiere Pro CC). The existing market-leading solution, however, is a dedicated piece of software from RE:Vision Effects called Twixtor, which only gives nice results under ideal circumstances – for example, a single moving object on an even background. As soon as complex, very fast movement and/or an uneven background is involved, the resulting frames are not convincing and objects fall apart.
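    As a point of reference, the frame blending that NLEs perform is essentially a weighted average of the two surrounding frames. Here is a minimal sketch in Python (using NumPy and OpenCV, with hypothetical file names, purely for illustration – not any NLE's actual code):

    ```python
    import cv2
    import numpy as np

    # Load two neighbouring frames (hypothetical file names, for illustration only).
    frame_a = cv2.imread("frame_0001.png").astype(np.float32)
    frame_b = cv2.imread("frame_0002.png").astype(np.float32)

    # Simple frame blending: each in-between frame is a weighted average of the
    # two real frames. Creating 7 artificial frames per pair of real frames is
    # what would turn 30fps footage into 240fps.
    num_inbetween = 7
    for i in range(1, num_inbetween + 1):
        t = i / (num_inbetween + 1)               # blend weight between 0 and 1
        blended = (1.0 - t) * frame_a + t * frame_b
        cv2.imwrite(f"blended_{i:02d}.png", blended.astype(np.uint8))
    ```

    Blending like this produces ghosting on fast motion, which is exactly the problem that optical-flow tools like Twixtor – and now NVIDIA’s deep learning system – try to avoid.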

    Researchers at NVIDIA have developed a new system based on deep learning that can produce better results. They aim to present their work at the upcoming annual Computer Vision and Pattern Recognition (CVPR) conference on June 21st in Salt Lake City, Utah. The team achieved the results using NVIDIA Tesla V100 GPUs and a cuDNN-accelerated deep learning framework. The scientists trained the system on over 11,000 videos of everyday and sports activities shot natively at 240 frames per second. Once trained, the convolutional neural network was able to predict the extra frames with greater accuracy.
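    The announcement does not spell out the training pipeline, but conceptually, natively shot 240fps footage already contains the “answers”. A rough sketch of how such ground-truth samples could be carved out of it might look like this (an assumption for illustration, not NVIDIA’s actual code):

    ```python
    # Rough sketch: carving training samples out of native 240fps footage.
    # Every 8th frame stands in for a "real" 30fps frame; the 7 frames in
    # between serve as ground truth the network learns to predict.
    def make_training_samples(frames_240fps, step=8):
        samples = []
        for i in range(0, len(frames_240fps) - step, step):
            start = frames_240fps[i]                  # first "real" frame
            end = frames_240fps[i + step]             # next "real" frame
            targets = frames_240fps[i + 1:i + step]   # the 7 in-between frames
            samples.append((start, end, targets))
        return samples
    ```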

    The NVIDIA team claims their method can generate multiple intermediate frames that are spatially and temporally coherent. To demonstrate the capabilities of the new system, the research team published a short video on YouTube.


    The demonstration video initially shows the results of slowing 30fps footage down to 240fps. The first clip of a moving car looks quite convincing in my opinion. The NVIDIA software did a pretty good job calculating the additional 210 frames for every second (7 artificial frames between every two real frames). Later in the video, however, we can see some artifacts in the clip with a hockey player (the movement of his glove and skates). Another artifact is visible in the dancer’s hair. Obviously, the new deep-learning-based system is not yet perfect.
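    For anyone double-checking the frame math, it is a trivial calculation (nothing to do with NVIDIA’s code):

    ```python
    source_fps = 30
    target_fps = 240

    extra_per_second = target_fps - source_fps          # 210 artificial frames per second
    inbetween_per_pair = target_fps // source_fps - 1   # 7 artificial frames between two real frames
    print(extra_per_second, inbetween_per_pair)          # prints: 210 7
    ```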

    In the next part of the video, NVIDIA took some footage from the YouTube channel The Slow Mo Guys, which was already filmed at a high framerate, and slowed it down even more. The demonstrated clips look quite astounding.

    The new system from NVIDIA seems to deliver better results than the existing solutions, although it remains to be seen on more examples. From the video alone, I have the impression it does a better job than Twixtor – so here’s hoping that this technology will be integrated into affordable post production solutions and NLEs as soon as possible.

    What do you think of the resulting footage? Do you sometimes fake slow-motion in your videos? Let us know in the comments below.

    Via: Gizmodo, NVIDIA

    The post Fake Slow Motion Better with NVIDIA’s Artificial Intelligence appeared first on cinema5D.