Dinosaurs walking the earth. On-screen villains turning to dust before our eyes. Classic shots of the Empire State Building and the White House. It’s the stuff of movie magic that viewers have come to expect. Thanks to VFX, those dinosaurs look real, the dust feels like it’s falling in our laps, and we feel like we’re scaling the Empire State Building or sitting in the Oval Office.
We all know that VFX has dramatically changed pre- and post-production workflows. The reason is simple: viewers have come to expect awesome, immersive experiences from movies and games. Behind the scenes, those experiences are time-consuming and costly to create.
As the VFX bar continues to rise, it’s becoming harder for smaller production houses and independents to compete. One way to address the high cost of VFX is through the use of portable 3D laser scanners for reality capture. They’re proving to streamline production, create more realistic experiences, and cut costs. Let’s explore how this is possible.
Eliminating Guesswork in VFX Post Production
Previously, VFX workflows often involved guesswork. The typical process was to take what had been filmed on set and create a seamless virtual recreation in 3D. Sounds simple. Except the VFX added to camera shots after shooting had to follow the exact camera movements throughout a shot to ensure everything lined up. Otherwise, the scene wouldn’t look right.
To fix this, many VFX pros were forced to take lots of measurements. Some did it themselves. Others outsourced it, further adding to production costs. It also left the VFX teams beholden to the outsourcer every time they wanted to reuse the measurement data.
Even in the best scenarios, the production team often ended up with close approximations of how to recreate the motion but was never assured of 100 percent accuracy. It was an arduous process on a good day.
Now, reality capture devices take the guesswork out of VFX workflows. They ensure the highest degree of accuracy possible and are incredibly fast. For example, in just a few minutes, one scanner can capture millions of individual measurements on set and provide accurate spatial data to ensure the 3D virtual creation aligns perfectly.
Creating Reusable VFX Assets
Another benefit of using reality capture scanners is the ability to not just capture the dimensions of a set or a location, but to capture exact visual data as well.
For example, one scan can give you data to speed up camera tracking and data to build all of your virtual environment assets. Rather than guessing and recreating, you’ll have the virtual environment to accurately add all the extra pieces to a shot. This creates stronger workflows without relying on someone else’s data. In feature films, this is a huge advantage, especially if the data is going to be reused for sequels, VR rides or other virtual experiences based on the film.
By capturing a location, a VFX pro can also create opportunities to resell those digital assets as 3D models. Other VFX pros who buy the assets can be confident reusing them in production because they know the models are accurate and to scale. You can see how these assets would come in handy for frequently used locations such as the Empire State Building, the White House, and other famous buildings and landmarks.
From location scouting to camera movement, handheld reality capture devices offer precise LiDAR and visual data that helps VFX teams produce outstanding work. When it comes to the purchasing decision, the right reality capture scanner should be simple enough that anybody can easily master it and transport it, create their own asset libraries, and quickly turn around entire shoots – often within 24 hours.
Today’s VFX pros are embracing newer technologies. As the use of reality capture increases, everybody in the industry wins. Independents and small houses can compete more effectively, larger production studios will create even richer experiences, and most importantly, the viewer experience will continue to get better.
Allan McKay has been working in Visual Effects for over 20 years. Today he works mainly as a VFX Supervisor and FX Technical Director based in Los Angeles and relies on a Leica Geosystems BLK360 reality capture scanner. Some of his more recent projects include Transformer 3, The Equalizer, Star Trek: Into Darkness, Flight, Superman, Metallica, 2012, The Last Airbender, Daybreaks, and many others. He’s worked on video games including Destiny, Call of Duty and The Division. He’s also worked closely with Oscar and Emmy nominated directors Robert Zemeckis and M. Night Shyamalan. Additionally, Allan builds tutorials and training content, has written for magazines including 3D World, and has published books on VFX. He has spoken at conferences like SIGGRAPH and Autodesk University, and given talks at Industrial Light and Magic, Ubisoft, and Activision.