Author: Mark Sawicki, Co-Chair, Animation Department, New York Film Academy Los Angeles
One of the most challenging effects to recreate in animation is live action lighting. This is largely due to the differences between real light and mathematically simulated light in a software program. Real light obeys the laws of physics, such as the inverse square law, which states that doubling the distance between the subject and the light reduces the light striking the subject to one quarter. In the computer, the fall-off of light can be drastic or non-existent depending on the settings. A virtual light in the computer can be designated not to cast a shadow, or even to subtract light.
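To make the fall-off concrete, here is a small sketch of the inverse square law in Python; the function name and numbers are illustrative, not drawn from any renderer:

```python
# Inverse square law: the light striking a subject falls off with the
# square of its distance from the source, so doubling the distance
# leaves one quarter of the light, not half.
def falloff(intensity: float, distance: float) -> float:
    """Relative light reaching a subject at `distance` (arbitrary units)."""
    return intensity / distance ** 2

at_one_meter = falloff(100.0, 1.0)   # 100.0
at_two_meters = falloff(100.0, 2.0)  # 25.0, a quarter of the light
```

Many packages let you switch this decay off entirely, which is exactly why a virtual light can behave in ways no real instrument on a set ever could.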
While this extreme control can lead to amazing effects, it offers little assistance when virtual lights must match real light. The main reason virtual lights have trouble mimicking real light is the amount of calculation needed to simulate one. For example, lighting an object without a cast shadow may take a second to render; with a cast shadow, perhaps two seconds; with a cast shadow with realistic soft edges using a calculation called ray tracing, perhaps five seconds. A full simulation that renders the light with a cast shadow plus the light reflected off adjacent objects requires a radiosity calculation that can take a minute or more per frame.
For a producer, time is money, and any way of saving time will save money. As a result, many virtual lighters render a single frame with radiosity to use as a reference, then mimic true light with faster calculations and many more lights. As an example, to avoid calculating the bounce light off a floor, a thrifty artist will simply place a second virtual light underneath the floor, projecting light up through it and onto the animated character to simulate the bounce. This simple technique can save untold hours of rendering time on a project.
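The cheat can be sketched in a few lines of Python. This is a toy diffuse (Lambert) shading sum, not any particular package's lighting model, and the intensities are made-up values: the key light from above leaves the character's underside black, so a dim "bounce" light aimed up from below the floor stands in for the radiosity calculation:

```python
# Toy sketch of faking bounce light: sum the diffuse contribution of a
# key light from above and a dim fill light placed below the floor.
# Vectors are (x, y, z) tuples; light_dir points from the surface
# toward the light. All values here are illustrative.
def lambert(normal, light_dir, intensity):
    """Diffuse contribution of one light on a surface with this normal."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(dot, 0.0)

underside = (0.0, -1.0, 0.0)                      # a downward-facing surface
key = lambert(underside, (0.0, 1.0, 0.0), 1.0)    # key above: contributes 0.0
bounce = lambert(underside, (0.0, -1.0, 0.0), 0.3)  # fake floor bounce: 0.3
total = key + bounce                              # underside is lifted out of black
```

Because the fill is just one more cheap direct light, it renders in the same quick pass as the key, which is where the hours of radiosity time are saved.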
A newer trend in the industry, used on higher budget projects, is to record HDRI data on the set. HDRI stands for High Dynamic Range Imaging and was used on the fantasy film G-Force, directed by Hoyt Yeatman. Essentially, the process consists of shooting the live action scene and, after recording enough takes to satisfy the director, placing a special HDRI camera in the middle of the set. This is a still camera pointing into a mirrored half sphere so that the entire surrounding area is recorded: a wide-angle photograph of all the instruments used to light the set. The camera records this image several times at widely different exposures, from very dark to very light. These disparate exposures are subsequently blended together so that the recorded images of the lights have roughly the same brightness range as the real lights on the set, hence the term high dynamic range.
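The blending step can be sketched for a single pixel. This is a simplified, illustrative merge, assuming each bracketed exposure gives an estimate of scene radiance (pixel value divided by exposure time) and that a "hat" weighting discounts samples crushed to black or clipped to white; the function names and numbers are made up for the example:

```python
# Minimal sketch of merging bracketed exposures into one HDR value.
def hat_weight(v, lo=0.05, hi=0.95):
    """Favor mid-range pixel values; distrust clipped shadows/highlights."""
    if v <= lo or v >= hi:
        return 0.0
    return 1.0 - abs(2.0 * v - 1.0)

def merge_hdr(samples):
    """samples: list of (pixel_value in 0..1, exposure_time in seconds)."""
    num = den = 0.0
    for value, t in samples:
        w = hat_weight(value)
        num += w * (value / t)   # each sample's estimate of scene radiance
        den += w
    return num / den if den else 0.0

# A bright set light: blown out at the long exposure, but its true
# brightness is recovered from the shorter exposures.
radiance = merge_hdr([(1.0, 1/30), (0.5, 1/250), (0.06, 1/2000)])
```

The merged value is far above 1.0, which is the point: unlike an ordinary photograph, the HDR image preserves how much brighter the lamps are than the rest of the set.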
The next step is to map the HDRI fisheye view of the set onto a dome that surrounds the virtual set in the computer. This dome, covered by the “image” of the live action set lights, is then used as the light source for the animated character inside it. We have essentially lit the animated character with the original live action lights, as represented by the HDRI image. As a result, the lighting on the character will match the live action perfectly, since the same lights illuminate both.
The HDRI technique can be a valuable asset when the technology is available. If you are on a low budget project and need to use the standard tools within Maya or other software, learning how to simulate real light with off-the-shelf virtual lights will always be a valuable skill set.
Mark is the author of “Filming the Fantastic” and “Animating with Stop Motion Pro,” both published by Focal Press.