|The Importance of Invisible Effects|
|A Creative Cow Magazine Extra|
Los Angeles, California US
© CreativeCOW.net. All rights reserved.
Normally, visual effects are designed to be highly visible. The movie producers have paid a great deal of money to have that T-Rex thunder through the streets of New York, and they really want the audience to see it.
Many effects, however, are designed not to be seen. As a twenty-year veteran of visual effects compositing, I have done hundreds of these kinds of effects -- invisible effects. They fall into several categories, and almost all of them are compositing effects, my personal specialty.
One category is removing things that were needed during principal photography but must be taken out for the finished shot. This includes wire removal and rig removal.
The entire category of scene salvaging is dedicated to removing things that were put into the images by accident, such as scratches, hairs in the gate, light leaks, and reflections of the film crew.
Then there are invisible effects designed to dramatically change how the shot looks, but still appear perfectly unaltered, such as monitor replacement, speed changes and shot stabilization. In this article we will explore the technology and techniques that go into these common invisible effects.
In an action movie there can be literally hundreds of wire removal shots. A production technique called a “wire gag” is used where the talent is rigged up with wires, either to help him leap over a tall building in a single bound or as a safety feature to save him from certain death.
A less graceful but equally important use of wires is for explosions. An explosion that would REALLY blow the extras 50 feet into the air is going to hurt, so wire harnesses are added to the on-screen talent and they are yanked 50 feet instead. While the lad in the picture here is contemplating suicide from the sixth floor in the story, the movie producers don’t want him to actually die during their production, so a safety wire harness was used, which I then had to remove.
Whenever you are trying to remove an item from a shot, a background frame must be created for the area covered by the offending item. This background frame with the item removed is called the “clean plate.”
So the first question becomes “what is the best way to create the clean plate?” Making the clean plate requires movement between the target item and the background, so you can select “clean” regions from different frames where the item is absent and then assemble them into a single clean plate. Since this shot had a camera push-in, it was easy for me to select earlier and later frames to resize and paste together to make the clean plate.
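With enough movement between the target and the background, the pick-the-clean-pieces step can even be automated: a per-pixel temporal median across aligned frames outvotes anything that covers a pixel only briefly. This is a stand-in for the hand-assembled approach described above, not the method used on this shot; a minimal sketch on grayscale frames stored as nested lists, with a helper name of my own invention:

```python
# Sketch: build a clean plate by taking the per-pixel median across frames.
# A transient object (a wire, a person) covering a pixel in only a few
# frames is outvoted by the clean background values. Assumes the frames
# are already aligned (e.g. by motion tracking).
from statistics import median

def clean_plate(frames):
    """frames: list of same-sized 2D grayscale images (lists of rows)."""
    height, width = len(frames[0]), len(frames[0][0])
    return [
        [median(f[y][x] for f in frames) for x in range(width)]
        for y in range(height)
    ]

# Three aligned 2x2 frames; a "wire" (value 255) crosses one frame.
frames = [
    [[10, 10], [10, 10]],
    [[10, 255], [10, 10]],   # wire crosses this frame
    [[10, 10], [10, 10]],
]
plate = clean_plate(frames)  # the 255 is outvoted by the background
```

The same idea extends to color by taking the median per channel.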
Next I had to align the clean plate with the original shot, and since it had a camera move this obviously meant motion tracking. Because the tracking target was a building festooned with excellent tracking points in the form of nice square window corners all over the place it was an easy track.
I now had a clean plate nicely tracked to the background, so the final question was the best way to composite the clean background over the offending wires. Since this was a short shot I did not want to go nuclear and start tracking in masks over the wires, so I simply painted the wires out using a soft-edged clone brush available in the paint feature of the compositing program.
Cloning the clean plate through to the foreground only took about 10 minutes. Wires gone, client thrilled.
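The “cloning through” itself is just a soft-mask composite: where the mask is solid, the clean plate replaces the frame entirely, and fractional mask values feather the edge so no seam shows. A toy sketch (the function name is mine):

```python
# Sketch: "cloning through" a clean plate with a soft-edged mask.
# mask value 1.0 = fully replaced by the clean plate; fractional values
# feather the edge so the repair blends invisibly into the frame.
def clone_through(frame, plate, mask):
    return [
        [f * (1 - m) + p * m
         for f, p, m in zip(frow, prow, mrow)]
        for frow, prow, mrow in zip(frame, plate, mask)
    ]

frame = [[100, 200, 100]]   # 200 is the offending wire pixel
plate = [[100, 100, 100]]   # clean background
mask  = [[0.0, 1.0, 0.5]]   # hard over the wire, feathered at the edge
out = clone_through(frame, plate, mask)   # wire gone, edges soft
```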
A close cousin of wire removal is rig removal. A rig is any kind of device used on the set to hold an item up for filming. After the rig has done its job, it must then be removed from the scene. It is usually rigid like a rod or pole, is much larger than wires, and there is typically only one to worry about. Rigs can be as simple as a stick held by a prop guy or as sophisticated as a computer-controlled multi-axis articulated robotic arm. Either way, it's got to go without a trace.
These two pictures illustrate another major application of rig removal, namely location obstacles. This location shot from the pilot of “Deadwood” that I composited lost some of its crusty western charm with the traffic light in frame.
The city fathers were unwilling to cut the light down and the director simply had to have this particular camera position to get his shot. The solution – rig removal.
In the wire removal example above I mentioned that a clean plate is created by taking clean pieces from different frames and pasting them together. This obviously requires movement between the target and the background and this movement can come from either a camera move or the target object moving in frame.
But this shot was locked off. Since there was no movement there were no clean pieces that could be cobbled together to make the clean plate. What’s a compositor to do?
What the compositor does is call in a digital matte painter. Of course, many compositors are also formidable matte painters, especially for the modest challenge that this shot represents. Looking around the scene there were plenty of elements that I could clone brush into the offending regions to paint out the rig and build the clean plate. I then used a soft-edge matte of just the painted region to composite it over the shot so only the painted region was replaced.
With the locked off camera the composited repair totally removed the rig, but it looked “dead” while the rest of the shot had “life”. I then looked around for bits and pieces from the rest of the scene to layer over the repair site to add a bit of life to it.
In this particular case there was considerable dust drifting by which was noticeably absent from the repaired region. To address this I created a dust element that approximately matched the speed, color and density of the rest of the plate, then gently blended it over the repair.
Since this was a film job I also regrained the repair, as the painting obviously had no moving grain. Adding these little touches of life to a static repair is essential if it is to remain an invisible effect.
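Regraining a static repair can be as simple as adding fresh random noise to every frame, with the amplitude matched by eye to the grain in the surrounding plate. A minimal sketch; the amplitude used here is an arbitrary stand-in value, not a measured one:

```python
# Sketch: regraining a painted repair so it matches the surrounding plate.
# A painted patch is perfectly static; adding per-frame random noise of
# roughly the right amplitude gives it back the "life" of moving grain.
import random

def regrain(patch, amplitude=3.0, seed=None):
    """patch: 2D list of pixel values; amplitude: grain std deviation."""
    rng = random.Random(seed)
    return [
        [pixel + rng.gauss(0.0, amplitude) for pixel in row]
        for row in patch
    ]

patch = [[128.0] * 4 for _ in range(4)]          # flat painted region
grained = regrain(patch, amplitude=3.0, seed=1)  # new grain each frame
```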
While not impossible, it is very difficult to actually shoot monitors and TV sets with images displayed on them. The lighting and exposure on set must be matched to the monitor, and the camera shutter must be synced with the video refresh raster. It is actually much easier to shoot the scene with a blank monitor and then composite in the video footage later as another invisible effect.
In the opening shot of “Coneheads” the technicians in the NORAD early warning center were originally watching a baseball game on TV while they looked out for encroaching aircraft on their radar scopes. The producers later thought it would be a hoot to have them watching James T. Kirk battling the Gorn instead. I agreed, and accepted the shot.
The problem was, this was in the days before there was any such thing as motion tracking software (I was an early adopter of digital compositing), so I was obliged to painfully hand track the entire shot.
I would have liked to warp the video footage a bit to give it the bulge of the curved TV screens of the day, but the technology did not permit it. Of course, today this would be trivial, but today we have flat screen TV sets. Oh well. It would have been glorious.
Even so, it can be difficult to convincingly tie freshly composited video into the live action plate. In this case, I noticed that the TV bezel had a noticeable reflection of the original baseball game which did not match the action in the Star Trek video. What was needed was a plausible reflection of Captain Kirk on the bezel instead.
First the bezel was replaced with a clean one to remove the baseball reflection, then a blurred version of the video was gently screened over the clean bezel to really sell the shot.
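With pixel values normalized to the 0..1 range, a screen blend is `1 - (1 - a) * (1 - b)`: it can only ever brighten the base layer, which is exactly how a reflection on glossy plastic behaves. A tiny sketch:

```python
# Sketch: the "screen" blend used to layer the blurred video over the
# bezel. Values are normalized 0..1. Screen only brightens, so the dark
# bezel picks up a plausible glow from the reflected picture.
def screen(base, layer):
    return 1.0 - (1.0 - base) * (1.0 - layer)

bezel = 0.2          # dark bezel pixel
reflection = 0.5     # blurred video pixel
result = screen(bezel, reflection)   # 1 - 0.8 * 0.5 = 0.6
```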
Keep your eyes open for the small details like this, that truly anchor your work in the real world.
It occasionally happens that a reflection or shadow of the camera, crew, or microphone boom will show up in a shot. Sometimes it is noticed during principal photography but nothing can be done because there is no option to reposition things. Other times it is not noticed until after the film is developed or the video is being edited. Either way, it has to be discreetly fixed in post as yet another invisible effect.
This shot from a swanky beer commercial shows a reflection of the camera in the window behind the product. The creative vision here was to shoot the beer glasses on location in front of the windows with an impressive view of the harbor below. The product could not be repositioned, the window pane could not be removed, and the camera had to be in this specific location to get the shot.
The picture below shows the final shot with the camera reflection removed.
This particular example illustrates a very serious challenge where the region to be replaced has a continuously varying gradient, the sky in this case. It is nearly impossible to perfectly match such a gradient around the edges of an object to be replaced. A mismatch anywhere along this edge will become painfully obvious.
In these situations I look for a nearby hard edge to use to hide the seam. In this case it was the metal strip just below the camera reflection, then up around the outside edge of the glasses to the vertical metal strip, then to the top of the frame. The whole idea was to avoid having to perfectly match the original sky gradient.
Fortunately this was a locked off camera shot. I drew a single matte for the entire area to be replaced with very carefully aligned matte edges. In Photoshop I was able to construct a reasonable replacement sky gradient, sampling RGB values from the original plate to match as closely as I could. Since it would not have a compositing edge in the middle of the window it did not have to be perfect, which was why I chose this approach in the first place.
Once it was composited, a light dusting of video noise was all that was needed to perfectly integrate the new window element.
As a movie goer I hate “wiggle cam” – my derogatory term for jiggly hand-held camera work that is claimed to “immerse the viewer in the moment.” For me, it immerses me in a migraine. As a visual effects artist, however, I love wiggle cam because the overly exuberant cinematographer who shot it will now pay me handsomely to stabilize his wild shots. Shot stabilizing is one of the most magical things that a computer can do, and it never ceases to amaze me. Properly done, the audience will never know the shot has been stabilized and it will remain an invisible effect.
If a camera move is too bouncy it can be smoothed out with shot stabilizing, another unique capability of computers. But shot stabilizing has a couple of downsides that are revealed in these three pictures.
The first picture is a frame from the original photography of the shot.
Click the image to play a small movie of the shot.
As you play the clip, you can see the wildly gyrating camera in the original photography. Once it was edited into the movie it became obvious that the gyrations are far too extreme for the sequence and the shot must be stabilized.
The first step is to perform motion tracking analysis on the clip to determine how it is moving, rotating, and zooming. Once the computer has this data the artist then decides which of these motions to remove, and by how much. The vertical and horizontal wiggle can be removed completely, for example, or simply reduced by 50%. The same goes for the rotation and zoom. If all of the axes of motion are completely removed then the shot becomes locked off. Most of the time, however, the camera motion needs to be mellowed out rather than eliminated completely.
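The “remove by how much” decision can be sketched in code: smooth the tracked camera path, then move each frame only a fraction of the way toward the smooth path. This toy version handles one axis with a simple moving average; real stabilizers do the same for rotation and zoom, and the function names here are my own:

```python
# Sketch: reducing tracked camera wiggle by a chosen percentage.
# 'track' is the per-frame horizontal position from the tracking pass.
def smooth(track, radius=2):
    """Moving average of the camera path."""
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - radius), min(len(track), i + radius + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out

def stabilize_offsets(track, strength=1.0, radius=2):
    """Per-frame shifts: strength 1.0 locks the axis off, 0.5 halves it."""
    smoothed = smooth(track, radius)
    return [(s - t) * strength for t, s in zip(track, smoothed)]

track = [0.0, 4.0, -4.0, 4.0, -4.0, 0.0]        # jittery camera path
offsets = stabilize_offsets(track, strength=0.5)
corrected = [t + o for t, o in zip(track, offsets)]  # wiggle reduced
```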
The computer stabilizes each frame by shifting it horizontally, vertically, and rotationally to keep the picture steady. This introduces black edges that can be seen in the second picture, stabilized with black edges.
Click the image to play a small movie of the shot.
Finally, to clear out the black edges the stabilized frame must be zoomed in, shown in the third picture.
Click the image to play a small movie of the shot.
The final shot can be seen here, now stabilized and zoomed in. The camera move is much reduced and the black edges have been eliminated by zooming into the picture to push them out of frame. This means that the finished stabilized shot is zoomed in tighter than the original. Note also that there was motion blur introduced into the picture by the moving camera, and this has remained in the finished shot. Stabilization cannot fix that.
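The amount of zoom needed is easy to estimate: if the largest stabilization correction shifts the frame by some number of pixels in either direction, the picture must be scaled up enough to cover that slop on both edges. A quick sketch of the arithmetic, with an illustrative shift value of my choosing:

```python
# Sketch: the zoom needed to push stabilization's black edges out of
# frame. A correction of max_shift pixels (either direction) on a frame
# 'width' pixels wide leaves black bands on both sides, so the frame
# must be scaled up by width / (width - 2 * max_shift).
def required_zoom(width, max_shift):
    return width / (width - 2 * max_shift)

zoom = required_zoom(1920, 48)   # a 48-pixel wiggle on an HD frame
# 1920 / (1920 - 96) -> roughly a 5% blow-up
```

This is why the more a shot is stabilized, the tighter the finished frame ends up.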
The bottom line is this: when stabilizing a shot you must warn the client that the finished shot will be zoomed in to some degree and any motion blur from the camera will still be in the shot. The more the shot is stabilized, the more it will be zoomed in. These are unavoidable consequences of shot stabilization, so setting the client’s expectations before the work is done is the key to good client management.
Speed changes are another invisible effect that uses very sophisticated computer technology to slow a shot down – sometimes way down. While speed changes may be used to speed shots up as well as slow them down, we will focus on the much more common case of slowing down a shot. A film camera normally runs at 24 frames per second (fps). If slow motion is needed in a shot, the camera has to be “overcranked” – run at very high speed, say, 250 fps – then the shot is played back at normal speed, which slows down the action.
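The overcranking arithmetic is simply the ratio of the shooting rate to the playback rate:

```python
# Sketch: shooting at a high frame rate and playing back at 24 fps
# slows the action by shoot_fps / playback_fps.
def slowdown_factor(shoot_fps, playback_fps=24):
    return shoot_fps / playback_fps

factor = slowdown_factor(250)     # 250 / 24 -> about 10.4x slower
seconds_on_screen = 1.0 * factor  # one second of action plays ~10.4 s
```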
There are several problems with running a camera at high speed. First, these cameras are special, and therefore expensive, so they may not be in the budget. Second, since the shutter is open for much briefer moments of time, more light is needed. A lot more light. This extra lighting is not only expensive, it is also hot. It might melt the product - or the talent. Third, making the shot a “slo-mo” might be a brilliant idea the filmmaker has only after the scene was shot. The only answer is a computer-generated speed change.
There are three basic ways to slow down a shot. First is frame duplication, where extra frames are created simply by repeating existing ones. This is easy, fast, and ugly. The motion becomes jerky and the length of the shot can only be a whole multiple of the original. This is so ugly we won't even look at it.
The second method is frame blending, where the new in-between frames are made by cross-dissolving between adjacent originals. It is smoother than frame duplication, but fast-moving edges go soft and doubled. The third, and by far the best, method is optical flow. In the analysis pass the computer compares successive frames and calculates a motion vector for every pixel describing how it moves from one frame to the next. Next, on the rendering pass, the artist tells the computer how many frames to stretch the shot to, then the computer creates the new in-between frames by actually shoving the pixels around based on the motion vectors. This clip shows the astounding results. Keep in mind that this shot has been slowed by 500%, so fully 80% of the frames you see are synthesized by the computer!
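Real optical flow shoves pixels along per-pixel motion vectors, which is far too much machinery for a snippet; this stand-in uses a plain cross-dissolve between neighboring frames so the retiming bookkeeping is visible: which output frames are originals and which must be synthesized. Slowing a clip 5x means 4 out of every 5 output frames are new. The `retime` helper is my own, not from any package:

```python
# Sketch: retiming bookkeeping for a computer-generated speed change.
# Each output frame maps to a fractional source time; whole times pass
# an original frame through, fractional times synthesize an in-between
# (here by cross-dissolve; optical flow would shift pixels instead).
def retime(frames, factor):
    """frames: list of 1D 'images'; factor: how many times slower."""
    n_out = (len(frames) - 1) * factor + 1
    out = []
    for i in range(n_out):
        t = i / factor                 # source time for this output frame
        a, frac = int(t), t - int(t)
        if frac == 0:
            out.append(frames[a][:])   # an original frame, passed through
        else:                          # a synthesized in-between frame
            out.append([(1 - frac) * p + frac * q
                        for p, q in zip(frames[a], frames[a + 1])])
    return out

clip = [[0.0], [100.0]]   # two frames, one pixel each
slow = retime(clip, 5)    # 6 frames; 4 of them are synthesized
```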
Optical flow is also an essential step in creating the “bullet-time” shots made famous in the Matrix movies. Several dozen still cameras are set up around the actor then fired off either simultaneously or sequentially separated only by milliseconds. The multiple still cameras may produce 30 or 40 pictures, but the finished shot needs say, 150 frames. The in-between frames are synthesized using optical flow.
And now for the bad news. First of all, optical flow is computationally expensive, meaning it is slow. Second, it introduces artifacts that must be fixed after the speed change. The optical flow algorithm can get confused and pull on the pixels in the background when it shouldn’t or create a double exposure of a rapidly moving object. These all have to be repaired using rig removal types of techniques. Once completed however, this invisible effect can make a shot look like it was filmed at 5000 frames per second.
There is an entire class of invisible effects called “scene salvage”. Something has gone wrong on the set, in the camera, in the lab, or at airport security screening, and the film is damaged.
Since video has no physical film to handle and process in a lab, it has far fewer scene salvage issues than film. Some problems afflict both, however: on the set there can be HMI lights subtly flashing out of phase with the camera shutter, introducing a flicker in the finished shot that affects film and video alike.
In the 35mm film camera there can be a hair in the gate, or a light leak where the camera box is not totally sealed and stray light exposes an edge of the film. In the lab the most common damage is a scratch down the emulsion side of the film from a burr on the film processing equipment. At Cinesite we once repaired the infamous “2000 foot scratch”, where a 2000 foot reel of film was scratched from head to tail. The lab didn’t miss a single frame. Yikes!
Scratches are perhaps the most common type of scene salvaging done. This is such a common problem that there are actually plugins you can buy specifically for scratch repair. While a camera can also scratch the film, the lab is the usual culprit. The film is pulled through multiple baths of chemicals on rollers running at high speed, and if one of them develops a burr it can cut into the yellow, magenta, or cyan layers of the film to varying degrees, producing a colorful scratch. The picture has been physically removed along the scratch, and to make the problem even more exciting the scratch also weaves back and forth as the film wobbles through the pulleys.
The first step is to create a matte or mask of the scratch so the repair can be limited to this area. This can be a daunting challenge because the scratch is not only drifting back and forth but it is also changing in width and density. Those plugins mentioned above use sophisticated image processing algorithms to identify the scratch distinct from picture content in order to create a matte. The next step is to decide what rule will be used to repair the scratch. A common rule is to simply “grow” the pixels in from each side of the matte to the center, but this alone can leave a noticeable line. Some repair algorithms are very sophisticated and attempt to actually match the detail on either side of the scratch. Once the scratch is filled in with the appropriate pixels a light dusting of grain is added to the repair to match the surrounding region.
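The simple “grow the pixels in from each side” rule amounts to interpolating across the matte in each row, ramping from the last good pixel on the left to the first good pixel on the right. A minimal sketch (the helper name is mine, and it assumes at least one clean side exists):

```python
# Sketch: the "grow in from each side" scratch-repair rule. Pixels inside
# the scratch matte are replaced by a linear ramp between the last good
# pixel on the left and the first good pixel on the right, so the fill
# follows the local picture instead of leaving a flat line.
def fill_scratch(row, matte):
    """row: pixel values; matte: True where the scratch is."""
    out = row[:]
    x = 0
    while x < len(row):
        if matte[x]:
            start = x
            while x < len(row) and matte[x]:
                x += 1
            if start == 0 and x == len(row):
                break                  # whole row is scratch: nothing clean
            left = out[start - 1] if start > 0 else out[x]
            right = out[x] if x < len(row) else left
            span = x - start + 1
            for k in range(start, x):
                t = (k - start + 1) / span
                out[k] = left * (1 - t) + right * t
        else:
            x += 1
    return out

row   = [10, 10, 0, 0, 0, 30, 30]    # the 0s are the scratch
matte = [False, False, True, True, True, False, False]
fixed = fill_scratch(row, matte)     # scratch ramps from 10 toward 30
```

A dusting of grain over the filled pixels, as described above, finishes the repair.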
A light leak is a much more daunting problem simply because it covers so much of the picture.
I have fixed some with the usual standby of cutting and pasting previous or later frames to make a clean plate, then compositing it over the offending area. This would not work in the light leak example here because there is action in the damaged area, namely people walking through it. In this case all that can be done is to crop out the good region and scale it up to fill the original frame size. A bit of sharpening may help reduce the resulting softening of the picture somewhat, but because image sharpening also kicks up the grain there is a limit to how far this can be taken.
There are other invisible effects, of course, but this should give you a sense of what is being routinely done by today’s top visual effects compositors, lurking in the background silently salvaging hopeless shots. While it is certainly very exciting to composite that T-Rex thundering through the streets of New York, it can be just as challenging and rewarding for a compositor to save a shot that would otherwise have to be cut from the movie. Even though the compositor’s effects might be invisible, their names still go up in the credits.
Thanks Steve! And speaking of credits, here's Steve's IMDb profile.