Freeing Artistic Vision from 3D's Limitations
In-Three originally developed its technology to revivify dormant film libraries. We call our process "Dimensionalization®," using patented software tools and techniques to create a second view from any two-dimensional image.
With delays in the rollout of 3D theater screens and platforms for 3D video in the home, we have had only modest opportunities with legacy films. However, as our R&D has continued, we have seen a strategic opportunity to solve problems in the creation of live action 3D features. We believe that these problems, particularly around the hassle of using two cameras to get a 3D picture, are among the reasons why so few 3D features have been released.
Let me preface this by saying that I don't want to be perceived as dumping on shooting. Shooting has its natural domain, which includes anything related to broadcast - ballgames, operas, and other live events that are already being presented in 3D in theaters, to great success. That's natural. Those kinds of presentations don't need Dimensionalizing, and they don't need CG - which also has its natural domain of course.
To begin with the difficulties of using multiple cameras for 3D production: one is lens differential. There are inevitably subtle physical differences between lenses, in things like refraction. There are also differences in perspective, so that specular highlights and reflections may be fully visible in one eye and not present at all in the other.
Each of these is exaggerated by vertical disparities in the lenses. Our eyes obviously handle horizontal disparities just fine - this is how we see in 3D as we look at the world, by resolving horizontal differences in perspective. But resolving vertical disparities would require one eye going up as the other is going down!
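To put a number on that: for a parallel camera pair, the horizontal disparity of a point falls off with its distance, while the vertical disparity should be exactly zero. Here is a toy Python sketch of that geometry - the values are invented for illustration, not taken from any real rig.

```python
# Toy illustration of binocular disparity for a parallel stereo rig.
# All values are invented for the example, not from In-Three.

focal_length_px = 2000.0   # focal length expressed in pixels
interaxial_m = 0.065       # distance between the two "eyes" (meters)

def horizontal_disparity(depth_m):
    """Horizontal shift (in pixels) between the two views for a
    point at the given depth. Closer points shift more."""
    return focal_length_px * interaxial_m / depth_m

for depth in (1.0, 2.0, 10.0, 100.0):
    print(f"depth {depth:6.1f} m -> disparity {horizontal_disparity(depth):7.2f} px")

# Vertical disparity in an ideal rig is exactly zero; any nonzero
# value is a rig or lens error the viewer's eyes cannot resolve.
```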
Even if each of these errors is small, they accumulate over the length of a movie, and become disturbing. Your brain is telling your eyes to adjust to something that they can't quite bring into sync. It wears you down.
The challenges become even greater when shots have to match. If two shots inside a room are converged on different points in space, the room can look like it is a different size in each shot! It is shallow in one shot, deep in the other. Even if your eyes can figure out what is going on, it's not very believable, and it's certainly not very comfortable.
There are some powerful warp engines and other sophisticated tools to make things a little better, but really, there is no way to fix these problems. Once you take the pictures, they are locked in.
[Image caption: footage from James Cameron's Titanic, shown undergoing Dimensionalization.]
We start with the footage from a single camera. We take that 2D footage, and we copy it to create right-eye and left-eye views. Every pixel is mapped perfectly, with no discrepancies from lens effects, no misalignment. Now we can begin to add perspective, shape and depth.
When I say add depth, I mean to move one eye's view relative to the other eye's view. Each eye tracks independently.
We give shape first by rotoscoping, then by morphing. As we extract depth, we expose an area in the background footage that was previously occluded, and have to recreate a background that does not exist in the footage. We also create a kind of binocular disparity that your brain interprets as shape. But no vertical disparities, no light balance disparities, no pixel disparities. Your eyes are tracking the way that they do in nature.
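As a rough illustration of the principle - my own simplification in Python/NumPy, not In-Three's patented process - a second view can be faked by shifting each pixel horizontally according to an assigned depth value, which immediately opens up "holes" where the background was never photographed:

```python
import numpy as np

def synthesize_eye(image, depth, max_shift=12):
    """Shift pixels horizontally by a per-pixel amount derived from
    depth (0 = far, 1 = near) to fake a second viewpoint.
    Returns the new view and a mask of newly exposed pixels that
    must be painted in, since they were never photographed."""
    h, w = depth.shape
    new_view = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shifts = np.round(depth * max_shift).astype(int)
    # Paint far-to-near so nearer pixels correctly overwrite farther ones.
    for d in range(max_shift + 1):
        ys, xs = np.nonzero(shifts == d)
        xt = np.clip(xs + d, 0, w - 1)
        new_view[ys, xt] = image[ys, xs]
        filled[ys, xt] = True
    holes = ~filled  # the occluded background to be recreated by artists
    return new_view, holes
```

Those holes are exactly the occluded regions described above: artists have to recreate a background that was never captured.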
If we do what I just described well, we can achieve what I call "perfect 3D."
We are also not just limited to a single set of 3D characteristics for a given shot. Our system can isolate 20 elements in a scene, each with its own camera, so to speak. Which means that, for each of these individual elements, we have a unique set of those three controls - perspective, shape and depth. Instead of those three "knobs" on a single camera to cover an entire scene, we have 60 "knobs" to turn: perspective, shape and depth for each of 20 objects that we have isolated.
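In data terms, that amounts to something like the following - a hypothetical Python sketch, with field names and units of my own invention:

```python
from dataclasses import dataclass

@dataclass
class DepthControls:
    """The three 'knobs' described above, per isolated element.
    Field names and units are illustrative guesses, not In-Three's."""
    perspective: float  # viewpoint offset for this element
    shape: float        # how rounded vs. flat the element reads
    depth: float        # where the element sits relative to the screen

# One shot: up to 20 isolated elements, each with its own controls,
# i.e. 20 x 3 = 60 independent settings instead of 3 for the whole scene.
shot = {
    "hero": DepthControls(perspective=0.0, shape=0.8, depth=-0.2),
    "doorway": DepthControls(perspective=0.1, shape=0.3, depth=0.5),
    # ... up to 20 elements
}
```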
Our original software was called In3 Depth Builder, and we're just starting to deploy a new package called In3gue - pronounced "Intrigue." This allows artists to keyframe the individual elements that they have identified to have unique depth, shape and perspective. The software propagates those values, interpolates through the shots, makes it all practical, and spits out an occlusion map. Each of these elements can now be treated individually, apart from the shot as a whole.
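The propagation idea is simple to sketch, even though the production tool is surely far more sophisticated. A minimal, hypothetical version of keyframe interpolation for one element's depth value:

```python
def interpolate_depth(keyframes, frame):
    """Linear interpolation between artist-set keyframes, in the spirit
    of the propagation described above (the real software's behavior
    is surely more sophisticated).
    keyframes: sorted list of (frame_number, depth_value) pairs."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# An artist keys an element's depth at frames 1 and 48; every frame
# in between is filled in automatically.
keys = [(1, -0.2), (48, 0.35)]
print(interpolate_depth(keys, 24))
```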
Now, let's say we go to the next shot, and it's a different view. We can isolate elements that we DON'T want to change, so that the internal depth of a scene stays consistent from shot to shot. Think of it as "depth grading," analogous to color grading, in order to maintain depth continuity.
That is, even if the dailies look perfect by themselves, here come another 200 scenes to cut together. In one of them, the focal element might come halfway out of the screen, and in another only 10%. Your eyes start ping-ponging back and forth, and in and out, as you're trying to follow a single object from one cut to the next.
The problem for you as a viewer is that it drives you crazy to watch. The problem for the director is that, once the footage is in the camera, those disparities are locked in. It's hard to get immediate feedback during shooting, and it's virtually impossible to match 3D composition across shots in the edit.
We solve that problem. We make sure that the depth grades smoothly from one scene to the next. Directors see this and are amazed. They tell us, "This is the first time I've been able to have this kind of control over depth. I know what I'm getting." As we work, all changes update in the blink of an eye. I have signs all over our facility that say, "Blink Time." Our engineers are driven to keep things moving just that quickly in depth grading sessions.
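The simplest form of such a depth grade is a horizontal image translation: shift one eye's image so the focal element of each shot converges at the same screen depth across the cut. A toy Python version of the idea, with invented numbers:

```python
def regrade_shot(disparities, focal_current, focal_target):
    """Shift a shot's disparities by a constant so its focal element
    lands at the same screen depth as in the neighboring shot --
    'depth grading' by analogy with color grading. A toy
    horizontal-image-translation version of the idea."""
    offset = focal_target - focal_current
    return [d + offset for d in disparities]

# Shot A's hero sits at disparity -10 px (out of the screen); shot B's
# hero was converged at -2 px. Regrade B to match A, so the viewer's
# eyes aren't yanked in and out across the cut.
shot_b = regrade_shot([-2.0, 1.5, 4.0], focal_current=-2.0, focal_target=-10.0)
print(shot_b)  # [-10.0, -6.5, -4.0]
```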
SEEING IS BELIEVING
The specific placement of Dimensionalization in the production pipeline depends on our client and their schedule. We generally receive the .dpx image sequences in log space before they have been color graded, but on a single project we can work with files both before and after grading.
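For readers unfamiliar with log space: film scans are typically stored with a logarithmic encoding of density and converted to scene-linear light for processing. A common Cineon-style conversion looks like this - these are the textbook constants, though the exact transform on any show is dictated by that production's color pipeline:

```python
def cineon_to_linear(code_value,
                     ref_white=685, ref_black=95,
                     density_per_cv=0.002, neg_gamma=0.6):
    """A common Cineon-style 10-bit log to scene-linear conversion,
    shown for illustration with the standard published constants."""
    black = 10 ** ((ref_black - ref_white) * density_per_cv / neg_gamma)
    lin = 10 ** ((code_value - ref_white) * density_per_cv / neg_gamma)
    return (lin - black) / (1 - black)

print(cineon_to_linear(95))    # ~0.0 (reference black)
print(cineon_to_linear(685))   # 1.0 (reference white)
```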
We prefer to work on locked shots with handles, but we have often had to work while the cut, and even the final image, was still in flux.
We recently worked on the Jerry Bruckheimer film, "G-Force," directed by Hoyt Yeatman, distributed by Disney. Our stereo work fell into three major categories: full live-action shots to be Dimensionalized, full CG shots to be rendered in stereo, and live-action shots with CG elements.
The full live-action shots were Dimensionalized by In-Three. The full CG shots were rendered completely by Sony Pictures Imageworks. Often, the third type of shot was handled through a combined effort by In-Three and Sony Imageworks.
Dimensionalization is ideal for this, because we store depth metadata and can use it to conform stereo CG with stereo live action in real time.
For these live-action shots with 3D elements, Sony would create a temp stereoscopic render of the CG elements, and we would Dimensionalize the live-action plate to match. The reverse can also work, but this approach was the one predominantly used in "G-Force." Sony would then do a final render of the CG elements. Then, depending on the specific shot, either In-Three or Sony would final the composition, and touch up the shot as necessary.
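One way to picture the conform step: because both the Dimensionalized plate and the CG render carry depth, the composite can be resolved per pixel by whichever layer is nearer. A toy NumPy stand-in follows - In-Three's actual metadata format and tools are proprietary:

```python
import numpy as np

def depth_composite(plate_rgb, plate_depth, cg_rgb, cg_alpha, cg_depth):
    """Per-pixel depth-aware composite of a stereo CG render over a
    Dimensionalized plate: whichever layer is nearer wins. A toy
    stand-in for the conform step, not the production pipeline."""
    # Smaller depth value = nearer to camera in this illustration.
    cg_wins = (cg_alpha > 0) & (cg_depth < plate_depth)
    out = plate_rgb.copy()
    out[cg_wins] = cg_rgb[cg_wins]
    return out
```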
In other words, we don't necessarily see Dimensionalization solely as an alternative to live action shooting or CG. As "G-Force" has shown, it can be an excellent complement to them.
THE FUTURE OF 3D
There are a lot of smart people in the world, and a lot of them are working on shooting stereo. As a result, I think that we will get past these problems. In the short term, I'm hoping we can sort the problems out well enough to see some really, really good sports events, because that's what's going to drive 3D television in the home - and THAT's what's going to make 3D become the financial blockbuster we know it can be.
While I'm rooting for stereo shooting, I also believe that directors are always going to want to have the most possible control over their shots. I think that we as a company, and as a representative of conversion technologies, are going to be a key factor in day-and-date releases of live-action features going forward.
When all of that happens, people will truly get caught up in 3D as a new entertainment vehicle. I think that is when Dimensionalizing legacy movies is going to bring new financial life to film libraries.
In the meantime, our goal is to maintain the quality of Dimensionalization while driving down costs through technological innovation.
Whether through shooting, CG, or post-production Dimensionalizing, we can all participate in creating 3D content. By understanding what our strengths and weaknesses are, we can work together to achieve the highest quality at the lowest cost, and the fastest pace.