
FotoKem: 3D DI

CreativeCOW presents FotoKem: 3D DI -- Stereoscopic 3D Feature


Burbank, California, United States
CreativeCOW.net. All rights reserved.


3D DI is generally the same as 2D DI, with some variations depending on the cameras used, and how the stereo rig is configured.

Parallel rigs are usually used for back-of-house positions, longer shots, and vistas. The convergence point of parallel cameras is at infinity.

A beam splitter is two cameras, perpendicular to each other, with a two-way mirror angled between them. The camera that's pointing forward shoots through the mirror. The other camera points down, and gets light bounced to it from the reflective side of the mirror. The advantage of this approach is that you can get much closer to your subject, so it works well for Steadicam and dolly shots, and close-ups.

Once the footage comes into post, we begin making adjustments to balance the two eyes, as the camera that is shooting into the mirror tends to have more of a yellow-green cast to it. It's not necessarily linear: depending on the angle of the mirror, sometimes it's just on half of the frame. While it's not a rule, there is often more vertical misalignment with beam splitter cameras than with parallel cameras.

Parallel rigs tend to have the kinds of problems that you can imagine from two cameras next to each other: they just can't get as close together as your eyes are, and there are often problems, such as lens flares, or a corner of duvateen slightly reaching into the frame, that only show up in one eye, and not the other. The advantages of each rig, especially related to distance, are why most shoots use both rigs.


CAPTURE AND CONFORM

The rigs we see are all equipped with different flavors of cameras. I can tell you about two projects in a little detail because they've shipped already: "Hannah Montana/Miley Cyrus: The Best of Both Worlds in 3D" and "Jonas Brothers: The 3D Concert Experience." Hannah used Sony 950s and F-23s, and Jonas was mainly F-23s.

Both recorded to the Sony HDCAM SRW-1 deck, recorded 4:2:2 x 2, interleaved. That is, the SRW-1 can record two 4:2:2 feeds to a single deck. Both of the cameras connect via HD-SDI, and the deck records Frame 1: left eye/right eye, then Frame 2: left eye/right eye, and so on.
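That frame-interleaved layout means the two eyes ride in one stream and have to be separated again in post. A minimal sketch of the idea, assuming an ordered sequence L0, R0, L1, R1, ... (the function name and dummy frames are illustrative, not a deck or SDK API):

```python
import numpy as np

def split_interleaved(frames):
    """Split an eye-interleaved frame sequence into left/right streams.

    `frames` is assumed ordered L0, R0, L1, R1, ... as with the
    SRW-1's 4:2:2 x 2 interleaved recording described above.
    """
    frames = list(frames)
    if len(frames) % 2 != 0:
        raise ValueError("interleaved stream must contain complete L/R pairs")
    left = frames[0::2]   # even positions: left eye
    right = frames[1::2]  # odd positions: right eye
    return left, right

# Example with dummy 2x2 "frames" numbered 0..5 (L0,R0,L1,R1,L2,R2)
stream = [np.full((2, 2), i) for i in range(6)]
L, R = split_interleaved(stream)
```

Because the pairs never leave the same stream, left and right can't drift out of sync, which is what makes the one-EDL conform described below so painless.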

We use the Quantel Pablo for virtually every aspect of our stereo post, and one of its great advantages is that it digitizes both eyes simultaneously from our SRW-5800 studio deck. This makes it incredibly easy to conform: it loads both eyes from one EDL, always keeping the stereo pairs together as one source clip.

The Jonas Brothers and Best of Both Worlds concerts were similar projects, the big difference being the time to complete each one. Hannah Montana was done in 11 weeks. For Jonas, we had four months, with the last two being heavy on the DI side of things.

With Hannah, we were involved from editorial dailies through finishing. With Jonas, we were involved from the time the tapes were pulled out of the deck. This included logging the tapes, dailies screenings, editorial 3D previews, an ever-changing "on-line" conform, final color grading, 3D convergence choices, and ultimately digital cinema packages and tape deliverables.

I put "on-line" in quotes because things were pretty well online already. We initially loaded the original footage onto the Pablo, and played stereo video out to a QuVIS digital disk recorder. We then sent the drives to Disney for their dailies screenings.

They started to cut, and by a week or so into the dailies screenings, we started to see our first EDLs. They came back to FotoKem, we pressed a button, and in the snap of your fingers, we had our conform. We didn't have to load any tape, because the footage from the dailies was already on our drives.

So, most of our 3D projects come in at 1920, shot to HDCAM SR interleaved, at 4:2:2 x 2. We're working on a VFX-heavy project now, being shot with the Sony F-35 at 4:4:4, which requires two separate decks, ganged together.

(The footage we've seen from the F-35 has been absolutely amazing. I can't get over it - by far the most beautiful images I've seen out of digital tech. I have no doubt we'll be seeing much more being shot with this camera very soon.)

We're also starting to see some Silicon Imaging and RED cameras bringing in 2K files, which typically come in on FireWire drives or LTO tape. FotoKem has a new division called nextLAB, which handles every aspect of file-based workflows. We've just started it, and it's already booming. They convert the RAW files to DPX files that I load into the Pablo from our shared Isilon storage.

Most clients are shooting 4K with the RED. When we receive the R3D files, nextLAB debayers the images and processes them to oversampled 2K images before we load them to the Pablo.


3D GRADING

Crews do their best on set, but they have time crunches and deadlines, and sometimes the cameras aren't necessarily matched perfectly. Maybe the black level on one eye is a bit lifted, maybe one eye is slightly more magenta or something.

My preference is to balance both of them first, then bake that in so that the stereo pair is matching identically. That's because the way that we are working right now, we grade on one eye and then we apply that grade to the second eye. If the eyes match, you just throw the correction from one onto the other and you're done.

That's in a perfect world. Usually we are under the same kinds of time constraints and deadline pressures that people faced on-set. In cases like that, I'll grade one eye, apply that grade to the other eye, and then do a balance correction to match those eyes together.
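The balance correction at the end of that shortcut amounts to nudging the second eye's channel statistics toward the first. A simplified stand-in for that step, assuming float RGB arrays in [0, 1] (real grading tools do far more; this is only the statistical core, and the function name is made up):

```python
import numpy as np

def balance_eyes(reference, target):
    """Match the target eye's per-channel mean and spread to the reference
    eye, a rough model of the eye-balance correction described above."""
    ref = reference.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(tgt)
    for c in range(tgt.shape[-1]):  # per channel: R, G, B
        gain = ref[..., c].std() / max(tgt[..., c].std(), 1e-8)
        out[..., c] = (tgt[..., c] - tgt[..., c].mean()) * gain + ref[..., c].mean()
    return np.clip(out, 0.0, 1.0)
```

If one eye is, say, uniformly lifted and slightly desaturated relative to the other, this per-channel gain-and-offset match pulls the pair back together before (or after) the creative grade is copied across.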

After coloring, we make a convergence pass. This is usually unsupervised, and is more technical than creative, typically to remove the bumps or misalignments that are part of the nature of live footage.

When the cameras aren't calibrated perfectly, I may need to rotate and scale some shots to make the eyes match. If I have some of the problems in only one eye that I mentioned earlier (lens flares, objects slightly reaching into view, etc.), I use the Quantel Pablo to composite the good eye into the eye with the problem, warping to match perspective and lens distortion.

After this, a creative convergence pass is done with the client to heighten the emotional impact of the stereo and help tell the story. During this pass, we may choose to make the stereo a little more subtle in certain spots, just to give the eyes a break, or a little more extreme to punctuate key moments.

Because the world's not perfect, I sometimes have to combine the technical and creative convergence passes into one to meet the deadline.


The Pablo has great compositing and convergence features. Convergence tools are basically a symmetrical DVE, and they operate in real time. If it's a complicated 3D face replacement or something, I'd definitely use a Flame, but for color, editing, compositing, layering, graphics, and working with stereo footage, Pablo is an all-inclusive box.
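A "symmetrical DVE" in this context means shifting the two eyes horizontally by equal and opposite amounts, which moves where the images converge and so shifts the scene in depth. A toy model of that adjustment, with zero-padding at the frame edges where a finishing system would typically scale slightly instead (illustrative only, not Pablo's actual tool):

```python
import numpy as np

def converge(left, right, pixels):
    """Shift each eye horizontally by pixels/2 in opposite directions.

    Positive `pixels` increases the separation between the eyes;
    negative pulls them together. Wrapped columns are zeroed so no
    picture content wraps around the frame.
    """
    half = pixels // 2
    l = np.roll(left, half, axis=1)
    r = np.roll(right, -half, axis=1)
    if half > 0:
        l[:, :half] = 0
        r[:, -half:] = 0
    elif half < 0:
        l[:, half:] = 0
        r[:, :-half] = 0
    return l, r
```

Because it is just a symmetric horizontal offset, the operation is cheap enough to run interactively, which is why these choices can be made live with the client in the room.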

For all the previews, we play the Pablo through an NEC digital cinema DLP projector using the RealD system, showing on a silver screen from Harkness.

RealD has an external peripheral called the "Z Screen" that we mount on our single projector to get stereo. It oppositely polarizes each eye while triple flashing the image: going left eye/right eye, left eye/ right eye, left eye/right eye before advancing to the next frame. It definitely reduces flicker, and it really saves the headaches during long sessions.
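The triple-flash cadence above expands each 24 fps stereo pair into six projected flashes (144 Hz at the screen). A minimal sketch of that presentation order, assuming frames arrive as (left, right) pairs:

```python
def triple_flash(frames):
    """Expand stereo pairs into the L/R/L/R/L/R triple-flash order
    described for the RealD Z Screen (illustrative model only).

    frames: sequence of (left, right) pairs at the base frame rate.
    Returns the flat flash sequence, three L/R repetitions per pair.
    """
    out = []
    for left, right in frames:
        out.extend([left, right] * 3)  # flash the pair three times
    return out
```

Repeating each eye three times per frame is what raises the effective flash rate well above the flicker-fusion threshold, which is why long sessions are easier on the eyes.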

I wear the RealD glasses even when I'm grading a single 2D eye. They have a slight yellow-green tinge to them, so without the glasses, the image would look slightly more magenta than I would normally want for a neutral picture. Once I put the glasses on, though, I can compensate for that.

When it comes time to get to the trim pass, I always work with both eyes, in stereo, making any additional color corrections needed as it plays. Working in Pablo takes place in real time, at full res - no proxies! Two streams of files this size, with all of the effects we can lay on them, are going to need rendering on output, though. Pablo balances real-time performance with background rendering.

As I finish grading the first shot and move to the second, Shot One starts rendering in the background. When I move to Shot Three, Shot Two starts rendering. By the time I get to the end of the sequence, everything is ready to play out from the beginning, with no waiting.
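That grade-then-background-render overlap is a simple pipeline pattern: interactive work on shot N proceeds while shot N-1 renders on a worker. A toy model using Python's standard thread pool (the `grade` and `render` stand-ins are made up; a real system renders actual media):

```python
from concurrent.futures import ThreadPoolExecutor

def grade(shot):
    """Stand-in for the interactive, real-time grading step."""
    return f"graded-{shot}"

def render(graded):
    """Stand-in for the heavy render that runs in the background."""
    return f"rendered-{graded}"

def finish_sequence(shots):
    """As each shot is graded, kick off its render in the background
    and move straight on to the next shot, as described above."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = []
        for shot in shots:
            g = grade(shot)                         # colorist's real-time pass
            futures.append(pool.submit(render, g))  # render overlaps next grade
        return [f.result() for f in futures]        # all done by sequence end
```

By the time the last shot is graded, the earlier renders have mostly drained, which matches the "no waiting at the end of the sequence" behavior described.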

This process creates new media, but the convergence tools do not. They just allow you to make choices about what appears on screen. They're very interactive - we can tweak this, change that, and keep moving. The client often says, "Push it ‘til it breaks! Now come one step back." They want to see it happening.


DELIVERABLES

We had a LOT of deliverables on the Jonas project, starting with the digital cinema package. This is what goes to the theater where you'd see it in stereo.

After that, we did a film out for international market. We've also finished a DVD extended version. For the DVD, we did versions in 2D and anaglyph 3D. Both of those went out to tape for mastering.


Anaglyph is important because this is how people are watching 3D DVDs now. That's why I've taken the time to develop some special tricks for anaglyph mastering. We essentially take the red channel from one eye, and then the blue and green channels from the other eye, and combine them with an additive transfer mode.
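The basic channel recombination described above can be sketched directly, assuming float RGB frames in [0, 1] (this is only the "most basic" version the text describes; none of the real mastering finesse is here):

```python
import numpy as np

def anaglyph(left, right):
    """Basic red/cyan anaglyph: red channel from one eye, green and
    blue channels from the other, combined into one RGB frame.

    Taking red-only from the left plus green/blue-only from the right
    and summing is the additive combine described in the article.
    """
    out = np.zeros_like(left)
    out[..., 0] = left[..., 0]   # red from one eye
    out[..., 1] = right[..., 1]  # green from the other
    out[..., 2] = right[..., 2]  # blue from the other
    return out
```

Viewed through red/cyan glasses, each eye then sees only the channels that came from its own camera, which is what makes a 3D image possible on an ordinary 2D display.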

That's at its most basic, and that's about all I'm going to say about that. The rest of what I do is the secret sauce, and I don't mind saying that my anaglyph is a bit superior to a lot of the others out there.



DOWNSTREAM

Most immediately from here, I think that we're going to start seeing more 3D rigs coming directly from the major camera manufacturers, rather than production companies having to fabricate their own. Sony, Panavision, Arriflex - I would expect to see all of these and more with stereo cameras and stereo rigs in the near future.

You'll also see a push toward on-set workflows, as the line between production and post-production continues to blur. Imagine shooting to Codex, S2, or other kinds of DDRs. You could turn over the disks to the on-location post group, such as nextLAB. They process the shots for editorial while simultaneously making 3D dailies packages.

With a 3D-capable NLE on the set, the shots could quickly be tied together into a sequence for the director and DP. They could make sure on the set that sequences of shots are landing in 3D space along the lines they envisioned, and that they will cut in with shots already captured.

Software companies will continue to develop the 3D toolsets. Avid is already shipping a 3D-capable version of Media Composer that can acquire and work with streams from both eyes on the timeline. The next step will be the ability to make offline convergence choices on the spot.

Online machines downstream will need to read the metadata generated on-set from these cameras and NLEs, and be able to apply it to the full-resolution media in the online. It will happen.

Last year about 70% of the work I did was in stereo. I expect this number to get closer to 90% this year as studios produce more and more 3D content. In short, the trend is "up" - no pun intended with Disney's animated 3D film!

I think the number of editorial preview conforms will fall as NLEs gain the ability to work with stereo footage. And while the offline machines get better equipped to handle stereo, the finishing machines will also gain more powerful tools to fix more complicated issues, giving us even more power as more productions move into 3D.

Just as the 2D DI toolset continues to expand, so will the 3D DI toolset.

John Daro
Burbank, California USA

John is a 3D colorist and DI specialist at FotoKem. He has also been a member of Creative COW since 2003, spending much of his time in the Avid and After Effects forums. "Stereo imaging has been a hobby of mine since my first version of Photoshop," he says, "and I love getting to work with it professionally."





