
VR Cinematography: Exploring How To Represent People in VR




with David Lawrence and James McKee

Last July, shortly after Google announced its Jump VR camera collaboration with GoPro, Google’s head of VR Clay Bavor called me in to meet. The result was a short-term, project-based artist residency, their first (using “early early” cameras and algorithms with artifacts that aren’t representative of the current Jump).

I’m of the artist-as-bridgebuilder school and found lots of potential symbioses with old and new friends inside Google VR, around Google, and externally. Consequently, I proposed as a project a “community-based ethnographic VR experiment”: thinking globally, but starting with a fast, lean, local prototype.


Areas of Exploration

Several timely and relevant areas of exploration emerged.

1. Close-up VR imagery from camera-originated material is awesome but tricky.

Headset-based VR is uniquely suited for experiences in the nearfield “intimate zone,” where image and sound seem within or near arm’s reach. Screen-based immersion such as 3D movies is not very good in this zone, and Hollywood long ago learned to keep most of the action behind the screen (except for “gotchas” like blood, bats, and broomsticks). For VR made from computer models, such as most games, getting the needed different viewpoints for each eye is trivial, but when the material comes from cameras, getting these different viewpoints is much more challenging.

2. High quality spatial sound is as important as image.

Shooting with panoramic camera rigs often defaults to recording with panoramic microphone rigs, usually an omni-directional thingy on top of the camera, resulting in compromised sound. Using human sound recordists with shotgun mics or booms, or mic’ing every individual subject in view, is far superior but problematic. And if you do, how do you hide the recordists and gear?

3. Filling (and unfilling) the panoramic sphere has novel challenges.

Filling the full 360 degree view with interesting material, and unfilling it with uninteresting material, has its own unique challenges and opportunities. Early on, many camera-based VR filmmakers felt compelled to digitally fill in the “nadir hole,” the region at the bottom of the panoramic sphere where either the camera rig couldn’t see, or if it did, saw the tripod.

4. The “hyperimage,” a Holy Grail of interactive media, is well-suited for VR.

So if we’re good at digitally filling the nadir hole with nearby ground or floor imagery, how far can we go? The “hyperimage,” an artificially overpopulated scene where “more” is “happening,” has been a holy grail of interactive media from the beginning. Each element can serve as an interactive link, and the more links, the richer the experience. Think interactive Bruegel.

5. Metadata-based interactivity, another Holy Grail, may also be well-suited for VR.

A related Holy Grail is “directed interactivity,” where individual scenes or clips or other media are all parsed and tagged with metadata to allow interconnection with some sort of narrative or direction, more compelling than a random walk. This grail, which includes “interactive movies” and “database art,” has its roots in the grand databases developed over many decades in anthropology, such as George Murdock’s Ethnographic Atlas and most notably, Alan Lomax’s Global Jukebox project. (Alan was a mentor and personal friend.)
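
To make “directed interactivity” concrete, here’s a minimal sketch in Python: clips carry metadata tags, and the next clip is chosen by shared tags rather than by a random walk. Everything here (names, tags, the scoring rule) is a hypothetical illustration, not how the Global Jukebox actually works.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Clip:
    title: str
    tags: set[str] = field(default_factory=set)

def next_clip(current: Clip, library: list[Clip]) -> Clip | None:
    """Pick the clip sharing the most metadata tags with the current one."""
    candidates = [c for c in library if c is not current]
    if not candidates:
        return None
    return max(candidates, key=lambda c: len(c.tags & current.tags))

library = [
    Clip("market dance", {"dance", "mali", "street"}),
    Clip("wedding song", {"song", "mali", "ceremony"}),
    Clip("harvest chant", {"song", "work", "field"}),
]
print(next_clip(library[0], library).title)  # -> "wedding song" (shares "mali")
```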

6. Community buy-in is essential.

Finally, as VR cameras begin to proliferate, the vision of a “One Earth Model” begins to emerge, spanning from entertainment and gaming to tourism and travel to ecology and activism. For this (attitude alert!), community buy-in is essential: production becomes a collaboration, with control shared between producers and subjects, at its best in the spirit of cinéma vérité pioneers like Jean Rouch and Richard Leacock. (Ricky was also a mentor and personal friend.) Without community buy-in, in the end, the loss will be ours.




GoPro
Optimized for Jump, the GoPro Odyssey packs 16 synchronized HERO4 Black cameras into an all-in-one rig that’s capable of capturing immersive content in stunning 8K30 video. The video from each camera is then uploaded to the Google Jump assembler to deliver an engaging experience from every direction.


Starting with Studies

It was no secret last fall that the Google / GoPro VR launch had been delayed, and being under time constraints, I proposed getting the Jump VR camera rig for a day and shooting some studies. I’m a big fan of studies (think Muybridge) and frankly am bewildered at how little the VR community has understood their value and leverage. I was also in a good position for this: the joke was that while everyone else was flying off to shoot VR in Timbuktu, I had already done that and was fine shooting, literally, in Google’s backyard.



Michael with a stereo-panoramic motion picture rig in Timbuktu in 1995.

For this, I enlisted the talents of a couple of other VR OG types, David Lawrence and James McKee. We’ve worked together on and off since the Apple / Lucasfilm Multimedia Lab days c. 1990, and together the three of us represent over 85 collective years of experience working with cutting-edge experimental media. Jim and David also made the “fantastic” early VR radio piece based on “Cyberthon” (that’s another story). Most recently, David produced “Farm,” a stereoscopic art video with San Francisco artist Dale Hoyt, and Jim produced the spatial installation audio for Chinese artist Ai Weiwei’s “@Large” show on Alcatraz.

We took over the “Big Chairs Park” on the Google campus and, using wide blue masking tape, staked off the ground into 12 “one hour” radials with concentric rings from 1 meter out to 5 meters and beyond, and got to work.




David and Jim with the Google / GoPro Jump VR camera rig at Google in 2015.


Our intention was to explore how people are represented in VR.


The 360 degree by 180 degree “equirectangular” video format turns the radial lines into parallel lines. Here’s the left eye view from a stereo pair. (Yep, that’s a 360 degree image of the same scene above!)
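
Why parallel? In the equirectangular format, horizontal pixel position is simply proportional to azimuth and vertical position to elevation, so every point along a taped radial (constant azimuth, varying elevation) lands in the same pixel column. A minimal sketch, using a hypothetical 3840x1920 frame:

```python
def equirect_xy(azimuth_deg, elevation_deg, width=3840, height=1920):
    """Map a viewing direction to pixel coordinates in a 360x180 equirectangular frame."""
    x = (azimuth_deg % 360.0) / 360.0 * width    # azimuth maps linearly to x
    y = (90.0 - elevation_deg) / 180.0 * height  # elevation maps linearly to y
    return x, y

# Every point along one taped ground radial shares the same azimuth, so the
# radial renders as a vertical line; all 12 radials come out parallel.
for elevation in (-10, -30, -60):
    print(equirect_xy(azimuth_deg=30, elevation_deg=elevation))
```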

Our primary goal was to explore how people are represented in VR and to produce some modest, solid studies that would be immediately useful to students and folks getting started in VR. Something both practical and provocative. And our message to you is: Surprise us!


Study #1: Close-Up Tests

As mentioned, getting the different viewpoints needed for each eye from camera-based material is problematic in the nearfield, precisely where those viewpoints differ most. For farfield imagery like landscapes, it hardly matters, since both eyes see essentially the same viewpoint.
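
To put rough numbers on “hardly matters”: the angle between the two eyes’ sightlines falls off with distance. A quick sketch, assuming a typical 64 mm interpupillary distance (our assumption for illustration, not a Jump spec):

```python
import math

IPD = 0.064  # interpupillary distance in meters -- a typical assumed value

def vergence_deg(distance_m):
    """Angle between the two eyes' sightlines toward a point straight ahead."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for d in (0.5, 1, 2, 5, 50):
    print(f"{d:>4} m: {vergence_deg(d):5.2f} deg")
# ~7.3 deg at 0.5 m but only ~0.07 deg at 50 m: nearfield viewpoints differ
# strongly per eye; farfield viewpoints are essentially identical.
```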

Capturing these different nearfield viewpoints requires special panoramic cameras, which fall into three categories: 1) stereo-panoramic camera rigs with paired cameras, whose output is instantly viewable but can show gnarly seam lines between the stereo camera pairs; 2) unpaired stereo-panoramic camera rigs, which require computation to produce stereo pairs for viewing; and 3) panoramic camera rigs with additional magic such as laser range-finding (LIDAR), handiwork (such as 2D-to-3D conversion), or clever computation (much yet to be invented).

Please don’t get me started on the state of VR cameras today (rant alert!). The Hollywood Reporter recently ran a story entitled “Virtual Reality Stitching Can Cost $10,000 Per Finished Minute.” This is mainly because folks building camera rigs have failed to do their homework. (Ask them what a nodal point is.) “Light Fields” is currently hot but, like “holograms,” even the experts are using the term more loosely than its technical definition.

The Google / GoPro Jump VR camera is an unpaired camera rig consisting of 16 GoPro cameras equally spaced around the “equator.” Because the cameras are unpaired, the wizards at Google have developed a cloud-based stitching algorithm that automatically converts the footage into stereo pairs for stereo-panoramic viewing. At the time of our studies, they claimed to be able to properly stitch imagery as close as 1 meter from the camera. We put it to the test.

If you look closely, the 1 meter shot is pretty good. Actually, we were pleasantly surprised at how good the 0.5 meter shot looked, with only minor noticeable artifacts.


Study #2: Recognizability

We had a practical agenda here: if we plan to shoot VR in the real world with real people, we may need film permits from local authorities, and we’d like to be able to confidently tell them how much space we need to “rent” by knowing where faces become unrecognizable. Remember, there is no zooming in VR headsets; you can only move or dolly the camera rig forward. So this should be a fairly simple number to determine.

The number, it turns out, is 5. :) About 5 meters radius from the camera rig. See for yourself.

Of course, this number depends not only on the resolution of the camera rig, but also on the storage resolution and viewer resolution. These numbers will all change, but gradually and predictably.
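
Here’s a back-of-the-envelope way to see why a number like 5 meters is plausible: count how many horizontal pixels land on a face. The face width and panorama resolution below are illustrative assumptions, not measured values:

```python
import math

def face_pixels(distance_m, face_width_m=0.16, pano_width_px=7680):
    """Horizontal pixels landing on a face in an equirectangular panorama."""
    angle_deg = math.degrees(2 * math.atan((face_width_m / 2) / distance_m))
    return angle_deg / 360.0 * pano_width_px

for d in (1, 2, 5, 10):
    print(f"{d:>2} m: {face_pixels(d):5.1f} px across the face")
# Recognizability plausibly fades around a few tens of pixels -- near 5 m here.
```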


Study #3: Camera Height and Eyeline

It’s long been known that imagery of people is greatly influenced by the relationship between the height of the subject and the height of the camera, often referred to as “eyeline.” When the camera is below the eyeline, the subject looks “privileged,” and when the camera is above the eyeline, the viewer feels privileged. We were surprised to learn how much this is amplified in VR, and found a very specific reason why.

The reason why is called orthoscopy (tech alert, hang with us!). An image is orthoscopically correct when it appears at the same scale and direction as it was captured. Turns out this is always true with VR but rarely true with everyday images. (The punchline to a Picasso anecdote, after a critic shows the artist a small photo of his girlfriend, is “she’s beautiful but she’s so tiny!”) In VR, when the viewer pans left 90 degrees, the image updates left 90 degrees as well, also part of being orthoscopically correct.
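
In code terms, orthoscopy means the panorama stays fixed in world coordinates and head rotation only moves the visible window. A minimal sketch, with hypothetical resolution and field-of-view numbers:

```python
def visible_columns(head_yaw_deg, fov_deg=90, pano_width_px=3840):
    """Pixel columns of a world-fixed equirectangular panorama visible at a
    given head yaw. Turning the head 90 degrees shifts the window by exactly
    90 degrees' worth of columns -- the orthoscopic condition."""
    px_per_deg = pano_width_px / 360.0
    left = (head_yaw_deg - fov_deg / 2) * px_per_deg % pano_width_px
    right = (head_yaw_deg + fov_deg / 2) * px_per_deg % pano_width_px
    return left, right

print(visible_columns(0))    # window centered on the front of the panorama
print(visible_columns(90))   # head turned 90 deg: window moves exactly 90 deg
```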

The amplification comes from the fact that viewing VR images requires the viewer to physically pan and tilt their head accordingly, to be “embodied,” which is not the case in screen-based cinema. Theater audiences viewing a close-up in the center of a movie screen simply look at the center of the movie screen, regardless of the eyeline from which it was shot. Because of this, we suspect eyeline and camera height are much more critical in VR than in screen-based media.

Study #4: First Person / Third Person Solo Speaking


While television journalists and anchorpeople, onstage narrators and comedians, and many video games “speak to you” from a first person point of view, practically all narrative cinema is intentionally shot from a third person POV, with talent directed not to look at the camera (which in turn serves as a “fly on the wall”). First person POV is so rare in cinema that there’s a Wikipedia page dedicated to “Films shot from the first-person perspective” (it currently lists 33). And there’s a famous shot early in “Apocalypse Now” where director Francis Coppola, cameoing as a television news director filming beach combat, screams to the soldiers “Don’t look at the camera!” Curiously, on-camera interviews fall somewhere in between, as exemplified by filmmaker Errol Morris’s “Interrotron” invention to maintain eye contact with interviewees.

So where will POV be with VR? We shot a little test.

It was apparent to us, especially when viewed in VR, that at least when someone appears to be speaking to the camera, they ought to be looking into the camera.

(A note about the “thumbs up / thumbs down” notations: please take these with a grain of salt. Our intention is not to provide answers as much as provocations. Remember, this was an artist residency.)


Study #5: First Person / Third Person Two-Shot Dialogue

First, please take a look at this sequence. Keep in mind that Zach and Todd are always looking at each other, as exhibited by our VR viewer’s head swinging back and forth.

Here’s what we see going on.

In shot 1, the camera is literally right between Zach and Todd, and they’re looking “through” it. We’ve seen and heard of VR productions with, say, several people sitting around a table in dialogue shot with a VR camera in the middle. While certainly worthy of experimentation, in our case we found this perspective unrealistic and unsatisfying.

Shot 2 is interesting on several levels. For one thing, neither Zach nor Todd is now looking at or through the camera, which has become a third-person fly-on-the-wall. Remember, they’re really looking at each other. And they’re still head-swinging far apart.

This perspective is “almost” unique to VR. In “Lawrence of Arabia,” an early widescreen epic, the epic entry scene of the Omar Sharif character ends with Sharif in dialogue with Peter O’Toole at opposite sides of a very wide screen, at the time a unique and revolutionary composition. And in “How the West Was Won,” shot in 3-camera Cinerama for a giant curved screen, the talent often didn’t appear to be looking at each other at all, just like you see here.

Shot 3 is similar to shot 2, only less so, while shot 4 is so much like a conventional “2-shot” that our VR viewer doesn’t even need to move her head anymore.


Study #6: Directed Attention

This little study speaks for itself.

Magicians and illusionists know the trick. Legend has it that Houdini was so brazen that, an instant after promising to transform his lovely assistant into a bag of sand, a planted shill in the back of the theater would scream in surprise, prompting the audience to instinctively turn around. Then, in plain sight, the assistant would jump out of Houdini’s arms and a stagehand would replace her with a bag of sand. Then trumpets (from the front) and voila, magic!

Our little study here is perhaps equally a cheap shot. If there were non-singular action, for example several different people-of-interest criss-crossing each other (think Altman films), we might not be as “locked in.” This may, however, challenge the VR community’s current obsession with needing to fill the full 360 degree frame all the time.


Study #7: Hyper-Real Compositing

Here’s a looping video where everyone was shot separately and digitally composited together (using Adobe After Effects).

If you think the shadows don’t match, you’d be wrong. Remember, this is in 360 equirectangular format; they do match in VR (you have to see it, and hopefully will some time soon). Everyone was shot within a relatively narrow timeframe, and the shadows pretty much look credible, as does the “group” of people. We may call this a “credible” hyperimage.

Here’s something a little less credible.

From a purely technical perspective, it’s credible (except we intentionally slowed it down and remixed the sound for effect). It was, incidentally, shot in about 5 minutes, with our subject walking up and down each “hour” radial progressively. But since she’s the same subject, it’s an unreal, impossible shot. We may call this an “incredible” hyperimage.

And finally, here’s something combining several aforementioned elements (again, meant to be viewed in VR).

Some notes:

- This is a credible hyperimage, with everyone artificially added via digital compositing.

- Jim, the sound guy, has disappeared, digitally composited out of the scene.

- All subjects were mic’ed separately and a pro-level spatial sound mix was made in post-production (one common approach is sketched after this list).

- The subjects, all shot separately, appear synchronized in both words and action.
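
One common way to build such a spatial mix (a sketch of the general technique, not necessarily Jim’s exact workflow) is first-order ambisonics: each separately mic’ed subject is encoded into a four-channel B-format soundfield at the azimuth where they appear in the frame:

```python
import math

def encode_foa(sample, azimuth_deg, elevation_deg=0.0):
    """Encode one mono sample into first-order ambisonics (ACN/SN3D: W, Y, Z, X)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    w = sample                                # omnidirectional
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down
    x = sample * math.cos(az) * math.cos(el)  # front-back
    return w, y, z, x

# One subject standing on the "2 o'clock" radial, about 60 degrees to the
# right of front; sum the encodings of all subjects to build the soundfield.
print(encode_foa(1.0, -60))
```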

Well now, imagine what you could do with all THAT!



Acknowledgements

We’d like to thank our project proposal’s content advisors: Tressa Berman, Author, Anthropologist; William H. Durham, Professor, Department of Anthropology, Stanford University; Judith Fitzpatrick, Consultant, Anthropologist; David Evan Harris, Founder, Global Lives Project; Kevin Kelly, Author and Senior Maverick, Wired Magazine; and Anna Lomax Wood, Director, Global Jukebox Project.

We’d also like to thank our proposal’s production advisors: James Cha and Romalyn Schmalz of North Beach Bauhaus; and Roman Coppola, Susie Wrenn, and Michael Zakin of American Zoetrope / The Director’s Bureau.

And we’d like to warmly thank our friends in the communities of YouTube, Google Research, and Google VR.



[Ed. note: And we here at Creative COW thank Michael for his kind permission to repost this from Medium.]






Michael Naimark is an artist, inventor, scholar, and coach in emergent media and immersive experiences. He was on the original design team for the MIT Media Laboratory in 1980 and was a founding member of the Atari Research Lab (1982), the Apple Multimedia Lab (1987), and Lucasfilm Interactive (now LucasArts, 1989). In 2015, Michael was Google VR's first resident artist.

Along the way, Michael has directed projects with support from Apple, Disney, Atari, Panavision, Lucasfilm, Interval, and Google; and from National Geographic, UNESCO, the Rockefeller Foundation, the Exploratorium, the Banff Centre, Ars Electronica, the ZKM, and the Paris Metro. He occasionally serves as faculty at USC Cinema's Interactive Media Division, NYU's Interactive Telecommunications Program, and the MIT Media Lab.

For his complete biography and key selections from his media arts and research bibliography, visit naimark.net.

Comments

Re: VR Cinematography: Exploring How To Represent People in VR
by Daniel McClintock
Very good and informative article. Thanks to everyone involved for giving us more insight into how to do this.

A hypothesis I wonder about is introducing a "moving crew." This, I think, would work in a controlled situation such as an indoor or studio location. It would take some blocking and testing, but I think it could work.

You have two people next to each other having a conversation in a room. The crew is exactly opposite from them. One person decides to leave and goes to the opposite side of the room where the door is. The crew counters the move and shifts so that they are between, but offset from the center point. (If you were to look down from the ceiling, the two actors and the crew would form a triangle.)

Once the scene has been shot, the camera is turned back on, and the actors and crew leave. You shoot a few minutes of video with no one in the room. This shot now acts as a "master plate" when you go off to your compositing program. The camera has not moved so the physical location should match exactly the footage with the actors and crew.

Then just mask away the crew in post production.
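
In code terms, that final masking step might look something like this rough, untested numpy sketch (crew_mask is hypothetical: a hand-drawn roto shape rasterized to a boolean array):

```python
import numpy as np

def remove_crew(shot, clean_plate, crew_mask):
    """Fill a hand-drawn crew region with clean-plate pixels. Assumes a
    locked-off camera so the empty-room plate lines up exactly with the
    action frame; shot and clean_plate are same-size uint8 RGB arrays."""
    out = shot.copy()
    out[crew_mask] = clean_plate[crew_mask]  # crew_mask: boolean (H, W) array
    return out
```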

I haven't tried this, but I think it could work. Outside locations could prove more challenging because of wind and sunlight.

Thoughts?

@Daniel McClintock
by David Lawrence
Hi Daniel,

What you describe is basically how we produced the "credible" hyperimage video. Jim interviewed people at various clock points around the circle, always standing on the opposite side of the camera. You can see this in Study #4: First Person/Third Person Solo Speaking. Notice how Jim's position moves across the equirectangular frame, i.e. around the circle. We shot an empty background plate and composited the interview subjects on top of it in After Effects. It's a pretty straightforward process for the most part. We only had a crew of 3 and were in a large open space, so it was easy to hide from the camera when necessary. A bigger crew in a smaller space might be trickier, but I think with proper planning, what you propose would work.

Re: VR Cinematography: Exploring How To Represent People in VR
by Todd Munro
Very interesting article. The spatial audio does add that much more to the image, and in many cases is just as important as it is in cinema. Thanks for sharing your insights.

I managed to get a 9k 360 image for a music video last year, but it is hard to get an 8K+ video to play over the internet. Youtube's compression really doesn't do justice to what these rigs can output.
@Todd Munro
by David Lawrence
I feel your pain ;)

The Odyssey camera creates footage at a resolution of 8Kx8K over/under stereoscopic. The frame is so big, my top-of-the-line, maxed out MacBook Pro just says "nope!" We want to build a monster PC workstation to post this material and I got into some great conversations here on the COW about how to spec the hardware.

Even if you get material shot, posted and delivered, I think there's a bottleneck even bigger than playback - screen resolution.

Think about it - Odyssey delivers stereoscopic footage that's 8K per eye. That means to properly view it, we need a 16K display! It sounds crazy but there are already a couple Android phones with 4K displays and I'm sure more will be arriving soon. A 16K phone display may be far fetched but in 5 years, who knows?

All I can say is it can't happen too soon. Probably my biggest complaint about Cardboard is the crummy, soft resolution of the picture (with my phone at least) and the ever present "screen door" effect. It really breaks the immersiveness of the experience for me. I'm glad that whenever that day eventually arrives, the footage we shoot with Odyssey today will be ready for it.
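
Some rough pixels-per-degree arithmetic makes the bottleneck concrete (my assumed numbers, not official specs):

```python
# Rough pixels-per-degree comparison, capture vs. display.
pano_ppd = 7680 / 360    # ~21 px/deg captured by an 8K-wide panorama
display_ppd = 1280 / 90  # ~14 px/deg shown per eye by a 2560x1440 phone
                         # split in half across a ~90 degree field of view
print(round(pano_ppd), round(display_ppd))  # the display, not the footage,
                                            # is currently the bottleneck
```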


