
VR Cinematography: Exploring How To Represent People in VR




with David Lawrence and James McKee

Last July, shortly after Google announced its Jump VR camera collaboration with GoPro, Google’s head of VR Clay Bavor called me in to meet. The result was a short-term, project-based artist residency, their first (using “early early” cameras and algorithms with artifacts that aren’t representative of the current Jump).

I’m of the artist-as-bridgebuilder school and found lots of potential symbioses with old and new friends inside Google VR, around Google, and externally. Consequently, I proposed a “community-based ethnographic VR experiment” as a project, thinking globally but starting with a fast, lean, local prototype.


Areas of Exploration

Several timely and relevant areas of exploration emerged.

1. Close-up VR imagery from camera-originated material is awesome but tricky.

Headset-based VR is uniquely suited for experiences in the nearfield “intimate zone,” where image and sound seem within or near arm’s reach. Screen-based immersion such as 3D movies is not very good in this zone, and Hollywood long ago learned to keep most of the action behind the screen (except for “gotchas” like blood, bats, and broomsticks). For VR made from computer models, such as most games, getting the needed different viewpoints for each eye is trivial, but when the material comes from cameras, getting these different viewpoints is much more challenging.

2. High quality spatial sound is as important as image.

Shooting with panoramic camera rigs often defaults to recording with panoramic microphone rigs, usually an omni-directional thingy on top of the camera, resulting in compromised sound. Using human sound recordists with shotgun mics or booms, or mic’ing every individual subject in view, is far superior but problematic. For one thing, if you do, how do you hide the recordists and gear?

3. Filling (and unfilling) the panoramic sphere has novel challenges.

Filling the full 360 degree view with interesting material, and emptying it of uninteresting material, has its own unique challenges and opportunities. Early on, many camera-based VR filmmakers felt compelled to digitally fill in the “nadir hole,” the region at the bottom of the panoramic sphere where the camera rig either couldn’t see or, if it could, saw the tripod.

4. The “hyperimage,” a Holy Grail of interactive media, is well-suited for VR.

So if we’re good at digitally filling the nadir hole with nearby ground or floor imagery, how far can we go? The “hyperimage,” an artificially overpopulated scene where “more” is “happening,” has been a holy grail of interactive media from the beginning. Each element can serve as an interactive link, and the more links, the richer the experience. Think interactive Bruegel.

5. Metadata-based interactivity, another Holy Grail, may also be well-suited for VR.

A related Holy Grail is “directed interactivity,” where individual scenes or clips or other media are all parsed and tagged with metadata to allow interconnection with some sort of narrative or direction, more compelling than a random walk. This grail, which includes “interactive movies” and “database art,” has its roots in the grand databases developed over many decades in anthropology, such as George Murdock’s Ethnographic Atlas and, most notably, Alan Lomax’s Global Jukebox project. (Alan was a mentor and personal friend.)

6. Community buy-in is essential.

Finally, as VR cameras begin to proliferate, the vision of a “One Earth Model” begins to emerge, spanning from entertainment and gaming to tourism and travel to ecology and activism. For this (attitude alert!), community buy-in is essential: production becomes a collaboration and control is shared between producers and subjects, at its best in the spirit of cinéma vérité pioneers like Jean Rouch and Richard Leacock. (Ricky was also a mentor and personal friend.) Without community buy-in, in the end, the loss will be ours.




GoPro
Optimized for Jump, the GoPro Odyssey packs 16 synchronized HERO4 Black cameras into an all-in-one rig that’s capable of capturing immersive content in stunning 8K30 video. The video from each camera is then uploaded to the Google Jump assembler to deliver an engaging experience from every direction.


Starting with Studies

It was no secret last fall that the Google / GoPro VR launch had been delayed, and being under time constraints, I proposed getting the Jump VR camera rig for a day and shooting some studies. I’m a big fan of studies (think Muybridge) and frankly am bewildered by how little the VR community has understood their value and leverage. I was also in a good position for this: the joke was that while everyone else was flying off to shoot VR in Timbuktu, I had already done that and was content to shoot, literally, in Google’s backyard.



Michael with a stereo-panoramic motion picture rig in Timbuktu in 1995.

For this, I enlisted the talents of a couple of other VR OG types, David Lawrence and James McKee. We’ve worked together on and off since the Apple / Lucasfilm Multimedia Lab days c. 1990, and together the three of us represent over 85 collective years of experience working with cutting-edge experimental media. Jim and David also made the “fantastic” early VR radio piece based on “Cyberthon” (that’s another story). Most recently, David produced “Farm,” a stereoscopic art video with San Francisco artist Dale Hoyt, and Jim produced the spatial installation audio for Chinese artist Ai Weiwei’s “@Large” show on Alcatraz.

We took over the “Big Chairs Park” on the Google campus and, using wide blue masking tape, staked off the ground into 12 “one hour” radials with concentric rings from 1 meter out to 5 meters and beyond, and got to work.




David and Jim with the Google / GoPro Jump VR camera rig at Google in 2015.


Our intention was to explore how people are represented in VR.


The 360 degree by 180 degree “equirectangular” video format turns the radial lines into parallel lines. Here’s the left eye view from a stereo pair. (Yep, that’s a 360 degree image of the same scene above!)
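Why do the radials turn into parallel lines? Equirectangular projection maps azimuth, the compass direction around the camera, linearly onto the horizontal axis, so every point along one ground radial (constant azimuth) lands in the same pixel column. Here’s a minimal Python sketch of the mapping; the 8192×4096 frame size and 1.5 meter camera height are illustrative assumptions, not Jump specifications.

```python
import math

def equirect_pixel(x, y, z, width=8192, height=4096):
    """Map a 3D direction (camera at origin, y up) to equirectangular
    pixel coordinates: azimuth -> column, elevation -> row, both linear."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(x, z)        # -pi .. pi around the camera
    elevation = math.asin(y / r)      # -pi/2 .. pi/2 up/down
    u = (azimuth / (2 * math.pi) + 0.5) * width
    v = (0.5 - elevation / math.pi) * height
    return u, v

# Sample one taped "1 o'clock" ground radial (azimuth fixed at 30 degrees,
# camera 1.5 m above the ground): every distance maps to the SAME column,
# so the radial becomes a vertical line -- and 12 radials become 12
# parallel lines in the equirectangular frame.
for d in (1, 2, 3, 4, 5):
    az = math.radians(30)
    u, v = equirect_pixel(d * math.sin(az), -1.5, d * math.cos(az))
    print(f"{d} m out: column {u:.0f}, row {v:.0f}")
```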

Our primary goal was to explore how people are represented in VR and to produce some modest, solid studies that would be immediately useful to students and folks getting started in VR. Something both practical and provocative. And our message to you is: Surprise us!


Study #1: Close-Up Tests

As mentioned, getting the different viewpoints needed for each eye from camera-based material is problematic in the nearfield, precisely where those viewpoints differ most. For farfield imagery like landscapes, it hardly matters, since both eyes see essentially the same viewpoint.
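To put numbers on this, consider the vergence angle, the angle between the two eyes’ lines of sight to the same point, which falls off roughly inversely with distance. A back-of-envelope Python sketch, assuming a typical 64 mm interpupillary distance:

```python
import math

IPD = 0.064  # interpupillary distance in meters (a typical assumed value)

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight to a point straight ahead."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for d in (0.5, 1, 2, 5, 20, 100):
    print(f"{d:>5} m -> {vergence_deg(d):5.2f} degrees")
# ~7.3 degrees at 0.5 m but only ~0.7 degrees at 5 m and ~0.04 at 100 m:
# past a few meters both eyes see essentially the same view, which is why
# nearfield stereo is where camera-based VR gets hard.
```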

Capturing these different nearfield viewpoints requires special panoramic cameras, which fall into three categories: 1) stereo-panoramic camera rigs with paired cameras, whose output is instantly viewable but with potentially gnarly seams between the stereo camera pairs; 2) unpaired stereo-panoramic camera rigs, which require computation to produce stereo pairs for viewing; and 3) panoramic camera rigs with additional magic such as laser range-finding (LIDAR), handiwork (such as 2D to 3D conversion), or clever computation (much yet to be invented).

Please don’t get me started on the state of VR cameras today (rant alert!). The Hollywood Reporter recently ran a story entitled “Virtual Reality Stitching Can Cost $10,000 Per Finished Minute.” This is mainly because folks building camera rigs have failed to do their homework. (Ask them what a nodal point is.) “Light fields” is currently a hot term but, like “holograms,” even the experts are using it more loosely than its technical definition.

The Google / GoPro Jump VR camera is an unpaired camera rig consisting of 16 GoPro cameras equally spaced around the “equator.” Because the cameras are unpaired, the wizards at Google have developed a cloud-based stitching algorithm that automatically converts the footage into stereo pairs for stereo-panoramic viewing. At the time of our studies, they claimed to be able to properly stitch imagery as close as 1 meter from the camera. We put it to the test.

If you look closely, the 1 meter shot is pretty good. Actually, we were pleasantly surprised at how good the 0.5 meter shot looked, with only minor noticeable artifacts.


Study #2: Recognizability

We had a practical agenda here: if we plan to shoot VR in the real world with real people, we may need film permits from local authorities, and we’d like to be able to confidently tell them how much space we need to “rent” by knowing where faces become unrecognizable. Remember, there is no zooming in VR headsets; you can only move or dolly the camera rig forward, so this should be a fairly simple number to determine.

The number, it turns out, is 5. :) About 5 meters radius from the camera rig. See for yourself.

Of course, this number depends not only on the resolution of the camera rig but also on the storage and viewer resolutions. These numbers will all change, but gradually and predictably.
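The 5 meter figure also lines up with simple angular arithmetic. In the sketch below, the 8K frame width, the 16 cm face width, and the assumption that recognition needs a few dozen pixels across the face are all illustrative, not measured values:

```python
import math

FRAME_WIDTH_PX = 8192   # assumed 8K-class equirectangular width
FACE_WIDTH_M = 0.16     # rough adult face width

def face_pixels(distance_m):
    """Approximate pixels spanned by a face at a given distance from the rig."""
    angle_deg = math.degrees(2 * math.atan((FACE_WIDTH_M / 2) / distance_m))
    return angle_deg * (FRAME_WIDTH_PX / 360.0)

for d in (1, 2, 5, 10):
    print(f"{d:>2} m -> ~{face_pixels(d):3.0f} px across the face")
# ~208 px at 1 m but only ~42 px at 5 m -- and less again after compression
# and headset optics, which is about where recognizability falls apart.
```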


Study #3: Camera Height and Eyeline

It’s long been known that imagery of people is greatly influenced by the relationship between the height of the subject and the height of the camera, often referred to as “eyeline.” When the camera is below the eyeline, the subject looks “privileged,” and when the camera is above the eyeline, the viewer feels privileged. We were surprised to learn how much this is amplified in VR, and we found a very specific reason why.

The reason why is called orthoscopy (tech alert, hang with us!). An image is orthoscopically correct when it appears at the same scale and direction as it was captured. Turns out this is always true with VR but rarely true with everyday images. (The punchline to a Picasso anecdote, after a critic shows the artist a small photo of his girlfriend, is “she’s beautiful but she’s so tiny!”) In VR, when the viewer pans left 90 degrees, the image updates left 90 degrees as well, also part of being orthoscopically correct.

The amplification occurs because viewing VR images requires the viewer to physically pan and tilt their head accordingly, to be “embodied,” which is not the case in screen-based cinema. Theater audiences viewing a close-up in the center of a movie screen simply look at the center of the movie screen, regardless of the eyeline from which the subject was shot. Because of this, we suspect eyeline and camera height are much more critical in VR than in screen-based media.
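In code terms, orthoscopic viewing means the renderer ties image longitude to head yaw one-to-one, with no scaling in between. A toy sketch (the frame width is an illustrative assumption):

```python
def center_column(head_yaw_deg, width_px=8192):
    """Orthoscopic mapping: the equirectangular column at the center of
    gaze moves 1:1 with head yaw -- same scale and direction as capture."""
    return int(((head_yaw_deg / 360.0) % 1.0) * width_px)

for yaw in (0, 45, 90, 180):
    print(f"head yaw {yaw:>3} deg -> image column {center_column(yaw)}")
# Pan the head 90 degrees and the image pans exactly 90 degrees. A movie
# screen has no such coupling: the close-up stays put in the middle of the
# frame no matter where the audience looks, so eyeline errors read softer.
```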

Study #4: First Person / Third Person Solo Speaking


While television journalists and anchorpeople, onstage narrators and comedians, and many video games “speak to you” from a first person point of view, practically all narrative cinema is intentionally shot from a third person POV, with talent directed not to look at the camera (which, in turn, serves as a “fly on the wall”). First person POV is so rare in cinema that there’s a Wikipedia page dedicated to “Films shot from the first-person perspective” (it currently lists 33). And there’s a famous shot early in “Apocalypse Now” where director Francis Coppola, cameoing as a television news director filming beach combat, screams to the soldiers, “Don’t look at the camera!” Curiously, on-camera interviews fall somewhere in between, as exemplified by filmmaker Errol Morris’s “Interrotron,” an invention to maintain eye contact with interviewees.

So where will POV be with VR? We shot a little test.

It was apparent to us, especially when viewed in VR, that at least when someone appears to be speaking to the camera, they ought to be looking into the camera.

(A note about the “thumbs up / thumbs down” notations: please take these with a grain of salt. Our intention is not to provide answers as much as provocations. Remember, this was an artist residency.)


Study #5: First Person / Third Person Two-Shot Dialogue

First, please take a look at this sequence. Keep in mind that Zach and Todd are always looking at each other, as exhibited by our VR viewer’s head swinging back and forth.

Here’s what we see going on.

In shot 1, the camera is literally right between Zach and Todd, and they’re looking “through” it. We’ve seen and heard of VR productions with, say, several people sitting around a table in dialogue shot with a VR camera in the middle. While certainly worthy of experimentation, in our case we found this perspective unrealistic and unsatisfying.

Shot 2 is interesting on several levels. For one thing, neither Zach nor Todd is now looking at or through the camera, which has become a third-person fly-on-the-wall. Remember, they’re really looking at each other. And they’re still head-swinging far apart.

This perspective is “almost” unique to VR. In “Lawrence of Arabia,” an early widescreen epic, the epic entry scene of the Omar Sharif character ends with Sharif in dialogue with Peter O’Toole at opposite sides of a very wide screen, at the time a unique and revolutionary composition. And in “How the West Was Won,” shot in 3-camera Cinerama for a giant curved screen, the talent often didn’t appear to be looking at each other at all, just like you see here.

While shot 3 is similar to shot 2, only less so, shot 4 is so much like a conventional “2-shot” that our VR viewer doesn’t even need to move her head anymore.


Study #6: Directed Attention

This little study speaks for itself.

Magicians and illusionists know the trick. Legend has it that Houdini was so brazen that, an instant before promising to transform his lovely assistant into a bag of sand, a planted shill in the back of the theater would scream in surprise, prompting the audience to instinctively turn around. Then, in plain sight, the assistant would jump out of Houdini’s arms and a stagehand would replace her with a bag of sand. Then trumpets (from the front) and voila, magic!

Our little study here is perhaps equally a cheap shot. If there were non-singular action, for example, several different people-of-interest criss-crossing each other (think Altman films), we might not be as “locked in.” This may, however, challenge the VR community’s current obsession with needing to fill the full 360 degree frame all the time.


Study #7: Hyper-Real Compositing

Here’s a looping video where everyone was shot separately and digitally composited together (using Adobe After Effects).

If you think the shadows don’t match, you’d be wrong. Remember this is in 360 equirectangular format. They do in VR (you have to see it and hopefully will some time soon). Everyone was shot within a relatively narrow timeframe and the shadows pretty much look credible, as does the “group” of people. We may call this a “credible” hyperimage.

Here’s something a little less credible.

From a purely technical perspective, it’s credible (except we intentionally slowed it down and remixed the sound for effect). It was, incidentally, shot in about 5 minutes, with our subject walking up and down each “hour” radial progressively. But since she’s the same subject, it’s an unreal, impossible shot. We may call this an “incredible” hyperimage.

And finally, here’s something combining several aforementioned elements (again, meant to be viewed in VR).

Some notes:

- This is a credible hyperimage, with everyone artificially added via digital compositing.

- Jim, the sound guy, has disappeared, digitally composited out of the scene.

- All subjects were mic’ed separately and a pro-level spatial sound mix was made in post-production.

- The subjects, all shot separately, appear synchronized in both words and action.

Well now, imagine what you could do with all THAT!



Acknowledgements

We’d like to thank our project proposal’s content advisors: Tressa Berman, Author, Anthropologist; William H. Durham, Professor, Department of Anthropology, Stanford University; Judith Fitzpatrick, Consultant, Anthropologist; David Evan Harris, Founder, Global Lives Project; Kevin Kelly, Author and Senior Maverick, Wired Magazine; and Anna Lomax Wood, Director, Global Jukebox Project.

We’d also like to thank our proposal’s production advisors: James Cha and Romalyn Schmalz of North Beach Bauhaus; and Roman Coppola, Susie Wrenn, and Michael Zakin of American Zoetrope / The Director’s Bureau.

And we’d like to warmly thank our friends in the communities of YouTube, Google Research, and Google VR.



[Ed. note: And we here at Creative COW thank Michael for his kind permission to repost this from Medium.]






Michael Naimark is an artist, inventor, scholar, and coach in emergent media and immersive experiences. He was on the original design team for the MIT Media Laboratory in 1980 and was a founding member of the Atari Research Lab (1982), the Apple Multimedia Lab (1987), and Lucasfilm Interactive (now LucasArts, 1989). In 2015, Michael was Google VR's first resident artist.

Along the way, Michael has directed projects with support from Apple, Disney, Atari, Panavision, Lucasfilm, Interval, and Google; and from National Geographic, UNESCO, the Rockefeller Foundation, the Exploratorium, the Banff Centre, Ars Electronica, the ZKM, and the Paris Metro. He occasionally serves as faculty at USC Cinema's Interactive Media Division, NYU's Interactive Telecommunications Program, and the MIT Media Lab.

For his complete biography and key selections from his media arts and research bibliography, visit naimark.net.

Comments

Re: VR Cinematography: Exploring How To Represent People in VR
by Daniel McClintock
Very good and informative article. Thanks to everyone involved for giving us more insight as to how to do this.

One hypothesis I wonder about is introducing a "moving crew." I think this would work in a controlled situation such as an indoor or studio location. It would take some blocking and testing, but I think it could work.

You have two people next to each other having a conversation in a room. The crew is exactly opposite from them. One person decides to leave and goes to the opposite side of the room where the door is. The crew counters the move and shifts so that they are between, but offset from the center point. (If you were to look down from the ceiling, the two actors and the crew would form a triangle.)

Once the scene has been shot, the camera is turned back on, and the actors and crew leave. You shoot a few minutes of video with no one in the room. This shot now acts as a "master plate" when you go off to your compositing program. The camera has not moved so the physical location should match exactly the footage with the actors and crew.

Then just mask away the crew in post production.

I haven't tried this, but I think it could work. Outside locations could prove more challenging because of wind and sunlight.

Thoughts?

--------------------

"Sometimes Life Needs a Cmd-Z!"
@Daniel McClintock
by David Lawrence
Hi Daniel,

What you describe is basically how we produced the "credible" hyperimage video. Jim interviewed people at various clock points around the circle, always standing on the opposite side of the camera. You can see this in Study #4: First Person / Third Person Solo Speaking. Notice how Jim's position moves across the equirectangular frame, i.e. around the circle. We shot an empty background plate and composited the interview subjects on top of it in After Effects. It's a pretty straightforward process for the most part. We only had a crew of 3 and were in a large open space, so it was easy to hide from the camera when necessary. A bigger crew in a smaller space might be trickier, but I think with proper planning, what you propose would work.

_______________________
David Lawrence
art~media~design~research

linkedIn: http://lnkd.in/Cfz92F
vimeo: vimeo.com/album/2271696
web: propaganda.com
facebook: /dlawrence
twitter: @dhl
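
[Ed. note: for readers who want to experiment with the clean-plate workflow Daniel proposes and David confirms, here is a minimal Python/OpenCV sketch of the idea. The file names and threshold are illustrative, and real shots need per-shot tuning for lighting drift, sensor noise, and soft shadows.]

```python
import cv2
import numpy as np

# Locked-off camera: difference the action frame against an empty "master
# plate" and keep only what changed (the actors), or invert the logic to
# erase the crew by filling their region from the plate.
plate = cv2.imread("empty_plate.png").astype(np.int16)
action = cv2.imread("action_frame.png").astype(np.int16)

diff = np.abs(action - plate).sum(axis=2)        # per-pixel change
mask = (diff > 40).astype(np.uint8) * 255        # crude threshold matte
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,    # clean up speckle
                        np.ones((5, 5), np.uint8))

# Composite: changed pixels from the action take, everything else from
# the clean plate -- the crew-free background wins wherever nothing moved.
comp = np.where(mask[..., None] > 0, action, plate).astype(np.uint8)
cv2.imwrite("composite.png", comp)
```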
Re: VR Cinematography: Exploring How To Represent People in VR
by Todd Munro
Very interesting article; the spatial audio does add that much more to the image and in many cases is just as important as it is in cinema. Thanks for sharing your insights.

I managed to get a 9K 360 image for a music video last year, but it is hard to get an 8K+ video to play over the internet. YouTube's compression really doesn't do justice to what these rigs can output.
@Todd Munro
by David Lawrence
I feel your pain ;)

The Odyssey camera creates footage at a resolution of 8Kx8K over/under stereoscopic. The frame is so big, my top-of-the-line, maxed out MacBook Pro just says "nope!" We want to build a monster PC workstation to post this material and I got into some great conversations here on the COW about how to spec the hardware.

Even if you get material shot, posted and delivered, I think there's a bottleneck even bigger than playback - screen resolution.

Think about it - Odyssey delivers stereoscopic footage that's 8K per eye. That means to properly view it, we need a 16K display! It sounds crazy but there are already a couple Android phones with 4K displays and I'm sure more will be arriving soon. A 16K phone display may be far fetched but in 5 years, who knows?

All I can say is it can't happen too soon. Probably my biggest complaint about Cardboard is the crummy, soft resolution of the picture (with my phone at least) and the ever present "screen door" effect. It really breaks the immersiveness of the experience for me. I'm glad that whenever that day eventually arrives, the footage we shoot with Odyssey today will be ready for it.
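
[Ed. note: to put rough numbers on David's point, here's a back-of-envelope sketch. The 90 degree field of view and the ~60 pixels-per-degree acuity figure are ballpark assumptions.]

```python
CAPTURE_PX_PER_EYE = 8192   # 8K-class capture spread over 360 degrees
HEADSET_FOV_DEG = 90        # ballpark headset field of view

px_per_degree = CAPTURE_PX_PER_EYE / 360          # ~22.8 px/degree captured
px_across_fov = px_per_degree * HEADSET_FOV_DEG   # ~2048 px fill one view

print(f"capture: {px_per_degree:.1f} px/degree, "
      f"~{px_across_fov:.0f} px per {HEADSET_FOV_DEG}-degree view")

# A 1080p phone split for side-by-side stereo gives each eye well under
# 1000 px for that same view, so today's footage already out-resolves the
# screen (hence the screen door). Matching ~60 px/degree visual acuity
# over a full 360 degrees would take 360 * 60 = 21600 px across --
# which is why a 16K-class display "can't happen too soon."
```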


