
VR Cinematography: Exploring How To Represent People in VR




with David Lawrence and James McKee

Last July, shortly after Google announced its Jump VR camera collaboration with GoPro, Google’s head of VR Clay Bavor called me in to meet. The result was a short-term, project-based artist residency, their first (using “early early” cameras and algorithms with artifacts that aren’t representative of the current Jump).

I’m of the artist-as-bridgebuilder school, and I found lots of potential symbioses with old and new friends inside Google VR, around Google, and externally. Consequently, I proposed as a project a “community-based ethnographic VR experiment”: thinking globally, but starting with a fast, lean, local prototype.


Areas of Exploration

Several timely and relevant areas of exploration emerged.

1. Close-up VR imagery from camera-originated material is awesome but tricky.

Headset-based VR is uniquely suited for experiences in the nearfield, “intimate zone,” where image and sound seem within or near arm’s reach. Screen-based immersion such as 3D movies is not very good in this zone, and Hollywood long ago learned to keep most of the action behind the screen (except for “gotchas” like blood, bats, and broomsticks). For VR made from computer models, such as most games, getting the needed different viewpoints for each eye is trivial; but when the material comes from cameras, getting these different viewpoints is much more challenging.
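How different are the two eyes’ viewpoints, really? A back-of-the-envelope sketch makes the nearfield/farfield divide concrete (the ~6.4 cm interpupillary distance is a typical human value we’re supplying for illustration, not a number from our shoot):

```python
import math

def vergence_deg(distance_m, ipd_m=0.064):
    """Angle between the two eyes' lines of sight to a point straight
    ahead at the given distance (assumes a typical ~6.4 cm
    interpupillary distance)."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

for d in [0.5, 1, 2, 5, 20, 100]:
    print(f"{d:>5} m -> {vergence_deg(d):5.2f} deg between eye viewpoints")
```

At half a meter the two viewpoints diverge by over 7 degrees; at 100 meters the difference is a few hundredths of a degree, which is why farfield landscapes barely care how you capture them.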

2. High quality spatial sound is as important as image.

Shooting with panoramic camera rigs often defaults to recording with panoramic microphone rigs, usually an omni-directional thingy on top of the camera, resulting in compromised sound. Using human sound recordists with shotgun mics or booms, or mic’ing every individual subject in view, is far superior but problematic. And if you do, how do you hide the recordists and gear?

3. Filling (and unfilling) the panoramic sphere has novel challenges.

Filling the full 360 degree view with interesting material, and unfilling it of uninteresting material, poses its own unique challenges and opportunities. Early on, many camera-based VR filmmakers felt compelled to digitally fill in the “nadir hole,” the region at the bottom of the panoramic sphere where the camera rig either couldn’t see or, if it did, saw the tripod.
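For the curious, here’s a minimal sketch of the naive fix, assuming the frame arrives as an equirectangular numpy array; real productions use smarter inpainting (or just park a logo down there), so treat this as illustration, not recipe:

```python
import numpy as np

def fill_nadir(equi, hole_deg=30):
    """Naively patch the nadir hole of an equirectangular frame
    (H x W x 3): repeat the last clean row of ground pixels down over
    the region the rig couldn't see. Rows run from +90 deg latitude
    (top) to -90 deg (bottom)."""
    h = equi.shape[0]
    # First row inside the hole: latitudes below -(90 - hole_deg).
    start = int(h * (180 - hole_deg) / 180)
    patched = equi.copy()
    patched[start:] = patched[start - 1]   # smear the last clean row down
    return patched

frame = np.random.randint(0, 256, (1024, 2048, 3), dtype=np.uint8)
patched = fill_nadir(frame, hole_deg=25)
```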

4. The “hyperimage,” a Holy Grail of interactive media, is well-suited for VR.

So if we’re good at digitally filling the nadir hole with nearby ground or floor imagery, how far can we go? The “hyperimage,” an artificially overpopulated scene where “more” is “happening,” has been a Holy Grail of interactive media from the beginning. Each element can serve as an interactive link, and the more links, the richer the experience. Think interactive Bruegel.
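A hyperimage hotspot can be as simple as an azimuth/elevation region of the sphere mapped to a link. A toy sketch (the regions, names, and link targets here are all invented for illustration):

```python
# Hypothetical hotspot table for a hyperimage: each interactive
# element claims an azimuth/elevation rectangle on the sphere.
HOTSPOTS = [
    {"name": "dancer", "az": (80, 100), "el": (-30, 10), "link": "clip_dancer"},
    {"name": "vendor", "az": (200, 230), "el": (-20, 15), "link": "clip_vendor"},
]

def hit_test(gaze_az_deg, gaze_el_deg):
    """Return the link under the viewer's gaze direction, if any."""
    for h in HOTSPOTS:
        if (h["az"][0] <= gaze_az_deg <= h["az"][1]
                and h["el"][0] <= gaze_el_deg <= h["el"][1]):
            return h["link"]
    return None

print(hit_test(90, -5))   # -> clip_dancer
print(hit_test(0, 0))     # -> None
```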

5. Metadata-based interactivity, another Holy Grail, may also be well-suited for VR.

A related Holy Grail is “directed interactivity,” where individual scenes or clips or other media are all parsed and tagged with metadata to allow interconnection with some sort of narrative or direction, more compelling than a random walk. This grail, which includes “interactive movies” and “database art,” has its roots in the grand databases developed over many decades in anthropology, such as George Murdock’s Ethnographic Atlas and most notably, Alan Lomax’s Global Jukebox project. (Alan was a mentor and personal friend.)
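To make “directed interactivity” concrete, here’s a toy sketch; the clips, tags, and scoring rule are invented for illustration, not drawn from the Ethnographic Atlas or the Global Jukebox themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """A media clip parsed and tagged with metadata (invented fields,
    in the cross-indexed spirit of the anthropology databases above)."""
    title: str
    tags: set = field(default_factory=set)

def next_clip(current, library, theme):
    """Directed interactivity, minimally: prefer clips that share the
    chosen theme AND something with the current clip, rather than
    taking a random walk."""
    candidates = [c for c in library if theme in c.tags and c is not current]
    candidates.sort(key=lambda c: len(c.tags & current.tags), reverse=True)
    return candidates[0] if candidates else None

library = [
    Clip("market dance", {"dance", "market", "mali"}),
    Clip("river song", {"song", "river", "mali"}),
    Clip("wedding song", {"song", "wedding", "ghana"}),
]
print(next_clip(library[0], library, theme="mali").title)  # -> river song
```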

6. Community buy-in is essential.

Finally, as VR cameras begin to proliferate, the vision of a “One Earth Model” begins to emerge, spanning from entertainment and gaming to tourism and travel to ecology and activism. For this (attitude alert!), community buy-in is essential: production as a collaboration, with control shared between producers and subjects, at its best in the spirit of cinéma vérité pioneers like Jean Rouch and Richard Leacock. (Ricky was also a mentor and personal friend.) Without community buy-in, in the end, the loss will be ours.




GoPro
Optimized for Jump, the GoPro Odyssey packs 16 synchronized HERO4 Black cameras into an all-in-one rig that’s capable of capturing immersive content in stunning 8K30 video. The video from each camera is then uploaded to the Google Jump assembler to deliver an engaging experience from every direction.


Starting with Studies

It was no secret last fall that the Google / GoPro VR launch had been delayed, and, being under time constraints, I proposed getting the Jump VR camera rig for a day and shooting some studies. I’m a big fan of studies (think Muybridge) and frankly am bewildered at how little the VR community has understood their value and leverage. I was also in a good position for this: the joke was that while everyone else was flying off to shoot VR in Timbuktu, I had already done that and was happy to shoot, literally, in Google’s backyard.



Michael with a stereo-panoramic motion picture rig in Timbuktu in 1995.

For this, I enlisted the talents of a couple of other VR OG types, David Lawrence and James McKee. We’ve worked together on and off since the Apple / Lucasfilm Multimedia Lab days c. 1990, and together the three of us represent over 85 collective years of experience working with cutting-edge experimental media. Jim and David also made the “fantastic” early VR radio piece based on “Cyberthon” (that’s another story). Most recently, David produced “Farm,” a stereoscopic art video with San Francisco artist Dale Hoyt, and Jim produced the spatial installation audio for Chinese artist Ai Weiwei’s “@Large” show on Alcatraz.

We took over the “Big Chairs Park” on the Google campus and, using wide blue masking tape, staked off the ground into 12 “one hour” radials with concentric rings at 1 meter out to 5 meters and beyond and got to work.




David and Jim with the Google / GoPro Jump VR camera rig at Google in 2015.


Our intention was to explore how people are represented in VR.


The 360 degree by 180 degree “equirectangular” video format turns the radial lines into parallel lines. Here’s the left eye view from a stereo pair. (Yep, that’s a 360 degree image of the same scene above!)
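Why the tape radials straighten out: every point along one radial shares the same azimuth, and in equirectangular projection azimuth maps directly to horizontal pixel position. A small sketch (the camera height and frame size are assumed values for illustration):

```python
import math

def ground_to_equirect(r_m, azimuth_deg, cam_height_m=1.5,
                       width_px=4096, height_px=2048):
    """Project a floor point, r_m meters out along a given azimuth,
    into equirectangular pixel coordinates. Azimuth maps straight to
    the x axis, so every point on one tape radial lands in the same
    pixel column: radial lines become vertical (parallel) lines."""
    lon = math.radians(azimuth_deg)        # azimuth -> longitude
    lat = -math.atan2(cam_height_m, r_m)   # the floor sits below the horizon
    x = lon / (2 * math.pi) * width_px
    y = (0.5 - lat / math.pi) * height_px
    return x, y

for r in [1, 2, 3, 4, 5]:
    print(ground_to_equirect(r, azimuth_deg=90))   # x constant, y varies
```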

Our primary goal was to explore how people are represented in VR and to produce some modest, solid studies that would be immediately useful to students and folks getting started in VR. Something both practical and provocative. And our message to you is: Surprise us!


Study #1: Close-Up Tests

As mentioned, getting the different viewpoints each eye needs is most problematic in the nearfield, where those viewpoints differ the most. For farfield imagery like landscapes, it hardly matters, since both eyes see essentially the same viewpoint.

Capturing these different nearfield viewpoints requires special panoramic cameras, which fall into three categories: 1) stereo-panoramic camera rigs with paired cameras, which are instantly viewable but have potentially gnarly stitch lines between the stereo camera pairs; 2) unpaired stereo-panoramic camera rigs, which require computation to produce stereo pairs for viewing; and 3) panoramic camera rigs with additional magic such as laser range-finding (LIDAR), handiwork (such as 2D-to-3D conversion), or clever computation (much yet to be invented).

Please don’t get me started on the state of VR cameras today (rant alert!). The Hollywood Reporter recently ran a story titled “Virtual Reality Stitching Can Cost $10,000 Per Finished Minute.” This is mainly because folks building camera rigs have failed to do their homework. (Ask them what a nodal point is.) “Light Fields” is currently hot but, like “holograms,” even the experts are using the term more loosely than its technical definition allows.

The Google / GoPro Jump VR camera is an unpaired camera rig consisting of 16 GoPro cameras equally spaced around the “equator.” Because the cameras are unpaired, the wizards at Google have developed a cloud-based stitching algorithm that automatically converts the footage into stereo pairs for stereo-panoramic viewing. At the time of our studies, they claimed to be able to properly stitch imagery as close as 1 meter from the camera. We put it to the test.
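What makes close subjects hard to stitch is the parallax between neighboring cameras on the ring. A rough sketch (the ring radius below is an assumed value for illustration, not an Odyssey spec):

```python
import math

def adjacent_parallax_deg(distance_m, ring_radius_m=0.14, n_cams=16):
    """Angular parallax of a point as seen from two neighboring
    cameras on an n-camera ring (ring radius is an assumed value).
    The bigger the parallax, the harder the stitch."""
    baseline = 2 * ring_radius_m * math.sin(math.pi / n_cams)
    return math.degrees(2 * math.atan(baseline / (2 * distance_m)))

for d in [0.5, 1.0, 2.0, 5.0]:
    print(f"{d} m -> {adjacent_parallax_deg(d):.2f} deg of parallax to reconcile")
```

Under these assumptions the parallax roughly doubles from 1 meter to 0.5 meters, which is consistent with 1 meter being the claimed comfortable limit.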

If you look closely, the 1 meter shot is pretty good. Actually, we were pleasantly surprised at how good the 0.5 meter shot looked, with only minor noticeable artifacts.


Study #2: Recognizability

We had a practical agenda here: if we plan to shoot VR in the real world with real people, we may need film permits from local authorities, and we’d like to be able to confidently tell them how much space we need to “rent” by knowing where faces become unrecognizable. Remember, there is no zooming in VR headsets; you can only move or dolly the camera rig forward. So this should be a fairly simple number to determine.

The number, it turns out, is 5. :) About 5 meters radius from the camera rig. See for yourself.

Of course, this number depends not only on the resolution of the camera rig, but also on the storage resolution and the viewer resolution. These numbers will all change, but gradually and predictably.
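The arithmetic behind that 5 meter figure is simple enough to sketch, assuming an ~8K-wide equirectangular frame and a ~16 cm-wide face (our illustrative numbers, not measurements from the shoot):

```python
import math

def face_pixels(distance_m, face_width_m=0.16, equi_width_px=8192):
    """Roughly how many horizontal pixels a face occupies in an
    equirectangular frame: the face subtends about face_width/distance
    radians out of the full 2*pi panorama."""
    angle = 2 * math.atan(face_width_m / (2 * distance_m))
    return angle / (2 * math.pi) * equi_width_px

for d in [1, 2, 5, 10]:
    print(f"{d:>2} m -> {face_pixels(d):6.1f} px across the face")
```

Roughly 40 pixels across a face at 5 meters; whether that’s recognizable also depends on the headset’s display, per the caveat above.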


Study #3: Camera Height and Eyeline

It’s long been known that imagery of people is greatly influenced by the relationship between the height of the subject and the height of the camera, often referred to as “eyeline.” When the camera is below the eyeline, the subject looks “privileged,” and when the camera is above the eyeline, the viewer feels privileged. We were surprised to learn how much this is amplified in VR, and found a very specific reason why.

The reason why is called orthoscopy (tech alert, hang with us!). An image is orthoscopically correct when it appears at the same scale and direction as it was captured. Turns out this is always true with VR but rarely true with everyday images. (The punchline to a Picasso anecdote, after a critic shows the artist a small photo of his girlfriend, is “she’s beautiful but she’s so tiny!”) In VR, when the viewer pans left 90 degrees, the image updates left 90 degrees as well, also part of being orthoscopically correct.

The amplification occurs because viewing VR images requires the viewer to physically pan and tilt their head accordingly, to be “embodied,” which is not the case in screen-based cinema. Theater audiences viewing a close-up in the center of a movie screen simply look at the center of the movie screen, regardless of the eyeline from which it was shot. Because of this, we suspect eyeline and camera height are much more critical in VR than in screen-based media.

Study #4: First Person / Third Person Solo Speaking


While television journalists and anchorpeople, onstage narrators and comedians, and many video games “speak to you” from a first-person point of view, practically all narrative cinema is intentionally shot from a third-person POV, with talent directed not to look at the camera (which, in turn, serves as a “fly on the wall”). First-person POV is so rare in cinema that there’s a Wikipedia page dedicated to “Films shot from the first-person perspective” (it currently lists 33). And there’s a famous shot early in “Apocalypse Now” where director Francis Coppola, cameoing as a television news director filming beach combat, screams to the soldiers, “Don’t look at the camera!” Curiously, on-camera interviews fall somewhere in between, as exemplified by filmmaker Errol Morris’s “Interrotron” invention to maintain eye contact with interviewees.

So where will POV be with VR? We shot a little test.

It was apparent to us, especially when viewed in VR, that at least when someone appears to be speaking to the camera, they ought to be looking into the camera.

(A note about the “thumbs up / thumbs down” notations: please take these with a grain of salt. Our intention is not to provide answers as much as provocations. Remember, this was an artist residency.)


Study #5: First Person / Third Person Two-Shot Dialogue

First, please take a look at this sequence. Keep in mind that Zach and Todd are always looking at each other, as exhibited by our VR viewer’s head swinging back and forth.

Here’s what we see is going on.

In shot 1, the camera is literally right between Zach and Todd, and they’re looking “through” it. We’ve seen and heard of VR productions with, say, several people sitting around a table in dialogue shot with a VR camera in the middle. While certainly worthy of experimentation, in our case we found this perspective unrealistic and unsatisfying.

Shot 2 is interesting on several levels. For one thing, neither Zach nor Todd is now looking at or through the camera, which has become a third-person fly on the wall. Remember, they’re really looking at each other. And they’re still head-swinging far apart.

This perspective is “almost” unique to VR. In “Lawrence of Arabia,” an early widescreen epic, the epic entry scene of the Omar Sharif character ends with Sharif in dialogue with Peter O’Toole at opposite sides of a very wide screen, at the time a unique and revolutionary composition. And in “How the West Was Won,” shot in 3-camera Cinerama for a giant curved screen, the talent often didn’t appear to be looking at each other at all, just like you see here.

Shot 3 is similar to shot 2, only less so, while shot 4 is so much like a conventional “2-shot” that our VR viewer doesn’t even need to move her head anymore.


Study #6: Directed Attention

This little study speaks for itself.

Magicians and illusionists know the trick. Legend has it that Houdini was so brazen that, an instant after he promised to transform his lovely assistant into a bag of sand, a planted shill in the back of the theater would scream in surprise, prompting the audience to instinctively turn around. Then, in plain sight, the assistant would jump out of Houdini’s arms and a stagehand would replace her with a bag of sand. Then trumpets (from the front) and voila, magic!

Our little study here is perhaps equally a cheap shot. If there were non-singular action, for example several different people-of-interest criss-crossing one another (think Altman films), we might not be as “locked in.” This may, however, challenge the VR community’s current obsession with needing to fill the full 360 degree frame all the time.


Study #7: Hyper-Real Compositing

Here’s a looping video where everyone was shot separately and digitally composited together (using Adobe After Effects).

If you think the shadows don’t match, you’re wrong. Remember, this is in 360 degree equirectangular format; they do match in VR (you have to see it, and hopefully will some time soon). Everyone was shot within a relatively narrow timeframe, and the shadows look pretty much credible, as does the “group” of people. We may call this a “credible” hyperimage.
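In spirit, the compositing works like this sketch: a minimal numpy version of the “over” operation, assuming straight-alpha mattes and a locked-off camera (the plate, subject, and matte below are stand-ins, not our actual footage):

```python
import numpy as np

def composite_over(clean_plate, layers):
    """Layer each separately shot subject onto one clean background
    plate. Each layer is (rgb, matte): rgb is H x W x 3, matte is
    H x W in [0, 1]. Because the camera never moved, all plates
    align pixel for pixel."""
    out = clean_plate.astype(float)
    for rgb, matte in layers:
        a = matte[..., None]               # broadcast matte over RGB
        out = a * rgb.astype(float) + (1 - a) * out
    return np.clip(out, 0, 255).astype(np.uint8)

# Stand-in data: a dark plate and one gray "subject" with a box matte.
h, w = 512, 1024
plate = np.zeros((h, w, 3), np.uint8)
subject = np.full((h, w, 3), 200, np.uint8)
matte = np.zeros((h, w)); matte[200:300, 400:500] = 1.0
frame = composite_over(plate, [(subject, matte)])
```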

Here’s something a little less credible.

From a purely technical perspective, it’s credible (except that we intentionally slowed it down and remixed the sound for effect). It was, incidentally, shot in about 5 minutes, with our subject walking up and down each “hour” radial progressively. But since she’s the same subject, it’s an unreal, impossible shot. We may call this an “incredible” hyperimage.

And finally, here’s something combining several aforementioned elements (again, meant to be viewed in VR).

Some notes:

- This is a credible hyperimage, with everyone artificially added via digital compositing.

- Jim, the sound guy, has disappeared, digitally composited out of the scene.

- All subjects were mic’ed separately and a pro-level spatial sound mix was made in post-production (see the sound-panning sketch after this list).

- The subjects, all shot separately, appear synchronized in both words and action.
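One way separately mic’ed tracks can be panned into a spatial mix is first-order ambisonics, the AmbiX convention used by YouTube’s 360 video spatial audio. A minimal sketch; we’re not claiming this is the exact pipeline used for our mix, and the tone below stands in for a real voice track:

```python
import numpy as np

def encode_ambix(mono, azimuth_deg, elevation_deg=0.0):
    """Pan a mono track into first-order ambisonics, AmbiX style
    (ACN channel order W, Y, Z, X; SN3D normalization). Azimuth is
    measured counterclockwise from straight ahead, so a subject to
    the viewer's right sits at a negative azimuth."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                                # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)      # left-right
    z = mono * np.sin(el)                   # up-down
    x = mono * np.cos(az) * np.cos(el)      # front-back
    return np.stack([w, y, z, x])           # 4 x num_samples

# Stand-in "voice": a 220 Hz tone, panned to the 3 o'clock radial.
sr = 48000
tone = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
bformat = encode_ambix(tone, azimuth_deg=-90)
```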

Well now, imagine what you could do with all THAT!



Acknowledgements

We’d like to thank our project proposal’s content advisors: Tressa Berman, Author, Anthropologist; William H. Durham, Professor, Department of Anthropology, Stanford University; Judith Fitzpatrick, Consultant, Anthropologist; David Evan Harris, Founder, Global Lives Project; Kevin Kelly, Author and Senior Maverick, Wired Magazine; and Anna Lomax Wood, Director, Global Jukebox Project.

We’d also like to thank our proposal’s production advisors: James Cha and Romalyn Schmalz of North Beach Bauhaus; and Roman Coppola, Susie Wrenn, and Michael Zakin of American Zoetrope / The Director’s Bureau.

And we’d like to warmly thank our friends in the communities of YouTube, Google Research, and Google VR.



[Ed. note: And we here at Creative COW thank Michael for his kind permission to repost this from Medium.]






Michael Naimark is an artist, inventor, scholar, and coach in emergent media and immersive experiences. He was on the original design team for the MIT Media Laboratory in 1980 and was a founding member of the Atari Research Lab (1982), the Apple Multimedia Lab (1987), and Lucasfilm Interactive (now LucasArts, 1989). In 2015, Michael was Google VR's first resident artist.

Along the way, Michael has directed projects with support from Apple, Disney, Atari, Panavision, Lucasfilm, Interval, and Google; and from National Geographic, UNESCO, the Rockefeller Foundation, the Exploratorium, the Banff Centre, Ars Electronica, the ZKM, and the Paris Metro. He occasionally serves as faculty at USC Cinema's Interactive Media Division, NYU's Interactive Telecommunications Program, and the MIT Media Lab.

For his complete biography and key selections from his media arts and research bibliography, visit naimark.net.

Comments

Re: VR Cinematography: Exploring How To Represent People in VR
by Daniel McClintock
Very good and informative article. Thanks to everyone involved for giving us more insight as to how to do this.

A hypothesis I wonder about is introducing a "moving crew." This, I think, would work in a controlled situation such as an indoor or studio location. It would take some blocking and testing, but I think it could work.

You have two people next to each other having a conversation in a room. The crew is exactly opposite from them. One person decides to leave and goes to the opposite side of the room where the door is. The crew counters the move and shifts so that they are between, but offset from the center point. (If you were to look down from the ceiling, the two actors and the crew would form a triangle.)

Once the scene has been shot, the camera is turned back on, and the actors and crew leave. You shoot a few minutes of video with no one in the room. This shot now acts as a "master plate" when you go off to your compositing program. The camera has not moved so the physical location should match exactly the footage with the actors and crew.

Then just mask away the crew in post production.

I haven't tried this, but I think it could work. Outside locations could prove more challenging because of wind and sunlight.

Thoughts?

--------------------

"Sometimes Life Needs a Cmd-Z!"
@Daniel McClintock
by David Lawrence
Hi Daniel,

What you describe is basically how we produced the "credible" hyperimage video. Jim interviewed people at various clock points around the circle, always standing on the opposite side of the camera. You can see this in Study #4: First Person/Third Person Solo Speaking. Notice how Jim's position moves across the equirectangular frame, i.e. around the circle. We shot an empty background plate and composited the interview subjects on top of it in After Effects. It's a pretty straightforward process for the most part. We only had a crew of 3 and were in a large open space, so it was easy to hide from the camera when necessary. A bigger crew in a smaller space might be trickier, but I think with proper planning, what you propose would work.

_______________________
David Lawrence
art~media~design~research

linkedIn: http://lnkd.in/Cfz92F
vimeo: vimeo.com/album/2271696
web: propaganda.com
facebook: /dlawrence
twitter: @dhl
Re: VR Cinematography: Exploring How To Represent People in VR
by Todd Munro
Very interesting article; the spatial audio does add that much more to the image and in many cases is just as important as it is in cinema. Thanks for sharing your insights.

I managed to get a 9K 360 image for a music video last year, but it is hard to get an 8K+ video to play over the internet. YouTube's compression really doesn't do justice to what these rigs can output.
@Todd Munro
by David Lawrence
I feel your pain ;)

The Odyssey camera creates footage at a resolution of 8Kx8K over/under stereoscopic. The frame is so big, my top-of-the-line, maxed out MacBook Pro just says "nope!" We want to build a monster PC workstation to post this material and I got into some great conversations here on the COW about how to spec the hardware.

Even if you get material shot, posted and delivered, I think there's a bottleneck even bigger than playback - screen resolution.

Think about it - Odyssey delivers stereoscopic footage that's 8K per eye. That means to properly view it, we need a 16K display! It sounds crazy but there are already a couple Android phones with 4K displays and I'm sure more will be arriving soon. A 16K phone display may be far fetched but in 5 years, who knows?

All I can say is it can't happen too soon. Probably my biggest complaint about Cardboard is the crummy, soft resolution of the picture (with my phone at least) and the ever present "screen door" effect. It really breaks the immersiveness of the experience for me. I'm glad that whenever that day eventually arrives, the footage we shoot with Odyssey today will be ready for it.

_______________________
David Lawrence
art~media~design~research

linkedIn: http://lnkd.in/Cfz92F
vimeo: vimeo.com/album/2271696
web: propaganda.com
facebook: /dlawrence
twitter: @dhl

