
Technology 2014 | Production, Post & Beyond: Part TWO


Steven Poster, ASC, President of the International Cinematographers Guild,
Local 600 of the I.A.T.S.E.
Frankly I'm getting a little tired of saying we're in transition. I think we've done the transition and we're arriving at the place we're going to want to be for a while. We're finding out that software, hardware and computing power have gotten to the point where it's no longer necessary to do the things we've always traditionally done, which is sending out unprocessed images to a lab or post production house and hoping they come back right. We're finding out that's no longer a necessary process. Where that gets done and how that gets done is maybe the last part of this transition.

And as the tools get better, faster and less expensive, we're finding out that this work is being done by crew people, whether it's in production or post production, whether it's on the set or in editorial. And that's pretty exciting. What it allows for is the image intent of the director and director of photography to be preserved in a way that we've never been able to control before. The director of photography can, on the set, interpret what the image should look like before it goes into post production, and that information can now flow through to the digital intermediate and the final translation of the image. No, we're not doing final color on the set. There is still work to do at the end, because you have to match scene to scene. But the exciting thing is that we can create dailies that look right. The tools for calibration are improving.

One of the great things we're going to see is the advent of workflows built around the ACES color management architecture. That's a very important development; it gives us a big enough bucket to put everything in so that every quality of the image can be preserved and used down the line. And it's a standard where we've never had a real standard before. So, we will be able to rely on calibration when it is fully implemented all the way through the system, from beginning to end, in prep, on the set, in the editing room and in post production and in working with VFX, compositing, animation. It's becoming a closed chain and that's the important part. The artistic intent, the communication of the story through the director's concept and through the director of photography's vision is now truly a possibility so that everyone can understand it every step of the way.


Dr. Siegfried Foessel, Head, Moving Picture Technologies Department, Fraunhofer IIS
In 2013, Fraunhofer gave the industry its first glimpse of the new creative opportunities of lightfield technology at the SMPTE Annual Tech Conference, demonstrating how capturing the entire lightfield allows refocusing, virtual view rendering, and even vertigo and dolly-zoom effects to be done entirely in postproduction.

In 2014, we can plan on seeing the industry's first use of specialized lightfield camera arrays for production and the first integrations of lightfield processing with postproduction tools.

Also in 2013, the Interoperable Master Format (IMF) was proposed with new extended capabilities for storing 4K master formats or digital archive formats, essentially making it a future-proof storage standard format for movie productions. The use of the IMF in its extended app#2 flavor for postproduction will become prevalent in movie productions in 2014.


John Galt, CTO, Panavision
In the world of motion picture imaging, 2013 has been a time of technology disruption as great as any we have seen in the last 15 years. The migration from 2K acquisition and display to 4K acquisition and display is as great a technology challenge as the migration from standard definition to high definition television was back in the early 1990s. The unfortunate adoption of the one-dimensional metric 2K, 4K etc. belies the fact that 4K has four times the image data of 2K, not double, as the metric implies.
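Galt's point about the metric can be checked with simple arithmetic. Using the DCI container sizes (2048×1080 and 4096×2160) purely as an illustration:

```python
# Doubling the one-dimensional "K" figure doubles BOTH the width
# and the height, so the total pixel count quadruples.
pixels_2k = 2048 * 1080   # DCI 2K container
pixels_4k = 4096 * 2160   # DCI 4K container

print(pixels_4k / pixels_2k)  # 4.0
```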

A further confusion is that all the "4K" cameras introduced in 2013 use Bayer pattern sensors and employ some form of data compression even as they describe their outputs as "raw". This fancy footwork results in image data per frame of as little as 3MB. This is less than half the data per frame of an uncompressed high definition RGB camera image. However, when the compressed Bayer pattern is de-Bayered and decompressed, the RGB data is now 33MB per frame. This is the data rate that post-production and archivists must now deal with.
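Galt's per-frame figures can be roughly reconstructed. The exact numbers depend on bit depth and compression ratio, so the values below (10-bit samples at DCI 4K) are illustrative assumptions rather than any specific camera's specification:

```python
width, height, bits = 4096, 2160, 10  # assumed: DCI 4K, 10-bit samples

# Bayer sensor: one color sample per photosite
bayer_bytes = width * height * bits / 8
# After de-Bayering to full RGB: three samples per pixel
rgb_bytes = width * height * 3 * bits / 8

print(round(bayer_bytes / 1e6, 1))  # ~11.1 MB uncompressed raw per frame
print(round(rgb_bytes / 1e6, 1))    # ~33.2 MB, close to the 33MB quoted
```

With roughly 3:1 compression applied to the raw Bayer data, the stored frame lands in the 3–4MB range Galt describes, while the decoded RGB frame that post must handle is about ten times larger.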

The area of greatest concern to me, that we at Panavision can do nothing about, is the problem of theatrical projection of 4K images. With most television panel makers phasing out 2K for screen sizes above 55 inches, the big screen TV you buy next year will be 4K whether you want it or not! Contrast this with the four out of five theatrical screens that cannot project 4K. Since the exhibitors have only recently spent the money to convert from film to digital projection, this is unlikely to change for many years to come. We are faced with the specter that for the first time in history, television could exceed the image performance of theatrical projection.

Another problem with the one-dimensional metric 4K being a surrogate for resolution, is that it completely ignores the contribution of optics to the process of creating a high resolution image. Almost all existing lenses for film formats do not obtain their best performance when used on digital cameras. There are a number of reasons for this. First, as any cinematographer knows, nothing thicker than a gelatin filter or a net should ever be placed behind a lens. Digital cameras, however, have various glass and crystalline materials between the lens and the imager that can be as much as 3mm or more compared to the 0.1mm of a gelatin filter. All this material behind a lens, which was originally designed to image in free air, causes chromatic and other imaging errors that were not visible on film, or lower resolution 2K cameras. These problems are more noticeable at smaller f-numbers and shorter focal lengths.

Although the migration from film to 2K digital, and now to 4K digital and beyond creates many imaging challenges, it also creates great opportunities for the lens designer and manufacturer. Freed from the constraints of film emulsions and spinning mirrors, the lens designer can now reinvent the cine lens. These new lenses will combine classical optics with electronics that will enable the cinematographer to explore a new creative landscape in depth, not just the two-dimensional post process of color correction.

The Bayer pattern images, with built-in image enhancement, have also caused many cinematographers to look to optics to differentiate their images. One of Panavision's most successful recent lens offerings, PVintage, is a series of classic lenses, some based on designs more than 50 years old, that we have rehoused and updated mechanically without changing the glass.


At the other extreme, Panavision has recently reintroduced the Primo line of optics, now called the Primo Vs, with aberration correction for digital imaging. Other lens manufacturers will probably do the same as they introduce new lens series, but no existing film optics are optimal when used on digital cameras. The other problem is that existing optics, from all manufacturers, were designed to image on color film emulsions that are at least 35 microns thick. Most 35mm cine lenses are optimized for a spatial frequency of around 20 line pairs/mm, whereas a 4K 35mm-size sensor would need a lens optimized for 40 line pairs/mm. This implies a photo-site of around 6 microns and a height of less than 5 microns. Just as 4K is four times the resolution of 2K, a lens with 80 percent contrast at 40 line pairs/mm has four times the performance of a cine lens at 20 line pairs/mm.
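The "around 6 microns" figure follows from dividing the sensor width by the horizontal photosite count. The Super 35 aperture width of roughly 24.9 mm used below is an assumption for illustration:

```python
aperture_width_mm = 24.9   # assumed Super 35 aperture width
photosites = 4096          # 4K horizontal photosite count

pitch_um = aperture_width_mm / photosites * 1000  # mm -> microns
print(round(pitch_um, 1))  # ~6.1 microns, i.e. "around 6 microns"
```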

Panavision has also recently introduced a new series of optics, the Primo 70 series, that are designed to cover the 70mm format with better than 80 percent contrast at 40 line pairs/mm. These lenses can also compensate for a behind the lens optical thickness of up to 7mm of glass (BK-7).

The future of imaging technology will see a greater integration of the various components that make up an imaging system. Lenses will talk to cameras, and cameras will record not just images but spatial and motion information, captured as metadata to facilitate post-production processes that today require heroic effort and vast quantities of money. Even as we gear up for a 4K future, the Japanese broadcaster NHK plans to introduce 8K television by 2018 and intends the 2020 Olympics to be broadcast nationwide in 8K – which, once again, is not twice the resolution of 4K but four times. The lens and camera designers will be kept very busy for the foreseeable future!


William Feightner, CTO, Colorfront
Colorfront is working on a number of scenarios supporting our industry's rapid move to fully distributed collaborative workflows, tying together pre-production, production, and post. Last week at the CineGrid conference in San Diego we demonstrated high-quality live 4K material rendered on and playing back from Amazon cloud services to a large-screen 4K projector over the standard public internet. Distributed post is a trend because it's perceived as desirable; right now, we're all looking at how to distribute and collaborate with high-quality images. Distributed collaborative virtual workflows are our immediate future.

My personal take on high frame rates is that they're here to stay. There is nothing inherently special about 24 frames. But we have to remember that it's not an either/or scenario. We'll see different rates used for different things. Right now, there's a black-and-white argument over HFR, but it is just another tool to tell the story. There are discussions of variable frame rates within a single project, and even of different layers in a single image being at different temporal updates.

One of the other things driving more temporal updates is high resolution. Higher resolutions demand more temporal update. Standard TV in the U.S. has always had a temporal update of 60 fields a second. So, when it comes to frame rates, no one size fits all. That will be a big theme going forward.

With 3D, we had yet another blip, and then it went away because of poor, improper usage. Dimensionalizing a project as an afterthought rarely has long-term success; in most cases it doesn't work well, because 3D has to be integrated into the project from the start. This is the pattern we've seen throughout the history of 3D, but we keep learning a lot each time. Although the use of 3D has retracted, what's left is some validation of its use as a creative tool. When it's properly applied, it can really look good.

One of the big difficulties in implementing it for live capture is that it's extremely expensive to do properly. New tools are evolving that could potentially lower the cost. To put it simply, you don't necessarily need two high resolution cameras side by side; there are other ways to sense depth data and still have really good looking 3D. As 3D matures more as an aesthetic choice for certain things, we'll see improvements in how it's implemented to lower the cost. A lot of things are being tested that can work well.


With 4K, the train left the station. The driving force for 4K is on the consumer side with larger and larger displays. TV manufacturers certainly tried to stimulate sales by offering 3D but that didn't seem to go far, and the 3D offerings have diminished. But they're pushing hard on 4K much more than they did on 3D. The cost for 4K TVs is not significantly higher than what we've been paying for 2K, and we will go beyond that probably to large video walls that could include all kinds of other functionalities. But, for now, 4K is around the corner. It's not a passing trend.

When you talk about ACES, I think the bigger picture is that standardization in imagery is much needed. We see total chaos out there with all the different cameras and distribution approaches. Our industry cannot afford those inefficiencies, because they cost money and we don't have the budgets to waste. ACES offers a standardized image specification for a container that is large enough to be future-proofed. We already have a pretty good standard for digital cinema projection, but there are extreme variances in home delivery that cause a lot of complaints.

We badly need to keep working toward that point on the home side by adopting new standards. For example, we're mastering for Rec 709, yet very few home sets accurately display Rec 709; most go far beyond it. It's an outdated standard and we need to implement something new. Most importantly, we need a goal post that the TV manufacturers can agree on, especially with larger color spaces and high dynamic range imagery.



Norman Hollyn, Professor, Editing Track Head,
Michael Kahn Endowed Chair, USC Cinematic Arts;
author of The Lean Forward Moment
and The Film Editing Room Handbook (4th Edition)
2013 has been, if the conferences and web reporting are any indication, a holding year. The industry isn't quite ready to give up on 3D, but the energy seems to have left the room on that. 4K may revive it, but we have yet to see strong demand outside elements of the post production community. It certainly doesn't pass the Mom test that I always give – will it make enough difference for my Mom to notice? The answer for most of the population who aren't sports fans seems to be a resounding "Huh?" It will probably take another year for hardware manufacturers to figure out what to push in the absence of a real need for most consumers (who are largely stuck in the compressed world of cable and web viewing). Expect 2014 to continue to hold.

Instead, I see the postproduction world in 2014 dealing with two main issues – creating better ways to collaborate over great distances and helping us out of the media management logjam. On the latter point, we are now reaching the tipping point where media creators, both large and small, have been shooting and storing way too much data to keep track of it in any easy and profitable way. This is complicated by the disconnect between capture companies and post – codecs and capture techniques have always run ahead of our NLEs' ability to ingest, edit and finish with them.

The industry is finally focusing on what metadata is worth capturing up and down the entire chain, and how to do that. That will result in new hardware and software products for smaller users as well as the big ones, and we may see some companies merge who serve the different ends of the chain. I believe that this trend will pick up steam later in the year.

As for distance collaboration, my last three films have all been done over long distances – on two of them, I never even met the directors. This has its ups and downs. I can work with people all over the world who I never would have had the opportunity to work with before. On the other hand, we are missing the personal and the eye contact that can be so essential to a successful project. A number of people have been working on tools to solve some of these personal issues during the past year. I hope that we will start to see the fruits of their labor in the coming year.

David Stump, ASC
We've gone through what I've called the "dark days of digital." Six, seven, eight years ago, everybody was suffering from "first adopter" syndrome. I used to tell the ASC that things are as bad now as they're ever going to be. Digital has gotten better, but the really good thing that I didn't expect to see that has come out of the hard lessons of adopting digital is that the industry has learned how to learn again. And that's under-appreciated. We had the same workflow, the same conditions and the same parameters for making images for 100 years. Then we started getting all these digital cameras and workflows and, in hindsight, most of the discomfort was our own personal discomfort of having to learn. Now filmmakers as a culture have gotten over that. Whether we're old or young, we have accepted that learning new cameras and new ways of working is going to be a daily occurrence. We've gotten over grousing about specific cameras or specific devices. The expression I've coined for that paradigm shift is that we've learned how to learn.

The adoption of log workflows is another great thing. It's astounding. The first time I worked with a log workflow, shooting my first Viper movie in 2002, everybody was so averse to anything log because they didn't know how to use the signal. Log was an orphan. And it was stillborn then. Now it's the easiest thing on the planet. Everybody knows it and expects it and if you're not doing a log output on your camera, you're well behind the times.

3D TV, UltraHD, HFR, and higher projected brightness levels – they are all inevitable, and we'll see them all emerge in combination. One missing factor that nobody is talking about that's a revolutionary technique is wider shutter angle. I'm a huge, huge proponent of wide shutter angle and I always have been. Did you know that the amount of blur in a single 24P frame is greater with a 180-degree shutter than the amount of blur in a 60P frame with a 360-degree opening? If you're going to go to high frame rates, why not leave the shutter open? It's still less blur than at 24P.
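Stump's blur comparison follows directly from the exposure-time formula: time the shutter is open per frame = (shutter angle / 360°) ÷ frame rate. A quick sketch:

```python
def exposure_ms(fps, shutter_deg):
    # Fraction of the frame interval the shutter is open, in milliseconds
    return (shutter_deg / 360) / fps * 1000

open_24p_180 = exposure_ms(24, 180)  # 1/48 s, about 20.8 ms
open_60p_360 = exposure_ms(60, 360)  # 1/60 s, about 16.7 ms

# 24p with a 180-degree shutter accumulates MORE motion blur per
# frame than 60p with the shutter fully open.
print(open_24p_180 > open_60p_360)  # True
```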

Here's the trick to thinking about things such as high dynamic range, higher resolution, and higher frame rates. The human eye at its native refresh rate is between 200 and 300 Hz per receptor, per rod or cone, asynchronous. Nobody realizes that. The return on frame rate in cinema is that you get more and more information up to between 60 and 72 fps; above that, the resulting return for your investment drops off substantially. Humans generally can't see the difference above 72 fps; it's insignificant enough as to not be cost effective. Once you climb from 24 fps to 60, 66, 72 fps, you can test and measure the difference.

With regard to resolution, the human eye in retinal density, in terms of rods and cones, is equivalent to roughly 6K, per degree of viewing angle (and up to 8K if you're a jet pilot!). Once we get to 4K or 6K at 16 bits of dynamic range – like we're going to go to in ACES – with a refresh rate of 60 Hz with the possibility of a 360-degree shutter opening, we've reached the threshold of human visual acuity. There is nowhere to go from there. But we're quickly approaching that as an imagery criterion and we don't know it yet. There's nobody driving the imagery development as a whole, with the exception of the Academy and the ASC. Everything is creeping inexorably in that direction, and I don't see any need to go beyond that.

Read more: Dave Stump, ASC reports on the industry's progress toward setting standards for high frame rate digital cinema.



Laurence J. "Larry" Thorpe, Canon
There was an increasingly vigorous industry discussion throughout 2013 on the role of higher frame rates in the context of 4K and UHDTV in general. Within the television discussions there was broad agreement that 30 fps was much too low for 4K motion imaging and that at least 60 fps was essential. 120 fps has been formally incorporated as an upper frame rate in the new ITU Recommendation ITU-R BT.2020 for UHDTV (both 4K and 8K). Convincing demonstrations by both BBC and NHK research on the benefits of higher frame rates are spurring a broader global examination. In the cinema world, it would appear that progress has been made in persuading some to consider shooting digital theatrical features at frame rates considerably higher than the long-established 24 fps. It remains to be seen what forms of features best lend themselves to such elevated frame rates. Time will tell all.

3D for television more or less bit the dust in 2013 – at least judged by its lackluster presence at both CES and NAB. Personally, I don't think it is dead, but some considerable wounds are being licked. Still, a great deal of highly valuable lessons and experiences were gleaned over the past five years. Certainly, it emerged that live 3D television coverage is hard – in every respect. Based upon these many projects, it would appear that all are now taking a deep breath before attempting any major revival. Any comeback of 3D TV will be dependent upon:

  • A decision by a major camera manufacturer to design a serious integrated 3D camera – eliminating the former rigs that pieced together off-the-shelf components into ungainly and expensive imaging systems
  • New compression technologies that facilitate transmission of full HD resolution for each eye
  • Possibly a marriage of 4K and 3D

The "beyond" in the title was certainly a huge discussion at multiple industry events and conferences throughout 2013. Beyond the promise of what 4K offers to perceived image sharpness (an enduring discussion in itself), there was increasing insistence that a wider dynamic range had to be an additional dimension to the 4K viewing experience, ideally accompanied by a wider color gamut and a greater bit depth – and of course, higher frame rates. The associated vast surge in total data rates left many shaking their heads as they contemplated the realities of the related technological challenges for production and distribution. Then there are the daunting costs linked to these challenges. However, given the present momentum in 4K production, it is to be hoped that ongoing projects will help apply more practical perspectives to all of these extended attributes. It is to be hoped also that 2014 might see the beginning of some realistic tests of the 4K viewing experience in the living room – to help better answer the question of what screen size is required to properly exploit the full visual prowess of this system. The recent EBU report on their subjective tests of 4K versus 1080P HDTV served only to confuse this core issue.

ACES became very real in 2013. This sterling development work by so many far-sighted technologists and production folk reached levels that allowed manufacturers to begin serious product developments. Canon's deep involvement with the AMPAS Science & Technology Committee spurred us to develop a significant implementation of the new ACESProxy system with a related integrated link between our EOS C500 camera and our new DP V3010 4K studio reference monitor. We believe this empowerment of on-set grading will positively contribute to the overall workflow of production and postproduction.


Yuri Neyman
In 2014 we will see the continuation of changes in the "image-building" process. The further integration of creative and technological processes from pre-production to post will evolve into what the Global Cinematography Institute calls "Expanded Cinematography."

"Expanded Cinematography" is the combination of live and virtual cinematography that dominates the visual landscape of today's image-making technologies in motion pictures, television, web content, gaming and other visually based genres of performance and presentation – old, existing, and new forms yet to be discovered and developed.

New forms of relationships and interdependence between "traditional" cinematography, art direction, VFX/FX, "virtual cinematography" and previsualization will continue and evolve into additional varieties of technological and artistic pipelines and workflows – from the previsualization to the further incorporation of the "traditional" post-production methods of color correction and editing into the set operations and to the post and internet distribution.

In general, we will see the continued "democratization" and "hybridization" of the professional "image-building" process.

Ever cheaper and more powerful cameras, lighting, editing software, hardware and equipment will allow more people (not necessarily knowledgeable about or prepared for it) to own the equipment and to produce more and more "fodder" for Facebook, YouTube, Vimeo, etc.

The process of "hybridization" is evident when traditional and non-traditional methods of image building are mixed and "hybrid imagery" is created.

With the ever-advancing creative and technological approaches to narrative script structure, the industry will continue to develop new methods of "visual storytelling." The fast-changing global, social, and cultural paradigm demands innovative approaches to meet the ever-changing needs of different kinds of audiences via many new and more "visually effective" storytelling outlets such as gaming, the web, and a few others. This has already led to changes in the cinematographer's profession.

The fundamental changes in cinematography have already occurred, and will continue to occur. No previous "revolution" in cinematography can be compared with the ongoing Expanded Cinematography/Hybridization process. In all previous "revolutions" affecting the image-building process (such as sound, color, wide-screen, digital), despite the way visual expression changed, nobody questioned the cinematographer's authorship or authority in the creation of images. But that is exactly what is happening now.

Recent feature films have amplified the traditional role of the cinematographer and made even more people appreciative of the cinematographer's artistry and craft. However, today and going forward, images in film are no longer produced as the result of only the traditional tools of cinematography.

The Academy Awards selections clearly illustrate the issue. It is not a coincidence that for the last four years the Oscars for both Cinematography and Visual Effects were given to the same film. This consecutive trend is a strong confirmation of the new artistic and technical visual paradigm that is emerging, and will continue.

We will all watch very attentively what happens in 2014 with the nominations and awards for Cinematography and Visual Effects for Gravity. As current Academy president Cheryl Boone Isaacs stated: "Technology is changing the definitions of what we do." It will be an interesting year!


Curtis Clark, ASC
HFR appears to be having a polarizing effect on the filmmaking community. It is in turn linked to the overall impact that digital motion image capture has been having on our ability to create and maintain the traditional aesthetics of a "film look."

As motion pictures, along with scripted TV dramas, migrate from film cameras to shooting with digital motion picture cameras using end-to-end digital imaging workflows, we're in danger of losing what many consider to be important constituents of our traditional photographic motion imaging. For better or worse, 24 fps image capture with 24fps projection has been the industry standard since 1929 and is deeply embedded in our aesthetic appreciation of the cinematic art form. Motion image blur and strobe effects from camera panning were not viewed as "artifacts" that necessarily needed to be (nor could be) eliminated. These "artifacts" were considered non-optional parameters that governed temporal resolution of motion picture film capture and projection. They, of course, have been routinely managed and dealt with as intrinsic components of the film motion-imaging platform to be controlled in an artful way by the cinematographer and director when composing and executing shots.

The spatial resolution of 35mm film negative has been considered generally sufficient, especially when digitally scanned as 4K. To a certain extent, HFR is an attempt to compensate for the limited spatial resolution of some digital cameras by adding temporal resolution when shooting and subsequently projecting higher frame rates, e.g., 48fps/60fps. HFR is sometimes used to overcome image artifact issues related to fast action in 3D presentations. For some filmmakers HFR tends to reproduce a "video look."

Even with, or perhaps because of today's transition from HD video and 2K digital motion picture cameras to 4K digital motion picture cameras that utilize a wider gamut color space beyond HDTV Rec.709, discussions are taking place regarding the ability of current digital motion picture cameras to create and preserve "film looks" within end-to-end digital imaging workflows. Also, 2014 will see an increase in the deployment of 4K production and postproduction workflows that take full advantage of 4K image capture, as well as 4K film scans.

Filmmakers have been discovering the creative potential of digitally capturing and digitally projecting 4K images with four times the spatial resolution of 2K or HD video images. The ability to capture and render images with significantly enhanced image detail adds tremendous potential for creating more viscerally engaging images. This also applies to the new UHDTV (aka 4K TV) display platform. Of course, the optical performance of the lens is also a major factor in the reproduction of image resolution and sharpness, as well as contrast. Resolution is not the same as sharpness. Digital video cameras frequently use electronic sharpening to artificially enhance edge details, which contributes, along with video's 30fps, to the general perception of what is frequently referred to as a "video look."

Assuming that no "artificial," i.e., electronically enhanced, image sharpening is used on a 35mm 4K (preferably 16-bit) image scan, the 35mm film negative has a more natural sharpness combined with greater spatial resolution than is possible with HD or 2K digital image capture. Since film is still considered the motion picture imaging benchmark for achieving a "film look," digital motion picture cameras are frequently judged in relation to their ability to digitally replicate a high quality "film look." In addition to 4K spatial resolution, this should include the ability to capture naturally sharp images with a wide dynamic range of scene tones, along with greater color bit depth (preferably 16-bit) to enable greater precision in color grading for achieving optimal creative results. The Sony F65 and the Sony F55 are two of the latest generation of digital motion picture cameras that are able to record 4K images with 16-bit wide gamut color, along with a wide dynamic range of scene tones.

Color management within digital imaging workflows has problematically evolved without any industry standards-based reference...until now. To address this problem, the Academy of Motion Picture Arts and Sciences has developed, in collaboration with many motion picture production and color science experts, a comprehensive standards-based wide gamut color management system and image interchange framework. ACES provides a much needed solution to the production and postproduction challenge of managing consistent color reproduction across multiple image capture/origination and image display platforms and devices. ACES is also designed to provide the best option for motion image archiving.

2014 should see more rapid adoption of ACES, especially as filmmakers and post facilities come to better understand both its creative and cost-efficiency advantages. ACES is the only color management system that not only protects wide dynamic range, 16-bit, wide gamut color images, but also allows the filmmaker to take full creative advantage of them.

For a broader look at what recent trends can tell us about the future, please see
Technology 2014 | Production, Post & Beyond: Part ONE

What were the big technology trends in media and entertainment over the past year? What's going to be significant in the coming year? Those are questions that many of us are asking, so we went to some of our savviest Creative COW contributors to ask their opinions of where we've been and where we're going. In Part 1, we offer an overview of their perspectives.

Related Articles / Tutorials:
How To Put Yourself In Any Movie, Part 2: Greenscreen

Not every VFX problem can be solved with a plug-in alone! Visual effects start with the visuals! In part two of his series on inserting yourself into any movie, filmmaker and effects artist Cody Pyper covers how to set up lighting to match shots from Hollywood movies, and how to set your camera to the best settings for shooting green screen.

Cody Pyper
The Invisible Man Cinematography, with Stefan Duscio, ACS: Go Creative Show

Cinematographer Stefan Duscio, ACS and Go Creative Show host Ben Consoli discuss the technical issues behind filming an invisible character in Leigh Whannell's The Invisible Man, using a robotic camera for VFX shots and the value of unmotivated camera movement. They also discuss why Stefan still uses a light meter, filming with the Alexa Mini LF and how he prepared for an IMAX release.

Ben Consoli
The Lion King's Virtual Cinematography: Caleb Deschanel, ASC

Caleb Deschanel, cinematographer for Disney’s live-action The Lion King, shares how they used traditional cinematography to create the life-like virtual film. Caleb and Go Creative Show host, Ben Consoli, discuss modeling cameras and lenses for virtual filmmaking, how Caleb was able to move the sun around in virtual space to get the perfect lighting, using a real drone for the Circle of Life sequence, and more!

Ben Consoli
Shooting RED 8K for Danny Boyle's Yesterday

The magical romantic comedy Yesterday reunites cinematographer Christopher Ross BSC with director Danny Boyle to tell the story of a singer-songwriter who wakes up to discover that he's the only one in the world who remembers The Beatles. Christopher selected the RED HELIUM S35 8K sensor (with as many as 17 cameras rolling simultaneously in a single scene!) to capture a variety of looks as the story takes viewers from East Anglia to Los Angeles. With 10-15TB of footage coming in every day, this is also a workflow story, featuring DIT Thomas Patrick and the team at Mission Digital for dailies, and Goldcrest Post for online, VFX, conform, and grade.

Adrian Pennington
Spider-Man Far From Home Cinematographer Matthew Lloyd

Matthew Lloyd, cinematographer for Spider-Man: Far From Home, takes us behind the scenes of the film and shares techniques for lighting and shooting massive visual effects scenes. Matthew and Go Creative Show host Ben Consoli, discuss working in Marvel’s Cinematic Universe, using pre-vis to prep for shots with VFX, creating Spider-Man’s holographic world, plus Matt’s camera and lens choice, his experience with commercial and fashion filmmaking, audience questions and so much more!

Ben Consoli
DJI Osmo Action Camera In-Depth: Taking on GoPro

The DJI Osmo Action is DJI's first GoPro-like action camera. It shoots crisp 4K video at 60 FPS and super slow motion at 1080p/240 FPS, with support for HDR and terrific RockSteady image stabilization. Especially interesting: TWO LCD screens make it easy to see what you're shooting from every angle. VFX guru and filmmaker, Surfaced Studio's Tobias G, puts the Osmo Action through its paces and tells us what he likes and doesn't, with lots of sample footage for you to judge for yourself!

Tobias G
Stuart Dryburgh: DP for Men In Black International

Stuart Dryburgh, cinematographer for Men In Black International, joins Go Creative Show host, Ben Consoli, to discuss creating the look for the film. Stuart talks about the challenges of working in an established franchise, filming in NYC in the snow, why Stuart prefers Arri Alexa cameras, his lighting and lens choices for the film, shooting action scenes, and more!

Ben Consoli
Capturing ProRes RAW

Apple ProRes RAW has lots of buzz, and can offer some great opportunities in both shooting and post, once you know how to capture it. Director Steve Pierce and DP Igor Kropotov explain why they love it, how to capture it on set, and what tools you can use.

Adorama TV
Small HD FOCUS 7 4K Monitor Hands On

Here's a first look at the SmallHD FOCUS 7, a 7-inch 4K monitor that packs significant production value at a moderate price. The monitor includes SmallHD's OS3 software, which gives users access to features such as pinch-to-zoom, waveform monitors, focus pulling, 3D LUTs, and more, in a build that's lightweight, durable, and portable.

Adorama TV
GoPro HERO7 First Look

The new GoPro HERO7 can do WHAT? Join Steven John Irby, co-owner and director of Street Dreams Magazine, for a look at the most advanced GoPro yet: HyperSmooth Stabilization, TimeWarp Video, live streaming, voice control, waterproof, and much more.

Adorama TV
© 2020 All Rights Reserved