
The Truth About 2K, 4K and The Future of Pixels

CreativeCOW presents The Truth About 2K, 4K and The Future of Pixels -- Cinematography Editorial

Woodland Hills, California, USA. All rights reserved.

Editor's Note: This article was originally posted in 2009, which means that parts of it are significantly out of date. Regardless, many of the issues raised, especially around frame rate and approaches to sensor size, are still very much being discussed all these years later. Please enjoy this as the now-historical document that it is, a snapshot of the industry in the early days of widely-accessible digital cinema cameras, without expecting it to be anything else ~Tim Wilson, Editor-in-Chief.

John Galt: "Pixel" is an unfortunate term, because it has been hijacked.

Historically, 2K and 4K referred to the output of a line array scanner scanning film, so that for each frame scanned at 4K, you wind up with four thousand red pixels, four thousand green and four thousand blue.

For motion picture camera sensors, the word "pixel" is kind of complicated. In the old days, there was a one-to-one relationship between photosites and pixels. The high-end high definition video cameras all had 3 sensors: one red, one green and one blue photosite combined to create 1 RGB pixel.

But what we have seen, particularly with these Bayer pattern cameras, is that they are basically sub-sampled chroma cameras. In other words, they have half the number of color pixels that they have luminance, and the luminance is what they typically call green. So what happens is that you have two green photosites for every red and blue.

So how do you get RGB out of that? What you have to do is interpolate the red and the blue to match the green. You are basically creating, by interpolation, what wasn't there: you're estimating what it's going to be. That's essentially what it is. You can do this extremely well, particularly if the green response is very broad.

Well, in the world of the professionals who do this, when you say "4K," it means you have 4096 red, 4096 green and 4096 blue photosites. In other words...

Creative COW: 4000 of each. 4K.


John Galt: Right.

But if you use the arithmetic that people are using when they are taking all of the photosites on a row and saying they're 4K, they are adding the green and the blue together and saying, "Oh, there are 4K of those, so it's 4K sensor." Now actually, in order to get RGB out of a Bayer pattern you need two lines. Because you only have green plus one color (red) on one line, and green plus the other color (blue) on the other line. You then have to interpolate the colors that are missing from surrounding pixels.

Bayer Pattern sensor. Image courtesy of Colin M.L. Burnett

Note that there are twice as many green pixels as red or blue on this representation of a Bayer pattern sensor. To create a single RGB pixel, there must be an equal number of each color, so the choice is whether to discard green pixels and lose luminance detail, or to use interpolated, aliased red and blue pixels.
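To make that interpolation concrete, here is a minimal bilinear demosaic sketch in Python with NumPy, assuming the RGGB layout pictured above. This is a generic textbook approach, not any particular camera's pipeline, and edges simply wrap around for brevity.

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (illustrative sketch only)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red on even rows, even columns
    masks[0::2, 1::2, 1] = True   # green on even rows, odd columns
    masks[1::2, 0::2, 1] = True   # green on odd rows, even columns
    masks[1::2, 1::2, 2] = True   # blue on odd rows, odd columns

    def neighborhood_sum(a):
        # Sum over each 3x3 neighborhood; edges wrap for simplicity.
        return sum(np.roll(np.roll(a, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))

    for c in range(3):
        known = np.where(masks[:, :, c], mosaic, 0.0)
        weight = masks[:, :, c].astype(float)
        # Every output value is an average of nearby *measured* samples --
        # interpolated, exactly as described in the text.
        rgb[:, :, c] = neighborhood_sum(known) / np.maximum(
            neighborhood_sum(weight), 1e-9)
    return rgb
```

Note that two out of every three output values per channel are estimates, which is the point Galt is making: the red and blue edges in the result were never measured.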


Let's go back to scanning a film frame. The aspect ratio of a full 35mm film frame is basically 4x3. So if you have 4096 photosites across the width of the film, in red and green and blue, and 3K along the height, you have 4K by 3K. You'll have 12 million green photosites, 12 million blue photosites, 12 million red photosites.

That's 36 million photo-sites. A 36 mega-pixel image is what you get from a 4K scan.
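That arithmetic is easy to verify; it comes out to exactly 36 if "mega" is read in the binary sense (2^20):

```python
# Photosite count for a 4K x 3K film scan, where every site is sampled
# in all three colors (figures from the discussion above).
width, height = 4096, 3072

per_channel = width * height      # photosites per color channel
total = per_channel * 3           # red + green + blue

print(per_channel)                # 12582912, i.e. ~12.6 million per color
print(total)                      # 37748736
print(total / 2**20)              # 36.0 "mega-pixels" exactly, in binary megas
```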

Now you know very well that you cannot take an 8.3 million pixel sensor and create 36 million pixels out of it without interpolation. You are up-converting, and there's really no value to the up-conversion. There's no new information.

So 4K is not these 8, 9 or 10 megapixel Bayer pattern CMOS imagers, where they add up all the photosites in a row and say, hey, we've got 4K. The great perpetrators of that mythology have been RED and Dalsa. That's why I call these "marketing pixels." It's intentional obfuscation, because they really do nothing to improve image quality. They may improve sales volume, but they don't do anything for quality.

But somehow the world has accepted that that's 4K. It's purely semantic. It's like saying, "I don't like my weight in pounds so I converted to kilos. It sounds better!" You'd be amazed at how many non-technical people I meet, often producers and directors, but sometimes even cinematographers get fooled by that stuff.

There's a fundamental problem with the Bayer sensors. In 1972, when Dr. Bryce Bayer at Kodak couldn't make sensors with lots of photosites, his was a brilliant idea, and it works very well in still cameras. But with any camera with a fixed sampling structure, in other words any CCD or CMOS camera with discrete photosites, you have to use an optical low pass filter to make sure that you don't create a moire pattern in the final image.

If you design the optical low pass filter to satisfy the requirement of the frequency of the green samples, to maintain the highest resolution, then the red and blue photosites, which are half as frequent as the green, will have aliases. However, if you design the optical low pass filter to make sure that you don't get a color alias from red and blue, then you are throwing away some of the resolution from the green.

So you can never get the resolution you might expect from a Bayer pattern. Someone can argue this until they are blue in the face, but we are dealing with the limitations of the physics of optics and the mathematics of sampling theory, and you can't escape it. There will always be aliases from a device with a fixed sampling structure, such as an array of photosites on a sensor, if you try to record frequency information that is greater than half the number of available samples. Of course, sometimes the limitations of the camera lens act as the optical low pass filter!
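The limit described here is plain Nyquist folding, and it is easy to demonstrate numerically: detail finer than half the number of samples comes back disguised as a lower frequency. A generic sampling-theory sketch (the photosite count is arbitrary, not tied to any sensor):

```python
import numpy as np

n = 100                       # photosites across one sensor row
nyquist = n / 2               # highest representable frequency (cycles/row)
f_in = 70                     # scene detail finer than the Nyquist limit

x = np.arange(n) / n
sampled = np.sin(2 * np.pi * f_in * x)

# The samples are indistinguishable from a 30-cycle pattern (100 - 70):
alias = -np.sin(2 * np.pi * (n - f_in) * x)
print(np.allclose(sampled, alias))   # True: the fine detail aliased
```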

Now if you use the same arithmetic that these people claiming 4K cameras are using, then the Genesis would be 6K, because it has 5760 photosites on one line: 1920 red, 1920 green and 1920 blue. But isn't that a little bit nonsensical? I think it's no more nonsensical than essentially compressing the heck out of an image, then interpolating that up to create a DPX file which is enormous, and saying, wow, we've got 4K. I think that people will start to understand this and realize that it creates a terrible problem in post, because you have so much more empty data to process.

The most important issue from our point of view, is that we want to have equal resolution, TRUE edge resolution in red, green and blue. The most important thing is not to have interpolated information. You want to know that the edge is REAL.

This is because our cameras are used for doing high-end image compositing. I'm not talking about 100 people sitting at workstations rotoscoping images. I'm talking about being able to shoot a blue screen or a green screen and, using software like Ultimatte Advantage, pull perfect linear mattes from smoke, fire, transparent objects, or liquids - things that can't be roto'd.



Another problem with a message built on "marketing pixels" is that it confuses pixels and resolution. They don't have anything to do with each other. What defines the resolution, quite frankly, is the optics more than the sensor.

My wife has a Ricoh GX 100. It's a beautiful little camera with a 10 million photo-site sensor. But it's not nearly as nice a picture as my old 6 mega-pixel Canon D60.

When we released the [Panavised version of the Sony] HDW-F900, dubbed "the Star Wars camera," it was a 2/3rd inch camcorder. People asked, "Why are you doing this?" Well, because it only weighs 12 pounds, it's got a battery, there's no need for an umbilical cord, and it's got a built-in recorder just like a film magazine.

"Panavised" Panavision-Sony "Star Wars" camera

Almost everyone in the industry laughed at it, but it has proved to be unbelievably successful. That camera is still renting every day with the Primo Digital lenses we designed for the 2/3" format, and really, you'd be hard pressed to get a better image. So you have to look at the whole system, not latch on to just one parameter and say "That's what we're gonna go for!" Everything has to work together as an imaging SYSTEM.

Unfortunately, one of the tragedies of digital imaging is that now we've got these ridiculous numbers games. Because so few people understand the fundamentals of the imaging technology, everybody wants a number to latch on to. The numbers don't mean anything in the context of 100 years of development of film and motion picture technology, optical technology and laboratory practice, and cinematographers did wonderful work without understanding anything about the chemistry of photographic emulsion technology.

Whenever I do a presentation about digital imaging, my first question these days is, "Anybody know how many grains of silver are on a frame of film? Hands up, hands up!" Nobody ever puts their hand up. My second question is, "Hands up! Anybody ever thought about this before?" You can tell the nerds in the audience from the hands that go up!

John Galt presentation, 444 RGB

For videos of John Galt's presentation with Canon's Larry Thorpe, "Demystifying Digital Cameras," click here.

So why do we care? Because after  100 years of being comfortable with a relatively stable film based motion picture technology, along comes this new and disruptive digital imaging technology, and we're all clutching for some magic number that we can carry around in our heads, and this will define the process for us. Sorry, it doesn't work that way. It's messy and it's complicated, and lots more so today than it was in the days of film.



The 4K system that most people know is IMAX -- and it doesn't quite make 4K, which is a surprise to people. "How can that possibly be?," you say. "It's an enormous big frame." Well, because of what I was talking about earlier: the physics of optics. When you take the entire system into account - from the lens of the camera to the movement of the light through the projector, each stage slightly reducing resolution - you wind up with less than the full resolution you started with.

IMAX theater in Myrtle Beach, South Carolina USA

A number of years ago some IMAX engineers -- and I don't think IMAX ever let these guys out of their lab again -- did a wonderfully elegant experiment at the Large Film Format Seminar at the Universal Studios IMAX theatre. They showed a film they made that began with 2 rows of 2 squares: black white, white black, as if you had 4 pixels on the screen.

Then they started to double and double and double the squares. Before they got to 4K, the screen was gray. Do you know what that means? There was no longer any difference between black and white, which is what allows you to see sharpness. It's the contrast that we see, not the actual information. Technically, the MTF (Modulation Transfer Function) was zero at 4K!

Let's just pretend for a moment that IMAX truly is 4K. You watch IMAX at between one and one and a half picture heights from the screen. But in order to appreciate 4K on a regular movie screen, you would have to sit much closer than normal. Most of the modern theaters with stadium seating are designed so that the middle of the theater is 2 ½ to 3 picture heights from the screen, and for most of us who watch movies, that's pretty much where we want to be sitting. Maybe just a little bit closer for some of us who do this for a living, because we're looking for artifacts or issues. If you sit much closer than 2 ½ picture heights, that's what you're seeing: artifacts, not movies!

So if you had true 4K resolution in your local theater, everybody would have to be sitting in the first 6 rows. Otherwise they wouldn't see any extra detail. Their eyes wouldn't LET them see it. You know this intuitively from passing by these beautiful new monitors at trade shows. You find yourself getting absolutely as close as possible to see the detail, and to see if there are any visible artifacts. At normal viewing distances, you can't.
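The seating-distance argument can be sketched with the common rule of thumb of about one arcminute of visual acuity per pixel. The acuity figure and the use of HD and quad-HD line counts here are assumptions for illustration, not numbers from the interview, but they land close to the picture-height figures Galt quotes:

```python
import math

def max_picture_heights(lines):
    """Farthest distance, in picture heights, at which every scan line
    is still resolvable, assuming ~1 arcminute of acuity per line."""
    screen_angle = math.radians(lines / 60.0)   # screen height in radians
    return 1.0 / (2.0 * math.tan(screen_angle / 2.0))

print(round(max_picture_heights(1080), 1))  # ~3.2 -- typical theater seating
print(round(max_picture_heights(2160), 1))  # ~1.5 -- the first few rows
```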

So the whole 2K 4K thing is a little bit of a red herring.

Creative COW: What do you think about IMAX as a filmgoer?

John Galt: I don't like the frame rate. I saw Gorillas in the Mist, and the gorillas were flying across the forest floor. Every frame they seemed to travel like 3 feet. [laughs] It's really annoying. I mean, I loved Showscan: 70mm running at 60 fps. In terms of a sense of reality, I think it was far superior to IMAX.

That's why I subscribe to Jim Cameron's argument, which is that we would get much better image quality by doubling the frame rate than by adding more pixel resolution.

To many cinematographers, this is sacrilege. You often hear cinematographers saying there's something special about movies at 24 frames per second. This may be true, but I'll tell you one of the problems of 24 fps: it's the reason we watch such a dim picture on a movie screen. If you pumped up the screen brightness, you would notice the flicker from the 24 fps motion capture.

So when you are watching in a dark surround in a dark movie theater, the eye and brain get into this state called mesopic, which is neither photopic, which is full color vision in bright light, nor scotopic, which is night vision with no color. It's the in-between state between full color and no color vision. What happens there is that the brain takes longer to integrate an image, so it fuses the motion better and we are less sensitive to flicker, but we also lose color acuity.

But we have to remember that 24 frames was never designed from an imaging standpoint. It was designed for sound.

The original Kinetoscope ran at approximately 48 frames per second, with rapid pull down.


The Kinetoscope, first publicly demonstrated in 1891

A Scotsman by the name of William Kennedy Dickson (below, right), working for Thomas Alva Edison, figured out that you could shoot at 16 frames per second and show the same image 3 times with a three-bladed shutter in the projector. And you save film - which had to appeal to a Scotsman like Dickson! (I'm a Scotsman by the way, which is why I can make jokes about a Scotsman.)

William Dickson and the Kinetoscope

When sound came along and they couldn't get intelligible sound at 16, they went to the next sub-multiple of 48: they settled on 24 frames per second with a 2-bladed shutter, giving 48 flashes per second from 24 frames. And that's how we ended up with 24 frames.

Now if you take a still picture of somebody walking in front of you at a 48th of a second, you know that they're going to be blurred. But if we were to record 48 frames per second with a 2-bladed shutter, then the integration time would be only a 96th of a second, and each of the images would be sharper.
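The exposure-time arithmetic above can be sketched directly, assuming a half-open (180-degree equivalent) shutter, so each frame integrates light for half of its frame period:

```python
from fractions import Fraction

def integration_time(fps):
    """Exposure per frame with a shutter open half of each frame period."""
    return Fraction(1, 2 * fps)

print(integration_time(24))   # 1/48 of a second -- conventional cinema blur
print(integration_time(48))   # 1/96 of a second -- each frame is sharper
```

Doubling the frame rate halves the integration time, and with it the motion blur in each image, which is the point being made.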

Recently we've been renting a camera from Vision Research called the Phantom, which easily shoots at 1000 frames per second. When you see a drop of water in a commercial fall slowly and create a lovely little splash of bubbles, that's the sort of thing shot by these high speed cameras.

Phantom HD

Above, water balloon, after the balloon has burst, but before the water has fallen. Below, pouring liquid in a TV spot. For streamed movie clips of high-speed Phantom HD video, click here.

Phantom HD image for Sunbeam

They are actually quite low-resolution, but because they're shooting at such a short shutter speed, they look much much sharper than cameras that have four times the resolution.

Phantom HD resolution vs. frame rate chart

Vision Research chart on the Phantom HD digital cinema camera showing the effect of speed on resolution: 1000 frames at 2K, but to get to 2000fps, the maximum resolution is 640x480 - yet Phantom's pictures are inarguably gorgeous.

This is why I honestly think that in the future, one direction we're going to have to go is to higher frame rates, not more pixels.

Somebody said that the perfect thing would be 60 frames a second at IMAX. Kodak would love that. [laughs].


Once again it was audio that was the deciding factor in perpetuating 70mm for marquee presentations. We talk about 70mm film, but in fact the camera negative was 65mm. The other 5mm was a 5-track magnetic sound track. So that was how, pre-Dolby encoding, you got 5 channels of audio. You got it in 70mm because it had multiple magnetic tracks.

Once you had the ability to write digital data onto 35mm film, you could get up to a 7.1 audio track encoded digitally on the film. So the need for the magnetic sound on the film that 70mm provided just went away. And as soon as the multi-channel audio was on 35, that was basically the death knell for 70mm.

Panavision 70mm

From The Big Fisherman, in 70mm, with 6 tracks of audio (Westrex Recording System)



We think that the next improvement in digital imaging quality is being able to extend the scene dynamic range that you can capture.

We've been developing a new sensor technology called Dynamax. Now, I've been telling you that we don't need 4K -- well, this sensor is 37.5 megapixels! You basically have 6 green, 6 red, and 6 blue photosites for every pixel.  Using the "NEW MATH" it is a 17K sensor!

Are you familiar at all with high dynamic range imaging in still photography? HDRI?

In the still photography world, what is going on is that people are taking multiple exposures and combining them. Let's say I do an exposure at f/2.8. The next one is at f/4, then f/5.6, then f/8, and f/11. Depending on what I'm shooting, the f/2.8 exposure could completely blow out the highlights, but it would have lots of shadow detail. And the f/11 exposure would retain the highlights, but there would be no detail in the mid tones and the shadows. If we were to combine them, we'd have a single image with the most possible detail across the widest possible range.


HDR example: multiple exposure combining to create 1 high dynamic range image. Courtesy Steven Bjerke

So in Photoshop and some other programs, you can actually blend these images to create an ultra high dynamic range image. Just do a web search for high dynamic range imaging and you'll see what a lot of photographers have been doing. Some of the images are extraordinary, like paintings. They just have extraordinary information. Some of them are quite surrealistic.


Courtesy of Simone Wedege Petersen
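A toy version of that blending can be sketched in a few lines of Python with NumPy. This is only a naive weighted average over bracketed exposures, assuming ideal clipped linear images; it is not Photoshop's actual algorithm or any specific HDR tool's:

```python
import numpy as np

def fuse_exposures(images, shutter_times):
    """Blend bracketed exposures into one estimate of scene radiance.

    images: linear images clipped to [0, 1]; shutter_times: relative exposures.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, shutter_times):
        # Trust well-exposed mid-tones; give clipped shadows/highlights no weight.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        num += w * (img / t)   # this frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-9)
```

With one long and one short exposure of the same scene, the blend recovers both the shadow value the long exposure saw and the highlight the short exposure preserved.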

Today, that's only available in the still photography world. DynaMax is designed to do that for moving images. With those 6 red, 6 green and 6 blue photosites for each output pixel, you'll have the equivalent of shooting 6 images with different exposures at once, and blend them together to create a single high dynamic range image. You'll be able to capture extreme highlights, the near highlights, the mid highlights....

Creative COW: Six images, with different exposures, combined into one.

John Galt:
Yes. Basically, we register those 6 photosites to 1 output pixel. But you see, because I have 6 individual photosites, I can control those photosites individually, so that I can create a non-linear transfer characteristic.

All CCD or CMOS sensors right now are linear devices, like a light meter. One unit of light in, one unit of charge out... well, not exactly, because the quantum efficiency is not quite 100%. But let's say that if you have 10 units of light in, you get 10 units of charge; 20 units of light, 20 units of charge; and so on. It's linear. Film emulsions don't work that way. Film has a nonlinear transfer function.

First of all, there's a kind of inertia. It takes a certain amount of light to get any exposure at all, which is why the toe is flat. What it really says is that down at the bottom of the film characteristic curve we have less sensitivity.

Panavision Panalog 4:4:4 log colorspace

Curve for Panalog color space, a 4:4:4 log color space. Learn more here.

And then we get onto the straight line part of the characteristic curve, and there it truly is logarithmic. Then when it starts to roll off again, at the so-called shoulder, what is happening is that you've exposed all the silver grains of low light sensitivity that require lots of light, so the sensitivity drops again. And that happens to be a wonderful thing! Because if the scene dynamic range that you attempt to capture is a bright sunny day, you can easily have a 1 million to 1 dynamic range.

But I'm going to be printing this onto a piece of photographic paper, where the very best I'll get is 120:1. Or I'm going to watch it on a very high quality professional CRT monitor, where the best it gets is 15,000:1. [Or an Optoma DLP at 10,000:1.] But we still started with a million to 1. How do I capture that?

There have been devices made with logarithmic amplifiers and other things, but they're not terribly practical. So the idea is to be able to make a sensor that has a transfer characteristic which is more like a motion picture film emulsion.
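The scale of that mismatch is easiest to see in stops (factors of two), which is exactly what a logarithmic transfer characteristic spaces evenly. The contrast ratios below are the ones quoted in the discussion above:

```python
import math

def stops(contrast_ratio):
    """Dynamic range expressed in photographic stops (powers of two)."""
    return math.log2(contrast_ratio)

print(round(stops(1_000_000), 1))  # ~19.9 stops in a bright sunny scene
print(round(stops(120), 1))        # ~6.9 stops on a photographic print
print(round(stops(15_000), 1))     # ~13.9 stops on a very good CRT
```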

In DynaMax we can control these individual photo sites so they have a short exposure, longer exposure and so on. So we can then take those exposures and blend them together to create a high dynamic range image, just as if you were shooting half a dozen different exposures.

Panavision DynaMax sensor

The DYNAMAX-35 sensor is a multimode video sensor capable of operating at up to 120 fps at 6x HDTV, and at 30 fps at its full resolution of 37 Mpix.

So, yes, the Dynamax sensor is by any measure a true 4K sensor. At least in the first pass, we have no intention of trying to record every one of the 37.5 million photosites as a greater number of pixels in DynaMax. It would be a waste of time. It's about getting more dynamic range.

In fact, all the talk about 4K belies the fact that most of the theater installations around the world are basically going in at 2K. The only commercial 4K digital cinema projector that I am aware of is the Sony 4K projector, but the bulk of theatrical installations around the world are the Texas Instruments DLP, and its maximum resolution is 2048x1080. I mean, let's face it: the difference between 1920 and 2048 is 6%. Believe me, you cannot see a 6% difference. Six percent is irrelevant.

Creative Cow Magazine: Digital Cinema

To learn more about the practicalities of digital cinema, see the Creative COW Magazine Extra, "21st Century Cinema"

Besides, when we talk about scanning film to get 4K -- we don't, really. We typically scan perf to perf, but the actual Academy Aperture is 3456 pixels, which is quite a bit less than 4K. When you scan film at 2K, you're often just scanning 1728 across the Academy Aperture. Just to make things a little more complicated!

So these are all high definition television projectors going into theaters, whether you like it or not. They have a slightly better color gamut, but the claim that they're anything more than HD is basically lip service.

It's a horrible ratio anyway, 2048/1920. You want to avoid these horrible little small-ratio numbers, because a digital filter to scale 1920 to 2048 is difficult, and you probably lose more in the filter than you can gain by having a few more pixels.



One of the new developments for us is capturing scene metadata. We are doing a movie just now (I'm not sure if I'm allowed to talk about it) that is going to be released in 3D but is actually being shot in 2D, using our lenses that have built-in encoders to record scene metadata. That metadata will be given to the people in post-production so that they can match the 3D computer generated imagery with the 2D live action photography.

What capturing metadata means is that people in post can render a computer generated image that has all of the optical characteristics of the principal photography. That includes information such as a focus pull: they will have the information to pull focus in the computer generated part of the image too.

The cinematographer wants to know what the image looks like, not just on a TV monitor, but what it's going to look like on film. So we have this device called the Display Processor (below). We can take 3D look-up tables, load them into that device, feed the camera into it, and then feed it into a wide, digital cinema color gamut monitor, and you can emulate a particular negative printed on a particular print stock - while you're shooting. This is one of the more popular pieces of hardware that we have developed. Most cinematographers go out with one of these per camera.

Panavision Gamma Display Processor



Creative COW: When people talk about an Arri D20 or a RED or whatever, one of the very first things to come up is the price. But that's really not as direct a factor when we talk about rentals.

John Galt: One of the interesting things about Panavision's headquarters is that we have research and development here, we have the factory for manufacturing lenses and cameras right here, and we have the rental floor. This puts us directly in contact with customers. We know what they want, because they tell us. "No, I don't want higher resolution; I'd just have to sit closer to the screen. But yeah I'd like to have more shadow detail, I'd like to have more highlight detail. Can you do that?"

Another wonderful thing about the rental business is that the whole product development process is kind of turned upside down. When you sell something, service is a profit center. When you make something available for rent, service is a COST. Because we rent things instead of selling them, our best way to keep costs down is to build to higher standards.

Making adjustments to gear at Panavision headquarters

Lenses are a great example. A zoom lens is built nominally, put together as per the spec. What they do next over in R&D is start making micro adjustments. They have a little eccentric cam that lets them measure deviations in the angle of rotation from where the cam is supposed to be. There are often over four hundred measurements made, going for the peak performance of that zoom lens at any particular focal distance.

That lens is then taken apart, the cam goes back into the factory, and they re-cut the cams based on the test results. Sometimes we'll do that 3 or 4 times. Why? Because in doing that, we can improve the performance of the lens by 30% or more. Is it expensive? Yeah, it's ridiculously expensive. But it's not expensive over the life of the lens.

And it's not expensive when you know that that lens will not be sitting on the shelf because a particular cinematographer doesn't like it. We have a whole floor set up at Panavision where customers test equipment every day. They will reject a particular lens not because its pictures aren't good, but because it doesn't FEEL right. That's why it's very, very hard to build things for the rental market. There may be BUILDER remorse, but there is no buyer remorse. If they're not happy with something, back it goes onto OUR shelf, not theirs.

We can also develop new products in ways that aren't practical in a retail environment. The design goal for the Genesis camera was to be able to match the performance of our Millennium XL 35mm film camera in all aspects: frame rate, size, weight, all of that. And we didn't get there. It was maybe 12 or 13 pounds more than an XL with a 400 foot film magazine -- more than enough to matter to the poor bugger who has to carry it on a Steadicam all day. With a 400 foot film magazine, you're getting less than 5 minutes of recording time. That's fine for Steadicam, but we also wanted to record longer than that.

We just introduced the SSR-2, a dockable solid state recorder. We can record up to 84 minutes of uncompressed 1920x1080 at 4:2:2, or 42 minutes at 4:4:4. That requires almost three quarters of a terabyte of solid state flash memory. (We didn't consider hard drives because they just aren't reliable enough.)
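Those capacity figures are consistent with uncompressed 10-bit 4:2:2 at 24 fps. The bit depth and frame rate below are assumptions made for the sake of the arithmetic, not published specifications:

```python
# 4:2:2 carries, on average, two samples per pixel (Y plus half-rate Cb
# and Cr). Assume 10 bits per sample and 24 fps for this estimate.
bits_per_pixel = 2 * 10
bytes_per_frame = 1920 * 1080 * bits_per_pixel // 8
bytes_total = bytes_per_frame * 24 * 84 * 60   # 24 fps for 84 minutes

print(bytes_per_frame)                # 5184000 bytes per frame
print(round(bytes_total / 1e12, 2))   # ~0.63 TB: "almost three quarters"
```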


Panavision Genesis with SSR-1 solid state recorder

When we started developing it three years ago, the flash memory cost alone to give you that recording time would have been $68,000! Of course, what happened during the few years of development is that the price of flash dropped to 1/10 of what it was when we started. Now, had we been building this to sell, we'd never have built it at all. It would have been totally impractical to even consider.

But if you're going to rent it out, you can look at the longer term. That expensive piece of flash memory saved us money because we never need to service it, or replace it for wear and tear. You only have so many read and write cycles, but one of our flash manufacturers calculated that we have at least 35 years! The technology will be long obsolete before then.

Creative COW: I think one of the aspects of democratization that people have missed is that products can become more and more complex, more and more sophisticated, and still be available to rent. Which is quite democratic indeed. I've probably got enough money to rent this thing for a few months, relative to buying it.

John Galt: I think it was Orson Welles who said that movie making is the only art form where the artist can't afford the material for his art. If you've got the movie that you've got to make, you can go out there and collect money, and beg, borrow and steal, and do whatever is necessary to go out and buy yourself a camera that costs less than 20 grand. Before it's useful, it's closer to 100 grand.

Or you can spend that money on your production. I think that getting your movie made and seen is probably more important than accumulating equipment.

Director John Ford, on location

The studios realized this a long time ago, which is why none of the studios have their own camera department anymore. It was a department like any other - props, costumes, sets, and so on.

All those studios also used to have their own service shop, they had their own cameras, their own lenses, their own tripods, their own lights -- everything.

And what the studios realized is that they didn't make enough movies to justify maintenance of that camera department.

So ultimately the camera departments disappeared, and you found that you had companies that serviced all the studios like Panavision, that were providing cameras, lenses and accessories.

Now if we don't make these lenses as well as possible, they'll sit on the shelves and gather dust. And that's true for every piece of equipment we build. There's not a lot of selling we can do. We bring people in, we show it to them, we explain what it does, they try it; if they like it they'll use it, and if they don't, it sits on the shelf.

Whereas, I'm an amateur woodworker. You wouldn't believe the number of tools that I have bought over the years that I thought were just going to be wonderful for one thing or another, that just sit on the shelf to embarrass the hell out of me when I think of the money I spent on them. I just bought into some piece of advertising or promotion, and it seemed like a great idea at the time. Well, our customers don't have that problem. They'll take something out, and if it doesn't work for them, it comes back, and that's it.

In the end, it's a win-win. We put a bit more into the design, manufacture and assembly process, and we get fewer equipment rejects and fewer service problems over time. The rental environment allows you to make a better product available to the customer.


At around this point in the conversation - no kidding -- we heard the intercom announce that John was needed on the rental floor! We truly appreciate how generously he shared his time with us.

John Galt is currently the Senior Vice President of Advanced Digital Imaging at Panavision's corporate office. His responsibilities at Panavision include the development of digital imaging technologies in support of Panavision's core motion picture and television production business.

Galt was project leader of the group that, with Panavision's technology partner Sony, developed the "Genesis" digital cinematography camera. Prior to Genesis, Galt was also responsible for the "Panavized" version of the Sony HDW-F900 first used by George Lucas on Star Wars: Episode II.

He was previously employed as Vice President, High Definition Technology Development for Sony Pictures High Definition Center. His main responsibilities were the integration of electronic and film imaging systems. This included film preservation, High Definition film transfer systems and electronic cinema. Galt was project leader of the group that designed and built the first High Definition Telecine in North America.

Prior to joining Sony in 1988, Galt was president of Northernlight & Picture Corporation in Toronto, Canada. Northernlight & Picture was the co-producer, along with the Canadian Broadcasting Corporation, of "Chasing Rainbows," a fourteen-hour drama series, the first to be produced and photographed using high definition video technology. John Galt was also Director of Photography on this project.

He holds numerous U.S., British, and Japanese patents in film and electronic imaging related areas.


Re: The Truth About 2K, 4K and The Future of Pixels
by J.d. Frey
Ok -- now that Panavision is using RED sensor and software technology for their cameras, can we delete this stupid article?
Re: The Truth About 2K, 4K and The Future of Pixels
by Barry Jenkinson
It's amazing that showroom salesmen even at this level still talk nonsense.
Re: The Truth About 2K, 4K and The Future of Pixels
by Charles Taylor
5 years later, the proof is in the pudding. Nobody uses the bizarre (and problematic) striped supersampled sensor pattern Galt used in the Genesis. Because it's a wasteful use of sensor real estate and the Bayer pattern is better.

"You are up-converting, and there's really no value to the up-conversion. There's no new information."

Well, no. You're interpolating. That's different from up-converting. He's right that up-converting is valueless, but that's not what's happening.

By his logic, a 1k image from a 3-chip camera and a 4k image from a Bayer would be equally good. And that's simply not the case. It's logically and empirically untrue.

"Because they really do nothing to improve image quality. They may improve sales volume. But they don't do anything to quality."

Quite simply, this is not true.

"Now if you use the same arithmetic that these people are claiming their 4K cameras are using, then Genesis would be 6K."

No, because the Genesis creates an HD image. Not a 6k image.

It is physically impossible for the Genesis to have more than 1920/2 lines of resolution horizontally. That's why it's an HD image.

The limit for a 6k image is 6000/2 lines horizontally. Big difference.

If we use the same arithmetic everybody uses, the Genesis is a 1.9k camera.
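The arithmetic in the comment above can be sketched in a few lines (values taken from the comment itself; this is an illustration of the Nyquist argument, not anyone's official spec):

```python
# An image raster W pixels wide can carry at most W/2 line pairs of horizontal
# detail (the Nyquist limit), no matter how many photosites fed it.

def max_line_pairs(raster_width_px: int) -> float:
    """Highest resolvable horizontal line-pair count for a given raster width."""
    return raster_width_px / 2

# Genesis outputs a 1920-wide HD raster; a true 6K raster would be 6000 wide.
print(max_line_pairs(1920))  # 960.0 line pairs -- the HD ceiling
print(max_line_pairs(6000))  # 3000.0 line pairs -- what a real 6K raster allows
```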

"The most important issue from our point of view, is that we want to have equal resolution, TRUE edge resolution in red, green and blue. The most important thing is not to have interpolated information. You want to know that the edge is REAL.

This is because our cameras are used for doing high-end image compositing. I'm not talking about 100 people sitting at work-stations rotoscoping images. I'm talking about being able to shoot a blue screen or a green screen and using software like Ultimatte Advantage and pull perfect linear mattes from smoke, fire, transparent objects or liquids -- things that can't be roto'd."

Working in VFX, I can say quite clearly that this is a problem only in his mind. You take a 4k or 5k Bayer camera and work with it at 2k or HD (what the Genesis shoots) and you will have a superior image for VFX.

"They don't have anything to do with each other. What defines the resolution, quite frankly, is the optics more than the sensor."

Completely false.

Pixels place an absolutely hard upper limit on resolution of an image. You can use 50 billion photosites, but if you create an HD image out of them, you have then destroyed all information at a frequency higher than 1920/2 lines horizontally.

Not only that, but the aberrations of a lens are not purely negative things. People actually often choose lenses that are less than ideal from a technical perspective because they look nicer or cooler, and capturing those aberrations at 1000 lines of resolution has value even if the lens is only delivering 500 lines.

"My wife has a Ricoh GX 100. It's a beautiful little camera with a 10 million photo-site sensor. But it's not nearly as nice a picture as my old 6 mega-pixel Canon D60."

Which has nothing to do with the megapixels, and everything to do with the fact that one of those sensors is probably ten times the size of the other.
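The commenter's point about sensor size can be checked with approximate published dimensions (the sensor sizes and photosite counts below are my assumptions for illustration, not figures from the article: Canon D60 is roughly APS-C, ~22.7 x 15.1 mm at 3072 x 2048; Ricoh GX100 uses roughly a 1/1.75" sensor, ~7.3 x 5.5 mm at 3648 x 2736):

```python
# Photosite pitch, not megapixel count, is what differs between these cameras.

def pitch_um(sensor_width_mm: float, photosites_across: int) -> float:
    """Approximate photosite pitch in microns."""
    return sensor_width_mm * 1000 / photosites_across

d60 = pitch_um(22.7, 3072)   # ~7.4 um per photosite (6 MP, big sensor)
gx100 = pitch_um(7.3, 3648)  # ~2.0 um per photosite (10 MP, tiny sensor)

# Each D60 photosite gathers roughly (7.4 / 2.0)^2 ~= 13x the light.
print(round(d60, 1), round(gx100, 1), round((d60 / gx100) ** 2, 1))
```

Which is consistent with the "probably ten times the size" estimate in the comment.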

Anyway, with the perspective of a few years, a lot of this article is clearly Galt shilling for the Genesis, a camera that didn't have much impact on the filmmaking landscape that used technology that has been discarded in favour of the very tech Galt disparages in this article.
@Charles Taylor
by Richard Herd
Slight disagreement. The Bayer pattern is prevalent for two reasons: 1. it's cheaper to make and sell one sensor than three; and 2. post-production computers have the hardware horsepower to debayer.
Re: The Truth About 2K, 4K and The Future of Pixels
by Bob Bryk
Okay, I'm not sure if this has been answered already, but here it goes.

I own a Chromebook Pixel, and it is the most amazing image I have ever seen generated in my life. Better than Mac, better than IMAX, better than most if not all print.

Now the specs are 2560 x 1700 resolution at 239 pixels per inch, with 400 nits of brightness, on a 12.85" display.

Is this screen considered 4K? Or does it break all labels?

Re: The Truth About 2K, 4K and The Future of Pixels
by Nathan Bainer
In regard to the slow motion section, why would a higher frame rate produce a sharper image? Shutter speed and frame rate are independent... or is there something else that I'm not aware of? Couldn't an equally sharp image be made using something like a 5D Mark III @ 1/8000th shutter speed?
@Nathan Bainer
by James Houston
The point was that presenting more images to the eye per second produces a sharper image (nothing to do with shutter speed). The eye doesn't have a shutter speed, so it can integrate detail present at high frame rates which creates a perceived sharpness that is better than the spatial resolution would suggest. It is a combination of averaging noise and the response/inhibition 'circuits' in the retina.
@James Houston
by Charles Taylor
Well, it has something to do with shutter speed.

Shutter speed and frame rate are not independent -- the longest possible exposure is one frame interval.

At 24fps, you can use a 1/24s shutter.

At 1000fps, your longest shutter is 1/1000s.

While you can use those short shutters at lower frame rates, you *must* use them at higher frame rates.
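The constraint described above is simple enough to state as a one-liner (a minimal sketch of the commenter's point, nothing more):

```python
# The longest exposure a camera can give each frame is the frame interval,
# i.e. 1 / fps. Higher frame rates force shorter exposures.

def longest_exposure_s(fps: float) -> float:
    """Maximum per-frame exposure time in seconds at a given frame rate."""
    return 1.0 / fps

print(longest_exposure_s(24))    # 1/24 s -- the ceiling at 24 fps
print(longest_exposure_s(1000))  # 0.001 s -- forced at 1000 fps
```

A 180-degree shutter, the film norm, halves those times again, but the ceiling itself is set by the frame rate.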

Director of Photography
Re: The Truth About 2K, 4K and The Future of Pixels
by Kevin Stebleton
Exceed 24 FPS? Did I read that correctly?

When Sony began touting the CineAlta 20 years ago, they described the interaction with Hollywood:

Sony: How many frames should we design this to shoot? We can do 60, 75, maybe 100
Hollywood: 24
Sony: 24 is subject to judder. We can fix that now.
Hollywood: No thanks, we like 24
Sony: Ummmmm (drawing board, think think)

There is something special about HOW a story is presented, and 24 helps with a dramatic story. Somehow football at 24 is terrible, and movies at 60 are just as bad, at least on displays I've been around.

I'm skeptical, to say the least, that a departure from 24 will be welcomed.
@Kevin Stebleton
by Tim Kolb
Keep in mind that the proposals out there are primarily for acquisition at this point...not necessarily for exhibition.

Stereo (3D) gets a little dicey when you have copious motion blur that appears to move on two uncorrelated axes.

Director, Consultant
Kolb Productions,

Adobe Certified Instructor
Re: The Truth About 2K, 4K and The Future of Pixels
by J.d. Frey
I know this is an old article- but it is number one on google and I think there is deliberate mis-information.

I agree photosites are not pixels. I agree it takes multiple photosites to make pixels. However, anyone that has ever worked on image processing would think it was ludicrous to conclude that the number of pixels created is 1/4 of the number of photosites.

There are very sophisticated debayering algorithms out there that can create amazing visual results. It is not a trick, or a gimmick. With all due respect to Mr. Galt, his dividing-by-4 image processing premise is wrong, or at the very least outdated by 20 years. If his premise were true, then every camera released since the writing of this article would be of significantly different design.

Resolution is not aliasing -- that is the real truth about 2K, 4K and now 5K.
Re: The Truth About 2K, 4K and The Future of Pixels
by Alexander Doak
Incredible article. Just awesome. One of my all time favorites.

Do note that what may be shown here as examples of HDR images may not actually be High Dynamic Range due to the limitations of the display device you are using. Unless you recently dropped $50,000 on a monitor that could quite literally blind you. These new cameras won't be able to really shine until the next generation of displays come out.

That's another thing to consider: the human eye has a contrast ratio limitation of around 10,000 to 1 at any given instant, so these new displays are going to be outperforming the human eye in some ways. Though, if given enough time, the eye can adapt to perceive a much higher contrast ratio, so High Dynamic Range of 1,000,000 to 1 in displays may still be useful.
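Those contrast ratios are easier to compare once converted into photographic stops (each stop doubles the light, so stops = log2 of the ratio); a quick sketch using the figures from the comment:

```python
import math

def stops(contrast_ratio: float) -> float:
    """Convert a contrast ratio to photographic stops."""
    return math.log2(contrast_ratio)

print(round(stops(10_000), 1))     # ~13.3 stops: the eye's cited static range
print(round(stops(1_000_000), 1))  # ~19.9 stops: the hypothetical HDR display
```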

It sure would be fun to be a filmmaker with this new tech. For example, the audience could be intentionally 'blinded' (or at least have their night vision ruined for a few minutes) by the movie for emotional effect. You'd have to turn away from the screen because of the eye strain... talk about immersion. Then, the producer could slip in some secret stuff in the dark scenes immediately following the blinding for those who watch the film a second and third time and know to avert their eyes BEFORE the blinding light appears... It could add a lot of substantial depth to the film if used properly.

Fun stuff!
@Alexander Doak
by Bob Wilson
There are plenty of benefits to HDR, whether in still or motion pictures, even with conventional displays and regardless of the human eye's limitations (10,000:1 or whatever that may be). The point is that you can display shadow detail that would not otherwise be exposed at all (i.e. it would appear black) and highlights that would normally be blown out (i.e. appearing totally white) with stunning detail, including all the mid ranges, all in one frame. That's the point of HDR. Sure, the more the better, up to the viewer's ability to perceive it (arguably), but even with a poor display device you can still see the benefits of HDR.
Those Pesky Microscopic Silver Halide Grains
by Larry Degala
Todd-AO 35mm/Super Panavision 35mm has a projection field of approximately 21mm across by 18mm high.

The average grain size for silver halide was 100 nanometers. Ultra fine grain has a size less than 50 nanometers.

Doing the math, the width across is 21,285,200 nanometers.

The height is 17,780,000 nanometers.

If we use 50 nanometers (the ultra-fine grain size), we get 425,704 grains across by 355,600 grains up and down.

Multiply the two numbers and you have 151,380,342,400.

[Repeat after me] 151.4 billion grains!
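The arithmetic above can be re-run directly (same figures as the comment):

```python
# Grain count over the projection field, using the comment's own numbers.
frame_w_nm = 21_285_200  # ~21 mm across, in nanometers
frame_h_nm = 17_780_000  # ~18 mm high, in nanometers
grain_nm = 50            # ultra-fine grain size used in the comment

grains = (frame_w_nm // grain_nm) * (frame_h_nm // grain_nm)
print(grains)  # 151380342400 -- about 151.4 billion grains
```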

I was a judge in an international film festival. I saw many HD productions from domestic filmmakers. Then I saw a New Zealand indie bring in a Cinemascope product; the cinematographer was a crew member of Lord of the Rings.

Now, everyone could use their imagination to visualize the audience response...
by Mike Cohen
I think the moral of the story is - the audience needs to have a good experience. If you point an IMAX camera at a pile of rusty nails, it's still a pile of rusty nails. Likewise, if David Fincher shot Benjamin Button on HDCAM vs RED vs Genesis vs Viper vs Super 16, the audience would possibly never know the difference.
Superman Returns was shot on the Genesis - it looked good but that's about it.
The Truth About 2K, 4K and The Future of Pixels
by Eric Fitzgerald
Interesting reading. One exception to Mr. Galt's general thesis on 2K and 4K I've found is type. Nothing is as resolution dependent as type -- especially small type in end title roll-ups. Very little screen real estate defines each character. I've come across type issues that can only be satisfactorily resolved by going to 4K. Very slight italics really challenge 2K on the vertical strokes, and stair-stepping can be seen from the center of a large theater even with really soft anti-aliasing.

Eric Fitzgerald
Hollywood Title
Silver Grains
by Nathaniel Johnston
He never told us how many grains of silver halide are on a single frame of film (obviously frame size and ASA rating will affect the number here). Now I need to know. Anybody out there who knows care to share the answer with us?
The Truth About 2K, 4K and The Future of Pixels
by Tim Kolb
I hope I didn't imply he was criticizing Bayer sensors in general.

I think the point is that using the term '2K camera' and '4K camera' to compare two cameras like the Genesis and the RED really isn't a meaningful comparison in itself.

As he said, the real world of cameras is a lot messier than that.

(I should probably note that I don't consider myself anti-bayer sensor or anti-any specific camera, but I do have a professional acquaintance with John, and I just plain envy the intellectual grasp he has on imaging in general, as well as his ability to still get someone like me to understand this stuff...)

The Truth About 2K, 4K and The Future of Pixels
by Tim Wilson
John's criticism wasn't of Bayer in general -- he said it's a brilliant idea that can lead to great-looking images. His point is that upconversion has inherent limits. Anybody who has ever worked with any kind of image knows that this is true.

He's also right that companies who build their imaging on upconversion tend not to mention it.

Great point about Arri's approach in this context, Tim. If you have enough photosites on a Bayer sensor to have at least one each of R, G and B to map to a single RGB output pixel, then you haven't created pixels out of thin air through upconversion. While it's a different sensor and a different approach, the folks at Arri and Panavision clearly agree that oversampling is the way to go.
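The oversampling idea described above can be sketched as a toy demosaic (this is an illustrative simplification, not Arri's or Panavision's actual pipeline; the R,G/G,B block layout is an assumption):

```python
# If every output pixel has a full 2x2 Bayer block behind it, each channel is
# measured rather than interpolated: R and B come from their own photosites,
# and the two G photosites are simply averaged.

def bayer_block_to_rgb(mosaic, y, x):
    """Collapse the 2x2 block whose top-left photosite is (y, x) into one RGB pixel."""
    r = mosaic[y][x]
    g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average the two green sites
    b = mosaic[y + 1][x + 1]
    return (r, g, b)

# One 2x2 block -> one pixel; no neighbouring blocks borrowed, nothing invented.
print(bayer_block_to_rgb([[100, 80], [60, 40]], 0, 0))  # (100, 70.0, 40)
```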
The Truth About 2K, 4K and The Future of Pixels
by Tim Wilson
Thanks for the Showscan notes, David! I added a link to the company info page in the article. Along with Douglas Trumbull's involvement, they note that Showscan was being used for motion simulation rides in 70 venues worldwide. The page is from 2004, so there you go.

While I wait for your article on Showscan for the Cow library :-) here's a GREAT story from cinematographer Chuck Barbee, who worked with Trumbull and his then-protege John Dykstra back in the day. He talks about John's early experiments in Showscan at 35mm, and also about the challenges that he (Chuck) had with actually *shooting* Showscan. Wonderful reading.
The Truth About 2K, 4K and The Future of Pixels
by Tim Kolb
I know that when some see "RED" mentioned with anything other than complete admiration, they may think it's sour grapes from anyone else in the industry with a different idea...

However, keep in mind that the Genesis does indeed generate a sensor-borne value for each pixel in each color channel... but then it's only sold as an HD camera. The Genesis is designed to make images 1920x1080, but the sensor is 5760 photosites wide...

Keep in mind that Bayer sensors can be deployed in different ways too. If Bill Lovell from ARRI was being straight with me in Chicago (and I have no reason to believe that he wasn't), the ARRI D20/21 uses its nearly-3K-wide Bayer sensor to create an HD image in most cases. So Bayer technology can be used in a lot of ways.

I think that many would love to see the 2K image a RED could make if it actually used the entire sensor's image area and all the photosites to create it. If you could use the entire sensor area, you'd have 100% of green and 50% of a complete raster for red and blue. When a RED user uses the 2K files they've shot in post, I assume that it's simply a partial decode of a wavelet in the compression, which is not the same thing.

There isn't anything wrong with Bayer sensors. I think the idea is to make sure that the comparison between an HD Genesis, at whatever it would cost, and a 4K RED ONE at 17,500 USD or whatever it is, actually gets a little more detail into the story than is printed in all the brochures.

Panavision's videos on Modulation Transfer Function are excellent BTW...check them out if you want to understand imaging and the sharpness we make and the sharpness we see.
The Truth About 2K, 4K and The Future of Pixels
by David Hunter
I saw Showscan in '83 or '84 in Dallas at a Showscan theater at a Chuck E. Cheese pizza parlor, if you can believe that. Douglas Trumbull was involved in the project. The maybe 15-minute demo was the most amazing movie experience of my life and remains so today. NOTHING has ever been created since to match the surreal lifelikeness of that technology. It, unfortunately, needed a lot of light to film, and the camera, they said, created more than the normal racket as the huge 70mm frames rocketed by the shutter every second. But the effect when projected with a Showscan projector was hyper-real, so real that it was confusing. Nothing we ever saw on a big screen in our lives prepared the brain to see such detail IN MOTION. A roller coaster ride was pristine in motion, no blur when passing static objects. And the dynamic range was almost baffling -- I would like to elaborate on how they used the dynamic range to represent a dark room, but this post is too long already. Showscan is still the premiere presentation technology. The sound was awesome, too.
The Truth About 2K, 4K and The Future of Pixels
by Tim Wilson
Alex, thanks for catching typos. I was working between multiple versions as I revised, and I obviously got lost. I found a few others. :-)

Charles, please note that the Bayer sensor "was a brilliant idea. And it works very well in still cameras." Search on the word "Bryce" on this page (Bayer's first name), and it will take you right to that sentence.

He also says that there can be problems with *any* kind of sensor, which is why he says that what separates image quality is... quality, which has to be evaluated as an organic whole, starting with optics, and not by looking at any one individual factor such as pixels or imager size. This is in the section on pixels and resolution, where he concludes, "It all has to work together."

re: HDR images. These are the 2 I could get permission to use, and it happens that they're hyper-detailed - grains of sand, bricks and mortar. Not at all flat. As I noted in my earlier comment, though, I'm new to HDR. If you can point me to some more representative images that I have permission to post, please do! Better still, please let me know if you're up for doing an article on the subject. :-) Drop a line to tim at
The Truth About 2K, 4K and The Future of Pixels
by Rob Mitchell
Enjoyed the article a lot (I'm just an amateur, so no arguments from me). However, it looks like the proofreader must have stopped reading just before the end of the article, as there are a couple of typos:

"Whereas, I'm an amateur wood worker. You wouldn't believe the number of tool (TOOLS) that I have bought over the years that I though (THOUGHT) is just gonna be wonderful ..."
The Truth About 2K, 4K and The Future of Pixels
by Tim Wilson
The only reference he makes to the image quality of a Bayer sensor is at the very beginning, where he says that interpolation can be done "extremely well." Anybody here disagree with that? His point is just that, unless you have 4000 photosites each for R, G and B, the only way to get 4000 output RGB pixels is to upconvert. The math is the math. The rest is marketing.

He also steps very carefully through the math to show why, using the same math, Genesis is a 6K camera, and why they refuse to market it that way. Nice to see somebody practicing the marketing clarity they preach.

I really enjoyed his observations on this, but I personally love his points about the future of pixels even more. The shutter speed/fps parallel really struck me -- of COURSE you get clearer still images of moving objects with higher shutter speeds! It also explains why PhantomHD images look higher resolution than they are, and why the motion blur that creates the illusion of reality at lower resolutions detracts from accuracy at higher ones.

(Although if you go to the Vision Research site, you'll see a few examples of motion blur at many thousands of frames per second. If you're moving that fast, you really ARE blurry. But you also know from what you've seen yourself what John is talking about.)

I especially love his description of Panavision's HDR approach. I'll be honest, I had a hard time following this while he was talking about it, but once I saw the images, I got it immediately. No matter how many of 'em you've got, using one set of photosites to capture a shot's entire dynamic range inevitably requires compromises. But having six sets of photosites, each focused in a different part of the range, explodes the possibilities. He's talking about the in-camera possibilities in this interview, but I suspect it won't be long before somebody figures out how to work with the individually "bracketed exposures" in post too.

The written version here only barely reflects the energy and humor in John's speaking style. I added a few [laughter] notes here and there, and I did what I could to keep his sentences intact, but the man doesn't actually speak with all that much punctuation. :-) A wonderful storyteller, and truly enjoyable man to spend time with.

FWIW, some of the things we talked about that I didn't find a way to fit in include: the restoration of "Lawrence of Arabia," the physics of light, the relationship between resolution and depth of field, and home theater. (What can I say? I wanted to know how a guy with access to the world's greatest optical and display technologies watches TV.)

Some of it was more technical than I felt competent to edit by myself, but ALL of it was entertaining, engaging, and thought provoking. I'm glad you're enjoying this excerpt.

PS. I know it's kind of long, but I still suggest reading this article again when you get a chance. I obviously heard it the first time when John and I spoke, and read all of this many times when I was editing it, and am still finding new things.

Tim Wilson
Creative Cow Magazine

The Truth About 2K, 4K and The Future of Pixels
by Charles Taylor
Hmm, I can't say I was as thrilled by that article as some seem to be. I would say, in fact, that Galt is using the same "obfuscation" that others are using.
I believe that the interpolation in demosaicing a Bayer sensor is not useless, and 3-chip approaches, and RGB stripe approaches have their own problems. If Bayer pattern sensors are so terrible, it seems odd that all professional DSLRs use them.
Not trying to be a jerk, there was definitely some good discussion here, but also some marketing obfuscation.

Also, was I the only one who thought those were terrible HDRs? A real HDR image would be flat and low contrast, not poppy and contrasty. Some of that building was brighter than the sky... Not very believable.
The Truth About 2K, 4K and The Future of Pixels
by Easy Street
Great article! Definitely worth reading.
The Truth About 2K, 4K and The Future of Pixels
by Kit Laughlin
This is simply the best article I have read on this complex subject. The resolution war was won long ago; we need increased DR, and fat (big) pixels to catch light at lower levels.

I just wish the Foveon sensor could be used in movie image making; no low pass filters required, no de-mosaicing of any image, and each pixel records an actual RGB value. Very sharp, and realistic colours.

Thanks so much for interviewing John.
The Truth About 2K, 4K and The Future of Pixels
by Pat Appleson
As I was saying, not only is he 'Da Man'!!! I was so excited after reading his 'article' that I hit CR and my comment was blasted up online before I could finish. (For the newcomers, CR implies LF, but you can call it ENTER.) (grin) And I don't think the man is dumping on other companies. I've been in the biz for a while and I get so tired of hearing the "you've got to do it our way" company line I could barf. It doesn't matter that he works for a company that, like all of us, is trying to make a profit, so we can take that profit and invest some of it in R&D. What the man is saying is the emperor has no clothes. For so long, at trade shows, all we get is hype and vaporware. Mr. Galt demonstrates that at least someone is thinking about the physics that go into the machine. And the last time I looked, the physics were the one thing you couldn't change. You've got to live with them. His short paper in the COW was, to me, an affirmation of everything I've quietly believed about the 2K/4K numbers game.
Please Mr. Galt, keep up the good work.
I'm a fan.
Best regards
Pat Appleson
The Truth About 2K, 4K and The Future of Pixels
by Pat Appleson
Mr. Galt is 'DA MAN!!!!!
The Truth About 2K, 4K and The Future of Pixels
by glenn chan
I'm always skeptical when one manufacturer craps on competing products, and their technical theories/ideas coincidentally conclude that their engineering approach is the best...

One should be careful in disparaging other manufacturers' products. Often, presentations that do this have little technical merit as technical knowledge is twisted to make their products look good. In this case, the comparison of Bayer to 4:2:0 is fairly flawed as:

1- Most 3-chip systems have color artifacts too. Just look at the stills on CML... there tends to be color fringing on black & white test patterns. (Ditto for Bayer stripe.)
2- Chroma subsampling creates very different artifacts than color artifacts generated by the sensor+signal processing. e.g. chroma subsampling can create code values that call for (significantly) negative light... but no sensor generates artifacts like these.
