
A Primer on NLE Video Quality

A Creative COW Editorial Perspective



Ron Shook
Shoulder-High Eye Productions
Chicago, Illinois USA

© 2003, Ron Shook and Creativecow.net. All rights reserved.

Article Focus:
In this article, Ron Shook attempts in a relatively non-techy way to put some perspective on a range of quality issues inherent in today’s Non-Linear Editing technology. Perhaps as you shop for your next NLE or the services of an NLE/editor, you will have a better background for asking significant questions and separating the important stuff from the selling hype.




I see a number of misconceptions floating around the COW and other forum venues about the video quality issue. In the course of having my thoughts reviewed by those with far better understandings than mine, I've had a few misconceptions cleared up myself. In fact, I rather wondered whether I'd dived into water too deep. I do some of my own video "engineering," but I'm not an engineer by any stretch of the imagination, although I've been around this industry since Television equaled Broadcast Television with a smattering of expensive Corporate/Industrial TV. I've picked up a thing or two while moving from the first low-cost portable TV gear sent from Japan (EIAJ B&W) to today. I'm going to attempt, in a relatively non-techy way, to put some perspective on a range of quality issues inherent in today's Non-Linear Editing technology. Perhaps as you shop for your next NLE or the services of an NLE/editor, you will have a better background for asking significant questions and separating the important stuff from the selling hype.

Acknowledging the Experts

After drafting this article I submitted it for the perusal of three of the Internet video gurus who have been most influential in helping to evolve whatever understanding I have about the issues addressed here: Adam Wilt, Tim Duncan, and the COW's own Philip Hodgetts. I thank these gentlepersons for their insights and help and didn't want my thanks to be buried at the bottom of an exhaustive read. I also don't want my opinions, expressed here, to be heaped on their heads. My opinions are my own, presented by a dude whose TV aerial has been hanging in tatters for over a year and who watches the most God-awful signals, happily enjoying the content.

Adam Wilt is both a video engineer and a user. He is the DV25 apostles' guru, a frequent contributor to DV Magazine (keep an eye out for recent and upcoming articles by Adam concerning some of the issues discussed here), and an untiring WEB contributor despite his occasional claims to laziness. (g) Discover the breadth of Adam's interests and knowledge, and the services he offers, at www.adamwilt.com, including more about the technical side of the things we do than you ever thought you needed to know.

Tim Duncan is the most curious, skeptical, and driven to experiment of any video dude that I know. He has used and experimented with nearly every major NLE ever made. Tim can pick up and become reasonably expert on an NLE in a week, with both enthusiasm and sharp critique (g). It would take most of us six months to make the same progress. Tim has been a primary trade show demo artist on a number of NLEs of various stripes over the years. He runs a production and post house in Nashville, and like Adam, works the WEB with abandon. Visit Tim's website, www.zapdigital.com.

Philip Hodgetts hosts on the AE, FCP, Media100, Premiere, and Quicktime COW forums (this boy gets around (g)) and has been adding his superb skill to our Industry WEB knowledge for as long as just about anyone. Philip's revolutionary focus in recent years has been in the area of context-sensitive training and help that plugs right into the tools that many of us use. The list of these "Companion" and "Intelligent Assistant" products continues to prosper and grow because users swear by them. Find out more about them at Philip's WEBsite, www.intelligentassistance.com.

Where’s my Budget?

Assuming a limited budget on your next production (and I have yet to kiss a limitless one (g)), two broad considerations are always at war: what you'd like to do and what you can afford to do. It pays to know what the technology/cost trade-offs are along the way in the whole production, post-production, distribution chain. This discussion will attempt to analyze a range of factors that can influence the final technical quality of your production by focusing on the post-production part of the chain, which is rapidly becoming almost universally NLE. We'll occasionally look back at the production technicalities and forward to distribution technicalities, but it is in the post-production environment where most of the signal manipulation takes place. I'll mention specific NLE hardware and software from time to time as examples, and throw debatable opinions to the audience. But this is not remotely an NLE shoot-out; it is a discussion of technical parameters that can affect quality in the chain, whatever the NLE. As a Discreet edit* editor, using a product no longer in development but still plenty potent, I don't have many axes to grind or an NLE product to defend. In fact, it is my growing opinion that most of the surviving professional NLEs in the low to medium range, from the "none too lowly anymore" Adobe Premiere to the Avid Symphony, have basically the same abilities, and most differences have little to do with feature sets.

The Cost vs. Quality Question

This article had its impetus in a question on a forum by an editor who had produced a documentary for broadcast, one he was really proud of. The Doco had been shot on a higher-end DVcam camcorder (4:1:1), captured analog component into a 4:2:2 MJPEG native NLE, and edited. His question: was it worth his time and expense, in terms of quality, to record the edit master tape to a higher-quality format than DVcam? This led naturally to a discussion of what happens when various video formats leave the camera for the post adventure, and where and how they can go when the cut is in the hard drive can. The answers aren't simple or always straightforward. BTW, I won't go into differences between NTSC and PAL video here. Although there are technical differences, they result in few practical differences in the context of this discussion, and besides, I'd probably get too many things wrong. (g)

Let's construct a mythical user, "Pat." Let's assume that Pat has been told that, since her production was shot DVcam, there is no advantage to mastering the edited version to a "higher" format. This would in fact be essentially the case had Pat edited her production on a 4:1:1 DV25 native NLE, but since she edited it on a 4:2:2 MJPEG native NLE, other factors come into play. There's no way that she can output her cut back to DV25 from her NLE and think that she will have the same quality as on one of the DV50 formats or DigiBeta.

It is likely that any unprocessed footage in Pat's cut won't take more than a marginal quality hit, whatever digital format she decides to output to. In fact, her DV25 footage would actually fare slightly better in a DV native NLE, but the compositing, FX, and graphics would be degraded in comparison to her 4:2:2 system. Those elements would likewise be degraded if output back to DVcam from her 4:2:2 NLE. It might still be "good enough," but it's not as good as her system is capable of. Since Pat's I/O is analog component, there is, in addition, a whole lot of transcoding going on (2 passes in, 2 passes out), which adds quality hits whatever she outputs to, and probably calls for outputting to the best format available.

DV25 vs. Other Digital Production Formats & BetaSP

The "why" of this is contained in the color resolution compromises that make the DV25 format possible. The following is only a quasi-technical discussion, because I have only a quasi-technical brain and level of understanding. (g) Standard definition digital video like D9, DV50 formats and DigiBeta is YUV 4:2:2 component resolution with the first number representing the luminance (B&W) component and the other two numbers, chrominance (color) components. Chroma resolution is 1/2 of luma resolution. BetaSP, like these formats, although not digital, uses approximately _ of the amount of the Luma bandwidth component for each of its two chroma bandwidth components. This level of what is essentially compression and bandwidth reduction in the chroma, whether analog or digital, is made possible by the fact that the human eye is more attuned to the detail of luma than chroma and thus the brain gets its sense of the sharpness of an image more from the luma than chroma.

DV25 (DV, DVcam, DVCPro25), the 4:1:1 formats, take this compression of chroma even further, to 1/4 the resolution of the luma. This still works very well for recording the "real world," which is composed mostly of relatively low-saturation colors, so that the lack of color resolution goes mostly unnoticed, i.e., we don't notice the bleed. DV25 works so well in documenting the "real world" that I would not hesitate to say it is superior to BetaSP for this usage. With the same high-quality camera head, DV25 will have significantly better resolution than BetaSP (better than 500 lines vs. less than 400 lines) and a significantly better signal-to-noise ratio (more than 5 dB better). The result is that the image will look sharper and be less grainy with DV25 vs. BetaSP.

The only major caveat is that BetaSP is considered better for recording colored-screen work for chroma keying because of its better spatial color resolution. At least this has been the accepted argument. I have it on good authority that this argument is somewhat specious. DV25 has, pound for pound, essentially the same color resolution as BetaSP, and the more costly digital formats have twice the color resolution. However, there are anomalies in going from the DV codec to RGB for chroma keying that make chroma keying with DV25 more problematic, resulting in aliased (steppy) keying. It turns out that chroma keying can be accomplished with DV25 sources to much the same success level as BetaSP. The trick is to slightly blur the RGB channel that corresponds to the color used for keying, usually blue or green. Any software compositing program worth its salt can do this. It's not that you can't record on DV25 for chroma keying and do a good job, but you'll have to futz more in software to get it to work as well as with BetaSP-originated footage. This obviously will take some, perhaps considerably more, time. Other than this, DV25 is a very good acquisition format.
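
For the curious, here's a rough sketch of that blur-the-key-channel trick in Python (assuming numpy and scipy as stand-ins for whatever blur your compositor offers; the frame and the sigma value are hypothetical):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def preblur_key_channel(rgb, channel=1, sigma=1.0):
        # Slightly blur one channel (0=R, 1=G, 2=B) before pulling the key.
        out = rgb.astype(np.float32)
        out[..., channel] = gaussian_filter(out[..., channel], sigma=sigma)
        return np.clip(out, 0, 255).astype(np.uint8)

    frame = np.random.randint(0, 256, (486, 720, 3), dtype=np.uint8)  # stand-in frame
    softened = preblur_key_channel(frame, channel=1, sigma=1.0)
    # Pull the chroma key from `softened`: the blur smooths the 4:1:1
    # chroma stair-steps so the matte edge aliases far less.

A one-pixel-or-so blur on the key channel is usually enough; you're smoothing the chroma stair-steps, not softening the whole picture.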

DV25 vs. Higher Quality (and Higher Cost) Digital Post-Production Formats

But... DV25 as a native editing format starts to fall down once you move away from basic cutting to compositing, FX, and graphics, where the lack of spatial color resolution begins to have cumulative adverse banding and aliasing effects. The better (hardware-assisted) DV native systems like the Canopus Storm and Matrox RTX100 help to minimize these adverse effects by performing all video processing in the 4:2:2 realm with special low-cost hardware. This does help, but they still have to output to DV via FireWire, or to analog usually using a Y/C connection, which degrades the image.

So there are reasons for the heavier hardware of 4:2:2 native editing systems: they have more resolution, and are thus smoother, in the color video components, and they can take I/O from and deliver it to higher-quality formats while retaining most of that quality. It seems like there are 11 ways from Sunday in terms of technologies and technical philosophies as to how to build and use these heavier systems, including but not limited to compression formats, YUV or RGB native, lossless or uncompressed, and now 10-, 12-, or 16-bit processing instead of the old standby 8-bit. They all have their strengths and weaknesses, but they are all better than DV25 native.

The previous statement may be too broad or imply too much. We have all seen in the DV25 world how drastic the difference in quality between different versions of software DV codecs can be. That variation in quality exists in other digital codecs as well, and just because a codec is 4:2:2 doesn't automatically mean that it functions as well as it should in comparison to the possibilities and to competing 4:2:2 codecs. Engineering standards and quality evolve at every manufacturer. Without mentioning any names, it is possible, perhaps even probable, that there are some older-generation 4:2:2 hardware choices that take a quality back seat to the best of the current 4:1:1 crop of hardware choices, simply due to advanced engineering.

Compression Codecs

Not all compression codecs used in professional NLEs are equal. Most NLEs, except for some more recent engineering technologies with new compressed and uncompressed codecs, have used, and some continue to use, proprietary hardware MJPEG compression. These codecs aren't as efficient as the more recent MPEG2 I-frame codecs, i.e., they use more disk space for comparable quality. But, on the other side of the coin, MJPEG compression is highly scaleable and readily appropriate for off-line/on-line post-production, i.e., you can capture all of your sources at crappy quality for editorial and re-capture at superlative quality only what you need for the finished product.
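
Some back-of-envelope arithmetic shows why that scaleability matters for disk space. The data rates below are illustrative round numbers in the neighborhood my own system uses, not any particular NLE's presets:

    def gb_per_hour(kb_per_frame, fps=29.97):
        # kilobytes per frame -> gigabytes per hour of NTSC footage
        return kb_per_frame * fps * 3600 / 1_000_000

    for label, kpf in [("off-line ~15-1", 40), ("on-line ~2-1", 300), ("near 1.5-1", 470)]:
        print(f"{label}: {gb_per_hour(kpf):.1f} GB/hour")
    # off-line ~15-1: 4.3 GB/hour
    # on-line ~2-1: 32.4 GB/hour
    # near 1.5-1: 50.7 GB/hour

At off-line rates you can keep ten times the source material on the same drives, then re-capture only the keepers at full rate for the on-line.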

DV25 native NLE systems aren't scaleable at all, although it is apparently possible to scale within the DV family (DV100><DV50><DV25). I'm not aware of anyone actually doing this in an NLE yet, except for allowing DV25 capture of DV50 sources (I'm not sure why anyone would do this). FCP, for instance, has the capable workaround of rendering out proprietary low-resolution proxy files from source captures that can be used for software-only off-line editorial, even on a laptop. FCP in its most recent version can even create these proxies on the fly during capture. edit* 6.5 started down this road before it was tossed in the end-of-life file, and Incite Editor 3.5, about to be released, will have this capability. The DV codecs (DV50 uses two DV25 codec chips to cut the amount of compression and deliver 4:2:2 video) are relatively recent, high-efficiency, non-proprietary codecs, and low cost because everyone uses the same standardized compression and hardware chips. This means that although these compression codecs are not readily scaleable, they are inexpensive, and you are liable to be able to afford more disk space anyhow.

YUV vs. RGB

White papers have been written extolling the benefits of both YUV native and RGB native NLE hardware. YUV is native to the television standard and the formats we use to record TV reality, and RGB is native to computers and thus to the graphics and animations we create to enliven and elucidate that reality. Thus there's a trade-off going on in the nativeness of any particular NLE, although all systems have to output YUV if they are going to any TV tape format.

I'm tempted to say that this difference in "nativeness" between various NLEs is probably not anything to worry about, but it's probably time for another disclaimer. There is always going to be some quality loss in the conversion from one color space to another in any NLE system, and, like codecs, color space conversion algorithms in NLEs are not always equal. In addition, one person's negligible loss is another person's unacceptable loss. I could easily present the opinion that those doing long-form, relatively little-effected documentaries would be better off with a YUV native NLE, while those doing short-form, heavily effected and graphics-intensive work should stick with RGB native NLEs. I'd be a little uneasy about making such a statement, as it's beyond my direct knowledge. Folks with more knowledge than I have said that YUV is superior for a video editing system. On the other hand, Discreet has done quite well supplying $200k to $2 million advanced systems for both video and film editing that are all RGB native. The quality of the conversion electronics probably has a lot to do with it.
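
For a feel for where color space conversion loss comes from, here's a small Python sketch (assuming numpy, full-range BT.601 coefficients, and simple integer quantization of the YUV values; real NLEs use studio-range math and their own conversion electronics):

    import numpy as np

    # Full-range BT.601 RGB -> YUV matrix (an assumption for illustration).
    M = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])

    rgb = np.random.randint(0, 256, (1000, 3)).astype(np.float64)
    yuv = np.round(rgb @ M.T)                  # quantize to integer YUV code values
    back = np.round(yuv @ np.linalg.inv(M).T)  # convert back and re-quantize
    print(np.abs(back - rgb).max())            # small but nonzero error per trip

The matrix math itself is exact; it's the re-quantizing to integer code values on every trip through that nibbles away, which is why the quality of the conversion implementation matters.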

Uncompressed vs. Lossless vs. Compressed

Much has been touted in the NLE world about uncompressed capabilities, as well as in the Matrox and Media100 camps about mathematically lossless compression. Is mathematically lossless compression truly lossless? Probably. Is it truly lossless as implemented in any particular system? I don't know, but if it isn't, it's probably close enough. I think the same can be said, to a slightly lesser degree, for the least downscaled compression formats. If you use the least compression in any scaleable compression-based NLE, it's unlikely to give you a product that can be distinguished from uncompressed. Most MPEG2-based NLEs set the least compression to about 2-1, while MJPEG-based NLEs can often go to near 1.5-1. Neither is liable to show visible artifacting compared to uncompressed, even on the most complex composite done in one extra generation. To further muddy the waters, uncompressed codecs by different manufacturers aren't equal in quality either. There's a lot of theory that goes up in smoke in the real world.
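
To put those ratios in perspective, a quick worked example against an uncompressed 8-bit 4:2:2 SD frame (frame-size arithmetic only, no codec involved):

    # Uncompressed 8-bit 4:2:2 SD: 720 x 486 pixels at 2 bytes per pixel.
    uncompressed_kb = 720 * 486 * 2 / 1024      # ~683 KB per frame
    for ratio in (2.0, 1.5):
        print(f"{ratio}-1 -> about {uncompressed_kb / ratio:.0f} KB/frame")
    # 2.0-1 -> about 342 KB/frame
    # 1.5-1 -> about 456 KB/frame

At those data rates a good codec has very little it needs to throw away.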

BTW, this is outside the scope of this discussion, but for comparison purposes, the high-quality mega-heavy iron in this industry, like the Discreet Fires and Infernos, uses 12-bit, RGB (4:4:4) native uncompressed hardware that is resolution-independent. Lots of pixels to push around in a hurry at film resolutions. There are systems out there that push quality even higher.

How Many Bits?

Speaking of bits, a more recent tech/hardware digital NLE distinction is between the historical 8-bit and more recent 10- or 16-bit processing. This refers to the bit depth, the number of levels of variation in the color and gray scale of the image. In an 8-bit component system each of the 3 components can represent 256 levels. 256 X 256 X 256 = the 16.7 million colors that you've heard about as the capability of a system. This is a computer system, or an NLE system that is native RGB, 4:4:4. An NTSC, YUV, 4:2:2, 8-bit system, minus the space of NTSC setup (pedestal), has 240 X 120 X 120 = 3.5 million potential colors. You might think, "Good Heavens, even 3.5 million colors ought to handle any of my needs," but there are instances where this isn't true. These instances involve subtle gradations of the same or similar colors or grayscale, and the resulting artifacting is referred to as banding. You've probably encountered it. It most often occurs in graphics when you've tried to create one of these subtle gradations and it isn't smooth but looks stepped. It seldom occurs, or is masked by detail, in the natural world when you are shooting, although there can be instances of it in recordings of sky and water. Signal processing in the camera that minimizes this problem has improved with the latest generations of production gear. It is an inherent problem with 8-bit, and the solution involves some form of conscious signal degradation that I don't really understand.
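
Banding is easy to demonstrate. Here's a minimal Python sketch (assuming numpy) of a gentle ten-code-value ramp being stored at 8 bits:

    import numpy as np

    smooth = np.linspace(100.0, 110.0, 720)     # ideal, continuous ramp
    banded = np.round(smooth).astype(np.uint8)  # what 8-bit storage keeps
    print(len(np.unique(banded)))               # 11 distinct levels across 720 px
    # Each level becomes a flat stripe roughly 65 pixels wide -- a visible band.

A 10-bit pipeline would carry four times as many levels across the same ramp, pushing the steps below what the eye can pick out.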

If a banding problem is there in your 8-bit source material, and all common digital production formats are recorded to tape in 8-bit component regardless of the bit depth used for processing in the camera head or deck levels of the camcorder, it could very well be compounded in your 8-bit component NLE if that material is used for further compositing or FX. It could be compounded to visibility even if it isn't visible in the source footage. If the NLE processes in 10 bits or more (10-bit processing has 64 times the number of possible colors that 8-bit has), this compounding has little chance of taking place, and the integrity (quality) of your original source footage is maintained. I personally suspect that this bit-distinction thingy could be more important to the overall quality of our final NLE projects than the distinction between very high data rate compression and uncompressed. But we don't hear about this distinction very much, because most of the major hardware purveyors are still delivering 8-bit NLE hardware.
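
Here's a small, admittedly contrived, Python sketch of that compounding (assuming numpy; the gain/offset passes are hypothetical stand-ins for real compositing operations):

    import numpy as np

    ramp = np.linspace(100.0, 110.0, 720)
    x8, hi = ramp.copy(), ramp.copy()
    for _ in range(10):                           # ten mild gain/offset passes
        x8 = np.round(np.round(x8) * 1.02 - 1.0)  # re-quantize after every pass
        hi = hi * 1.02 - 1.0                      # keep full precision throughout
    print(np.abs(x8 - np.round(hi)).max())        # accumulated error, in code values

The point isn't the exact numbers; it's that re-quantizing to 8 bits after every operation lets the rounding errors stack, while higher-precision intermediates quantized once at the end do not.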

You might wonder, "If my post chain starts at 8-bit on the tape and ends with an 8-bit edit master tape, what's the big deal?" Well, in the first place, it's not a huge deal, and in many cases we will see very little practical difference. But we are far more likely to have no difference whatsoever if the middle of the chain is processed at a higher bit depth. This fact points to some rules that I consider to be truisms. Once you take quality away at any point in the post-production chain, you'll never get it back. Stacking quality compromises on top of quality compromises can result in even more than simple additive quality loss. And the best defense against losing quality is a bit of what on the surface might seem like overkill.

For example, if you have captured at about 3-1 compression into your NLE and are going to use a segment of footage in a software compositing program, don't think that you can maintain the quality of the footage if you use a compression codec at 3-1 in your compositing program. Exporting to the compositing program uncompressed is the only way to maintain that quality. Another example: NLE manufacturers have given us charts of video quality on scaleable compression systems that purport to indicate the subjective quality at certain levels of compression, i.e., different data rates are equated with VHS, S-VHS, BetaSP, DigiBeta, and so forth. Many users have misinterpreted these charts. My Targa2000-based edit* system can compress usable off-line footage at about 40 kilobytes/frame (15-1 compression), up to on-line footage at 470 kpf (1.5-1). If 100 kpf is identified as VHS quality and 350 kpf as DigiBeta quality on an NLE chart, does this mean that those are the capture settings I should use? It doesn't mean anything of the sort. Poor VHS material is considerably harder for a compression codec to compress well than pristine DigiBeta footage, because it contains more noise and possibly glitches. I might be quite happy dumping a touch of quality on the DigiBeta footage at 250 kpf, and only retain what little quality there is in the VHS footage by digitizing at 400 kpf, if I had to use them together. Of course, for either footage, you push to the max if you want to lose as little as possible.
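
You can see the noise effect for yourself with any JPEG encoder, since MJPEG is essentially JPEG applied per frame. A sketch using Pillow (the ramp and the noise level are made up, but the direction of the result isn't):

    import io
    import numpy as np
    from PIL import Image

    def jpeg_size(arr, quality=75):
        # Encode a grayscale frame and report the compressed byte count.
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, "JPEG", quality=quality)
        return len(buf.getvalue())

    h, w = 486, 720
    clean = np.tile(np.linspace(60, 200, w).astype(np.uint8), (h, 1))  # smooth ramp
    grain = np.random.normal(0, 12, (h, w))                            # tape-style noise
    noisy = np.clip(clean + grain, 0, 255).astype(np.uint8)
    print(jpeg_size(clean), jpeg_size(noisy))   # the noisy frame needs far more bits

At the same quality setting the noisy frame comes out several times larger; cap both at the same data rate instead, and the noisy one takes the bigger quality hit.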

Where Does My Budget Go?

Back to Pat: if she sources on DV25 and feeds SDI into her 4:2:2 NLE, either lossless or uncompressed, the only transcoding from the original DV signal is uncompressing the DV signal and going digitally from 4:1:1 to 4:2:2 (this is accomplished in the deck), an almost completely benign transcode. From there she edits 4:2:2 and outputs her edit master via SDI, hopefully to one of the 4:2:2 digital formats. She has maintained a completely digital signal path with very little loss at all in her original digital source footage and very pristine graphics and FX. This is the ideal workflow for optimum quality from DV25 originals.

Our facilities often don't have all of the elements for this workflow, so we need to plan where to compromise within our budgetary constraints. In Pat's facility, no matter what she chooses for an output format, her best I/O right now is analog component, and her NLE has no uncompressed or lossless capability. Assuming she pushes her NLE to the upper levels of its compression data rates, there will be very little degradation from the original, and all graphics that can be accomplished real-time will be handled by her NLE uncompressed until composited and/or output.

While it would be best not to have to do this twice, doing the analog cycle through component I/O is, in my opinion, not an entirely bad thing from an aesthetic standpoint. You will lose 2 or 3 dB of signal-to-noise and some lines of resolution doing analog component I/O from your DV25 originals. There is some analog rounding going on here that can actually take a little of the digital edge off the source material, which many users find makes the image aesthetically more pleasing. It can minimize high-frequency "mosquitoes" and overly aggressive edge enhancement. Going this route on both input and output is perhaps too much of a "good" thing, if it can be avoided. In Pat's case, short of having SDI on both the NLE and all video decks, an expensive proposition, a relatively low-cost way to gain capability and increase the digital integrity of her post-production chain would be to install FireWire input on her system to input DV25, if possible. Then she would have the analog pass only on output to higher-quality formats. It would be better to have the analog pass on input, because then it wouldn't affect the graphics, but who knows, maybe a little rounding on the graphics isn't such a bad thing?

Don't Whine About Your Budget, Get the Most Out of It.

Let's back up to the beginning again. I started off early giving DV25 a hearty nod as a fine acquisition format, and have then proceeded to almost trash it as a post-production format. That isn't really the case, because in these tough times of squeezed time and budgets, the principle of "good enough" really makes a lot of sense. There are steps you can take and things you can avoid when editing DV25 native that make it hearty enough for a very high percentage of professional post-production. With ever faster processors, even without hardware assist, some versions of DV25 software-only editing can have nearly as good a workflow as far more expensive heavy-iron systems, even with the client "assisting" at the editor's shoulder. DV25 native editing may not be as good quality-wise as more costly alternatives, but it's still pretty darned good. You don't have to go very many years back in this industry to say that it's as good as anything that cost less than $100k then. That says a whole mouthful, since native DV professional editing can be had for as little as $10k or less for a complete system with lots of storage.

We can't look at quality within the NLE world without looking at quality in terms of the options for input and output (I/O). Whatever the source footage, digital or analog, inside the NLE it's all digital, and except in the case of DV25 native systems with DV25 source footage, there will have to be transcoding to deliver the source to the NLE's native file format for editing. And again, with the exception of the DV25 native chain, there will have to be transcoding to deliver the NLE's edited product to tape or server. Full analog, 3-wire component I/O is the least destructive to quality of any I/O when you are using analog tape formats with your NLE. Y/C I/O is a distant second. Again, with the exception of DV25, SDI I/O is the least destructive to quality when you are using digital tape formats with your NLE. If you use DV25 source footage and/or mastering with your 4:2:2 NLE, either FireWire or SDI I/O is equally effective: with FireWire the source is transcoded in the NLE, and with SDI the source is transcoded in the deck on input, and the opposite on output. However, if you do have the more expensive SDI I/O on your NLE, you can readily add higher-quality tape formats to the digital mix through deck rental, which at this stage of the game you can't do with FireWire (1394) I/O. (There will be some changes to that statement soon.)

Concatenation?

Most, though not all, NLEs that aren't exclusively uncompressed don't use the same compression scheme as the common tape formats they are processing in the post environment. The two most common compression schemes used in NLEs are the older MJPEG and the newer MPEG2 I-frame. During the ingest of the source into these NLEs, the digital signal must be transcoded to the compression codec native to the system. Quality hits can result from a phenomenon called concatenation. Concatenation happens when relatively unnoticeable artifacts from one compression scheme become more garishly noticeable when compounded by the artifacting of another, less compatible compression scheme. Concatenation artifacts can take different forms (aliasing, blockiness, banding, etc.) depending on the compression schemes on either side of the transcoding. The higher the data rates of any scaleable compression scheme, the less likely concatenation is to happen to a noticeable degree. Concatenation will not happen at all if the digital source is uncompressed or lossless, but since all common digital sources have some compression, this is a nonsensical statement (g); still, if the NLE is capable of working with these digital sources in the lossless or uncompressed realm, it is less likely to happen.
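
For a crude demonstration of concatenation, cascade two dissimilar lossy codecs and compare the damage to a single generation. This Pillow sketch uses JPEG and WebP purely as convenient stand-ins for MJPEG and MPEG2 (a random test frame and illustrative quality settings, not a real post chain):

    import io
    import numpy as np
    from PIL import Image

    def cycle(img, fmt, quality=85):
        # One compress/decompress generation through the given codec.
        buf = io.BytesIO()
        img.save(buf, fmt, quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    src = Image.fromarray(np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8))

    def err(img):
        # Mean absolute error against the original frame.
        return np.abs(np.asarray(img, float) - np.asarray(src, float)).mean()

    one_pass = cycle(src, "JPEG")       # a single codec generation
    cascade  = cycle(one_pass, "WEBP")  # a second, dissimilar codec
    print(err(one_pass), err(cascade))  # the cascade compounds the error

The second codec spends its bits faithfully reproducing the first codec's artifacts, and the errors compound rather than cancel.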

Concatenation most often rears its ugly head in relation to the highly compressed MPEG2 used for internal cable, satellite, and server distribution in the industry, or with the MPEG2 used for DVD production. This is why the Discovery channel and others have had their on-again/off-again demand for uncompressed editing of the edit master tapes that they purchase for distribution. There's a lot of programming on the Discovery channel and elsewhere that's been edited on NLEs with some compression, but if you are new to them, they are liable to demand uncompressed editing, even as they accept compressed programming from suppliers they know and trust not to push the compression too high. Most compressed NLE systems can deliver results that won't be overly mucked up by MPEG2 distribution, but it demands high 10-15 MB/sec data rates if the product is going up on the bird or into the server. Most newer-technology 4:2:2 NLEs that have scaleable compression as the primary file storage medium, or as a disk-saving alternative to fully uncompressed, use a form of MPEG2 I-frame compression, because this will transcode to MPEG2 IPB for distribution with less chance of concatenation than other compression schemes. Don't get me wrong, uncompressed capability is a very good thing, but compression at higher data rates holds up pretty well to the rigors of the post-production chain.

Concatenation may also be the source of another interesting anomaly in the emerging DVD production arena. Some users are reporting that feeding DVD recorders by Y/C connection rather than FireWire is giving better results, better not in terms of resolution or noise, but in terms of visible artifacting. It's likely that as this technology matures, the manufacturers will resolve issues in the transcode from DV to MPEG2 IPB that are most likely due in part to concatenation. This may not always be the case, but for now it pays for the user to make tests and choose the most pleasing input approach to DVD recording.

As an aside, it became apparent several years ago that MPEG2 would be the compression algorithm of choice for broadcast, satellite, and cable use for the time being. NLE pundits theorized that by now nearly all NLEs that use compression would be using MPEG2 I-frame compression, because it could be recompressed to MPEG2 IPB more easily, with less concatenation, than other forms of compression. A lot of this has certainly taken place, but it is also the case that the old, tried-and-true MJPEG compression, with its scalability to very high data rates and the lower cost of the disk space needed to store those rates, has done a pretty good job of holding its own.

What About Multiple Codecs in the Same Box?

A few NLEs built around specialized hardware with multiple hardware codecs, notably Pinnacle's Liquid (formerly FAST) Blue and, to a lesser degree, Pinnacle's Targa 3000 and Matrox's DigiSuite DTV, are able to mix native digital formats on the timeline. Some folks might take the implication that this will hold quality better than other approaches. This isn't necessarily the case. The transcoding in these systems for the mixdown and output happens on the fly rather than during capture into the system, i.e., there's the same amount of transcoding as with other systems. The beauty of this approach is that by having DV25 and, in the case of Blue and DTV, DV50 sources stay in their native formats in the NLE, disk space for storage is held to the absolute minimum. There is no quality loss from transcoding to another compression scheme, nor the grandly larger storage space required to transcode to uncompressed. Fibre Channel SAN storage is still relatively expensive, but it can be the only way to go for larger facilities to maximize their workflow and the varied talents of their personnel. The increased cost of these more hardware-intensive NLEs is more than made up for by the decreased cost of the SAN storage required to feed them. This is exactly where and why these systems are being installed, and it is primarily a workflow rather than a quality issue.

After discussing all of these potential quality hits, it begins to feel like it's a wonder that anything other than mush comes out of our NLEs. The fact is, the manufacturers have delivered NLEs of good to astounding quality, and it's up to us editors to have enough understanding of how they work to choose intelligently and, once we've chosen, not muck up the chain. I hope that this will help to evolve that understanding for some of you. In the writing and the review it has certainly helped my understanding. If I attempt to go any deeper in this realm, I will meet the Peter Principle face to face. (g) Ask your forum leaders if you care to get more specific information about any particular NLE as it applies to the quality elements discussed here. The COW is a unique resource for comparing and contrasting numerous choices with fellow users, as long as you approach the quest politely with a thirst for understanding instead of an offensive agenda.

Stay tuned for the second article in this series, “Part 2: The New NLE Battlegrounds,” coming in a few weeks.


### Ron Shook


