It's easy to see why people would think that a larger HD frame size is automatically better than a smaller one, or that the same image size from 2 cameras of the same format would have the same quality. This seems like a simple question, but once you get all the qualifiers and exceptions in there, you see why content producers are going out of their minds trying to understand all these technical nuances and compromises.
First, we have to specify that the sort of image that is stored on media is a separate issue from what the standards say needs to be played back and displayed.
The problem with HD is that it creates huge pools of data. The problem with creating, and then handling, huge pools of data is that it is not cheap. But "cheap" is a very popular feature, so some compromises need to be made.
Start by looking at SD. DV and DigiBeta create pictures of the same size that comply with the same standard, yet there is an obvious quality and cost difference between them. Why? Because in order to make a smaller data set to handle, DV threw out more information, and what was left was compressed more aggressively.
DV technically played back a picture that could fill the same monitor that Dbeta could, but the image quality was noticeably degraded...not a problem for corporate and news guys, but higher end production would typically aim a bit higher. It's all in what's right for your situation.
While the DV format had a similar luma pixel count to DigiBeta, DV threw out more color information, forcing those discarded color values to be calculated by simply "filling gaps" when the video was decompressed. This resulted in some perceived loss of image "vibrance" to the eye when compared to Dbeta or DVCPRO 50, which stored twice the color values and had far less "gap" to recalculate and fill.
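The "twice the color values" claim above is just counting arithmetic. Here's a rough sketch of it in Python, using the standard J:a:b subsampling notation and a 720x480 NTSC frame (the function name and frame size are my own illustration, not anything from a spec):

```python
# Rough arithmetic: chroma samples stored per frame under different
# subsampling schemes (counting both Cb and Cr planes).
def chroma_samples(width, height, scheme):
    # Map J:a:b notation to (horizontal, vertical) chroma reduction factors.
    factors = {
        "4:2:2": (2, 1),  # half horizontal chroma, full vertical (DigiBeta, DVCPRO 50)
        "4:1:1": (4, 1),  # quarter horizontal chroma (NTSC DV)
        "4:2:0": (2, 2),  # half horizontal AND half vertical (MPEG-2)
    }
    h, v = factors[scheme]
    return 2 * (width // h) * (height // v)  # Cb + Cr

digibeta = chroma_samples(720, 480, "4:2:2")  # 345,600 chroma samples/frame
dv       = chroma_samples(720, 480, "4:1:1")  # 172,800 chroma samples/frame
print(digibeta / dv)  # 2.0 -> DigiBeta stores twice the color samples of DV
```

Every chroma value DV didn't store is a "gap" the decompressor has to fill by interpolation.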
And that's before you take sensors and lenses into account.
HDCAM and XDCAM
Now, with HD, we see the same approach as in the DV days: aggressively throw out information and compress what's left. In many ways, though, it's now even more aggressive than that.
In HDCAM's case, Sony had the foresight to see that studio infrastructure was not going to be upgraded across the board the day the first HD piece of equipment was unpacked. So they decided to create an HD format (HDCAM) with an SD-class data rate (135 Mb/s) so that existing SDI infrastructure (via SDTI) could be utilized to ship the stuff around.
In order to get what would be a 1.2 Gb/s data rate at 1920x1080 uncompressed down to nearly a tenth of its original size, the camera drops the picture resolution down to 1440x1080 from 1920x1080, then throws out two thirds of the color information BEFORE they even start to "compress" the data set.
So effectively, HDCAM is 1440x1080 with a color difference subsample of 3:1:1.
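The scale of what gets thrown away before compression is worth seeing in numbers. This is back-of-envelope sample counting from the figures above (Y plus both chroma planes per frame, ignoring bit depth and blanking), not a spec-accurate bitrate calculation:

```python
# Samples per frame, before any compression, using the figures above.
full_raster = 1920 * 1080 + 2 * (1920 // 2) * 1080   # 1920x1080 4:2:2 = 4,147,200
hdcam_store = 1440 * 1080 + 2 * (1440 // 3) * 1080   # 1440x1080 3:1:1 = 2,592,000

print(hdcam_store / full_raster)  # 0.625 -> 37.5% of the samples are
                                  # gone before compression even starts
```

The remaining 62.5% then still has to be compressed down to hit the 135 Mb/s tape rate.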
However, when this plays out of the deck through HD-SDI, the signal the deck creates from that stored data is actually 1920x1080 4:2:2. Yes, all the data that was not stored, and the difference in resolution, needs to be "manufactured" on playback, so that your NLE will see 1920x1080 4:2:2 coming down the pipe on HD-SDI.
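What "manufacturing" those missing samples means, mechanically, is interpolation: estimating the values between the samples that were actually stored. A minimal sketch of the idea for one scanline, using simple linear interpolation (real decks use much better resampling filters; the function below is purely illustrative):

```python
# Sketch: stretch a 1440-sample stored scanline to the 1920 samples the
# HD-SDI output needs. Linear interpolation only, for illustration.
def upscale_line(samples, out_len):
    n = len(samples)
    out = []
    for i in range(out_len):
        pos = i * (n - 1) / (out_len - 1)  # map output index into input space
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, n - 1)
        # Blend the two nearest stored samples; the in-between values
        # are estimates, not recovered data.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

line_1440 = [float(x) for x in range(1440)]
line_1920 = upscale_line(line_1440, 1920)
print(len(line_1920))  # 1920
```

The key point: the extra 480 samples per line are plausible estimates, not information that survived from the sensor.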
With HDV and XDCAM, these formats are actually shooting 1440x1080 non-square pixel (or, with JVC's HDV1 format, 1280x720p, square pixel). HDV does not pretend to have 1920 horizontal resolution, either on the camera sensors (Canon's is full-res 1440x1080) or from tape playback over FireWire output.
The only time when an XDCAM or HDV2 (Sony/Canon) device needs to "create" a 1920x1080 image from its 1440x1080 information is when it outputs full raster HD via HD-SDI, which only takes square pixels. (Again, JVC's HDV1 is already square pixel so…less of an issue.)
Also, the MPEG compression utilized by HDV and XDCAM has a color subsampling rate of 4:2:0, which does affect image quality (or at least, strict accuracy) even when you consider that HDCAM's 4:2:2 is interpolated from the 3:1:1 information it has stored.
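For the stored chroma, the two schemes can again be compared by counting samples on the same 1440x1080 raster (Cb and Cr both counted; rough arithmetic from the ratios above, not a spec):

```python
# Chroma samples stored per 1440x1080 frame under each scheme.
hdcam_311 = 2 * (1440 // 3) * 1080          # 3:1:1 -> 1,036,800
mpeg_420  = 2 * (1440 // 2) * (1080 // 2)   # 4:2:0 ->   777,600

print(hdcam_311 / mpeg_420)  # ~1.33 -> HDCAM stores about a third more
                             # chroma samples than 4:2:0 MPEG
```

Note the trade is shaped differently: 3:1:1 gives up horizontal chroma resolution, while 4:2:0 gives it up vertically as well.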
When comparing the two cameras however (F900 HDCAM vs. the F350 XDCAM), the camera heads are simply night and day different. The F900 can be bought for as low as $60,000 (it arrived on the market at an MSRP closer to $100K), and the F350 camera can be purchased for something around $20,000. This should be the first indicator that there is a significant difference in the capabilities of the two cameras.
When it comes to the recording formats, HDCAM simply carries more data. Even at its aggressively compressed data rate of 135 Mb/s, it stores roughly four times the data of XDCAM's top data rate of 35 Mb/s.
Also, as much color data as the HDCAM recording format discards, XDCAM tosses out even more. More discarded information and higher compression ratios affect image quality.
Varicam and the HVX200
The Varicam has a 1280x720 sensor and only records 720p (it does not do 1080). DVCPRO HD 720p is 960x720 in the file, and the data transfers into an NLE as a 960x720 non-square-pixel frame. At 960x720, the stored file uses 4:2:2 color subsampling.
DVCPRO HD decks and some newer cameras including the HVX200 can do 1080p/i in DVCPRO HD. These frame sizes in-file are 1280x1080 for 29.97 fps and 1440x1080 for 25 fps.
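The stored-versus-played-out rasters the last two paragraphs describe can be summarized in one small table. This is just a restatement of the figures above (the 720p playout raster is covered a few paragraphs down), not an exhaustive list of the format's modes:

```python
# DVCPRO HD rasters from the text: (stored in-file, reconstituted on playout).
dvcpro_hd_rasters = {
    "720p":        ((960, 720),   (1280, 720)),
    "1080 @29.97": ((1280, 1080), (1920, 1080)),
    "1080 @25":    ((1440, 1080), (1920, 1080)),
}

for name, (stored, playout) in dvcpro_hd_rasters.items():
    (sw, sh), (pw, ph) = stored, playout
    print(f"{name}: stored {sw}x{sh}, plays out {pw}x{ph} "
          f"(pixel aspect ratio {pw / sw:.2f})")
```

In every mode the stored frame is narrower than the playout frame, so the pixels on tape or card are non-square and get stretched on output.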
The HVX200 is a unique piece as it has sensors that are physically 960x540. An image is created by "pixel-shifting" some values to create more discrete image samples. I've worked with HVX200 files and frankly, they look better than they have a right to for that price.
What I've never understood is what the flow chart looks like inside the camera.
DVCPRO HD is, on PLAYOUT, a 1280x720 square pixel 4:2:2 format, or on the appropriate equipment, a 1920x1080 square pixel 4:2:2 format. Remember when it first came out, it was principally an HD-SDI in/out tape format so the image had to be re-constituted at the full size for serial interface, just like HDCAM does.
On the HVX200, my question on how one gets from the sensor to the recorded size is whether or not the 960x540 image is then interpolated up to 1280x720, and THEN subsampled back to 960x720...or if the sensor is oversampled or "pixel shifted" in both directions, all the way to 1920 (960x2) x1080 (540x2).
Then would it scale that down to 1280x720?...or 960x720?...or both in order? This isn't clear to me.
One interesting area of the DVCPRO HD tech specs is what happens to the data rate when the frame rate is adjusted. Much of this didn't generally come to the surface until the P2 cards came out and Panasonic was trying to present as sensible an economic picture as they could.
DVCPRO HD is 100 Mb/s, a fact that they've frankly been bludgeoning HDV with since HDV hit the market. What they don't mention when they're making those comparisons is that normally DVCPRO HD 720p is a 60p file. It doesn't matter what frame cadence you choose: it gets packed in a 60p stream.
Most Varicam users who shoot with and love that camera (I think it makes wonderful images myself) shoot 24p with it for a "filmic" aesthetic, which that camera does really really well. However, when you are shooting 24 fps into a 60 fps stream, each frame needs to be duplicated: one for two frames, the next for three, the next for two, then three, and so on.
When you handle this material and cull out the redundant frames to get down to the 24 you need, you've discarded 36 redundant frames, which, as Panasonic points out every chance it gets, are all I-frame compressed. They aren't long-GOP like MPEG2 based HDV. So you've tossed 60% of your frames...and therefore 60% of your data. DVCPRO HD at 24p is a 40 Mb/s file...not a 100 Mb/s file.
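The 24-in-60 arithmetic above is easy to verify. A 2:3 frame-repetition cadence fills the 60p stream, and only 24 of every 60 frames carry new data:

```python
# 24p packed in a 60p DVCPRO HD stream, per the figures in the text.
stream_rate = 100                 # Mb/s, DVCPRO HD nominal rate
unique, total = 24, 60
cadence = [2, 3] * (unique // 2)  # each source frame held 2 then 3 stream frames

print(sum(cadence))               # 60 stream frames from 24 source frames
print(total - unique)             # 36 redundant frames to cull
print(stream_rate * unique / total)  # 40.0 Mb/s of non-redundant data
```

So the honest HDV comparison at 24p is 40 Mb/s versus 25 (HDV) or 35 (XDCAM), not 100 versus 25.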
Next to a 35 Mb/s variable data rate XDCAM recording, a 40 Mb/s constant bitrate recording doesn't have that much of a margin. However, the difference in camera heads would be evident in the aesthetic of the pictures.
So once again, image quality relies on a collection of factors.
We haven't even mentioned lenses, which are an even more significant factor in image quality: the glass is first in line, followed by the camera head, the processing, luma and chroma subsampling, and then compression and the data rate used.
All of which is why there are very few clean and neat answers out there and even fewer completely compatible comparisons.