
The Misrepresentation of Camera Specs | Why a 4K Sensor Will Not Give a 4K Image

By Justin Heyes on August 13th, 2017

Today in photography, especially on internet forums and message boards, there is a plague of excess precision. Photographers argue over minuscule specs and lab-test figures. The difference between 8.8, 9, and 9.2 is insignificant, yet they will fight tooth and nail for some sort of gear superiority. People talk about the high ISO performance of cameras like the a7S II at 51200, when we really should just say ISO 50k. Camera specs are intentionally precise in some areas and obtuse in others.

When a figure in the community speaks up to “clarify” any ambiguity, the community, in general, tends to lean with them because they speak with conviction, but this “clarification” can lead to more confusion as a whole. Recently, CookeOptics TV released a video wherein cinematographer Geoff Boyle talks about 4K resolution and the misconceptions surrounding it.

What is a Pixel?

The word pixel, unbeknownst to the average photographer, has several different meanings, and because those meanings are sometimes used interchangeably, any conversation can degrade into a skit of “Who’s on First?” Pixels fall into three categories: imaging, resolution, and display. Imaging pixels (photosites) are not the same as resolution pixels (megapixels), and both are different from the display pixels on a computer monitor.

Boyle states in his interview that a camera with 4K “photosites” would give a stills resolution of 2.7K “pixels.”

Bayer Pattern

Optically, the traditional Bayer pattern is closely akin to 4:2:0 chroma subsampling: rows of photosites alternate between corresponding Red-Green and Blue-Green color filters. The color filter array was first patented by Bryce Bayer in 1975 and has become the industry standard for sensors, used in everything from cellphone cameras to the $30,000 RED Helium, with a few exceptions. An algorithm is then used to decipher how those imaging pixels are converted into resolution pixels, in a process called debayering. The algorithms, and the number of photosites sampled per resolution pixel, differ between camera manufacturers and are not released publicly. The real resolution of any camera already takes the debayering process, the pixels used for black levels, and other factors into consideration, and is advertised as effective pixels.
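To make the idea concrete, here is a minimal sketch of what a debayering step does, written in Python and assuming an RGGB layout with simple neighborhood averaging. The algorithms camera makers actually ship are proprietary and far more sophisticated, so treat this purely as an illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def naive_demosaic(mosaic):
    """Naive debayer of an RGGB mosaic (H x W) into an H x W x 3 RGB image."""
    h, w = mosaic.shape
    # Masks marking which photosites carry which color filter (RGGB tiling):
    # half the photosites are green, a quarter red, and a quarter blue.
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))  # average each pixel's 3x3 neighborhood of known samples
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, mosaic, 0.0)
        sums = convolve2d(known, kernel, mode="same")
        counts = convolve2d(mask.astype(float), kernel, mode="same")
        interpolated = sums / np.maximum(counts, 1)
        # Keep the measured sample where the filter matches; interpolate elsewhere.
        rgb[..., ch] = np.where(mask, mosaic, interpolated)
    return rgb

# Every output pixel borrows two of its three color values from neighbors,
# which is why 4K worth of photosites does not equal 4K of "pure" resolution.
demosaiced = naive_demosaic(np.random.rand(8, 8))
```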

In computing, the terms mega- and giga- have traditionally been modified from their original meaning when describing storage space, leading to confusion. Whereas giga- is defined in decimal as 1000^3 (1 billion), in computing terms it is 1024^3 (roughly 1.074 billion). Hard disk manufacturers use the former in marketing and advertising, resulting in more impressive numbers. It’s like measuring yourself in stone when you don’t like your weight in pounds. It comes as no surprise that camera companies do the same.
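A quick worked example of that prefix gap, using a hypothetical drive size purely for illustration:

```python
# Hypothetical "1 TB" drive, marketed with decimal prefixes.
advertised_bytes = 1000**4                  # 1 TB as the box defines it
reported_tib = advertised_bytes / 1024**4   # what an OS reports in binary TiB
print(f"Marketed 1.00 TB shows up as about {reported_tib:.2f} TiB")  # ~0.91 TiB

# The same gap at the gigabyte scale mentioned above:
print(1000**3, "vs", 1024**3)   # 1000^3 = 1,000,000,000; 1024^3 = 1,073,741,824
```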


Looking at the aforementioned RED camera, or other cinema cameras from Blackmagic, Panasonic, or Canon, the advertised resolution correlates directly to the long side of the sensor and not to the processed, resulting image, not because that is its “true resolution,” but because of a process called supersampling.

What is Supersampling?

In supersampling, multiple resolution pixels are sampled and combined to make a better approximation of the captured image. An example of this can be seen when comparing the a6300 against the a7S II. The a6300 captures a 6K-equivalent image (6000×4000) and downsamples it to 4K UHD, whereas the a7S II records a 4K-equivalent image (4240×2832).
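As a rough illustration of what downsampling buys, here is a minimal block-averaging sketch in Python. The array sizes and the 2x factor are arbitrary stand-ins, and real in-camera scalers are proprietary and far more sophisticated than a simple box filter.

```python
import numpy as np

def box_downsample(image, factor):
    """Average non-overlapping factor x factor blocks of pixels: a crude
    stand-in for supersampling a high-resolution readout down to a smaller frame."""
    h = image.shape[0] - image.shape[0] % factor  # trim so blocks divide evenly
    w = image.shape[1] - image.shape[1] % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# Illustrative sizes only: a frame captured larger than needed, then averaged
# down, trades excess resolution for lower noise and cleaner detail.
frame = np.random.rand(400, 600, 3)
smaller = box_downsample(frame, 2)
print(frame.shape, "->", smaller.shape)  # (400, 600, 3) -> (200, 300, 3)
```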

Using Boyle’s example of the Arri Alexa XT, the camera has a 2.8K sensor that measures 2880 x 1620 in 16:9 mode. It can capture the full 2880 x 1620 resolution in ARRIRAW, but when recording ProRes the Alexa XT downsamples the full resolution, producing wonderful 1080p footage. That is how the resolution is derived, but what about color?

Color Depth

Because of the Bayer pattern, the photosites on an image sensor are roughly half green, a quarter red, and a quarter blue, but the corresponding resolution pixels are derived from a combination of photosites, much like the display pixels on a computer monitor or television. The color information from the sensor is expressed in bit depth: the more bits per channel, the more color information can be captured and stored. Each primary color (RGB) has to have the same bit depth.

For example, the stills output from the Canon 5D Mark IV is 14-bit, which is 16,384 tonal levels each of Red, Green, and Blue, while the video recorded from the sensor remains 8-bit (256 levels each of Red, Green, and Blue).
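The arithmetic behind those figures is straightforward; here is a quick sketch:

```python
# Levels per channel and total RGB combinations for the bit depths above.
for bits in (14, 8):
    levels = 2 ** bits   # tonal levels per Red, Green, or Blue channel
    combos = levels ** 3  # possible combinations across all three channels
    print(f"{bits}-bit: {levels:,} levels per channel, {combos:,} possible colors")

# 14-bit: 16,384 levels per channel, 4,398,046,511,104 possible colors
# 8-bit: 256 levels per channel, 16,777,216 possible colors
```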

Conclusion

The misrepresentation of camera specs can lead to confusion among the photographic community, and some who seem to spend more time in front of the camera voicing their opinions than behind it only add to that confusion. To quote Boyle, “people talk about it [resolution] and don’t know what the hell they are talking about.” If you would like a more in-depth explanation of pixels, MTF curves, or camera specs in general, I highly recommend watching the lecture “Demystifying Digital Camera Specifications” by John Galt and Larry Thorpe.

About

Justin Heyes wants to live in a world where we have near misses and absolute hits; great love and small disasters. Starting his career as a gaffer, he has done work for QVC and The Rachel Ray Show, but quickly fell in love with photography. When he’s not building arcade machines, you can find him at local flea markets or attending car shows.

Explore his photographic endeavors here.

Website: Justin Heyes
Instagram: @jheyesphoto

2 Comments


  1. Lauchlan Toal

    Interesting video from the perspective of an older photographer. Most people who learned on digital would find the 1080p/4k/8k specs very intuitive, since that’s how we measure camera sensors anyway. But I can see how a film photographer (or someone who uses Sigma Foveon sensors) would feel that the measurements we discuss today are exaggerated. 

    I would say that most people online aren’t making false claims about resolution though – people just discuss things with the assumption that their view is understood. For example, a digital photographer would assume that 4k refers to an 8.3MP output with Bayer interpolation, whereas Geoff Boyle here would likely assume that 4k refers to 8.3MP of pure info – like from film, or via downsizing an 11.9MP image. And usually that’s fine, since that rarely matters – usually we just say that we’re happy that a camera has 4k and that’s that. But when we get deeper into technical discussions, those differences in assumptions can definitely cause confusion.

    So I’d disagree with the opinion that calling 4k Bayer-interpolated footage 4k means you don’t know what you’re doing, but it’s good to see his view explained so that confusion can be avoided when two people with different assumptions interact.
