So you’ve heard photographers insist on shooting RAW instead of JPEG because the files are unprocessed and uncompressed, and thus contain that much more information than their merely mortal counterparts. Still, you may not know what that really means. Perhaps you’ve dabbled with processing a RAW image and seen how it can pull a few more details from shadows and highlights, but nothing major. Here’s an idea of how it all works, and whether 12-, 14-, or 16-bit files actually do better than 8-bit.
Very quickly: every pixel in an image stores a tonal value for each color channel. Since we’re dealing with digital files, remember that computers store everything as zeroes and ones. Bit depth is simply the number of binary digits used to store that tonal information. For example, a bit depth of 2 gives you two digits and four possible values: 00, 01, 10, 11. So, four distinct tones. A bit depth of 8 (the JPEG standard) gives 2 to the 8th power, or 256 values per channel. A 12-bit RAW file gives 2 to the 12th, or 4,096 values per channel. From this, though you may not care, or care to care, you can see how more bits carry more information, and can deduce that working in those files will result in less loss. Even if you ‘lose’ the same amount of data, losing it from 4,096 values hurts less than losing it from 256. So my advice would be to do as much of your editing as possible in RAW, then move on to the rest of your workflow when you can do no more there.
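The arithmetic above can be sketched in a few lines of Python (the bit depths listed are just the common per-channel ones):

```python
# Distinct tonal values per channel at a given bit depth: 2 ** bits.
def tonal_values(bits):
    return 2 ** bits

for bits in (2, 8, 12, 14, 16):
    print(f"{bits}-bit: {tonal_values(bits):,} values per channel")
```

Running this prints 4 values for 2-bit, 256 for 8-bit, and 4,096 for 12-bit, which is where the numbers in the paragraph above come from.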
If you want to see the difference for yourself, your histogram will give you a visualization of what’s going on. I’ve opened two images (sorry, I didn’t have a JPEG of the same RAW file for a better comparison): one RAW and one JPEG. I’ve given each some light manipulation; take a look at the histograms. The 8-bit JPEG shows crazy spikes even after slight tonal adjustments in curves, while the RAW file (which actually had more adjustment done) shows none of that. Those spikes are lost information, in this case lost tonal values. Broken pixels, if you will.
8-Bit JPEG as shot
8-Bit JPEG With Slight Curves (see spikes in histogram)
RAW File With Tonal & Color Adjustments (no spikes)
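To see where those spikes come from, here’s a rough pure-Python sketch. The curve shape is my own toy example, not the exact adjustment used on the images above: it applies the same contrast-stretching curve to an 8-bit and a 12-bit source, quantizes both to an 8-bit output, and counts how many of the 256 possible output levels actually get used.

```python
# Toy illustration of histogram "combing": apply a contrast-stretching
# curve to an 8-bit vs. a 12-bit source, then count how many of the
# 256 possible 8-bit output levels are actually hit. Unused levels show
# up as gaps, with tall spikes beside them, in the histogram.

def curve(value, levels):
    """Stretch the middle 80% of the tonal range to the full range."""
    lo, hi = 0.10, 0.90
    x = value / (levels - 1)                      # normalize input to 0..1
    x = min(max((x - lo) / (hi - lo), 0.0), 1.0)  # apply and clamp the curve
    return round(x * 255)                          # quantize to 8-bit output

def used_output_levels(bits):
    levels = 2 ** bits
    return len({curve(v, levels) for v in range(levels)})

print("8-bit source: ", used_output_levels(8), "of 256 levels used")
print("12-bit source:", used_output_levels(12), "of 256 levels used")
```

The 12-bit source has enough intermediate values to fill every output level after the stretch; the 8-bit source leaves dozens of levels empty, which is the comb pattern you see in the JPEG’s histogram.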
But Hang On…
I’ve just made the typical argument for high-bit files and for shooting in RAW. Now, I’m still going to advocate shooting in RAW, because I like knowing the information is there to play with if I want it, and frankly, I like the more subdued, somewhat granular look an uncompressed RAW file has. Yet there’s a big hairy ‘but’ coming. I’ve tried to explain the virtues of all this extra information to new photographers (who judged the final prints) and to non-photographers (who didn’t care, and judged the final ‘prints’ anyway), and most couldn’t tell whether an image came from a high-bit file or not. When I showed them the histograms they understood the difference, but looking at the images themselves, small or large, they just couldn’t see it.
There were a few types of shots where the differences were more pronounced and where my viewers understood some of the value of what was being presented. Those were usually high-contrast shots with bright highlights, where any manipulation would reveal banding in the gradients. Otherwise, though, especially once photos were actually printed (not professionally), the difference can be negligible. Oh, and in case you’re wondering about CMYK workflow and printing: most printers are going to work in 8-bit anyway. So the choice is yours, and I’d suggest that if you’re not printing large, and are low on processing power and storage, JPEGs may be the way to go for casual shooting.
[RELATED: CMYK VS. RGB AND WHY YOU SHOULD CARE]