In the past few years, we’ve seen photo retouching programs introduce some really remarkable abilities, and not just introduce them, but make them genuinely accessible. Capture One allows us to color grade an image like Leibovitz with relative ease, and Lightroom, with the right presets, can make a fillet out of a rump-roast with a single click. Just pull up Photoshop and see how well something like the healing brush tool works now (yes, it’s better than before), or try the algorithm that allows Photoshop to ‘stabilize’ an image ruined by camera shake; it’s astonishing. PS will be leaping tall buildings in a single bound next…
Of course, with each new reveal and release of added functionality, we welcome some changes and worry about others. Photographers, to a certain extent, are concerned about the automation of photography, and that it may render us relatively redundant. I don’t really see that, and even if it were the case, there isn’t much to be done about it, because it’s in the cause of moving forward, and humans, well, it’s just not in our nature to snuff out the fire.
So what’s next? Auto-colorizing seems a likely candidate. Sure, to a certain degree the ability for a computer to analyze and auto-color an image has been around for ages, but it hasn’t been that good, and it hasn’t been that easy. UC Berkeley Computer Vision Ph.D. student Richard Zhang is about to change that, and he is using a “convolutional neural network” to do it. But how good is it? Impressive.
It stands apart from previous ways of achieving the result largely because it works automatically, and to do that, it draws on a rather massive cache of a million color photos used as training inputs, resulting in significantly more realistic images. So realistic, in fact, that in tests pitting real color photos against colorized ones, people were fooled 20% of the time, and that rate, while it may not sound like much, is a significant leap.
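To give a rough sense of what the network is doing under the hood, here’s a deliberately toy sketch (my own illustration, not Zhang’s actual architecture): given only a grayscale lightness channel, a convolutional layer predicts the two missing chroma channels, in the spirit of the Lab color space split his paper describes. A real model learns its filters from the million-photo training set; here the weights are simply random, so the “colors” are meaningless, but the shape of the problem is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Naive 'same'-padded 2D convolution: (H, W) input, (out, kH, kW) kernels."""
    out_ch, kh, kw = kernels.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    h, w = image.shape
    out = np.zeros((out_ch, h, w))
    for c in range(out_ch):
        for i in range(h):
            for j in range(w):
                out[c, i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernels[c])
    return out

# A stand-in grayscale photo: just the lightness channel, values in [0, 1].
L = rng.random((8, 8))

# One conv layer with 2 output channels standing in for the predicted
# (a, b) chroma channels. Random weights here; a trained network would
# have learned these from a million color photos.
kernels = rng.standard_normal((2, 3, 3)) * 0.1
ab_pred = np.tanh(conv2d(L, kernels))  # squash into a bounded chroma range

# Reassemble a 3-channel "colorized" result: original L plus predicted chroma.
colorized = np.stack([L, ab_pred[0], ab_pred[1]])
print(colorized.shape)  # (3, 8, 8)
```

The key design point this illustrates is that the network never has to invent the image itself, only the color information: the lightness channel of the final result is the original photo, untouched.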
It’s interesting to note this, given that the human eye, on average, can perceive about 1 million colors. The real question is, what is this going to do for consumers of photography and for its creators? For one, as the tech develops further, perhaps using a larger database, we may gain the ability to colorize images from long ago and learn more about our history, almost like a hidden message in images that already exist. We may be able to see the world as some of the great photographers did, and feel more present at pivotal moments in history than ever, perhaps generating compassion and solidarity.
From a photographer’s perspective, I’m not so sure, though I think it could serve a great purpose in retouching once the tech becomes good enough. I wonder if a trickle-down from this could be implemented into cameras or retouching software to let them white balance better, or portray skin tones more accurately? Who knows? Time will tell. If you’d like to learn more about it, you can on the site, see the research paper here, and the demo code can be found here.