We’re getting to a point where neural networks, deep learning systems, and A.I. are not so much a future fancy as the here and now. This kind of work is at the forefront of our field, and while we may laugh and tease about apps like PRISMA, which use said intelligence to stylize any image, we still use them; maybe not you, but many of us do. But what if it were possible to use that same intelligence less for sh*ts and giggles and more for proper function? A team of researchers at Cornell, in conjunction with Adobe, is pushing to find out, and what they’ve done so far is remarkable.

[RELATED: PHOTO EDITOR APP PRISMA IS BLOWING UP, ANNOYING ARTISTS, & MAKES THE COOLEST TIMELAPSES]

What can the work do? Lying neatly on top of what is bound to be a mountain of algorithms and engineering is a rather simple function: the ability to take the characteristics of one image and apply them to another, blending the two, without massive distortion. Say you have a landscape shot of a pretty city skyline, but you took it at noon and you’d like to see it at night. You can do that: use your skyline image as the base, choose a night image as the reference, and the resultant final image will look a little like this:

[Sample image: the daytime skyline rendered as a night scene]

That’s pretty astonishing no matter who you are, especially considering it’s done from only two images. You can read the full paper here, and if/when you do, you’ll notice the system identifies matching regions in both images (sky, buildings, and so on) so it doesn’t transfer the look of one kind of structure onto another; that’s how you avoid getting clouds in the ground or bridges in the sky. This is, in a sense, an adaptation or augmentation of the tech found in apps like PRISMA and elsewhere, but with a more practical side.
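To make that region-matching idea a little more concrete, here’s a toy sketch in Python (PyTorch), and emphatically not the researchers’ actual code: each semantic label gets its own mask in both images, so the style statistics of the reference’s sky only ever influence the base image’s sky. The mask labels and the Gram-matrix loss here are illustrative assumptions, not the paper’s implementation.

```python
import torch

def gram(feat_2d):
    # feat_2d: (channels, pixels) slice of a conv feature map.
    # The Gram matrix captures which feature channels fire together --
    # a common stand-in for "style" in neural style transfer.
    return feat_2d @ feat_2d.t() / feat_2d.shape[1]

def region_style_loss(base_feat, ref_feat, masks):
    # base_feat, ref_feat: (C, H, W) feature maps from the same conv layer.
    # masks: {"sky": (base_mask, ref_mask), ...} -- binary (H, W) masks,
    # which in a real system would come from semantic segmentation.
    C, H, W = base_feat.shape
    loss = torch.tensor(0.0)
    for label, (base_mask, ref_mask) in masks.items():
        b = base_feat.reshape(C, H * W) * base_mask.reshape(1, H * W)
        r = ref_feat.reshape(C, H * W) * ref_mask.reshape(1, H * W)
        # Compare style statistics only within the same semantic region,
        # so sky features never end up styling the ground.
        loss = loss + torch.mean((gram(b) - gram(r)) ** 2)
    return loss
```

In the real system this sort of term would sit inside a much larger optimization, but it illustrates why the output doesn’t end up with bridges in the sky.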

I’ve spoken previously about a way to essentially lift the color palette from one image and apply it to another within Photoshop to great effect, and this seems like the way-down-the-line evolution of something like that, where texture and structure are now recognized and defined as well. Of course, the thought for photographers and image specialists here is that this tech might find its way into our post-processing workflows, and there isn’t much to suggest it won’t. The question is, do we want it? Is this going to help or hinder photography as a craft?
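For a sense of what that simpler palette lift looks like in code, here’s a minimal sketch using the classic trick of matching per-channel color statistics in Lab space; the file names are placeholders, and this is a far cruder technique than what the paper does:

```python
import cv2
import numpy as np

def transfer_palette(base_path, reference_path, out_path):
    # Lift the color palette of the reference image onto the base image
    # by matching per-channel mean and standard deviation in Lab space,
    # where luminance and color are roughly decoupled.
    base = cv2.cvtColor(cv2.imread(base_path), cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(cv2.imread(reference_path), cv2.COLOR_BGR2LAB).astype(np.float32)

    for c in range(3):
        b_mean, b_std = base[..., c].mean(), base[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        base[..., c] = (base[..., c] - b_mean) * (r_std / (b_std + 1e-6)) + r_mean

    out = cv2.cvtColor(np.clip(base, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
    cv2.imwrite(out_path, out)

transfer_palette("skyline_noon.jpg", "city_night.jpg", "skyline_at_night.jpg")
```

Notice that a global recolor like this knows nothing about sky versus buildings; that’s exactly the gap the Cornell/Adobe work closes.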

Check out the full paper here for many more sample images.


Via: DPReview