
MIT Boffins Have Made A Camera That Will Never Overexpose An Image

By Kishore Sawh on August 19th 2015


If you could pick one area of improvement for camera tech, what would it be?

When you think of advancing camera tech, what are the camera facets that first come to mind? I’m willing to bet that near the top of nearly everyone’s list would be better high-ISO performance, perhaps higher resolution, perhaps size, and maybe a fast electronic shutter or high flash sync speeds. It’s better ISO performance that seems to garner much of the attention, and while that’s important, what I’d like, and what I think will really be the next big area of focus and development, is dynamic range.

There are many reasons for this, which I could get into at a later date, but obviously a large part of its importance is the ability to recover data from a file, especially in high-contrast situations. In that very vein, a research team at MIT has developed a new camera technology that aims to ensure an image is never overexposed. The boffins at MIT took aim and haven’t fallen far from the mark: as a first demonstrative iteration, it works.


It was presented in a paper at the International Conference on Computational Photography earlier this year, and it showed technology that would allow camera sensors to draw large amounts of information from equally large amounts of light without being blown out.

A smart tradeoff in taking ultra high dynamic range data with a limited bit depth is to wrap the data in a periodical manner. This creates a sensor that never saturates: whenever the pixel value gets to its maximum capacity during photon collection, the saturated pixel counter is reset to zero at once, and following photons will cause another round of pixel value increase.

This rollover in intensity is a close analogy to phase wrapping in optics, so we borrow the words “(un)wrap” from optics to describe the similar process in the intensity domain. Based on this principle, a modulo camera could be designed to record modulus images that theoretically have an Unbounded High Dynamic Range (UHDR).

Now, I’m a bit thick sometimes (I didn’t go to MIT), but what this means at a simplistic level is that each pixel’s counter gets reset and refilled whenever maximum exposure has been reached – acting as a sort of HDR at the individual pixel level.
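For the similarly thick among us, here’s a toy sketch of the wrap-and-unwrap idea in a few lines of Python. This is my own illustration with made-up numbers, not MIT’s actual implementation:

```python
# A pixel counter of limited bit depth wraps around instead of
# clipping when it fills up (illustrative depth, chosen here).
BIT_DEPTH = 8                  # assumed counter depth
FULL_WELL = 2 ** BIT_DEPTH     # counter wraps every 256 counts

def modulo_capture(true_photons):
    """What the sensor records: only the wrapped (modulus) value."""
    return true_photons % FULL_WELL

def unwrap(recorded, wrap_count):
    """Recover the true count once the number of resets is known."""
    return wrap_count * FULL_WELL + recorded

# A pixel that collected 900 photons wrapped three times (3 * 256 = 768)
# and reads back 132; unwrapping restores the original value.
```

The catch, of course, is that the real sensor doesn’t record how many times each pixel wrapped – that’s what the paper’s unwrapping algorithm, the analogue of phase unwrapping in optics, has to figure out after the fact.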



Apparently, this is known as a ‘modulo’ camera, a name that takes its origins from modular arithmetic, which is rooted in resetting numbers once they reach a certain value. It’s all quite brilliant if you ask me, though sadly it will probably be not weeks or months but years before it’s in a consumer camera. Still, it’s a good step, and one in the right direction. Sure, creatively, many of us have come to love being able to expose as we ‘wish,’ but sometimes I just wish things were exposed correctly overall…

You can see the full presentation paper here, and see a short video of how it all sort of works below.

If you could pick one area of improvement for camera tech, again, what would it be?

Source: Imaging Resource, MIT Media Lab


A photographer and writer based in Miami, he can often be found at dog parks, and airports in London and Toronto. He is also a tremendous fan of flossing and the happiest guy around when the company’s good.

Q&A Discussions

Please log in or register to post a comment.

  1. Ralph Hightower

    Who’s he or she?
    Oh, the college and the geeks that go there. Now, I understand the title.

  2. Colin Woods

    I would like to see low ISOs as well. Why not be able to underexpose ten stops electronically instead of carrying ND filters? What about a built-in grad ND – tell the camera that you want the upper half two stops less than metered. It must all be possible.

    • Dave Haynie

      The thing with current cameras is that there’s a “native ISO” for any sensor. To calculate native ISO, find the EV at which your sensor’s photodiodes saturate OR at which the sensor’s charge wells fill, and relate that EV back to a film ISO that saturates at the same point.

      You’ll have a good idea of the native ISO of your camera… that’s the point at which the next lower ISO setting, if there is one, will be in the “extended” range. That’s because they’re getting that lower ISO (often ISO 50, or just a plain “L” setting) by software-scaling the native signal (usually around ISO 100) down a bit. So dynamic range will drop by about a stop, and the sensor will actually saturate at the same point it would at ISO 100, hence its addition to the “extended” range (extended ISOs mean something is not quite right: either it’s not meeting the manufacturer’s standards of quality or it’s not meeting the ISO spec).

      So unless there’s some way to actually reduce the sensitivity of the photodiodes themselves, you can’t get the “ND” function in hardware.
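Dave’s point about extended low ISOs can be sketched in a few lines of Python. The numbers here are illustrative assumptions (a 14-bit raw clip point), not any camera’s actual firmware:

```python
# Why an "extended" low ISO gains no highlight headroom: the sensor
# still clips at its native saturation point, and the raw numbers
# are merely scaled down afterwards in software.
NATIVE_SAT = 16383             # assumed 14-bit raw clip point at native ISO

def extended_low_iso(raw_value, stops_below_native=1):
    # Clipping happens on the sensor first; then one "stop" lower
    # ISO simply halves the digital value.
    clipped = min(raw_value, NATIVE_SAT)
    return clipped / (2 ** stops_below_native)

# Two highlights, one right at clipping and one far past it, come out
# identical: the blow-out happened on-sensor, before any scaling.
```

Which is exactly why the extended setting buys you a longer shutter speed but not the extra stop of highlight headroom a true lower native ISO would.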

    • Colin Woods

      Thanks for that, my knowledge of digital tech just got a bit wider.

    • Matthew Saville

      It would be entirely possible to create a digital sensor with a lower base ISO, it’s just a matter of demand. Most people are fine with the quality they’re getting at ISO 100; the cry for over a decade now has been to R&D the crap out of high ISO performance.

      The Nikon D810 changed all that, with a true base (native) ISO of 64. Who knows, we may see another step in this direction in the future, but I doubt if ten stops will ever become possible via some electronic wizardry.

      However, the over-exposing and re-filling of each photosite (photon bucket) is a huge step in the right direction. It wouldn’t be the same as having an in-camera GND filter, but you would still find that your images are much more malleable in post-production.

      I once dreamed of a sensor that could pick the “right” ISO on a pixel-by-pixel basis. I’m sure that some form of technology along these lines is possible; however, giving every single pixel its own variable ISO would probably require a complete redesign of the circuitry, likely needing some sort of whole new microscopic or atomic level of precision.

      Who knows, I’m not an EE, an ME, or any sort of engineer or physicist, so I’m just dreaming and talking hypothetically here.

    • Dave Haynie

      Hey Matthew –

      Well, the Nikon D800/D810 actually kind of illustrates the problem… it gets a lower native ISO by virtue of its relatively tiny pixels, as compared to the more common 20, 22, and 24Mpixel full frame cameras. The native ISO is the film equivalent of where the camera saturates. You can support more light — thus a lower ISO — by either sending fewer photons to each photodiode (smaller photodiodes, neutral density filters) or by building deeper charge wells. And we’ve seen both — pro cameras tend to have deeper charge wells in proportion to their pixel size, but a charge well is basically a capacitor — a large device, and to deliver half the ISO at the same sensitivity, you would need twice the charge well. I think you can get this for some specialty sensors, but those are usually CCDs, which are easier to tweak in various ways. And I am an electrical engineer — that’s what pays most of the bills :-)

      I think the modulo idea would be an interesting one, as long as it’s optional and forces the camera into native ISO mode, so that you’re only dealing with sensor saturation, not ADC saturation as well. And even that’s still going to be limited if the photodiodes themselves saturate.

      I read a spec sheet on a cellphone-size sensor the other day which had some tricks that might be applied to “real” cameras at some point. The sensor had an HDR mode, which employed different curves depending on the current level of light, so you get real HDR in one shot. They didn’t go into detail on how this would work, but it’s easy enough at the sensor level to NOT store the full output of each photodiode — you can arbitrarily shunt some of the charge to a resistor or something. So it would be possible to build a sensor with non-linear pixels… think on-sensor gamma curves. An on-chip log curve would allow a 14-bit ADC and typical sensor to capture an effective 16, 18, or 20-bits of useful information. Not as good as a linear 20-bit, but certainly much better — and more immune to highlight blow-out — than today’s 14-bit. Now, I’m not a chip designer, I don’t know what that does to the complexity of the electronics, but given that we’re dealing with CMOS chips, while photodiodes and charge wells are big, transistors are tiny — you could fit thousands per pixel.
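The on-chip log-curve idea Dave describes can be sketched numerically. These figures are illustrative choices of mine, not from any real sensor datasheet: compress roughly 20 bits of linear range into a 14-bit code, then invert the curve in software.

```python
import math

# Encode a wide linear range into a limited ADC code using a log
# curve, then decode it back; precision is traded in the highlights
# rather than clipping them outright.
ADC_BITS = 14
MAX_CODE = 2 ** ADC_BITS - 1
MAX_LINEAR = 2 ** 20 - 1       # target effective range, ~20 bits

def encode(linear):
    """Map a linear intensity onto the 14-bit log code."""
    return round(MAX_CODE * math.log1p(linear) / math.log1p(MAX_LINEAR))

def decode(code):
    """Invert the curve: 14-bit code back to approximate linear intensity."""
    return math.expm1(code / MAX_CODE * math.log1p(MAX_LINEAR))

# Values far above the 14-bit ceiling still land on distinct codes,
# at the cost of coarser steps in the brightest tones.
```

Round-tripping any value in the 20-bit range through this curve comes back within a fraction of a percent, which is the “not as good as linear 20-bit, but far better than clipped 14-bit” tradeoff in miniature.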

  3. Matthew Saville

    New technology is always welcome. Eventually, it will find a way to make our lives easier instead of just being a fad. This has been the case with much of technology over the past few decades. Remember when you had to carry around dozens of CDs in your car? Remember when you had a “high tech” CD changer in the trunk? …Now you have approximately one zillion songs at your fingertips on one little cell phone.

    But enough of that soapbox. TL;DR: I’m glad they’re toying around with concepts like this, instead of just sitting around telling themselves that dynamic range is good enough where it is. *coughcough*

  4. Dave Haynie

    Interesting idea, and easy enough to add as a feature to a sensor chip. At present, every sensor has a photodiode array that fills per-pixel charge wells with electrons in proportion to the photons impacting each sensor site. When the charge well for a particular pixel is full, that pixel saturates. Why not offer a mode that simply dumps the charge and resets? Of course, you’ll need some very clever software to figure out the modulo pixels, at least without adding any additional circuitry to that supporting each pixel. On the other hand, given the move to BSI sensors, there should be a little more room for circuitry… adding a bit or more per pixel to indicate at least one modulo might make things easier.

    Of course, this doesn’t solve every problem. At a certain light level, the photodiode itself is saturated (it’s at its maximum flow of electrons; additional light won’t release additional charge), and so you’re getting blown-out highlights even with a modulo sensor. And unless you have a true “ISO-less” camera, you can have saturation in the ADC circuitry after the gain stage that occurs before charge well saturation occurs.

    • Barry Cunningham

      Mathematically, adding bits-per-pixel to count modular overflows is exactly equivalent to increasing the bit-depth-per-pixel and exposing to the left.
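Barry’s equivalence can be checked by brute force in a few lines of Python (my illustration, with arbitrary small bit widths):

```python
# A b-bit wrapped value plus a k-bit wrap counter holds exactly the
# information of a single (b+k)-bit counter, until both schemes
# saturate at the same 2**(b+k) ceiling.
b, k = 8, 2

def split(value):
    """Store a value as (wrap_count, wrapped) with b wrapped bits."""
    return value // 2 ** b, value % 2 ** b

def join(wraps, wrapped):
    """Reassemble the (b+k)-bit value from its two parts."""
    return wraps * 2 ** b + wrapped

# Every representable value round-trips, and the wrap counter never
# needs more than k bits over this range.
for v in range(2 ** (b + k)):
    wraps, wrapped = split(v)
    assert wraps < 2 ** k and join(wraps, wrapped) == v
```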

    • Dave Haynie

      Barry — absolutely true. But probably a whole lot easier, at some point, than building a deeper charge well. Though a middle ground is to have a single bit that indicates “wrapping” or not.

      The other thing… more or less, the charge well is scaled to the size of the photodiode. There’s some variation, but that’s the basic idea: there’s no point in a conventional sensor storing crazy amounts of extra charge, because of the whole issue with saturation of the photodiode itself. That makes me wonder if they have some solution for this, too. There has been lots of talk, around the photo geek community anyway, of sensors with variable actual sensitivity. I’ve seen a couple designed with an actual compression function in the photodiode circuitry itself (not sure what they did, but if you incorporated a charge-scaling function that looked at the present charge level to decide the level of compression, you’d essentially have a log sensor, which could deliver a far more useful range of input: gamma curves directly on-chip, rather than in software).

  5. Graham Curran


  6. Duy-Khang Hoang

    Seems a lot of people are quick to dismiss this concept. It probably doesn’t help that the articles emphasize the whole “never overexpose an image” side of the tech rather than the real advancement. There are a few situations where this technology comes in handy. E.g., you want to shoot shallow-DOF portraits in the middle of the day when the sun is at its brightest, your camera can only do ISO 200 as base, the fastest shutter speed is only 1/4000s, and you don’t have an appropriate neutral density filter on hand. This tech solves that problem. You could say: stop down the lens (creative freedom), use a neutral density filter or polariser (fiddly to put on and take off), buy a better camera (fair enough), don’t take the photo (okay). The introduction of electronic shutters allowing shutter speeds to hit 1/16,000s can help to a certain degree. It’s an interesting advancement, and this kind of thinking is good in that it pushes past what we currently limit ourselves with. Whether the technology will ever become relevant is for the future to decide. This tech could also help with increasing full-well capacity for ever-shrinking pixel pitches.

  7. Dustin Baugh

    Photography is so much more than getting the proper exposure on an image.
    But if you’re in a situation where a couple of seconds spent diddling with settings equals a missed image, it could be a nice addition to Auto or Programmed Auto.

  8. norman tesch

    Or you can just go to a gallery, or hire someone to shoot you and your family, so that you won’t be bothered by little things like learning how to actually use that camera you just bought… funny how they talk about white balance but can’t master simple things like having the photo in focus.

  9. Paul Empson

    So long as it can overexpose when I want it to… fine…

  10. Max C

    Wake me up when this reaches the market.
