

CMOS & CCD Sensors | How They Work & Where They’re Heading

By Kishore Sawh on May 6th, 2015

Isn’t it ever the truth that we become blind to what we see every day? As modern photographers, part of the deal is being somewhat tech savvy, and with the rate of technological advancement, sometimes it can be hard to keep up. There are mountains of information out there to stay on top of, but to some degree, information is only useful to the extent that you can find and use it when you need it. That means you’ve got to separate the wheat from the chaff and know which things you should really understand, because they will come up again and again. One such example? CMOS (complementary metal-oxide semiconductor) and CCD (charge-coupled device) sensors – what they are and why they matter.


In the animations herein, photographer Raymond Siri attempts to give a basic overview of these two sensor types and some insight into how they work, so you can know which works best for you. These videos have been doing the rounds, but honestly they give just a tiny glimpse into the comparison. As with so much in photography, when it comes to questions and comparisons, the answers aren’t binary – there is no simple yes or no. So which is better? There are advantages to both sensor technologies, but CMOS seems to be leading the pack, and I’ll explain a little as to why that may be.

Between A Rock & The Future

As image sensors are digital devices, they, of course, need power, and traditionally CCD sensors tend to use more of it – up to 100x as much. Their data throughput rates are slower than those of CMOS (this partially explains why medium format cameras don’t have high continuous frame rates), but traditionally they also produce images of higher quality, with less distortion and noise, and high QE (Quantum Efficiency). Generally, the portion of each CCD pixel that is devoted strictly to light gathering, versus any other function, is higher than in CMOS. The other functions CMOS sensors perform, however, are critical – they do some of the heavy lifting, like image processing and noise reduction, and they allow for special effects.

At the very base level, the theory is the same, as both convert light into electrons, but how those analog charges are turned into digital signals varies quite a bit between the two sensor types. Each pixel in a CMOS sensor, for instance, has several transistors next to it to amplify and move the charge over conventional wiring. Light entering the sensor often hits these rather than the actual photodiode, and the way the information moves can cause distortion. It’s no surprise, then, that many high-end medium format systems have used CCD sensors.

Furthermore, a CMOS sensor reads information as if it were reading a book of pixels, line by line, which is the cause of the rolling shutter we all hate. A CCD, by contrast, captures the whole frame at once with what is known as a ‘global shutter.’ While there have been attempts at making CMOS sensors with a global shutter, it will take time for this to be implemented throughout the industry.
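To make the line-by-line idea concrete, here is a tiny Python sketch (all numbers invented for illustration, not from any real sensor) of why rolling readout skews a moving subject while a global shutter does not:

```python
# Toy model of rolling-shutter skew: a one-pixel-wide vertical bar moves
# sideways while a CMOS-style sensor reads out one row at a time. Each
# row "sees" the bar at a slightly later moment, so the captured bar
# leans diagonally. A global shutter samples every row at once.

ROWS, COLS = 8, 16
BAR_SPEED = 1  # columns the subject moves per row-readout interval

def capture_rolling(start_col):
    frame = []
    for row in range(ROWS):
        # By the time this row is read out, the bar has moved on.
        bar_col = start_col + row * BAR_SPEED
        frame.append(["#" if col == bar_col else "." for col in range(COLS)])
    return frame

def capture_global(start_col):
    # Every row is sampled at the same instant: a straight bar.
    return [["#" if col == start_col else "." for col in range(COLS)]
            for _ in range(ROWS)]

for label, frame in (("rolling:", capture_rolling(3)),
                     ("global:", capture_global(3))):
    print(label)
    for row in frame:
        print("".join(row))
```

The rolling capture prints a slanted bar; the global capture prints a straight one, which is exactly the skew you see on propellers and fast pans.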

Where It’s Heading

So CCD seems brilliant, but alas, CCD sensors don’t use traditional manufacturing, whereas a CMOS sensor can be made on a more typical silicon production line, making it far less expensive. This, I believe, is largely what has pushed the utterly rapid development of the digital camera. The rate of development for devices requiring camera sensors, like phones, is now so high, and the turnover so great, that the lower cost of CMOS sensors is crucial.


If we accept that CMOS’ advantages generally lie outside of image quality, we can see those advantages are speed, cost, power efficiency, and some on-sensor processing. But that may not be the case for long, as the gap between CMOS and CCD quality is closing thanks to the back-illuminated CMOS sensor, and to the demands of video and fast autofocus. The processing abilities of CMOS make high-speed video possible, as well as high-speed AF – two regions of our field that really have our affection, and thus our attention. We even see promise in CMOS medium format. So you may not even need to concern yourself with which is better, because the way things stand, you may not have a choice – CMOS is running away with it.

Source: Image Sensors World


Kishore is, among other things, the Editor-In-Chief at SLR Lounge. A photographer and writer based in Miami, he can often be found at dog parks, and airports in London and Toronto. He is also a tremendous fan of flossing and the happiest guy around when the company’s good.



  1. Tom Blair

    Great info, like this one

  2. Ed Rhodes

    cool video, needs narration though

  4. Dave Haynie

    Nice. But more details.

    CCDs are by no means inherently global devices any more than CMOS devices are… and this is one reason that today’s CMOS sensors already outperform most practical CCDs on sensitivity and noise. Yes, a full-sensor CCD device will have more area available to the photodiodes, as it’s a much simpler device. However, not as much area as a BSI CMOS sensor. It is also a slow-readout device and will require a mechanical shutter… which, of course, is how we use our CMOS chips on most better still cameras, too. That’s why a mirrorless camera still clicks, of course, and why you don’t get rolling shutter in still photos (well, unless your camera has an electronic-shutter-only mode, which certainly some do these days).

    The first way to speed up a CCD is what’s called a frame-transfer CCD. Every CCD is basically a gigantic analog shift register… think of it as a bucket brigade. The charge in one cell is transferred to the next, and so on, and so on, until it leaves the chip. A frame-transfer CCD (faster, though still not global) is achieved by building a double-sized chip with half of the chip kept in permanent darkness. After exposure, the charge from the active half is quickly transferred to the covered half. The downside, of course, is that you need a chip that’s twice as big.
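    The bucket-brigade readout above can be sketched in a few lines of Python. This is a toy model with made-up numbers, not any real device; it also folds in charge-transfer efficiency (CTE), a real CCD figure of merit, to show why every extra shift matters:

    ```python
    # Toy bucket-brigade model of CCD readout. Each charge packet must be
    # shifted cell by cell until it reaches the output, and every transfer
    # loses a tiny fraction of the charge (1 - CTE). Packets that start
    # farther from the output suffer more transfers, hence more loss --
    # which is why very large CCDs need extremely high CTE.

    def ccd_readout(row, cte=0.9999):
        """Return the charge packets as they arrive at the output node."""
        out = []
        for i, charge in enumerate(row):
            transfers = i + 1            # cells this packet must cross
            out.append(charge * cte ** transfers)
        return out

    # Exaggerated CTE of 0.5 to make the per-transfer loss obvious:
    print(ccd_readout([100.0, 100.0, 100.0], cte=0.5))
    ```

    With a realistic CTE (0.9999 or better) the losses are tiny per transfer, but a frame-transfer design still has to pay for every shift twice: once into the dark storage half, and again on the way out.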

    Most modern CCDs are interline-transfer CCDs (in camcorders, etc… not necessarily specialized CCDs used for scientific imaging). In this case, you have alternate rows of the sensor masked off from light. After exposure, the charge from each exposed cell is immediately transferred to the non-sensitive cell next to it. Thus, a true global shutter… but a very large part of the sensor is not available to light capture. Some more modern sensors solve this by essentially stacking two sensors vertically, but that makes them much more expensive.

    CMOS versions of either of these “tricks” are certainly possible. And that’s exactly what global shutter CMOS chips, like Sony’s Pregius series, do. They build analog memory into the sensor, perhaps a bit more elegantly than simply “stacking sensors”. The problem is the same as in CCD… how do you do this without otherwise affecting the sensor. You don’t want to steal photodiode area (thus limiting sensitivity), you don’t want to steal charge bucket area (thus lowering saturation point), and you don’t want to add noise. It occurs to me that this might dovetail very nicely with BSI sensors, since the chip designer would have far more room on the top of the chip, given the photodiode array being on the underside.

    And here’s the thing… you don’t need a global sensor if you’re building a still camera, since you can include a mechanical shutter to provide your global shutter. The movement of large CMOS sensors into video is a relatively recent thing, while CCDs have been optimized for the needs of video for decades. So it’s no surprise we’re not hearing still camera companies say much about global electronic shutters. With more companies (Panasonic, for example) using the same basic large sensor for primarily-video as well as primarily-still cameras, I expect this to change in time.

    CMOS also has the advantage of on-chip ADCs. Any time you run an analog signal off one chip and into another, you’re accumulating noise – far more noise than doing the same job on-chip. A CCD is a pure analog device, which is why it’s made on an analog chip process, not the volume CMOS process used for CMOS sensors, CPUs, memory, etc. A CMOS sensor can have an on-chip ADC, so that the chip is only communicating the captured image via a noiseless digital connection. In fact, it can have multiple ADCs to speed things up, among lots of other possibilities. They don’t necessarily – Canon’s DSLRs seem to still be doing off-chip ADC, one of the things limiting their dynamic range versus Sony, which does all of its conversion on-chip. Same issues with CCDs.
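    The on-chip vs off-chip point can be illustrated with a small sketch – the noise magnitude and quantization step here are invented for demonstration, not measured from any camera:

    ```python
    # Toy comparison of on-chip vs off-chip ADC placement. Driving an
    # analog value across a board trace to a separate ADC chip picks up
    # noise *before* digitization; an on-chip ADC digitizes first, and
    # the digital link afterwards adds nothing.

    import random

    random.seed(42)  # deterministic demo

    def quantize(v, step=1.0):
        """Ideal ADC: snap an analog value to the nearest code."""
        return round(v / step) * step

    def off_chip_adc(signal, link_noise=2.0):
        # Analog signal crosses the inter-chip link, gaining noise,
        # and only then gets digitized.
        noisy = signal + random.gauss(0.0, link_noise)
        return quantize(noisy)

    def on_chip_adc(signal):
        # Digitized right at the sensor; the digital output is noise-free.
        return quantize(signal)

    true_value = 100.0
    print("on-chip :", on_chip_adc(true_value))
    print("off-chip:", off_chip_adc(true_value))
    ```

    Run it a few times without the fixed seed and the off-chip reading wanders around the true value while the on-chip reading never moves – the same reason multiple parallel on-chip ADCs can be both faster and cleaner.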

    • Rob Harris

      Dave, the geek in me loves the details you bring forth from the depths of your knowledge on this subject. As always, thanks for sharing.

  5. Rob Harris

    These advances brought to us by capitalism which rocks.

  6. robert garfinkle

    no need to make light of the situation…

  7. Lauchlan Toal

    I’ve done a fair bit of study on CCD designs lately, but haven’t learned much about CMOS. Thanks for sharing these videos! Hopefully we see global shutters in CMOS cameras soon, to make them truly effective for video work.

  8. Brandon Dewey

    cool video
