
Tuesday, July 16, 2013

CMOS Sensor Operation

Until recently, the industrial digital vision sensor market was dominated by the CCD array. However, technological advances in CMOS production techniques have led to a gradual increase in the popularity of this sensor type. Like CCD arrays, CMOS sensors are formed on a silicon substrate, but their structure is more akin to that of other CMOS technology such as RAM and ROM memory devices.
The diagram below shows an actual CMOS sensor, with the active pixel area in green and the area occupied by the on-chip circuitry in yellow, which replaces the shuttered area of a CCD-based sensor. The on-chip circuitry converts the charge to voltage at each pixel, whereas a CCD sensor shifts the charge vertically row by row, then horizontally pixel by pixel, to be converted to voltage when it reaches one or more output nodes. This gives CMOS sensors an advantage for windowing, or reading out a region of interest, as the pixels can be addressed randomly. A CCD sensor can only limit its region of interest vertically, and the resulting image always contains the data for the full image width.
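The windowing difference can be sketched with a toy readout model in NumPy; the array size and region-of-interest coordinates are arbitrary illustrative values, not taken from any real sensor:

```python
import numpy as np

# Toy 8x10 sensor frame (rows x columns) of pixel values.
frame = np.arange(80).reshape(8, 10)

# CMOS: pixels are individually addressable, so a true 2-D region
# of interest can be read out directly.
cmos_roi = frame[2:5, 3:7]   # rows 2-4, columns 3-6 only

# CCD: readout is row-sequential, so the region of interest can only
# be limited vertically; every selected row arrives at full width.
ccd_roi = frame[2:5, :]      # rows 2-4, all 10 columns

print(cmos_roi.shape)  # (3, 4) - just the window
print(ccd_roi.shape)   # (3, 10) - full image width
```

Reading out fewer pixels per frame is what lets a windowed CMOS sensor run at elevated frame rates.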
CMOS CCD diagram
The on-chip active amplifier and sampling capacitor give CMOS sensors advantages in terms of speed, full-well capacity and much improved response characteristics, yet they introduce dark-current noise and higher black-pixel content. CMOS sensors can also produce higher levels of fixed pattern noise than CCDs, but this type of noise can be easily removed with a software filter.
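As a rough illustration of such a software filter, the sketch below simulates a column-wise fixed pattern (a common CMOS signature, since readout circuitry is shared per column) and removes it by subtracting offsets estimated from a dark frame; all values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scene plus a column-wise fixed pattern.
scene = rng.uniform(100, 200, size=(6, 8))
column_offsets = np.array([5.0, -3.0, 0.0, 7.0, -2.0, 4.0, 1.0, -6.0])
raw = scene + column_offsets          # every row carries the same pattern

# Software filter: estimate the pattern from a dark frame (no light,
# so only the fixed offsets remain) and subtract it from the raw image.
dark_frame = np.zeros((6, 8)) + column_offsets
corrected = raw - dark_frame.mean(axis=0)
```

Because the pattern is fixed from frame to frame, a single calibration dark frame can correct every subsequent exposure.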
The development of CMOS sensor technology has been a rapid and varied process. The initial aim of CMOS sensors was to match the imaging performance of CCD technology, with lower power requirements and at less cost. To achieve this performance it was discovered that a much greater level of manufacturing process adaptation and deeper submicron lithography were required than initially expected. This led to the desired CMOS performance but increased development costs more than anticipated.
At first, the low power consumption of CMOS imaging sensors was set to be one of their distinct advantages. However, improved CCD development means that while CMOS still has the advantage in this area, the margin is now much smaller.
The integration of on-chip control circuitry with the CMOS imager gives the sensor greater flexibility and integration; the downside has been the introduction of greater noise levels. Both CMOS and CCD imaging sensors still require support chips to process the image, but CMOS imagers can be produced with more functionality on the sensor chip, as shown below.
CMOS sensors diagram
The spectral response of a CMOS sensor differs from that of CCD sensors in that the peak response sits at around 700 nm. Both sensor types operate over the same range, typically 200 nm to 1100 nm.
Typical CMOS spectral response chart
The main advantages of CMOS imaging sensors remain faster response, greater integration flexibility and lower on-chip power demands. However, the image quality has yet to match that of the CCD, and the supporting chips required to raise CMOS image quality go some way toward squandering those advantages. Yet neither sensor is categorically superior to the other. Each has its own advantages and disadvantages, and with CMOS developers working on image quality, and CCD developers aiming to reduce power demands and increase flexibility, the margins that decide which sensor is most suitable for an application look set to narrow further.

CCD Sensor Operation


Interline transfer CCD diagram

This diagram illustrates the general layout of the most common type of CCD array, the interline transfer CCD. The CCD is composed of precisely positioned light-sensitive semiconductor elements arranged in rows and columns. Each row in the array represents a single line in the resulting image. When light falls onto the sensor elements, photons are converted to electrons, with the charge accumulated by each element proportional to the light intensity and exposure time. This is known as the integration phase. After a predetermined period of time, the accumulated charge is transferred to the vertical shift registers.
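The integration phase can be modelled in a few lines; the full-well capacity and responsivity figures below are illustrative assumptions, not values for any particular sensor:

```python
import numpy as np

# Integration phase of a CCD: accumulated charge is proportional to
# light intensity times exposure time, clipped at the full-well capacity.
FULL_WELL = 20000  # electrons (illustrative assumption)

def integrate(intensity, exposure_time, responsivity=1000.0):
    """Electrons collected per pixel; responsivity (e- per intensity
    unit per second) is an assumed, illustrative constant."""
    charge = intensity * exposure_time * responsivity
    return np.minimum(charge, FULL_WELL)  # a full well can hold no more

intensity = np.array([0.5, 1.0, 5.0, 50.0])  # arbitrary light levels
charge = integrate(intensity, exposure_time=1.0)
print(charge)  # the brightest pixel saturates at the full-well limit
```

The clipping step is where overexposure saturates a pixel: once the well is full, further photons add no signal.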

Capturing the image CCD and CMOS sensors

There are two types of sensor unit that can be used in digital cameras.
CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) units have one main feature in common. Both use an array of millions of tiny photo sensors. Each sensor creates an electrical current when exposed to light. The strength of the current is proportional to the brightness of the light. But the way in which this electrical data is captured and turned into an image file is very different.

The CMOS unit from the EOS-1Ds Mark II.

One charge at a time

The tiny photo sensors create only a minute electrical current. This must be amplified before it is of any use in creating an image. Some CCDs have a single amplifier. This deals with the current from each sensor in turn. The amplifier is placed at one corner of the sensor where it reads and amplifies the charge from the nearest sensor in the first row. The charge in this sensor is then released, leaving it ‘empty’. All the charges from each sensor in the first row now move along by one sensor, so that the amplifier can read and process the next charge. This continues until the amplifier has dealt in turn with the charges from each of the sensors in the first row.
At this point, all the sensors in the first row are ‘empty’ of charges. This allows all the charges from the sensors in the second row to move down to the first row. The second row is now empty, and is filled with the charges from the third row. In this way, all the charges drop down a row, leaving the top row empty.
The whole process then repeats itself, with the amplifier dealing with the charges that were originally in the second row.
You can guess the rest. The charges originally in the third row move to the first row, where they are fed one by one into the amplifier. This continues row by row, charge by charge, until the charge from the far end of the top row finally reaches the amplifier.
Of course, the word ‘finally’ is relative. The whole process takes place very quickly, often in a fraction of a second.
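The whole bucket-brigade sequence described above can be sketched as a toy simulation (the tiny 2x3 charge array and amplifier gain are arbitrary illustrative values):

```python
def ccd_readout(frame, gain=2.0):
    """Read a frame the CCD way: the row nearest the amplifier is fed
    charge by charge into a single amplifier, then every remaining row
    drops down one, until the whole array has been read."""
    rows = [list(r) for r in frame]  # mutable copy of the charge array
    output = []
    while rows:
        first_row = rows.pop(0)      # row nearest the amplifier
        for charge in first_row:     # amplifier handles one charge at a time
            output.append(charge * gain)
        # popping the first row is the "all charges drop down a row" step
    return output

frame = [[1, 2, 3],
         [4, 5, 6]]
print(ccd_readout(frame))  # [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
```

The strictly serial loop is why a single-amplifier CCD reads the last charge of the top row ‘finally’: nothing happens in parallel.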
Each charge from the sensor array is ‘tagged’ by the amplifier unit before it is passed on to the camera’s microcomputer, so that each piece of data can be reassembled in exactly the same sequence to produce the image.
Only five EOS digital cameras use a CCD image sensor, though four of these are not strictly Canon cameras. The EOS DCS3 (July 1995), DCS1 (December 1995), D2000 (March 1998) and D6000 (December 1998) were all produced in collaboration with Kodak. Canon provided the body, but the image sensor and electronics came from Kodak.
There is only one EOS digital camera using a CCD in which both body and electronics were designed and built by Canon - the EOS-1D (September 2001) - and even here the CCD sensor is outsourced.
The charges (brightness data) from a CCD move across the array photo sensor by photo sensor until they reach the external amplifier unit(s).

The CMOS advantage

The CMOS unit takes a different approach to processing the charges from the millions of photo sensors. Instead of one amplifier at the side of the array, each individual pixel has its own personal amplifier. This means that all the charges can be processed at the same time, clearing the sensors for the next exposure.
Canon has concentrated all its research and development on the CMOS unit, rather than the CCD. It not only designs and makes all its own CMOS units, but also designs and manufactures the equipment that makes the CMOS units. The first EOS digital camera with a Canon CMOS sensor was the D30 (May 2000). All later EOS digital cameras, with the exception of the EOS-1D, also feature a Canon CMOS image sensor.
To the outside world, Canon’s concentration on CMOS seemed a little perverse. In the late 1990s, images from CCDs were of significantly higher quality than those from CMOS units. There were two main reasons for this. CCDs are less prone to ‘noise’ - a grain-like pattern that appears in the image, especially at higher equivalent ISO speeds. Also, the light sensitivity of a CMOS unit was lower than that of a CCD. This required greater amplification of the signal - leading to more ‘noise’.
But Canon recognised that CMOS units offered long-term benefits, and set about overcoming the disadvantages.
One advantage is that CMOS units need a lot less power to operate than CCDs. A digital camera makes many demands on a battery to operate autofocusing, autoexposure, shutter control and - in some cameras - built-in flash. Any technique that helps to reduce the drain on the battery is a good thing.
Also, CMOS units are much cheaper to manufacture than CCDs. This might seem strange considering each sensor needs its own amplifier, but all this is done with microelectronics directly on the sensor unit. The CCD needs a separate amplifier and other associated electronics that add to the cost of the already expensive unit.
The lower cost of CMOS units is important to Canon because it is selling its consumer range of digital cameras to a mass market. The use of CMOS was influential in the marketing of the first sub-€1,500 digital SLR - the EOS 300D - in September 2003.

Overcoming the problems

CMOS sensors were known for their ‘noise’. In simple terms, this is electrical activity that does not play any part in forming a true image of the subject. One cause of this noise is the individual amplifiers linked to each photo sensor. Inevitably, there are slight variations in output of the millions of amplifiers across the array, and this creates a noise pattern across the image.
Canon has overcome this with lateral thinking. Rather than trying to equalise the amplifiers, it has accepted that there will always be a noise pattern. So every time you take a picture, you actually make two exposures - one of the subject, and one with the shutter closed. This second exposure only captures the noise pattern. If the values of this pattern are subtracted from the first exposure, you obtain an image with little or no noise.
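That two-exposure technique amounts to a simple subtraction. The sketch below simulates it with made-up numbers; the noise amplitude and scene values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-pixel amplifier outputs vary slightly across the array,
# producing a repeatable noise pattern (values are illustrative).
noise_pattern = rng.normal(0.0, 2.0, size=(4, 5))

true_scene = rng.uniform(50, 200, size=(4, 5))

exposure_1 = true_scene + noise_pattern  # shutter open: subject + noise
exposure_2 = noise_pattern               # shutter closed: noise only

image = exposure_1 - exposure_2          # subtract the noise pattern
```

Because the pattern comes from the fixed differences between amplifiers, the shutter-closed exposure captures essentially the same pattern as the real one, and the subtraction cancels it.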
The second main problem with CMOS sensors is also related to the amplifiers, though in a different way. On a CCD, almost all the surface of the unit is occupied by photo sensors. On a CMOS unit, space is needed for amplifier circuitry, so less of the space is given over to the photo sensors. This means that some of the light reaching the unit hits areas between the sensors and is lost, reducing the overall sensitivity of the unit.
There are two solutions to this problem. First, the circuitry can be made smaller and the photo sensors bigger. But there is a limit to this reduction - and an escalation in manufacturing cost - so there will always be space between the sensors. To counteract this, millions of microlenses are used, one over each photo sensor. The microlens covers the photo sensor and the circuitry. Rays of light hitting the edge of the micro lens, which would normally be wasted, are now focused on the photo sensor. The effect is to increase the sensitivity of the sensor unit.
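To see roughly why the microlens matters, consider a back-of-envelope calculation; the fill factor and microlens efficiency below are assumed, illustrative figures, not specifications of any real sensor:

```python
# Without microlenses, only the fill factor's share of the incoming
# light lands on the photo sensor (numbers are illustrative assumptions).
fill_factor = 0.45           # fraction of pixel area that is photo sensor
microlens_efficiency = 0.90  # fraction of pixel-area light a lens redirects

light_collected_bare = fill_factor           # 45% of the light is used
light_collected_lens = microlens_efficiency  # ~90% with a microlens

gain = light_collected_lens / light_collected_bare
print(f"microlens sensitivity gain: {gain:.1f}x")  # 2.0x
```

Even with these rough numbers, redirecting the light that would fall between the wells roughly doubles the sensitivity of the unit.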
The result is that Canon CMOS sensor units can compete with, and even outperform, CCD sensor units. All current EOS professional digital cameras use CMOS sensor units, and reviews regularly comment on the lack of noise, even at high ISO settings.

Photo sensors on a CMOS unit are positioned inside ‘wells’. Between the wells is circuitry associated with the amplifier units. Rays of light reaching the spaces between the wells are wasted, reducing the efficiency of the CMOS unit.
Placing a micro lens over each sensor and its circuitry captures the light that would normally be wasted and redirects it to the photo sensors.

http://cpn.canon-europe.com/content/education/infobank/capturing_the_image/ccd_and_cmos_sensors.do

Background Information on CCD and CMOS Technology

CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor) image sensors are two different technologies for capturing images digitally. Each has unique strengths and weaknesses, giving advantages in different applications. Neither is categorically superior to the other. In the last five years much has changed with both technologies, and the outlook for both technologies is vibrant.
Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes to be converted to voltage, buffered, and sent off chip as an analog signal. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise correction, and digitization circuits, so that chip outputs are digital bits. See figures 1 & 2.
ccd camera diagram
Figure 1: Diagram of a CCD.
On a CCD, most functions take place on the camera's printed circuit board. If the application's demands change, a designer can change the electronics without redesigning the imager.
cmos camera diagram
Figure 2: Diagram of a CMOS.
A CMOS imager converts charge to voltage at the pixel, and most functions are integrated into the chip. This makes imager functions less flexible but, for applications in rugged environments, a CMOS camera can be more reliable.
This difference in readout techniques has significant implications for sensor capabilities and limitations. Eight attributes characterize image sensor performance.
Responsivity, the amount of signal the sensor delivers per unit of input optical energy. CMOS imagers are marginally superior to CCDs.
Dynamic range, the ratio of a pixel's saturation level to its signal threshold. CCDs have the advantage here.
Uniformity, the consistency of response for different pixels under identical illumination conditions. CMOS imagers were traditionally much worse than CCDs, but new amplifiers have brought the illuminated uniformity of some CMOS imagers close to that of CCDs.
Shuttering, the ability to start and stop exposure arbitrarily, is superior in CCD devices. CMOS devices require extra transistors or nonuniform shuttering, sometimes called a rolling shutter, to achieve the same results.
Speed, an area in which CMOS arguably has the advantage over CCDs, because all of the camera functions can be placed on the image sensor.
Windowing, the ability to read out a portion of the image sensor, allowing elevated frame rates for small regions of interest. CMOS technology excels here; CCDs generally have limited windowing abilities.
Antiblooming, the ability to gracefully drain localized overexposure without compromising the rest of the image. CMOS generally has natural blooming immunity, while CCDs require specific engineering to achieve this capability.
Biasing and clocking. CMOS imagers have a clear advantage in this area, operating on a single bias voltage and clock level.
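Of these attributes, dynamic range is the easiest to put in numbers: it is usually quoted in decibels as the ratio of saturation level to noise floor. The electron counts below are illustrative assumptions:

```python
import math

# Dynamic range = pixel saturation level / signal threshold, in dB.
full_well = 30000  # electrons at saturation (illustrative assumption)
noise_floor = 15   # electrons of read noise (illustrative assumption)

dynamic_range_db = 20 * math.log10(full_well / noise_floor)
print(f"{dynamic_range_db:.1f} dB")  # about 66 dB
```

A deeper well or a lower noise floor both widen the ratio, which is why the CCD's cleaner readout gives it the edge on this attribute.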
CCD and CMOS imagers were both invented in the late 1960s. CCDs became dominant in the market, primarily because they produced superior images with the fabrication technology available. CMOS image sensors required more uniformity and smaller features than silicon wafer foundries could deliver at the time. Not until the 1990s, with the development of submicron lithography, was there renewed interest in CMOS. That interest is due to lower power consumption, camera-on-a-chip integration, and lower fabrication costs. Both CCD and CMOS imagers offer excellent imaging performance. CMOS imagers offer more integration (more functions on the chip), lower power dissipation (at the chip level), and the possibility of smaller system size.
Today there is no clear line dividing the types of applications each can serve. CCD and CMOS technologies are used interchangeably. CMOS designers have devoted intense effort to achieving high image quality, while CCD designers have lowered their power requirements and their pixel sizes. As a result, you can find CMOS sensors in high-performance professional and industrial cameras, and CCDs in low-cost, low-power cell phone cameras. For the moment, CCDs and CMOS remain complementary technologies: each can do things the other cannot. Over time this distinction will soften, with CMOS imagers consuming more and more of the CCD's traditional applications. Considering the relative strengths and opportunities of CCD and CMOS imagers, the choice continues to depend on the application and the vendor more than the technology.

CCD vs CMOS: A Short-note.

There has been neck-and-neck competition between CCD and CMOS imaging technologies. CCD and CMOS imagers were invented in the same era, within a span of a few years. Yet CCDs became dominant because of the superior results from the fabrication technology then available. CMOS technology depended on uniformity and smaller feature sizes, which did not arrive until the 1990s, when lithography advanced enough to support small features. It was after this that CMOS imagers made a comeback, and since then both technologies have fought for market dominance. While CCD sensors are known to offer the best image quality, CMOS imagers offer more functions on the chip and attractive features such as lower power usage, making them more popular in mobile phone cameras and similar devices. A comparison can be drawn between the two on various aspects, and a sound decision made depending on the requirements of the application.
Factor                      CCD                        CMOS
Responsivity                Moderate                   Higher
Dynamic range               High                       Moderate
Uniformity                  High                       Low
Speed                       Moderate                   Higher
Antiblooming                High                       High
Signal out of pixel/chip    Electron packet/Voltage    Voltage/Bits
System/sensor complexity    High/Low                   Low/High
Noise                       Low                        High
Markets have seen a rapid decline in the CCD's share, owing to the growing popularity of CMOS sensors in cell phones and point-and-shoot cameras; even industry stalwarts like Canon and Sony, which primarily used CCDs, are now shifting to CMOS imagers. It is expected that more than 95% of the camera market will switch over to CMOS sensors by 2014. But there remains, and will remain, a segment that continues to bank on CCD sensors: the scientific research and astronomy community, the biggest example being the Hubble Space Telescope. So, while the light from CCDs might be fading on Earth, we’d still need a CCD to see what’s out there.