Showing posts with label ccd. Show all posts

Thursday, July 18, 2013

Sequential Three-Pass Color CCD Imaging

Three-pass sequential color CCD imaging systems employ a rotating color wheel to capture three successive exposures in order to obtain the desired RGB (red, green, and blue) color characteristics of a digital image. The major advantage of this technique is the ability to fully utilize the entire pixel array of a CCD imaging chip, by using one pass for each color.
Silicon-based charge-coupled devices lack the ability to distinguish the color of the incoming photons presented to the pixel elements. Even though electromagnetic radiation of varying energy penetrates the device to a depth determined by its wavelength, the interaction that produces free electrons and holes is not color sensitive. A typical sequential color imaging system design is illustrated in Figure 1, which shows the red filter being used to pass illuminating light waves from the microscope optics to the CCD surface. The primary advantage of this technique is the ability to achieve the highest resolution the device is capable of, which equals the full size of the CCD pixel array.
After all of the image information has been captured in three individual passes, it is recombined off-chip and processed in a manner similar to that of other CCD architectures. The major disadvantage of this system is the relatively long exposure times necessary to accumulate three individual color arrays, which requires an almost stationary subject and vibration-free operation of the rotating color wheel mechanical components. This technique is being slowly phased out as single-shot CCD cameras with higher resolutions become commonplace. However, a number of applications now incorporate a rapidly switchable liquid crystal array screen that can be used to capture the three colors in milliseconds, thus speeding the throughput of the device and reducing the risk of mechanically-induced vibration.
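The off-chip recombination step described above can be sketched in a few lines. This is a minimal illustration (the function name and normalization scheme are assumptions, not part of any real camera SDK): three full-resolution monochrome exposures, one per color-wheel filter, are stacked into a single RGB array.

```python
import numpy as np

def combine_three_pass(red, green, blue):
    """Combine three sequential monochrome CCD exposures (one per
    color-wheel filter) into a single RGB image.  Each input is a 2-D
    array of raw pixel counts covering the full CCD array."""
    r = np.asarray(red, dtype=np.float64)
    g = np.asarray(green, dtype=np.float64)
    b = np.asarray(blue, dtype=np.float64)
    if not (r.shape == g.shape == b.shape):
        raise ValueError("all three passes must use the full CCD array")
    rgb = np.stack([r, g, b], axis=-1)   # shape: (rows, cols, 3)
    rgb /= max(rgb.max(), 1.0)           # normalize counts to [0, 1]
    return rgb
```

Because every pass uses the entire pixel array, no spatial interpolation is needed, in contrast to single-shot color-filter-array sensors.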
Contributing Authors
Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive, Melville, New York, 11747.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/threepass.html

Electron Multiplying Charge-Coupled Devices (EMCCDs)

The inherent advantages of scientific charge-coupled device (CCD) sensors for digital imaging in optical microscopy have made them ubiquitous in a wide variety of applications. One of the few significant shortcomings of conventional high-performance CCD cameras is that very low signal levels typically fall beneath the read noise floor of the sensor, limiting the imaging capabilities in a number of currently productive research areas that demand rapid frame-rate capture at extremely low light levels. An innovative method of amplifying low-light-level signals above the CCD read noise is employed in electron multiplying CCD technology. By incorporating on-chip multiplication gain (see Figure 1), the EMCCD achieves, in an all solid-state sensor, the single-photon detection sensitivity typical of intensified or electron-bombarded CCDs at much lower cost and without compromising the quantum efficiency and resolution characteristics of the conventional CCD structure.
The primary feature that distinguishes this technology is the inclusion of a specialized extended serial register on the CCD chip that produces multiplication gain through the process of impact ionization in silicon. By elevating photon-generated charge above the read noise of the device, even at high frame rates, the EMCCD has the capability of meeting the needs of ultra-low-light imaging applications without the use of external image intensifiers. Moreover, the approach is applicable to any of the current CCD sensor architectures, including back-illuminated devices, and sensors employing electron multiplying registers are considerably less expensive to manufacture due to the signal amplification stage being incorporated directly into the CCD structure.
Several major areas of current research focus in the biomedical sciences rely on specific targeting of subcellular structures or single molecules with appropriate fluorophores in order to follow the dynamics of biological processes. The rapid kinetics combined with extremely small specimen volumes and low fluorophore concentrations utilized in such experiments require both high sensitivity and rapid frame-rate data acquisition. In the evaluation of transient, low-intensity signals, such as those encountered in single-molecule investigations, total internal reflection fluorescence (TIRFM; see Figure 1), spinning disk confocal in live-cell imaging, flux determinations of calcium or other ions, and time-resolved three-dimensional microscopy (4-D techniques), the electron multiplying CCD offers significant advantages over other sensors designed for low signal levels. Additionally, when employed with the higher signal levels of conventional fluorescence imaging techniques, the extreme sensitivity of the EMCCD system allows the use of lower fluorophore concentrations and/or lower power levels from the excitation source, thereby reducing both the potential toxicity to living cells and photobleaching of the fluorescent probe.
The performance of all CCD-based detectors has improved dramatically in recent years, and increased sensitivity has lowered detection limits in high-performance low-light imaging systems significantly. Quantum efficiency now exceeds 90 percent and read noise is limited to less than 4 electrons rms (root-mean-square) in some high performance back-illuminated CCD camera systems. This low level of read noise performance is attainable in traditional CCD sensors only at moderate readout rates, however. In addition, because the charge packet from a single pixel is often only a few electrons in challenging microscopy investigations, the signal is too frequently lost in the read noise even at slow readout rates. Furthermore, when imaging is performed at video frame rates and faster, the read noise increases to an unacceptable level, relative to the signal, in low-light conditions.
One proven solution to the read noise limitation when higher frame rates are required has traditionally been to employ an image intensifier to multiply the number of emitted specimen photons prior to detection and readout by a conventional CCD. In this approach, which is based on an operating principle similar to that of photomultiplier tubes, the signal is amplified to a level that exceeds the read noise generated at the desired frame rate. The intensified CCD (ICCD) camera system is presently among the most commonly employed imaging methods for low-light techniques such as time-resolved fluorescence experiments, ratio imaging of ion-sensitive fluorochromes, single molecule fluorescence, and other dynamic studies in living cells. These systems are sometimes referred to as proximity-focused image intensifiers and utilize a photocathode closely coupled to a micro-channel plate (MCP) electron multiplier.
The amplified electron output from the MCP is accelerated by a high potential difference onto a phosphorescent screen that converts the electrons to photons, which are subsequently relayed to the CCD surface through an optical relay lens or direct fiber optic coupling. Because potential differences ranging from 2500-5000 volts are maintained to accelerate electrons across the gaps separating the components of the ICCD, a high internal vacuum is necessary, requiring the device to be precisely assembled and completely free of contaminants. The manufacturing costs are consequently relatively high, and the intensifiers present certain other disadvantages as well, among them reduced spatial resolution compared to an equivalent conventional (non-intensified) CCD, high background noise, relatively low quantum efficiency (Figure 2), and susceptibility to irreversible damage from exposure to high light levels.
The resolution of the ICCD is ultimately limited by the resolution of the photocathode, the micro-channel plate, and the output phosphor. Continued improvements in the phosphor composition and microchannel plate architecture of modern devices have resulted in resolutions of 64 line pairs per millimeter, or better, which corresponds to a full-width at half maximum (FWHM) spot size of approximately 25 micrometers. Unfortunately, for single electron recording events (necessary in single molecule imaging) about 50 percent of the signal is spread into neighboring pixels with an ICCD, which results in considerable spatial averaging compared to an EMCCD. This effect must be carefully scrutinized when examining data collected during single molecule experiments using ICCD detector systems.
Electron-bombarded CCDs (EBCCD) are a less widely used detector variation for low-light camera systems, and in similarity to intensified CCDs, incorporate a photocathode for photon-to-electron conversion, followed by acceleration across a high-voltage gradient. The energetic electrons impinge directly on a back-thinned CCD, where they generate multiple charges, resulting in a modest signal gain. The devices can be operated at video frame rates, but have limited gain adjustment range, and exhibit similar disadvantages to the intensified CCD, including reduced quantum efficiency and resolution, and the potential for damage to the external image-intensifying components if exposed to high light levels. The development of the electron multiplying CCD employing on-chip multiplication gain provides the basis for cameras that achieve the signal gain benefit of systems using external intensifiers, while maintaining the customary CCD advantages of high, spectrally broad quantum efficiency (Figure 2), full native pixel resolution, and immunity to damage from high light levels.
Diagrammed in Figure 2 are the quantum efficiencies and spectral profiles of front and back illuminated electron multiplying CCDs, as well as the photocathodes in popular Gen II and Gen III intensifiers. The back-thinned CCD has a quantum efficiency of 90 percent or greater over the wavelength region of 500 to 700 nanometers and exhibits the highest values of any device in the near-infrared. In contrast, the front-illuminated CCD features a much lower quantum efficiency (approximately 60 percent) over a narrower wavelength range of 550 to 700 nanometers. Both of the CCD devices depicted in Figure 2 have significantly greater quantum efficiencies than the Gen II and Gen III intensifiers, which range between 35 and 45 percent in the visible spectral region. Intensifier photocathodes designed to operate more efficiently in the ultraviolet and infrared regions are available.
On-Chip Multiplication Gain
Conventional cooled CCD cameras achieve relatively high sensitivity through the process of integrating signal within each pixel prior to readout in order to overcome read noise, which is incurred only once for each frame. At low light levels, long exposures are required in order to accumulate sufficient signal and achieve the detector's maximum read-noise performance. Consequently, frame rate speeds are limited to a relatively slow fraction of a frame up to a few frames per second. In applications suitable for "slow-scan" signal acquisition, for which the detector can be operated in the photon shot noise-limited regime, traditional back-illuminated CCD systems provide superior overall performance, including maximum quantum efficiency (as illustrated in Figure 2), which takes into account noise factors associated with electron multiplication. However, when it is necessary to capture temporal data requiring video frame rates or faster, at very low light levels, the conventional CCD camera is fundamentally limited by read noise.
The electron multiplying CCD incorporates a structural enhancement to amplify the captured signal before the charge is transferred to the on-chip amplifier, which has the effect of reducing the read noise, relative to signal, by the value of the multiplication gain factor. Because very weak specimen signal levels may produce a charge packet from a single pixel of only a few electrons, even with slow readout from a high-performance CCD, the signal is lost in read noise. The primary advantage of the EMCCD is to provide a mechanism to improve signal-to-noise ratio for signal levels below the CCD read-noise floor. In applications that require extremely fast gating (on the nanosecond level), the EMCCD is not appropriate, and intensified CCDs maintain an advantage in rapid kinetic data collection of this type.
Electron multiplying CCD sensors are produced utilizing conventional CCD fabrication techniques by making relatively simple structural modifications. The unique feature of the EMCCD is an electron multiplying structure (in effect, a charge amplifier) positioned between the end of the shift register and the output node, which is often referred to as the multiplication register or gain register (see Figure 3). This special extended serial register provides multiplicative gain following detection of photons in the device's active pixel array, and therefore, the technology can be adapted to any current CCD architecture and format. The most widely used sensors produced by the two companies that pioneered the technology employ frame-transfer architecture, and camera manufacturers have also introduced systems based on back-illuminated versions of the electron multiplying CCDs.
The functional layout of a frame-transfer electron multiplying CCD is illustrated in Figure 3, in which the gain register is added to the charge transfer path following the frame-transfer area of the chip and the conventional serial register, and preceding the on-chip charge-to-voltage conversion circuitry. The structure of the additional register differs from the regular shift register in that the full-well capacity is increased and electrons are accelerated from element to element in the multiplication register by application of much higher clock voltages at selected transfer electrodes. When charge is transferred by applying a higher-than-normal voltage, secondary electrons are generated in the silicon by the process of impact ionization. In the gain multiplying register, each stage comprises four gates, three of which are clocked as in the conventional 3-phase structure, with the fourth (between phases 1 and 2) being held at a low fixed direct current (DC) potential.
Figure 4 illustrates the transfer of charge through the gates. Note that the gates for phases 1 and 3 (R1 and R3) are clocked with drive pulses of normal potential, which is typically on the order of 5 to 15 volts (the R3 gates have zero potential for the clocking phase illustrated in Figure 4). The clock pulses used in the same phases of the regular readout register can be employed for these gates. Phase 2 (R2 in Figure 3) is clocked at higher voltage (35-50 volts) preceded by a gate held at a low DC level (denoted by the Low DC gate in Figure 4). The potential difference between the fixed-level gate and the high-voltage clocked gate results in sufficient field intensity to sustain the impact ionization process as electrons are transferred from phase 1 to phase 2 in the normal clocking sequence. Although the charge multiplication per transfer is only on the order of 1.01 to 1.016, the gain accumulated over the large number of pixels in the multiplication register (dependent on the horizontal pixel array size) is substantial, and can reach hundreds or even thousands. The multiplication gain increases exponentially with the applied high phase-2 voltage, and can be raised or lowered by varying the clock voltages.
Figure 5(a) illustrates the exponential increase in gain that accompanies increasing amplitude of the clocking voltage applied to the phase 2 electrode. It is obvious that relatively small adjustments to the voltage, beyond a certain value, result in large changes in the on-chip multiplication gain. In commercial EMCCD camera systems, this voltage adjustment is commonly mapped to a high-resolution digital-to-analog converter that can be precisely controlled through computer software. In spite of the very low probability of impact ionization occurring and the low mean gain per stage, the overall gain factor in the multiplication register can easily be in excess of 1000x due to the large number of pixels over which the electron charge packet grows in cascading fashion. The probability of secondary electron generation is dependent on the serial clock voltage levels and the CCD temperature, and as indicated above, typically ranges from 1 to 1.6 percent. While the probability of secondary electron generation is described by a complex function, the total gain (M) of the cascaded elements in the multiplication register is given by the following equation:
M = (1 + g)^N
where g is the probability of generating a secondary electron and N is the number of pixels in the multiplication register. A CCD having 512 elements in the gain register and an impact ionization probability of 1.3 percent (0.013) would, therefore, generate a total charge multiplication gain of over 744.
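The cumulative gain is easy to verify numerically. The sketch below (function name is illustrative) evaluates M = (1 + g)^N using the figures given in the text:

```python
def multiplication_gain(g, n_stages):
    """Total EMCCD multiplication gain M = (1 + g)**N, where g is the
    per-stage probability of impact ionization and N is the number of
    elements in the multiplication register."""
    return (1.0 + g) ** n_stages

# Figures from the text: 512-element gain register, 1.3 % probability.
M = multiplication_gain(0.013, 512)   # roughly 744x
```

Even though each stage adds only about one percent, the cascade over 512 stages yields a gain of several hundred, which is why small changes in clock voltage (and hence in g) shift the overall gain so dramatically.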
Because of the exponential relationship of the multiplication gain to clock voltage, a wide adjustment range is available, allowing the gain to be set high enough to reduce readout noise to insignificant levels under most imaging conditions. Since the multiplication gain is independent of readout speed, setting a gain level equivalent to the read noise in electrons, at the desired readout frequency, produces an effective noise level of 1 electron rms. Increasing gain beyond this level reduces noise to sub-electron levels. Significantly, by utilizing higher gain settings at faster frame rates, this noise performance can be achieved at any speed. As an example, a current high-performance back-illuminated electron multiplying CCD, with a read noise specification of 60 electrons rms at 10 megahertz, can achieve an effective noise level of 1 electron rms or below with any on-chip multiplication gain value of 60 or greater.
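The "read noise divided by gain" relationship can be made concrete with a tiny helper (the function name is illustrative, and the 60-electron figure is the example from the text):

```python
def effective_read_noise(read_noise_rms, gain):
    """Read noise referred back to the sensor input after on-chip
    multiplication: Nr / M, in electrons rms."""
    return read_noise_rms / gain

# Example from the text: 60 e- rms read noise at 10 MHz readout.
at_unity_gain = effective_read_noise(60, 1)    # 60 e- rms, unusable for low light
at_gain_60    = effective_read_noise(60, 60)   # 1 e- rms effective
at_gain_300   = effective_read_noise(60, 300)  # sub-electron effective noise
```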
As discussed, electron multiplication gain can be used to overcome any readout noise, although it is desirable to minimize this factor because at some level, increasing gain results in a limitation of sensor dynamic range (as illustrated in Figure 5(b)). Although the bit depth of the analog-to-digital converter of the camera system determines the maximum dynamic range, at gain levels beyond that required to overcome read noise, dynamic range will decrease due to the multiplied signal exceeding the pixel full well capacity and/or the capacity of the output amplifier. For example, if a register designed to contain a normal full-well charge of 200,000 photoelectrons is used at a gain level of 250x, then pixels at the end of the gain register will become saturated whenever the original charge packet is greater than 800 photoelectrons. By taking specific design steps to maximize full well depth and amplifier throughput, camera manufacturers are able to provide for high bit-depth imaging with moderate gain and at high frame rates. Because this requires the readout amplifier to be optimized for larger pixels at high speeds, the read noise specification is necessarily increased. In addition, the ultimate size of a register pixel is limited by the fact that a triplet of transfer electrodes can only control photoelectrons over a maximum silicon band size of approximately 18 micrometers.
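The dynamic-range trade-off amounts to simple integer division: the largest input charge packet the gain register can handle is the full-well capacity divided by the gain. A short sketch using the worked example from the text (the function name is an assumption for illustration):

```python
def saturation_input_electrons(full_well, gain):
    """Largest input charge packet (in photoelectrons) that the gain
    register can multiply without exceeding its full-well capacity."""
    return full_well // gain

# Example from the text: 200,000 e- register full well at 250x gain.
limit = saturation_input_electrons(200_000, 250)   # 800 photoelectrons
```

Doubling the gain halves this limit, which is why gain should be set no higher than needed to overcome read noise.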
Increasing the EMCCD multiplication gain factor overcomes higher levels of read noise, but the dynamic range of the camera system suffers, limiting the use of the camera with brighter signals amenable to slower readout. To maintain full dynamic range, some electron multiplying camera systems are being equipped with dual amplifiers (see Figure 6), including a conventional unit for slow-scan wide dynamic range applications such as brightfield or fluorescence imaging, as well as a high-speed amplifier for high-sensitivity operation requiring the use of on-chip gain. Such a combination provides a camera system with the traditional CCD advantages of high resolution, high quantum efficiency, and wide dynamic range coupled to the highest sensitivity achievable.
Additional Noise and Performance Variables
Several additional factors are significant with regard to the performance of electron multiplying CCDs, including the relationship between on-chip gain and dynamic range (discussed above), other gain-related noise sources, evaluation of quantum efficiency, a phenomenon known as gain ageing, and considerations regarding cooling requirements of the image sensors. The efficiency of the impact ionization process, which produces charge gain during electron transfer in the specialized serial register, is inversely dependent on temperature. The probability of secondary electron generation increases as temperature is decreased, and consequently a camera equipped with a well designed cooling system is able to achieve higher gain values at lower clock voltage settings.
The optimum level of cooling depends on the camera system and application, but the variation of multiplication gain with temperature illustrates the importance of maintaining precise temperature stability in order to avoid adding noise to the measured signal. Dark noise arising from thermal dark current generation in the electron multiplying CCD is identical to that in conventional CCDs, and is similarly reduced by cooling the sensor. With conventional high-performance detectors, the sensor is usually cooled to a temperature at which dark current shot noise arising during the expected integration (exposure) interval is negligible. Once the dark noise is substantially lower than the noise associated with signal readout, further cooling does not provide any additional practical benefit.
Electron multiplying CCD cameras are able to detect even single-photon events when the on-chip multiplication is utilized to elevate the signal above the read noise level, and it must be recognized that any level of unsuppressed dark current is significant since it is subject to being multiplied along with the signal. Ideally, therefore, the dark current should be completely eliminated in the EMCCD, and cooling systems designed to reduce CCD temperature to -75 degrees Celsius or lower are incorporated in the most advanced camera systems.
Note that different noise components are relevant in intensified CCD systems. While signal is amplified above both dark current and read noise in the ICCD, making increased cooling less beneficial, another source of noise arising in the intensifier photocathode, referred to as equivalent background illuminance (EBI), occurs in intensified systems. The electron multiplying CCD does not exhibit EBI, and overall, dark current is a less significant limitation for the EMCCD with effective cooling than EBI is for intensified CCD cameras. Although increased cooling can reduce EBI in the photocathode, effective cooling systems for the more complicated multi-component structure of intensified CCDs, which usually include fiber optic couplings, are much less practical.
Due to the probabilistic nature of the impact ionization process utilized in the EMCCD, a statistical variation occurs in the on-chip multiplication gain. The uncertainty in the gain produced introduces an additional system noise component, which is evaluated quantitatively as the excess noise factor (or simply noise factor, abbreviated F), and which acts as a multiplying factor for both dark and photon-generated signal in the camera system. Excess noise factors vary for the different low-signal detector types, and are attributable to a combination of various loss mechanisms (if they exist) and to statistical variation in the electron multiplication process arising either in the silicon crystal lattice of the EMCCD or the micro-channel plate of the ICCD.
A conventional CCD that does not have any significant loss mechanisms or additional noise from amplification processes exhibits a noise factor of unity, as does an EMCCD utilizing normal clock voltages and producing no multiplication gain. With increased gain settings, the statistical variations begin to add additional noise, the magnitude of which depends upon both the gain and the signal level. According to theory, the excess noise factor for the electron multiplication process is approximately 1.4 (square root of 2) over a wide range of gain levels. Experimental measurements are typically lower, and range between 1.0 and 1.4 for multiplication gain factors up to 1000x. A value of 1.3 is a commonly stated average for EMCCDs, in comparison to noise factors of 1.6 to 2 for intensified CCDs employing Gen II and Gen III filmed and filmless photocathodes. Filmed image intensifiers generally have higher noise factors because of the additional loss mechanism imposed on electrons by the film.
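One convenient, if simplified, way to fold the excess noise factor into detector comparisons is to quote a shot-noise-equivalent quantum efficiency, QE / F^2 (this figure of merit and the function name are assumptions added here for illustration, using the QE values from Figure 2 and the noise factors quoted above):

```python
def effective_qe(qe, excess_noise_factor):
    """Shot-noise-equivalent quantum efficiency: the excess noise factor
    F degrades the signal-to-noise ratio as if QE were divided by F**2."""
    return qe / excess_noise_factor ** 2

# Representative values from the text (assumed pairings):
emccd = effective_qe(0.90, 1.3)   # back-illuminated EMCCD, F ~ 1.3
iccd  = effective_qe(0.45, 1.8)   # Gen III intensified CCD, F ~ 1.8
```

Under these assumptions the back-illuminated EMCCD retains roughly half its nominal quantum efficiency, while the intensified CCD drops well below 20 percent, consistent with the comparison drawn later in the text.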
One noise phenomenon that exists in the EMCCD, and which has no equivalent in intensified CCDs, is referred to as spurious charge or clocking induced charge (CIC). When electrons are being transferred through the multiplication register under the influence of clocking pulses, the sharp clock waveform inflections produce impact ionization in a small proportion of transfers even with normal clocking voltages. Furthermore, the clock pulses may produce a secondary electron even when no primary electron is present for transfer. By careful manipulation of clock waveform amplitudes and edges, manufacturers can minimize CIC, which is normally estimated to produce only one electron in approximately 100 transfers. Even in high-performance low-noise conventional CCDs, clocking induced charge is totally lost in readout noise; however, in the EMCCD at high gain settings, additional CIC is generated, and is generally treated as an additional component of dark-related signal.
Clocking induced charge is independent of exposure time, but because it is attributed to impact ionization, it is usually considered to increase with decreasing temperature, just as electron multiplication does. When EMCCDs are utilized at high gain, single electron events are recorded as spikes in the image, and any contribution from CIC would seemingly be visible. Under typical operating conditions of the EMCCD, background events causing such spikes, rather than readout noise, determine the detection limit of the camera. Recent dark image tests performed at various cooling temperatures by one manufacturer indicate that whatever the CIC contribution is to dark current, it does not appear to set a cooling limit as temperature is reduced to as low as -95 degrees Celsius. In those tests background spikes appearing above readout noise are attributed to dark current, and are dramatically reduced as temperature is lowered.
Evaluation of the signal-to-noise ratio (SNR) of an electron multiplying CCD requires that the conventional expression applied in the calculation for CCD sensors be modified to reflect the effect of on-chip multiplication gain and the excess noise factor. In effect the SNR is equivalent to the total number of photons detected per pixel during the integration interval divided by the combined noise from all sources, as follows:
SNR = (S × Qe) / Ntotal
where S represents the number of incident photons per pixel, and Q(e) is the quantum efficiency, or proportion of total photons actually detected as signal. The total noise in the system is represented by N(total), which combines several variables according to the following relationship:
Ntotal = [(S × Qe × F^2) + (D × F^2) + (Nr / M)^2]^(1/2)
where F represents the excess noise factor, D is the total dark signal, N(r) is the camera read noise, and M is the on-chip multiplication gain. The noise terms in the denominator of the EMCCD noise equation represent the familiar CCD noise components, photon shot noise, dark noise, and read noise, respectively, with appropriate modifications to account for loss mechanisms and statistical noise sources specific to the process of on-chip multiplication gain. This is accomplished by applying the excess noise factor (F) to the first two terms, and the multiplication gain factor (M) to the read noise term. The effective shot noise and dark noise are increased by the excess noise factor, while read noise is reduced by the multiplication gain achieved in the gain register.
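The modified SNR expression translates directly into code. The sketch below (function name is illustrative) implements the equation above term by term; setting F = 1 and M = 1 recovers the familiar conventional-CCD expression:

```python
import math

def emccd_snr(S, Qe, F, D, Nr, M):
    """EMCCD signal-to-noise ratio:
        SNR = (S * Qe) / sqrt(S*Qe*F**2 + D*F**2 + (Nr/M)**2)
    S  : incident photons per pixel during integration
    Qe : quantum efficiency (0..1)
    F  : excess noise factor (~1.3 typical for EMCCDs)
    D  : total dark signal (electrons)
    Nr : camera read noise (electrons rms)
    M  : on-chip multiplication gain"""
    signal = S * Qe                       # detected photoelectrons
    noise = math.sqrt(signal * F ** 2     # shot noise, scaled by F
                      + D * F ** 2        # dark noise, scaled by F
                      + (Nr / M) ** 2)    # read noise, reduced by gain
    return signal / noise
```

Note that at high gain the read noise term becomes negligible, leaving the excess noise factor as the dominant penalty relative to an ideal shot-noise-limited detector.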
Both electron multiplying and intensified CCDs suffer from a gain degradation artifact known as gain ageing, which occurs in the gain register or microchannel plate of the devices. In EMCCDs, gain ageing is manifested by a slow decrease in gain over time and is quantitatively based on the total electric charge that has passed through the multiplication register. Although the exact nature of gain ageing has not been fully elucidated, CCD designers speculate that the high voltages used in the multiplication process (30 to 50 volts) trap accelerated electrons in the silicon-silicon dioxide interface region beneath the transfer electrode. The trapped electrons effectively alter the electric field gradient at this point and thus create the gain ageing phenomenon. Gain ageing occurs exponentially over time and is most prominent during the first hours of use in the EMCCD gain register.
In order to compensate for gain ageing, commercial camera manufacturers often pre-age cameras at high gain settings for several hundred hours or more before readjusting the circuitry. Additionally, several manufacturers are now using computer algorithms to compensate for gain ageing and protect the gain register. Gain ageing can be controlled by reducing the gain and blocking illumination when the camera is not being used. In general, the gain should be adjusted to a level that just offers sufficient gain to overcome the readout noise. No further increase in the signal-to-noise ratio is achieved once the readout noise becomes less than one, and continuing to add gain only enhances the rate of gain degradation. Finally, the investigator can periodically monitor gain characteristics with a standardized specimen to ensure maximum performance from EMCCD cameras.
The solid-state on-chip electron multiplication of the EMCCD gives it a number of decided advantages over intensified CCDs, including preservation of the spatial resolution of the CCD, and superior quantum efficiency performance due to not being constrained by limitations of the intensifier phosphor. In comparing quantum efficiencies of different detector types, the effect of all loss mechanisms and statistical noise sources must be considered. In terms of the resulting effective quantum efficiencies, electron multiplying CCDs, particularly back illuminated versions, exhibit substantially broader and higher quantum efficiency values than any other low-light detector.
Contributing Authors
Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/emccds.html

Electron-Bombarded Charge-Coupled Devices (EBCCDs)

The electron-bombarded charge-coupled device (EBCCD) is a hybrid of the image intensifier and the CCD camera that is useful in fluorescence microscopy for imaging specimens at very low light levels. In this device, photons are detected by a photocathode similar to that in an image intensifier. The released electrons are accelerated across a gap and impact on the rear side of a back-thinned CCD.
These energetic electrons generate multiple charges in the CCD resulting in a modest gain of a few hundred. Figure 1 illustrates the design of an electron-bombarded CCD in which photoelectrons, accelerated by a high voltage gradient (1.5-2.0 kilovolts), impact directly onto a back-thinned CCD operating at video rate.

The advantages of this device over a cooled, slow-scan CCD are the additional gain and accompanying speed. EBCCDs also demonstrate no significant geometrical distortion or shading, relatively low noise (40 electrons/pixel) because of design improvements in CCD read-out, on-chip integration capability, and the option for a variety of read-out rates and formats such as binning and subsampling. The main disadvantages are the lower quantum efficiency of the photocathode (30 percent) compared to that of an unmodified back-thinned CCD (80 to 90 percent) and a significant degradation in the modulation transfer function compared to that of the back-thinned CCD alone (see Figure 2).
The increased gain also limits the dynamic range of the EBCCD. Because each photoelectron generates approximately 300 electron-hole pairs, the wells fill 300 times faster than in an ordinary CCD, so a CCD having a full-well capacity of 150,000 electrons is completely filled by only 500 photons.
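The dynamic-range arithmetic above can be checked in a few lines of Python (values taken from the text; a sketch, not a device datasheet):

```python
# Figures quoted in the text above.
full_well = 150_000   # CCD full-well capacity, in electrons
ebccd_gain = 300      # electron-hole pairs generated per photoelectron

# Each detected photon deposits ~ebccd_gain electrons in a well,
# so the well saturates after only:
photons_to_saturate = full_well // ebccd_gain
print(photons_to_saturate)  # 500
```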
Compared to an intensified CCD, the electron-bombarded CCD usually has higher spatial resolution and a better signal-to-noise ratio at moderate light levels, but the limited gain adjustment range and modest low-light-level detection capability make the EBCCD the solid-state equivalent of the outmoded silicon intensifier target (SIT) camera.
Contributing Authors
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
  

http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/ebccd.html

CCD: Proximity-Focused Image Intensifiers

Image intensifiers were developed for military use to enhance night vision and are often referred to as wafer tubes or proximity-focused intensifiers. They have a flat photocathode separated by a small gap from the input side of a micro-channel plate (MCP) electron multiplier, with a phosphorescent output screen on the reverse side of the MCP.
Substantial voltages are present across the small gaps between the photocathode and the MCP, and between the MCP and the phosphorescent output screen, which requires careful construction of the devices to ensure that they are free from contamination and can maintain a high internal vacuum. Proximity-focused intensifiers are free from geometrical distortion or shading because the photoelectrons follow short, direct paths between the cathode, the MCP, and the output screen rather than being focused by electrodes. Input and output windows are typically around 18 millimeters in diameter, and the devices employ either a multialkali or bialkali photocathode (Gen II intensifiers) or a gallium arsenide photocathode (Gen III and Gen IV devices) together with a P20 output phosphor. The overall photon gain of these devices averages about 10,000, which is calculated according to the equation:
Gain = QE x G(mcp) x V(p) x E(p)
where QE is the photocathode quantum efficiency (0.1 to 0.5 electrons/photon), G(mcp) is the microchannel plate gain (averaging between 500 and 1000), V(p) is the voltage between the MCP and the output phosphor (around 2500-5000 volts), and E(p) is the electron-to-light conversion efficiency of the phosphor (0.08-0.2 photons/electron per volt). When the voltage between the MCP and the output phosphor is reduced, electrons arrive at the phosphor with less energy and the light output drops accordingly.
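The gain equation can be evaluated directly. The sketch below assumes that E(p) is expressed per volt of accelerating potential, so that the four factors multiply out to the quoted average:

```python
def intensifier_gain(qe, g_mcp, v_p, e_p):
    """Overall photon gain: Gain = QE x G(mcp) x V(p) x E(p)."""
    return qe * g_mcp * v_p * e_p

# Low end of each quoted range reproduces the ~10,000 average gain:
gain = intensifier_gain(qe=0.1, g_mcp=500, v_p=2500, e_p=0.08)
print(gain)  # ~10000
```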

The photocathode in the latest generation of these devices, while similar to that in photomultiplier tubes, has a higher quantum efficiency (up to 50 percent) in the blue-green end of the spectrum. The gain of the micro-channel plate is adjustable over a wide range with a typical maximum of about 80,000 (a detected photon at the input leads to a pulse of 80,000 photons from the phosphor screen). The phosphor matches the spectral sensitivity of the eye and is often not ideal for a CCD. Resolution of an intensified CCD depends on both the intensifier and the CCD, but is usually limited by the intensifier microchannel plate geometry to about 75 percent of that of the CCD alone. The latest generation of image intensifiers (denoted blue-plus Gen III or sometimes Gen IV; Figure 2) employ smaller microchannels (6 micron diameter) and better packing geometry than in previous models with a resultant substantial increase in resolution and elimination of the chicken-wire fixed-pattern noise that plagued earlier devices. The broad spectral sensitivity and high quantum efficiency (Figure 2) of the "high blue" GaAs and gallium arsenide phosphide (GaAsP) photocathodes are ideally suited to applications in fluorescence or low-light-level microscopy.
Image intensifiers have a reduced intrascene dynamic range compared to a slow-scan CCD camera, and it is difficult to obtain more than a 256-fold intensity range (8 bits) from an intensified CCD camera. Intensifier gain may, however, be rapidly and reproducibly changed to accommodate variations in scene brightness, thereby increasing the interscene dynamic range. Indeed, since image intensifiers can be rapidly gated (turned off or on in a few nanoseconds), relatively bright objects can be visualized by a reduction in the "on" time. A gated, variable-gain intensified CCD camera is commercially available with a dynamic range spanning 12 orders of magnitude. Gated, intensified CCD cameras are required for most time-resolved fluorescence microscopy applications because the detector must be turned on and off in nanoseconds or its gain rapidly modulated in synchrony with the light source.
Thermal noise from the photocathode as well as electron multiplication noise from the microchannel plate reduce the signal-to-noise ratio in an intensified CCD camera to below that of a slow-scan CCD. The contribution of these components to the noise created by the statistical nature of the photon flux depends on the gain of the device and the temperature of the photocathode. Generally, a reduction of the gain of the intensification stage is employed to limit the noise although intensified CCD cameras are available with a cooled photocathode.
Intensified CCD cameras have a very fast response limited by the time constant of the output phosphor and often the CCD camera read out is the slowest step in image acquisition. Because of the low light fluxes emanating from the fluorochromes bound to or within living cells, intensified CCD cameras are frequently employed to study dynamic events and for ratio imaging of ion-sensitive fluorochromes. The simultaneous or near-simultaneous acquisition of two images at different excitation or emission wavelengths is required for ratio imaging and intensified CCD cameras have the requisite speed and sensitivity.
Two of the most popular approaches for relaying the output of an image intensifier to a video-rate camera (vidicon or CCD) are using an optical relay lens coupling or a fiber-optic coupling. Relay lenses are designed to capture light from the intensifier output window with minimal geometrical distortion or spherical aberration and project as much of the image as possible onto the video pickup device. The efficiency of a relay lens is given by the equation:
Efficiency = T / [4f²(1 + M²)]
where T is the lens transmission (around 0.9), M is the magnification (ranging between 0.5x and 2x), and f is the lens f-number (1.0 to 2.8). An ideal 1:1 relay lens with 100 percent transmission and an f-number of 1.0 will give a maximum transfer efficiency of only around 12 percent. When the input window of the video sensor (CCD array size) is smaller than the intensifier output window, the relay lens is required to demagnify the image to match the format of the sensor. Coupling efficiency increases proportionally with demagnification according to the efficiency equation given above. If the intensifier has sufficient gain and output luminance, the losses in the relay lenses may not adversely affect overall performance. Optical relay lenses work well with Gen II inverter tubes and some Gen III (or Gen IV) tubes coupled to Newvicon tube or CCD detectors because the high gain and high screen luminance of these intensifiers help to offset the inefficiency of the relay lenses.
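The efficiency relation is easy to explore numerically (a sketch; the parameter values are the ones quoted above):

```python
def relay_efficiency(t, f_number, m):
    """Relay-lens transfer efficiency: T / [4 f^2 (1 + M^2)]."""
    return t / (4 * f_number**2 * (1 + m**2))

# Ideal 1:1 relay lens (T = 1.0, f/1.0) transfers only about 12 percent:
print(round(relay_efficiency(1.0, 1.0, 1.0), 3))  # 0.125
# Demagnifying to M = 0.5 with a realistic T = 0.9 improves the coupling:
print(round(relay_efficiency(0.9, 1.0, 0.5), 3))  # 0.18
```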
The optimum method for coupling proximity-focused image intensifiers to CCD sensors is through a fiber-optic taper (Figure 1). This approach achieves a coupling efficiency between 40 and 80 percent with matching formats, but requires a high degree of skill in bonding the fiber-optic taper to both devices. Maximum efficiency and minimal fixed-pattern noise are achieved when the CCD front window is removed and the fiber-optic taper is machined to fit directly onto the diode array surface. High resolution, artifact-free images require precision quality tapers having a small fiber diameter (between 2 and 3 microns) with very few missing or broken fibers and low fixed-pattern noise.
Use of optical relay lenses allows for convenient interchange of the video camera, CCD, and/or intensifier tube, and provides electrical isolation of the sensitive video camera input from the high voltages and high-frequency electrical interference present at the output of the image intensifier. Bonding fiber-optic tapers to the CCD surface is relatively permanent, however, and CCD failure can lead to loss of an expensive image intensifier and fiber-optic taper. To alleviate this problem, improvements in nonpermanent, optically matched silicone bonding materials make it possible to disassemble fiber-optic coupled systems without destroying them.
Contributing Authors
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/proximity.html

CCD: Photomultiplier Tubes

A photomultiplier tube, useful for light detection of very weak signals, is a photoemissive device in which the absorption of a photon results in the emission of an electron. These detectors work by amplifying the electrons generated by a photocathode exposed to a photon flux.
Photomultipliers acquire light through a glass or quartz window that covers a photosensitive surface, called a photocathode, which then releases electrons that are multiplied by electrodes known as metal channel dynodes. At the end of the dynode chain is an anode or collection electrode. Over a very large range, the current flowing from the anode to ground is directly proportional to the photoelectron flux generated by the photocathode.
The spectral response, quantum efficiency, sensitivity, and dark current of a photomultiplier tube are determined by the composition of the photocathode. The best photocathodes capable of responding to visible light are less than 30 percent quantum efficient, meaning that 70 percent of the photons impacting on the photocathode do not produce a photoelectron and are therefore not detected. Photocathode thickness is an important variable that must be controlled to ensure the proper response to absorbed photons: if the photocathode is too thick, more photons will be absorbed but fewer electrons will reach the back surface to be emitted; if it is too thin, too many photons will pass through without being absorbed. The photomultiplier described here is a side-on design, which uses an opaque and relatively thick photocathode. Photoelectrons are ejected from the front face of the photocathode and angled toward the first dynode.

Electrons emitted by the photocathode are accelerated toward the dynode chain, which may contain up to 14 elements. Focusing electrodes are usually present to ensure that photoelectrons emitted near the edges of the photocathode are likely to land on the first dynode. Upon impacting the first dynode, a photoelectron triggers the release of additional electrons that are accelerated toward the next dynode, and so on. The surface composition and geometry of the dynodes determine their ability to serve as electron multipliers. Because gain varies with the voltage across the dynodes and the total number of dynodes, electron gains of 10 million (Figure 1) are possible when 12 to 14 dynode stages are employed.
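The geometric growth of gain along the dynode chain can be illustrated with an assumed secondary-emission ratio (the value of delta below is hypothetical, chosen to match the gains quoted above):

```python
# Overall gain = delta ** n for n dynodes, each releasing delta
# secondary electrons per incident electron (delta = 3.9 assumed).
delta = 3.9
for n_dynodes in (10, 12, 14):
    print(n_dynodes, f"{delta ** n_dynodes:.2e}")
# With 12-14 stages the gain reaches the 10-million range.
```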
Photomultipliers produce a signal even in the absence of light due to dark current arising from thermal emissions of electrons from the photocathode, leakage current between dynodes, as well as stray high-energy radiation. Electronic noise also contributes to the dark current and is often included in the dark-current value.

Channel photomultipliers represent a newer design that incorporates a semitransparent photocathode deposited onto the inner surface of the entrance window. Photoelectrons released by the photocathode enter a narrow, curved semiconductive channel that performs the same function as a classical dynode chain. Each time an electron impacts the inner wall of the channel, multiple secondary electrons are emitted. These secondary electrons travel toward the next bend in the channel wall (simulating a dynode chain), which in turn emits a larger number of electrons toward the following bend. The effect occurs repeatedly, leading to an avalanche with a gain exceeding 100 million. Advantages of this design are lower dark current (in the picoamp range) and an increased dynamic range.
Confocal microscopes, spectrophotometers, and many high-end automatic camera exposure systems utilize photomultipliers to gauge light intensity. Spectral sensitivity of the photomultiplier depends on the chemical composition of the photocathode, with the best devices having gallium-arsenide elements that are sensitive from 300 to 800 nanometers. Because photomultiplier photocathodes are not uniformly sensitive, the incoming photons are typically spread over the entire entrance window rather than concentrated on one region. Because photomultipliers do not store charge and respond to changes in input light flux within a few nanoseconds, they can be used for the detection and recording of extremely fast events. Finally, the signal-to-noise ratio is very high in scientific-grade photomultipliers because the dark current is extremely low (it can be further reduced by cooling) and the gain may be greater than one million.
Contributing Authors
Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive., Melville, New York, 11747.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 

 http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/photomultipliers.html

CCD: Avalanche Photodiodes

An avalanche photodiode is a silicon-based semiconductor containing a pn junction consisting of a positively doped p region and a negatively doped n region sandwiching an area of neutral charge termed the depletion region. These diodes provide gain by the generation of electron-hole pairs from an energetic electron that creates an "avalanche" of electrons in the substrate.
Presented in Figure 1 is an illustration of a typical avalanche photodiode. Photons entering the diode first pass through the silicon dioxide layer and then through the n and p layers before entering the depletion region where they excite free electrons and holes, which then migrate to the cathode and anode, respectively. When a semiconductor diode has a reverse bias (voltage) applied and the crystal junction between the p and n layers is illuminated, then a current will flow in proportion to the number of photons incident upon the junction.

Avalanche diodes are very similar in design to the silicon p-i-n diode; however, the depletion layer in an avalanche photodiode is relatively thin, resulting in a very steep localized electrical field across the narrow junction. In operation, very high reverse-bias voltages (up to 2500 volts) are applied across the device. As the bias voltage is increased, electrons generated in the p layer continue to gain energy as they undergo multiple collisions with the crystalline silicon lattice. This "avalanche" of electrons eventually results in electron multiplication that is analogous to the process occurring in one of the dynodes of a photomultiplier tube.
Avalanche photodiodes are capable of modest gain (500-1000), but exhibit substantial dark current, which increases markedly as the bias voltage is increased (see Figure 2). They are compact and immune to magnetic fields, require low currents, are difficult to overload, and have a high quantum efficiency that can reach 90 percent. Avalanche photodiodes are now being used in place of photomultiplier tubes for many low-light-level applications.
Contributing Authors
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/avalanche.html

CCD: Metal Oxide Semiconductor (MOS) Capacitor

At the heart of all charge-coupled devices (CCDs) is a light-sensitive metal oxide semiconductor (MOS) capacitor, which has three components consisting of a metal electrode (or gate), an insulating film of silicon dioxide, and a silicon substrate.
MOS capacitors are segregated into two classes of devices, one having a surface channel structure and the other having a buried channel design. It is the latter device that is used in the fabrication of modern CCDs, due to several advantages of the buried channel architecture. The MOS capacitor array is fabricated on a p-type silicon substrate (illustrated in Figure 1) in which the main charge carriers are positively charged electron "holes". Prior to the multi-step photolithography-driven CCD fabrication process, a polished silicon wafer is bombarded with boron ions to create channel stops that localize integrated charge within the confines of a single pixel gate set (not shown in Figure 1). After impregnation of the wafer with boron ions, a 10,000 angstrom layer of silicon dioxide is grown over the channel stops.
The next step in the fabrication process is to create the buried channels by implanting phosphorus ions in areas that will eventually be covered by polysilicon gate electrodes. The n-type semiconductor formed by the phosphorus contains negatively charged electrons as the primary charge carriers and forms a pn-junction diode structure, which serves to localize potential wells deep beneath the silicon/silicon dioxide interface. The potential well illustrated in the central portion of Figure 1 is a schematic drawing of this diode structure.
The primary function of the buried channel is to localize integrated electrons away from the silicon/silicon dioxide interface, where they can become trapped during charge transfer. By localizing charge deep within the p-type silicon substrate, transfer of charge occurs more efficiently with a minimum of residual charge remaining in the gate.
After the buried channels are formed within the silicon substrate, a layer of silicon dioxide is thermally grown on the silicon wafer surface to provide an insulating base for the gate electrodes. Next, a phosphorus-doped layer of polycrystalline silicon (polysilicon) about 5,000 angstroms thick is grown on top of the oxide layer. This polysilicon layer comprises the gate electrodes (see Figure 1) and is transparent to visible light, making it an ideal material for use in CCDs. Although the fabrication of a complete CCD requires additional steps, the basic MOS capacitor assembly is complete at this point.
When the capacitor is unbiased (does not have an applied voltage), electrons residing in the n-region of the device equilibrate to the lowest potential energy:
Potential Energy = -|q| × Ψ
where |q| is the magnitude of the charge on an electron and Ψ is the electrostatic potential. From this equation, it follows that electrons will localize where the electrostatic potential is greatest. A potential energy diagram for the n-region is presented in Figure 2, which illustrates where the electron ensemble congregates within the capacitor (about 1 micron beneath the oxide layer).
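As a toy illustration of the relation above (potential energy equals minus the electron charge magnitude times the electrostatic potential), electrons settle where the potential is greatest. The region names and voltages below are assumed purely for illustration:

```python
q = 1.602e-19  # magnitude of the electron charge, coulombs

# Assumed electrostatic potentials (volts) at three depths in the device:
potentials = {"oxide interface": 1.0, "buried channel": 6.0, "bulk silicon": 0.5}

# Electrons minimize potential energy = -q * psi, i.e. seek the largest psi:
resting_place = min(potentials, key=lambda region: -q * potentials[region])
print(resting_place)  # buried channel
```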
After a quantity of charge has been integrated by interaction with photons and a voltage is applied to the gate electrode with the silicon substrate held at ground potential, the electrostatic potential curve drawn in Figure 2 will tend to flatten at the peak. As the gate voltage is increased, the potential of electrons trapped in the buried channel rises in a linear manner.
Also illustrated in Figure 1 are neighboring gates (denoted by a -V symbol) that are biased to form barriers to the potential well created by the central gate. The MOS capacitor has the ability to move integrated charge (generated by incoming photons) by selectively changing the bias (or voltage) on the three gates relative to one another. This collection and transfer of electrons by the capacitor is the basis for the CCD image sensor.
Contributing Author
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 

 http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/moscapacitor.html

CCD: Microlens Arrays

Microlens arrays (also referred to as microlenticular arrays or lenslet arrays) are used to increase the optical fill factor in CCDs, such as interline-transfer devices, that suffer from reduced aperture due to metal shielding. These tiny lens systems serve to focus and concentrate light onto the photodiode surface instead of allowing it to fall on non-photosensitive areas of the device, where it is lost from the imaging information collected by the CCD.
A typical lenslet placement scheme is illustrated in Figure 1, where a tiny optical lens is strategically placed over the dye layer and metal light shield of a photodiode. The lenslets are either grown in parallel arrays during the CCD fabrication process or manufactured from a material such as quartz and placed on the array surface during packaging. Each lenslet is a high-quality optical surface containing refractive elements ranging from around 10 microns to several hundred microns in diameter, depending upon the application. Lens quality is high enough that microlenses perform essentially as ordinary single-element lenses.
Addition of microlens arrays to CCD photodiodes can increase the optical fill factor by up to three times that realized without the tiny optical components. Increasing the fill factor yields a corresponding increase in the sensitivity of the photosite. Microlens arrays provide a substantial increase in performance of interline-transfer CCD imaging arrays that have lateral overflow drains and a sizeable amount of shielded pixel space. These devices typically suffer from reduced optical fill factors because of reduced active pixel area compared to total pixel size.
Illustrated in Figure 2 is a schematic diagram of an interline-transfer CCD pixel pair, one equipped with a microlens to concentrate light into the photodiode, while the other must absorb incident light rays without the benefit of optical assistance from a microlens. Incident photons that strike the microlens are directed into the photodiode by refraction through the glass or polymer comprising the microlens. The photodiode without a microlens collects a significantly lower portion of incoming photons, because those that impact on shielded areas (the exposure gate and neighboring structures) are not useful in charge integration. The optical fill factor of interline CCDs can be reduced to less than 20 percent by shielded vertical transfer shift registers. With the microlens array, the fill factor can approach 100 percent, depending upon manufacturing parameters.
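The sensitivity benefit follows from simple fill-factor arithmetic; the two fill factors below are assumed values based on the percentages quoted above:

```python
fill_without_lens = 0.20  # shielded registers leave ~20% of the pixel photosensitive
fill_with_lens = 0.95     # microlens funnels most incident light onto the photodiode

improvement = fill_with_lens / fill_without_lens
print(f"{improvement:.2f}x more light collected")  # 4.75x
```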
Organization of the cone of light reaching the microlens surface depends upon the optical characteristics of the microscope or camera lens used to direct light to the CCD. Also, polysilicon gate thickness heavily influences the ability to collect light by the photodiode positioned beneath the gate structure. Microlens arrays are fabricated using reflow techniques on resist layers to achieve numerical apertures ranging from 0.15 to 0.4 with short focal lengths and corresponding lens diameters of 20 to 800 microns. The fill factor of a microlens array is strongly dependent upon the manufacturing process used to create the array. Glass lenses of somewhat lower (0.05 to 0.2) numerical aperture are also utilized. Lower numerical aperture microlenses have fewer optical aberrations with significantly longer focal lengths.
Disadvantages encountered with microlenses are far outweighed by increased sensitivity of devices having these optical components in place. One of the primary difficulties occurs when light rays from the outer portions of a pixel are focused onto an adjacent lens (and subsequently onto the detector photodiode) resulting in mis-registration. In addition, when detector pixel size reaches the diffraction limit of the microlenses, the pixels become overfilled leading to inaccurate measurements. As photodiodes become smaller, the problems associated with producing quality microlenses increase. Higher quality microlenses are needed to produce images on these arrays, but spherical aberration then becomes a problem. Adding microlenses to CCDs increases the number of processing steps, and the uniformity of the lens array is a variable that can often cause problems during fabrication.
Contributing Authors
Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive., Melville, New York, 11747.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
 http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/microlensarray.html

Digital Camera Readout and Frame Rates


Recent imaging applications in wide field fluorescence and confocal microscopy have increasingly centered on the demanding requirements of recording rapid transient dynamic processes that may be associated with a very small photon signal, and which often can only be studied in living cells or tissues. Technological advances in producing highly specific fluorescent labels and antibodies, as well as dramatic improvements in camera, laser, and computer hardware have contributed to many breakthrough research accomplishments in a number of fields. As high-performance camera systems, typically employing low-noise cooled charge-coupled device (CCD) detectors, have become more capable of capturing even relatively weak signals at video rates and higher, certain performance factors necessarily take on greater importance. A camera system's readout rate and frame rate are interrelated parameters that are crucial to the ability of the system to record specimen data at high temporal frequency.
Sensors designed for quantitative imaging, such as those utilized in high-performance optical microscopy cameras, primarily employ a variation of one of three well-known CCD architectures: full frame, frame transfer, or interline transfer. The frame-transfer and interline-transfer formats generally provide faster frame rate capabilities, but manufacturers incorporate a range of structural and clocking enhancements in designs of each type in order to improve performance. Figure 1 illustrates a full-frame CCD sensor designed to achieve high frame rates by use of a split parallel register that can be clocked to transfer charge in two directions toward dual serial registers, each having a separate output amplifier. The frame rate of the sensor can be approximately doubled by this transfer scheme. Several additional modifications of the standard CCD architectures provide similar advantages and are briefly described in the subsequent discussion.
Readout rate is governed by the time required to digitize a single pixel (the serial conversion time) and is defined as the inverse of that value. Because the conversion time for a single pixel is typically on the order of a microsecond or less, readout rates are conveniently expressed in pixels per second. The rate is often stated as a frequency (hertz, Hz), and some camera manufacturers refer to this specification as the pixel clock rate or simply the clock rate. The frame rate of an imaging system incorporates the exposure time and extends the single-pixel readout rate to the entire pixel array. It is defined as the inverse of the time required to acquire an image and completely read the image data out to the amplifier, and is typically stated in frames per second (fps) or in frequency units (Hz). An approximation of the frame rate is obtained by taking the inverse of the sum of the total pixel digitization time and the exposure (integration) time, as follows:
Approximate Frame Rate (fps) = 1 / [(N(pixel) / R(readout)) + T(exp)]
where N(pixel) is the number of sensor pixels being read, R(readout) is the readout rate in pixels per second, and T(exp) is the exposure time. In the equation, the total pixel digitization time for the array is represented by the quotient of the total pixel number divided by the readout rate (N(pixel) / R(readout)).
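The approximation can be written as a small helper function; the sensor size, readout rate, and exposure time below are assumed for illustration:

```python
def approx_frame_rate(n_pixels, readout_rate, t_exp):
    """1 / (digitization time + exposure), with digitization time
    taken as pixel count divided by readout rate (pixels/second)."""
    return 1.0 / (n_pixels / readout_rate + t_exp)

# Hypothetical 1-megapixel sensor read at 20 MHz with a 10 ms exposure:
fps = approx_frame_rate(1_000_000, 20e6, 0.010)
print(f"{fps:.1f} fps")  # 16.7 fps
```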
Although this simplified expression for calculating frame rate is useful for certain comparison purposes, it omits a variety of other factors that affect the true frame rate achieved in practice, among them the operation mode of the CCD and the required exposure duration relative to frame read time in a given application. The details of the charge collection and transfer mechanisms employed by a particular sensor design, as well as the choice of operation modes, such as binning and reduced-array scanning, are significant in determining the actual imaging frame rate. Furthermore, it is implicit that absolute maximum frame rate is achieved at the expense of exposure duration, and a long exposure time relative to the time required to read out the accumulated charge becomes the limiting factor in such circumstances. In listing camera system specifications, a manufacturer may specify frame rates achieved under "typical" conditions, or in some cases frame rate values are stated for an exposure time of zero for comparison of performance achieved using different scan modes or array sizes.
The true frame rate value is determined by the combined frame acquisition time and frame read time, each of which depends upon operational details specific to the camera system and application. Quantitatively the frame rate is therefore the inverse of the sum of these two variables, as expressed by the following equation:
Frame Rate (fps) = 1 / (Frame Acquisition Time + Frame Read Time)
To obtain an accurate evaluation of the true frame rate, a more detailed accounting of the two primary components is required than is provided by the simplified approximation, which uses the number of pixels divided by the readout rate plus the exposure time. While the frame acquisition and frame read times are conceptually obvious, each encompasses several operational factors that vary with CCD architecture and operating conditions. A number of basic operations typically contribute to the frame acquisition and frame read time intervals, and these are listed and discussed further below.
Frame acquisition time components:
  • Time required to clear charge from the parallel register prior to beginning integration.
  • Shutter opening delay in CCDs employing mechanical shutters.
  • Exposure time.
  • Shutter closing delay, if any.
Frame read time components:
  • Time required to clear charge from the serial register prior to beginning readout.
  • Time required for a parallel row shift times the number of rows in the array.
  • Serial discard time multiplied by the number of pixels not intended to be read.
  • Serial conversion (digitization) time per pixel times the number of pixels to be read.
Prior to image acquisition, it is necessary with many CCD sensors to clear the pixel array of charge that may have accumulated prior to exposure, due to dark current, cosmic ray interaction, or other charge-generating events. The time required to clear the entire parallel register (referred to as the parallel clear time) depends upon the charge transfer clocking cycle, which may be repeated several times for complete charge removal. The total time required is equal to the parallel clear time multiplied by the number of clearing cycle repetitions. Elimination of any charge accumulated in the array before starting the actual frame integration reduces the phenomenon of image smear, as well as allowing more precise control of exposure time from frame to frame. Typically, current CCD sensors perform several parallel clearing cycles, but because the charge is discarded and does not have to be digitized, these cycles are faster than normal readout times.
The delay in opening and closing a shutter for control of the exposure times is strictly dependent upon the particular camera system and how it is being operated. Some CCD architectures require the use of an external shutter to shield the array from light during the readout phase of image acquisition. If a mechanical shutter is employed, operation times are likely to be significantly increased. Many high-sensitivity systems are based on frame-transfer CCD architecture, which features separate on-chip storage and integration regions, and these may be operated continuously at high rates without a shutter. The exposure, or integration, time in many optical microscopy applications is commonly the dominant factor contributing to frame acquisition time, and in limiting the maximum frame rate achievable.
Following the data acquisition stage, readout of collected charge occurs through one of several different transfer sequences, depending upon the CCD architecture. In the case of a full-frame device, readout takes place by shifting pixel rows directly from the parallel register into the serial register for transfer to the output amplifier. The frame-transfer CCD differs in that following signal integration, data from the entire image array is shifted to a storage array by simultaneously clocking the two sections in parallel, followed by single-row shifts of data in the store section into the serial register. The shift from the image to the storage array takes place rapidly, and while the storage array is being read out, the image array is available to integrate charge for the next frame. Consequently, the transfer from integration to the storage section is typically not significant in the frame read time determination for frame-transfer devices. Whether the CCD is of the full-frame or frame-transfer design, it may be necessary to clear accumulated charge from the serial register prior to transferring charge from the parallel register. The time required for this operation is referred to as the serial clear time.
After any residual serial register charge is cleared, image readout begins with a clocked sequence of gate potentials that causes all charge packets in the parallel register to be shifted one pixel row toward the serial register, such that the first row (adjacent to the serial register) moves into the serial register. The parallel shift time is the time required to perform a single parallel shift cycle. At this point, the first charge packet in each column is in position to be transferred through the serial register to the output node for processing. Multiplying the parallel shift time by the number of rows in the image array yields the portion of the frame read time accounted for by parallel shifts of pixel rows into the serial register.
Note that the normal mode of CCD readout is to shift one pixel row into the serial register, then to read each charge packet in that row by performing a series of column shifts in the register, with each pixel's charge being read as it advances to the output node and is collected for amplification and processing. When the entire serial register has been read out by alternating column shifts and pixel read cycles, another parallel shift cycle moves the next row from the array into the serial register. This process is repeated until all charge is shifted out of the parallel register. The major component of the frame read time is the pixel read time, or serial conversion time, which is multiplied by the total number of pixels being read from the image array. Figure 2 represents diagrammatically the normal sequence of accumulating, transferring, and reading out charge from a full-frame CCD.
Illustrated in Figure 2(a) is a truncated parallel CCD pixel array (4 x 4) that has been exposed to light in order to accumulate a charge pattern of photoelectrons (represented by red spheres). Charge in the parallel register is shifted by one row from Figure 2(a) to Figure 2(b), with the edge row of photoelectrons from the parallel register being transferred into the serial register. In Figure 2(c) the first pixel in the serial register is shifted into the output node before being transferred to the amplifier (Figure 2(d)) and output for processing. Simultaneously in Figure 2(d), the charges in the serial register are shifted toward the output node by one pixel. The next charge in the serial register is shifted from the output node to the amplifier in Figure 2(e), and the other charges in the serial register are again shifted toward the output by one pixel in Figure 2(f). This sequence is repeated until the entire charge pattern is transferred from the parallel array through the serial register to the amplifier.
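The shift sequence described above can be mimicked with a toy model in Python (a sketch only; in a real device the charge packets are clocked in hardware). Here the last row of the nested list stands in for the row adjacent to the serial register:

```python
def full_frame_readout(parallel_register):
    """Toy model of full-frame readout: parallel-shift one row at a time
    into the serial register, then serial-shift each pixel to the output
    node until the register is empty."""
    rows = [list(row) for row in parallel_register]
    output = []
    while rows:
        serial_register = rows.pop()              # parallel shift of the edge row
        while serial_register:
            output.append(serial_register.pop())  # serial shift to the output node
    return output

# A 2 x 2 "charge pattern"; pixels emerge one at a time in readout order.
print(full_frame_readout([[1, 2], [3, 4]]))   # [4, 3, 2, 1]
```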
Because the pixel read cycle dominates the frame read time, unnecessary pixels should be discarded or ignored rather than being measured. This is the mechanism by which reading a reduced array, or subarray, of a CCD can increase camera frame rate. Pixels that fall before or after a defined region of interest in the frame can be discarded during the readout cycle rather than being digitized. Additional pixels, which also must be discarded, are often positioned adjacent to the output node, extending the serial register size. The serial register of most CCDs has one pixel for each column of the imaging array, plus an additional number (typically 10 - 50) at the end of the register between the data array and the output amplifier (Extended Pixels, see Figure 1). These extra pixels serve both to provide a dark reference level and to help stabilize the serial clock pulse and signal chain before image data reaches the output amplifier.
Discard times may be associated with serial shifts as well as with parallel row shifts, and the total discard time per frame cycle is obtained by multiplying the appropriate discard time by the corresponding number of discards performed. Individual pixels can be ignored (discarded) by performing repeated shifts in the serial register while omitting the pixel read step, resulting in a discrete time value for this step, termed the serial discard time. Entire pixel rows are discarded by programming repeated parallel shifts without performing the serial row readout. Parallel discard time is therefore equivalent to the parallel shift time, and any time saving results from reducing the total number of necessary serial conversions by the number of pixels in the discarded row. In practice, by manipulating the clocking sequences for parallel and serial transfer and for charge readout cycles, portions of the image frame can be programmed for signal output as necessary for a particular application, with corresponding changes in the achievable frame rate.
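The time savings from subarray readout can be illustrated with a rough model; the per-operation times below are hypothetical values chosen only to show the relative magnitudes:

```python
def subarray_read_time(n_rows, n_cols, t_row_shift, t_pixel_read,
                       t_pixel_discard, roi_rows=None, roi_cols=None):
    """Approximate frame read time when only a region of interest is read.

    Rows outside the ROI cost only parallel shifts; pixels outside the
    ROI columns cost the (faster) serial discard time; ROI pixels cost
    the full serial conversion time.
    """
    roi_rows = n_rows if roi_rows is None else roi_rows
    roi_cols = n_cols if roi_cols is None else roi_cols
    parallel_shifts = n_rows * t_row_shift
    serial_discards = roi_rows * (n_cols - roi_cols) * t_pixel_discard
    serial_reads = roi_rows * roi_cols * t_pixel_read
    return parallel_shifts + serial_discards + serial_reads

full = subarray_read_time(1000, 1000, 1e-6, 1e-7, 1e-8)           # ~0.101 s
roi = subarray_read_time(1000, 1000, 1e-6, 1e-7, 1e-8, 200, 200)  # ~0.007 s
```

A 200 x 200 subarray reads out roughly an order of magnitude faster in this sketch, since only 4 percent of the pixels undergo the dominant serial conversion step.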
Pixel binning is another mechanism that is utilized to reduce image readout time and increase frame rate in CCD imaging, and is performed in the same manner as subarray display, by programmed variations in clock cycle sequences that control the transfer and digitization of sensor-generated charge packets. The technique of binning combines charge from adjacent pixels during the readout process, thereby improving signal-to-noise ratio and dynamic range of the system. Although an effectively larger pixel size lowers spatial resolution, the reduced number of charge packets to be transferred and digitized allows increased readout speed in conjunction with the improved signal level.
Both parallel and serial binning are possible, and in similarity to reduced-array readout, a charge integration period is performed, but the subsequent clocking sequences for charge transfer and pixel readout differ from those normally programmed. Parallel binning is performed during the readout cycle by clocking two or more parallel transfers into the serial register while holding the serial clocks fixed. The effect is to sum pixel charge from multiple rows into each serial pixel before the serial shift cycle begins. The serial binning process transfers two or more charge packets from the serial register into the CCD output node before the charge is read out. Figure 3 presents a binned readout sequence, in which charge from two parallel transfers is summed in the serial register, followed by summing of two serial pixels into the output node for readout. Each readout cycle thus contains the charge from four adjacent pixels.
Various degrees of pixel binning can be utilized, and this is indicated by specifying the number of pixels being combined in the parallel and serial shift directions (termed binning factor, with a value of 1 indicating no binning). For example, a 3 x 3 binning factor specifies that three charge packets are summed into each well of the serial register by parallel shift repetitions, followed by three serial shift repetitions for each cycle of charge readout. Thus, for 3 x 3 binning, each charge packet digitized for image display or quantitative analysis represents nine adjacent pixels of the CCD array. Practically, any combination of parallel and serial binning factors may be programmed as a readout mode provided that the sum of charge from the binned pixels does not exceed the full well capacity of the device. In order to accommodate charge summing and to maintain charge transfer efficiency, pixels in the serial register are typically designed to have higher well capacity than those in the parallel register. With regard to the effect of binning on the frame read time, parallel shift and serial conversion times are not affected, and the increased readout speed results simply from the reduction in the number of charge packets (combined pixels) subject to processing through the readout node.
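The effect of binning on read time, as described above (per-shift and per-conversion times unchanged, fewer charge packets digitized), can be sketched as follows; the timing values are again hypothetical:

```python
def binned_read_time(n_rows, n_cols, t_row_shift, t_pixel_read,
                     bin_parallel=1, bin_serial=1):
    """Approximate read time with binning: every row is still shifted,
    but only one combined charge packet per bin is digitized."""
    parallel_shifts = n_rows * t_row_shift
    conversions = (n_rows // bin_parallel) * (n_cols // bin_serial) * t_pixel_read
    return parallel_shifts + conversions

unbinned = binned_read_time(1024, 1024, 1e-6, 1e-7)          # ~0.106 s
binned_3x3 = binned_read_time(1024, 1024, 1e-6, 1e-7, 3, 3)  # ~0.013 s
```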
To summarize the factors that contribute to frame acquisition time and frame read time, and which therefore determine a camera's true frame rate, the following expressions can be employed:
Frame Acquisition Time = (T(PR) × N(clear)) + T(open) + T(exp) + T(close)
where T(PR) and N(clear) are the time required to clear the parallel register and number of clear cycles performed, and T(open), T(exp), and T(close) represent the shutter opening delay time, the exposure time, and the shutter closing delay time, respectively.
Frame Read Time = T(SR) + (T(row) × N(row)) + (T(SD) × N(discard)) + (t(read) × N(read))
where T(SR) represents the time required to clear the serial register, T(row) and N(row) are the time required for a parallel row shift and the number of rows in the array, T(SD) and N(discard) are the serial discard time and the number of pixels not being read, t(read) is the serial conversion time per pixel and N(read) represents the number of pixels to be read. This latter number is, at maximum, the total pixel array size, and is reduced in accordance with subarray readout and/or pixel binning operations.
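Taken together, the two expressions can be implemented directly. The sketch below uses illustrative values for a hypothetical shutterless 1000 x 1000 sensor reading the full array:

```python
def frame_acquisition_time(t_pr, n_clear, t_open, t_exp, t_close):
    """T(PR) x N(clear) + shutter open delay + exposure + shutter close delay."""
    return (t_pr * n_clear) + t_open + t_exp + t_close

def frame_read_time(t_sr, t_row, n_row, t_sd, n_discard, t_read, n_read):
    """Serial clear + parallel shifts + serial discards + serial conversions."""
    return t_sr + (t_row * n_row) + (t_sd * n_discard) + (t_read * n_read)

acq = frame_acquisition_time(t_pr=1e-3, n_clear=2, t_open=0, t_exp=0.010, t_close=0)
read = frame_read_time(t_sr=5e-4, t_row=1e-6, n_row=1000,
                       t_sd=1e-8, n_discard=0, t_read=1e-7, n_read=1_000_000)
print(1.0 / (acq + read))   # true frame rate in fps
```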
Although the frame acquisition and frame read times combine in determining CCD frame rate, the interrelationship may not be simply additive because in different operation modes and with various CCD architectures, portions of the two processes overlap to different degrees. The architecture design employed in the CCD sensor affects whether the processes of integration and readout must take place sequentially or can occur simultaneously, and in different circumstances, frame rates may have limits imposed by either the exposure time or readout time. In frame-transfer devices, during the data collection interval the clock voltages are not being cycled for charge transfer between imaging and storage sections. However, data previously collected and held in the storage region of the chip can be clocked and transferred at will while signal collection occurs concurrently in the image section. Frame-transfer sensors, consequently, may be operated in several different modes with respect to timing of exposure and readout sequences. Interline-transfer CCDs share a number of the performance attributes of frame-transfer devices due to their also incorporating a storage area, which is arranged, not as a separate storage array, but as columns of masked pixels alternating with columns of active unmasked pixels. The alternating columns of imaging and storage pixels across the parallel register allow signal integrated in the active columns to be shifted quickly under the interline mask, where it is read out to the serial register while charge for the next frame is being integrated in the unmasked pixel columns.
As previously discussed, the relationship between frame rate and the combined time required for acquiring and reading out photon-generated charge in a CCD system depends in part on the exposure and readout modes employed by the CCD. In general, the charge integration phase of image acquisition is carried out in the same manner regardless of the camera's operating mode, and the selection of functions such as subarray image output or pixel binning can provide increased frame rates primarily through reductions in the frame read time. The time required to clear charge from the CCD registers and the shift times are sufficiently short that they do not have a major effect on readout rate, which is instead dominated by the pixel read time (serial conversion time).
Individual charge packets are shifted through both the parallel and serial registers under the influence of repeated cycles of clock signals applied to the CCD electrodes. Each pixel is typically driven by three electrodes that permit three-phase clock cycles, in which corresponding electrodes of each pixel are connected in parallel and produce the same charge-transfer effect for every pixel being clocked. A single cycle of the three clock phases produces either a one-row (vertical) shift in the parallel register or a one-column (horizontal) shift in the serial register. Changes in operating mode, such as selection of subarray scanning or various binning factors, are effected by modifying the clock cycle sequences applied to the electrodes. In the selection of a camera system, the degree to which the timing of clock sequences can be controlled in operation of the camera may be a determining factor in the suitability of the system for a particular application.
Because the interrelationship of exposure time and read time is critical in determining camera frame rate, those two variables are often illustrated in timing diagrams for different operating modes supported by a particular CCD architecture. High-performance camera systems typically employ frame-transfer or interline-transfer sensors to enable the fastest frame rates and continuous imaging without the necessity of a mechanical shutter to control exposure times. Because these CCD designs include both a light-sensitive sensor area and a storage area that is shielded from light and used for frame transfer to the serial register, the two processes of exposure and readout can overlap in time. Camera systems may offer selectable exposure and readout modes, which are categorized as non-overlap and overlap modes on the basis of whether the two operations are performed independently in sequence, or simultaneously. Other systems exercise control of these variables in a different manner, or in combination with other features. For example, clocking modes may be determined automatically by camera firmware to maximize frame rate, utilizing overlapped or non-overlapped exposure-readout sequences as required, or may be set to optimize sensitivity without regard to high frame rate. High-sensitivity operation typically requires non-overlapped operation regardless of exposure time.
When a CCD is operated in non-overlap mode, any exposure time can be specified and fully completed, with readout occurring in sequence when the exposure has ended. A timing diagram illustrating this exposure-readout mode is presented in Figure 4. In non-overlap mode, the same cycle is repeated for each frame in a sequence; the CCD is cleared of residual charge, charge is integrated for the specified exposure time, the charge is shifted from the light-sensitive array to the masked storage array (separate storage section or interline array), and finally charge is read out. Depending upon specific circumstances, the exposure time may be shorter or longer than the read time, and the total time per frame is the sum of the two intervals (in the absence of mechanical shutter delay), since clearing time and the time required to shift data to the storage array are very rapid and not significant in determining frame rate. This operation mode provides similar performance to that of conventional full-frame device architecture. The timing diagram (Figure 4) illustrates the exposure and readout time sequence for a series of three image frames. Using the arbitrary times shown in the example, 10 millisecond (ms) exposure and 50 millisecond frame read time, the total time to acquire three frames is 180 milliseconds (3 x 10 ms + 3 x 50 ms). The corresponding frame rate is therefore 16.7 frames per second (3 frames/0.180 second).
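The non-overlap arithmetic from this example can be reproduced in a couple of lines (a sketch; times in seconds):

```python
def non_overlap_sequence_time(n_frames, t_exp, t_read):
    """Exposure and readout occur strictly in sequence for every frame."""
    return n_frames * (t_exp + t_read)

total = non_overlap_sequence_time(3, 0.010, 0.050)
print(total, 3 / total)   # 0.180 s total, ~16.7 fps
```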
Overlap mode is utilized in applications requiring the recording of dynamic processes, in which continuous imaging is necessary to provide adequate temporal resolution. To maximize the proportion of time devoted to data collection, the CCD is operated continuously (100-percent duty cycle). After the initial exposure, data are shifted to the frame transfer array, and the next exposure interval begins immediately, while readout occurs for the previous frame. This sequence continues, with the timing from frame to frame determined by either the exposure time or the frame read time, depending upon which is longer. The minimum exposure time is therefore equivalent to the frame read time. In situations in which the programmed exposure time is less than the readout time, the first frame in a sequence is exposed for the exact time programmed, with subsequent frames exposed for the readout time. The sequence timing is in effect controlled by the longer-duration readout cycle. Figure 5 illustrates a timing diagram for overlap mode operation with a 10-millisecond programmed exposure time, and CCD readout time of 50 milliseconds, as used in the previous (non-overlap mode; Figure 4) example. The total time required to acquire 3 frames in this mode is 160 milliseconds, calculated on the basis of one 10-millisecond exposure followed by two 50-millisecond exposures, which overlap the three readout cycles of 50 milliseconds each (10 ms + 3 x 50 ms). Operating in overlapped mode results in a reduction in time of 20 milliseconds for a three-frame sequence, and a corresponding frame rate of 18.8 frames per second (3 frames/0.160 second). In this type of sequence, the first frame is exposed for a shorter time than the succeeding ones, and will generally not match in image intensity.
When the programmed exposure time is less than the frame read time, the following general expression can be used to calculate the total time required to capture a specified number of frames (N):
T(N) = (T(read) × N) + T(exp)
where T(N) is the total time required to capture a sequence of N frames, T(read) is the single-frame read time, and T(exp) represents the programmed exposure time.
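A sketch of this case (exposure shorter than readout), using the figures from the Figure 5 example:

```python
def overlap_time_short_exposure(n_frames, t_exp, t_read):
    """T(N) = (T(read) x N) + T(exp); valid only when t_exp <= t_read."""
    assert t_exp <= t_read, "use the long-exposure form instead"
    return (t_read * n_frames) + t_exp

total = overlap_time_short_exposure(3, 0.010, 0.050)
print(total, 3 / total)   # 0.160 s total, ~18.8 fps
```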
A second possible timing variation occurs during operation in overlap mode when exposure time is greater than frame read time, and therefore controls the timing of sequential frames. As a result, each frame in a sequence is exposed for the exact time specified in the system control software, and all images are of equal intensity. As illustrated by the timing diagram for this operation mode (Figure 6), after the initial frame of a sequence is exposed, readout takes place during exposure of each subsequent frame, and the sequence ends with a final readout-only cycle. The timing diagram illustrated is for three frames exposed for 75 milliseconds each, and a CCD frame read time of 50 milliseconds. The sequence of three frames requires 275 milliseconds, calculated as three exposure intervals and one additional readout cycle (3 x 75 ms + 50 ms), and results in a frame rate of 10.9 frames per second (3 frames/0.275 second).
The following equation is utilized to calculate the sequence capture time for N frames in overlap mode when the programmed exposure time is greater than the frame read time:
T(N) = (T(exp) × N) + T(read)
where T(N) is the total time required to capture a sequence of N frames, T(exp) represents the programmed exposure time, and T(read) is the single-frame read time.
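This long-exposure case can be sketched with the Figure 6 values:

```python
def overlap_time_long_exposure(n_frames, t_exp, t_read):
    """T(N) = (T(exp) x N) + T(read); valid only when t_exp >= t_read."""
    assert t_exp >= t_read, "use the short-exposure form instead"
    return (t_exp * n_frames) + t_read

total = overlap_time_long_exposure(3, 0.075, 0.050)
print(total, 3 / total)   # 0.275 s total, ~10.9 fps
```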
Although the operational modes described above correspond to many situations typically encountered in optical microscopy applications, including those requiring maximum frame rate imaging, additional timing modes can be implemented with some high-performance camera systems by employing external trigger sources. Precise triggering of image sequences is necessary for certain time-delay techniques used to follow dynamic processes, including time-resolved fluorescence, and ion diffusion studies. Image sequences may be triggered by delay generators or by laser timing signals coordinated with laser excitation pulses, as well as by other sources. Several different triggered imaging modes are commonly employed, and with frame transfer CCDs, all are generally operated in non-overlapped mode. The simplest of the external-trigger timing variations is referred to as trigger-first mode. This mode utilizes a single trigger pulse to initiate a sequence of image frames that are executed by alternating exposure intervals with readout cycles. The sequence continues until the specified number of frames have been acquired, with each exposure having the exact programmed duration.
A similar triggered sequence, referred to as strobe mode, produces a series of images exposed for a programmed integration time, but with each initiated by a separate trigger pulse. Triggering each exposure individually allows the exposure interval to be delayed after readout of the previous frame if desired, rather than beginning immediately after the read cycle, as occurs when the entire sequence is triggered by one pulse. In another timing variation, the exposure time for each frame in a sequence is determined by the trigger pulse width, and readout occurs immediately following each exposure. This type of operation is termed bulb mode, with the trigger pulse duration functioning in a manner similar to holding a camera shutter open for a timed ("bulb") exposure in conventional photography.
In addition to providing operational features, such as the various pixel formats that are achieved by binning and subarray readout, CCD manufacturers often incorporate a variety of design modifications to enhance performance specifications that are critical to certain applications. In order to provide faster frame rates and readout rates, some sensors are designed with multiple output nodes, each with its own amplifier. A number of different arrangements may be used to feed the outputs. One variation utilizes a separate serial register and associated output amplifier at opposite sides of the parallel array, which is divided electrically along the optical centerline in order to transfer accumulated charge from the center toward both serial registers during readout (see Figure 1). To increase the rate at which rows can be shifted in the parallel register, the clock signals used to drive the parallel gate phases can be input from both edges of the array. Another possible enhancement is to split the serial register so that it feeds an output amplifier at each of its ends. In combination with bi-directional parallel transfer, four amplifiers located at the sensor corners can be utilized for readout in this configuration. Additionally, the CCD may be further divided into quadrants and operated in frame-transfer mode allowing the center two sections to be used for charge integration, while the two outside quadrants serve as storage arrays for readout to the serial registers located at either sensor end. Some systems designed for ultra-high speed imaging employ the split-transfer architecture, and position separate output amplifiers at the top and bottom of every pixel column.
When pixel charge packets are processed and digitized through multiple serial registers and amplifiers to increase read rate (rather than in a single serial output stream), the intensity values associated with each pixel are reassembled by the computer into their correct image locations for display. Any mechanism that reduces the number of transfers to which a charge packet is subjected helps to maintain the sensor's overall charge transfer efficiency. In low-signal-level applications, this should be as close to unity as possible to avoid significant image degradation, and many well designed high-performance systems can perform the thousands of transfers often required for frame readout without significant charge loss.
Contributing Authors
Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/readoutandframerates.html