Glossary¶
What is the EMVA1288 Standard?¶
The EMVA1288 standard for measuring and reporting the imaging performance of image sensors enables sensors to be compared based on an objective and consistent set of measurements.
Combinations of these EMVA1288 measurements can be used to assess the relative suitability of a sensor (and camera) for your application.
For example, fluorescence microscopy applications, where every photon possible should be detected, will benefit from a low Absolute sensitivity threshold, which is a combination of Quantum efficiency and Temporal dark noise.
Cameras for autonomous vehicles will require high Saturation capacity and Dynamic range to perform well in uncontrolled lighting outside.
When comparing cameras, it is important to consider multiple performance factors.
Image sensors cannot excel in every parameter, and the final choice is usually a balanced compromise of trade-offs.
Relying on a single measurement can result in poor overall performance if other important criteria are neglected.
EMVA1288 Specification Comparison Parameters¶
Quantum Efficiency (%)¶
Quantum efficiency (QE) is the ability of the sensor to convert photons into electrons - that is, to turn incoming light into an electrical signal for imaging.
A higher QE % means greater sensitivity for detecting light.
A QE of 59 % means that, for every 100 photons that hit the sensor, an average of 59 will be detected.
If more incident photons are successfully converted into electric charge, the quantum efficiency increases proportionally.
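As a simple illustration of this relationship (a sketch with assumed example numbers, not part of the EMVA1288 measurement procedure), the mean number of detected electrons is the product of the QE and the number of incident photons:

```python
# Expected signal from quantum efficiency (illustrative, assumed numbers).
quantum_efficiency = 0.59   # a QE of 59 %
incident_photons = 100      # mean number of photons hitting a pixel

# Mean number of electrons generated in the pixel.
detected_electrons = quantum_efficiency * incident_photons
print(f"Detected on average: {detected_electrons:.0f} electrons")  # ~59
```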
Dynamic Range (dB)¶
Dynamic range describes the camera's ability to detect the maximum and minimum light intensities - shadows and highlights.
Models with higher dynamic range can capture more detail in both shadows and highlights.
Temporal Dark Noise / Read Noise (e-)¶
Temporal dark noise (also referred to as read noise) comes from energy within the sensor and the electronics surrounding it.
Over time, random electrons are created that fall into the sensor wells.
These are detected and wrongly turned into signal.
Thus, cameras with lower read noise measurements produce cleaner images.
Saturation Capacity / Full Well Depth (e-)¶
The saturation capacity (well depth) is the largest charge a pixel can hold before over-saturation occurs and signal degradation begins.
Saturation must be avoided because it diminishes the quantitative ability of the sensor and in the case of CCDs produces image smearing due to a phenomenon known as blooming.
Absolute Sensitivity Threshold (γ)¶
The Absolute sensitivity threshold is the minimum number of photons needed to equal the noise level.
The lower the number, the less light is needed to detect useful imaging data.
Guide to Using the EMVA1288 Specifications¶
Quantum Efficiency¶
This measurement is often used as an indicator of low light sensitivity.
CMOS and CCD image sensors convert light into electrical signals using the photoelectric effect.
When photons enter the photodiode in a pixel, they create a charge by knocking electrons off silicon atoms.
The more efficiently a sensor can convert incoming photons into electrical charge, the higher its Quantum Efficiency will be.
While no sensor is 100 % efficient, some CMOS sensors can achieve up to 77 % QE, compared to the 95 % that backside-illuminated sCMOS (scientific CMOS) sensors can provide.
Silicon is most sensitive to green light with a wavelength of 530 nm, while the QE generally falls to 0 % at wavelengths beyond 1050 nm.
Monochrome sensors have higher QEs than color sensors, as the RGB color filters restrict the range of wavelengths that can enter the pixel, reducing the number of photons that reach the photodiode.
Polarizing filters located on the sensor (in polarized sensor models) will also reduce the amount of light that can enter the pixels, reducing QE.
Temporal Dark Noise (Read Noise)¶
To read the information captured by a pixel on a CMOS image sensor, the charge created by incoming photons is converted to voltage and the voltage value is digitized.
Small variations at each step of this process can add up and can appear to show a signal even when no photons entered the sensor.
Read noise is not affected by exposure time.
Typical read noise values of current CMOS sensors are around 2.5 e-, while sCMOS sensors can achieve even lower numbers, towards 1 e- and below.
Absolute Sensitivity Threshold¶
Absolute Sensitivity Threshold (AST) combines QE and read noise and provides a much more useful measure of the actual sensitivity of a sensor than either of these measurements alone.
AST is the weakest signal which can be distinguished from the read noise.
AST is a key metric for applications where low-light imaging performance is critical.
It is also extremely helpful when comparing sensors with different pixel architectures, as high QE does not necessarily translate into good low-light performance.
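The EMVA1288 standard defines the absolute sensitivity threshold as the number of photons at which the signal equals the noise (SNR = 1). The sketch below uses a commonly quoted simplification, AST ≈ (read noise + 0.5) / QE, with assumed example values; the full EMVA1288 definition adds small correction terms such as quantization noise.

```python
def absolute_sensitivity_threshold(read_noise_e, qe):
    """Approximate absolute sensitivity threshold in photons.

    Simplified relation AST ~ (read_noise + 0.5) / QE; the full EMVA1288
    definition includes additional small terms (e.g. quantization noise).
    """
    return (read_noise_e + 0.5) / qe

# Assumed example values, not from a specific sensor datasheet:
print(absolute_sensitivity_threshold(read_noise_e=2.5, qe=0.59))  # ~5.1 photons
```

This also shows why a high QE alone does not guarantee good low-light performance: a sensor with higher read noise needs proportionally more photons to clear the threshold.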
Signal to Noise Ratio (SNR)¶
The higher the signal to noise ratio, the greater the amount of signal there will be relative to noise. Greater SNR yields better contrast and clarity, as well as improved low-light performance.
Typical CMOS SNR is about 40 dB, with some achieving an SNR of 44 dB in Low Conversion Gain mode.
This is an important parameter for traffic applications.
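As a rough sketch of how SNR values like those above arise, the SNR at a given signal level can be estimated from a simple shot-noise plus read-noise model (the actual EMVA1288 SNR curve also accounts for dark current and quantization noise; the values below are assumed examples):

```python
import math

def snr_db(signal_electrons, read_noise_e):
    """SNR in dB for a simple shot-noise + read-noise model."""
    total_noise = math.sqrt(signal_electrons + read_noise_e ** 2)  # noise adds in quadrature
    return 20 * math.log10(signal_electrons / total_noise)

# Near saturation, shot noise dominates, so a 10 000 e- full well gives
# roughly 20*log10(sqrt(10000)) = 40 dB (assumed example values).
print(f"{snr_db(10_000, 2.5):.1f} dB")  # ~40.0 dB
```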
Power dB Ratio¶
The decibel (dB) is a logarithmic scale: with every increase of 10 dB, the power increases by a factor of 10.
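A minimal sketch of this conversion:

```python
import math

def power_ratio_to_db(p_out, p_in):
    """Convert a power ratio to decibels: every factor of 10 adds 10 dB."""
    return 10 * math.log10(p_out / p_in)

print(power_ratio_to_db(10, 1))    # 10.0 dB
print(power_ratio_to_db(1000, 1))  # 30.0 dB
```

Note that ratios of signal amplitudes, such as SNR and dynamic range expressed in electrons, are conventionally converted with 20·log10 instead.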
Saturation Capacity¶
The photodiode in a pixel can only hold a finite amount of charge.
Saturation capacity is the maximum number of electrons that an individual pixel can store.
Generally, the larger the surface area of a pixel, the greater the saturation capacity.
At saturation, additional photons entering a pixel will not result in a further increase in the brightness value recorded by the pixel.
A small saturation capacity may limit dynamic range.
However, due to the dependence of dynamic range on additional factors, a large saturation capacity does not guarantee a higher dynamic range.
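To illustrate how saturation capacity and read noise jointly bound dynamic range, the sketch below uses assumed example values (electron counts are signal amplitudes, so the ratio is converted with 20·log10):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB from saturation capacity and temporal dark noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

# The same 10 000 e- full well yields very different dynamic range
# depending on the noise floor (assumed values):
print(f"{dynamic_range_db(10_000, 2.5):.1f} dB")   # ~72.0 dB
print(f"{dynamic_range_db(10_000, 10.0):.1f} dB")  # 60.0 dB
```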
Dynamic Range¶
Dynamic range is the difference between the maximum and minimum light intensities that a sensor can detect.
A high dynamic range will enable sensors to capture details in both dark shadows as well as brightly lit highlights.
Dynamic range is important for a wide range of applications including automated optical inspection where identifying defects on dark IC packages and reflective solder joints in a single exposure is desirable, and autonomous vehicles which must be able to detect and avoid obstacles in highly variable and uncontrolled lighting conditions.
The dynamic range of images captured by a camera can be limited by the bit depth of the camera's Analog to Digital Converter (ADC) and by the bit depth of the pixel format selected.
The dynamic range of images, when viewed on a display, is also limited - in the case of standard LCD displays to 8-bit color, while HDR monitors provide 10-bit color.
Compressing the dynamic range of higher bit-depth images to display on lower bit-depth displays requires post-processing known as tone mapping.
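A minimal sketch of a global tone-mapping step, compressing a 12-bit image to 8 bits with a gamma curve so that shadow detail is preserved better than with a straight linear rescale (real tone-mapping operators are considerably more sophisticated; the data below is synthetic):

```python
import numpy as np

def tone_map_gamma(img12, gamma=0.45):
    """Compress a 12-bit image (0..4095) to 8 bits using a global gamma curve."""
    normalized = img12.astype(np.float32) / 4095.0
    return np.clip(normalized ** gamma * 255.0, 0, 255).astype(np.uint8)

# Synthetic 12-bit gradient standing in for a real capture.
gradient = np.linspace(0, 4095, 256).reshape(16, 16).astype(np.uint16)
mapped = tone_map_gamma(gradient)
print(mapped.dtype, mapped.min(), mapped.max())  # uint8 0 255
```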
Gain¶
EMVA gain is the number of electrons required to increase the pixel's output by one grey level (for example, from one 16-bit greyscale value to the next).
Images from sensors operating at higher gain appear brighter for the same number of electrons. Thus, high gain can be useful for detecting very weak signals in low-light conditions.
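Under the electrons-per-grey-level convention used above, a higher gain setting corresponds to fewer electrons being needed to raise the output by one grey level, which is why the image appears brighter for the same collected charge. A small sketch with assumed values:

```python
def electrons_to_grey_levels(electrons, gain_e_per_dn):
    """Digital output increase for a given charge, with gain given in e- per grey level."""
    return electrons / gain_e_per_dn

# With 0.5 e- per grey level (assumed), 100 electrons raise the output by 200 levels;
# with 2.0 e- per grey level the same charge raises it by only 50.
print(electrons_to_grey_levels(100, 0.5))  # 200.0
print(electrons_to_grey_levels(100, 2.0))  # 50.0
```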
Terms related to image quality¶
Color Aberration¶
Lenses are designed so that light refracted at a lens meets at one focal point.
Light, however, contains various wavelengths that refract differently.
Thus, light rays of different wavelengths (red, blue, and green, for example) may not meet at a common point.
When this happens, color fringes appear along the borders of very light or very dark parts of an image, degrading image quality.
White Balance¶
The white balance function of a digital camera ensures that objects in the image are captured with colors corrected for the color of the light source.
Without proper white balancing, the colors in an image will not match the colors perceived by the human eye, reducing image quality.
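A minimal sketch of one common automatic white-balance technique, the gray-world assumption, in which each color channel is scaled so that the channel means become equal (a generic illustration, not the algorithm of any particular camera):

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    rgb = rgb.astype(np.float32)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

# Synthetic image with a greenish color cast (assumed data).
img = np.dstack([np.full((4, 4), 100), np.full((4, 4), 160), np.full((4, 4), 90)]).astype(np.uint8)
print(gray_world_white_balance(img)[0, 0])  # roughly equal R, G and B after balancing
```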
Shading¶
This term describes light fall-off or color variation from the center of the sensor to its corners that does not originate from the captured scene.
Thus, shading is a decrease in image brightness from the center to the corners.
It degrades image quality by creating unwanted dark or shaded edges.
XIMEA camera software comes with tools for shading correction based on several techniques.
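As an illustration of the general principle behind flat-field shading correction (not the specific method implemented in the XIMEA tools), each pixel is divided by a normalized reference image captured of a uniformly lit scene:

```python
import numpy as np

def flat_field_correct(raw, flat, dark=0.0):
    """Generic flat-field shading correction: subtract dark offset, divide by normalized flat."""
    raw = raw.astype(np.float32) - dark
    flat = flat.astype(np.float32) - dark
    gain_map = flat.mean() / np.clip(flat, 1e-6, None)  # per-pixel correction factors
    return np.clip(raw * gain_map, 0, 255).astype(np.uint8)

# Synthetic example (assumed data): a frame whose corner is darker than its center.
flat = np.full((8, 8), 200.0); flat[0, 0] = 120.0   # reference image of a uniform scene
raw = np.full((8, 8), 100.0); raw[0, 0] = 60.0      # scene image showing the same fall-off
print(flat_field_correct(raw, flat)[0, 0], flat_field_correct(raw, flat)[4, 4])  # both ~99
```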