Machine vision with flickerless LED lighting

Topics: Open machine vision, Open hardware

Antmicro’s engineering team is involved in numerous machine vision projects, ranging from biometrics and security, through industrial process control, to drones and autonomous vehicles. These usually rely on an edge AI platform such as NVIDIA’s Jetson, Qualcomm Snapdragon or NXP i.MX and anything from one to six high-resolution cameras. Image quality is often a critical aspect of these projects - the performance of machine vision AI is directly linked to the quality of the input data, and software pre-processing can rarely compensate for the shortcomings of the image captured by the sensor. That’s why such applications often involve an illumination source - infrared or visible light - controlled from the image processing host to maintain optimal lighting conditions. Depending on the size of the scene and the image features analyzed by the AI, the active illumination unit can vary from a single 0.5 W LED to an array of over 200 W. In most cases, output power control is required to compensate for distance shifts and external lighting dynamics.

The lack of proper lighting control may cause visual artifacts and loss of information, which ultimately results in poorer algorithm performance. In this note we describe a recurring theme in some of Antmicro’s customer projects involving edge AI device development, namely how flickerless LED solutions can help alleviate these problems, and show how our end-to-end competence in both hardware and software can be used to build practical edge AI systems.

Lighting quality challenges in machine vision

As mentioned, advanced machine vision applications require appropriate lighting conditions, which are typically achieved by regulating lighting intensity. This is especially useful in close-scene imaging applications - text recognition, fault detection, biometric imaging, small object recognition. The most commonly used LED power regulation method is PWM (pulse-width modulation) dimming. The LED is driven at the maximum required current, modulated by a 100 Hz - 10 kHz square wave with a variable “ON” pulse width. The method is simple to implement, precise, linear and very energy-efficient. However, while such illumination may be quite neutral to the human eye, it can negatively affect machine vision, where frame-to-frame video quality is important.
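For illustration, below is a minimal PWM dimming sketch in Python. It assumes a Raspberry Pi-style host where a GPIO pin drives the dimming input of the LED driver via the RPi.GPIO library; the pin number and frequency are arbitrary examples, not values from any particular Antmicro design.

```python
# Hedged sketch of PWM LED dimming on a Raspberry Pi-style host.
# The pin number and PWM frequency below are illustrative assumptions.
import RPi.GPIO as GPIO

LED_PIN = 18         # GPIO pin wired to the LED driver's dimming input
PWM_FREQ_HZ = 1000   # within the typical 100 Hz - 10 kHz dimming range

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

pwm = GPIO.PWM(LED_PIN, PWM_FREQ_HZ)
pwm.start(0)                # start with the LED off
pwm.ChangeDutyCycle(25.0)   # 25% duty cycle ~ 25% of maximum brightness

# ... image acquisition happens here ...

pwm.stop()
GPIO.cleanup()
```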

In general, digital cameras use two main image capture methods. In the rolling shutter method, the photosensitive matrix of the camera is scanned progressively, line by line. Since this allows the image sensor to continue gathering photons during the scan, the key benefit is an effective increase of the sensor’s sensitivity. Rolling shutter is commonly used in CMOS cameras. This image acquisition method, however, can be significantly affected by a modulated lighting source, such as a PWM-controlled LED.

Diagram depicting the rolling shutter method

As the lighting changes during the matrix scan, consecutive sensor lines capture different lighting conditions, resulting in image artifacts, as presented below.

Picture with artifacts
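The banding mechanism can be reproduced numerically. The sketch below (our illustration, with arbitrary parameter values) integrates a square-wave light source over per-line exposure windows offset by the line readout time, yielding the characteristic horizontal bands:

```python
# Numerical illustration of rolling-shutter banding under PWM lighting.
# All parameter values are illustrative.
import numpy as np

PWM_FREQ = 200.0    # Hz, LED modulation frequency
DUTY = 0.5          # 50% "ON" time
EXPOSURE = 0.002    # s, per-line exposure time
LINE_TIME = 30e-6   # s, offset between consecutive line readouts
LINES = 1080

def light(t):
    """1.0 while the PWM pulse is ON, 0.0 otherwise."""
    return ((t * PWM_FREQ) % 1.0 < DUTY).astype(float)

t = np.linspace(0.0, EXPOSURE, 500)        # samples within one exposure
starts = np.arange(LINES) * LINE_TIME      # each line starts a bit later
# Integrate the light seen by every line over its own exposure window
brightness = np.array([light(s + t).mean() for s in starts])

print("line brightness min/max:", brightness.min(), brightness.max())
# Lines whose exposure window catches more ON-time come out brighter,
# producing horizontal bands across the frame.
```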

On the other hand, global shutter is a more robust solution when it comes to scene changes during exposure. In this method, the entire image frame is captured at a single moment, then scanned and converted during the non-photosensitive phase. It is often used in cameras with CCD sensors, which are more expensive and more sensitive than their CMOS counterparts.

However, even this method is not completely free from the challenges of modulated lighting. As shown in the diagram below, unsynchronized light flickering may result in differences in accumulated light exposure between consecutive frames. The resulting brightness variability depends on the PWM parameters and their relation to the camera’s exposure time.

Diagram depicting the global shutter method
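A similar back-of-the-envelope simulation illustrates the global shutter case: when the exposure time is not an integer multiple of the PWM period, the integrated “ON” time differs from frame to frame depending on the phase at which each exposure starts. The numbers below are again purely illustrative:

```python
# Frame-to-frame brightness variability of a global shutter under
# unsynchronized PWM lighting (illustrative parameters).
import numpy as np

PWM_FREQ = 200.0   # Hz
DUTY = 0.5
EXPOSURE = 0.0075  # s -> 1.5 PWM periods, deliberately non-integer

def frame_exposure(phase):
    """Fraction of the exposure during which the LED was ON,
    for a given PWM phase (0..1) at the start of the exposure."""
    t = np.linspace(0.0, EXPOSURE, 2000)
    return (((phase + t * PWM_FREQ) % 1.0) < DUTY).mean()

rng = np.random.default_rng(0)
frames = np.array([frame_exposure(p) for p in rng.random(1000)])

spread = (frames.max() - frames.min()) / frames.mean() * 100.0
print(f"frame-to-frame brightness spread: ~{spread:.1f}%")
# With EXPOSURE an exact multiple of the PWM period the spread drops
# to ~0, which is why PWM parameters vs. exposure time matter.
```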

Flickerless LED for AI systems

To overcome these challenges, we often implement flickerless LED dimming in our designs. We use DC/DC buck or boost LED drivers in tracking mode, with the LED current feedback compared against an adjustable voltage reference. The voltage reference, which is essentially an analog signal representing the dimming level, is controlled from the host application - usually by an I2C-controlled DAC. As the DC/DC converter drives the LED array, the LED current is set in proportion to the input reference voltage. This happens in each converter cycle, at a relatively high frequency (0.75 - 2 MHz), and the output ripple is filtered to a value constant in time, resulting in an unmodulated LED brightness level.

As the LED’s current-to-luminous flux characteristic is close to linear, and the method introduces a proportional relation between the DAC voltage output and the LED array current, the host application gains linear brightness control over the flickerless light source.

With the full 12-bit DAC resolution, i.e. 4096 brightness levels, this method allows very precise lighting adjustments. If the application requires it, multiple LED sources can be controlled independently. The power efficiency of the flickerless LED driver ranges from 80 to 90%.

Diagram depicting the flickerless LED solution
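On the host side, setting the dimming level boils down to a single DAC write. The snippet below is a hedged sketch using the smbus2 library and assuming an MCP4725-style 12-bit DAC; the bus number, I2C address and write format depend on the actual hardware design:

```python
# Hedged sketch: setting the flickerless LED driver's voltage reference
# through an I2C DAC. Assumes an MCP4725-style 12-bit DAC; the bus
# number, address and write format depend on the actual hardware design.
from smbus2 import SMBus

I2C_BUS = 1
DAC_ADDR = 0x62  # illustrative address

def set_brightness(bus, level):
    """Set LED brightness, level in 0..4095 (12-bit DAC code)."""
    code = max(0, min(4095, int(level)))
    # MCP4725 "fast mode" write: upper 4 bits, then lower 8 bits
    bus.write_i2c_block_data(DAC_ADDR, (code >> 8) & 0x0F, [code & 0xFF])

with SMBus(I2C_BUS) as bus:
    set_brightness(bus, 2048)  # ~50% of full-scale LED current
```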

Achieving optimal illumination

A sample video processing chain of an edge AI application would include camera drivers, image pre-processing, an image recognition layer and data output. The illumination control subsystem is usually linked with the pre-processing stage and continuously adjusted in a closed loop to maintain optimal input image quality, as sketched below. In some applications, multiple light sources can be switched from the image recognition layer to bring out various image modalities, e.g. IR vision or UV fluorescence.
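As a sketch of such a closed loop, the function below implements a simple proportional controller that nudges the 12-bit DAC level based on the mean brightness of each pre-processed frame; the target value and gain are arbitrary assumptions, and the actual DAC write is hardware-specific:

```python
# A minimal proportional brightness control loop (our sketch, not the
# article's implementation). The target and gain values are arbitrary.
import numpy as np

TARGET_BRIGHTNESS = 128.0  # desired mean pixel value of an 8-bit frame
GAIN = 4.0                 # DAC codes per unit of brightness error
DAC_MAX = 4095

def update_led_level(frame, current_level):
    """Return the next 12-bit DAC code given the latest camera frame."""
    error = TARGET_BRIGHTNESS - float(np.mean(frame))
    level = current_level + GAIN * error
    return int(max(0, min(DAC_MAX, level)))

# Example: a frame that is too dark pushes the level up
dark_frame = np.full((1080, 1920), 90, dtype=np.uint8)
print(update_led_level(dark_frame, 2048))  # > 2048
```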

In most use cases the flickerless LED driver is tightly coupled with the image acquisition process, which is handled by the software stack responsible for setting the lighting conditions and triggering the image acquisition. Since LED driving is no longer based on pulse-width modulation, a careful sequencing scheme needs to be implemented and maintained in software. This is a crucial requirement for high-power LED lighting modules, where operating at full brightness also causes extreme heat dissipation.

One possible approach is to set the DAC to obtain the desired light intensity and then acquire an image frame from the sensor; this, however, relies on sequential processing and consumes CPU time. Another control scenario involves using a frame synchronization signal driven by the sensor - most of the available imagers offer a frame synchronization output which can be used for gating the LED driver. The third and most precise option can be implemented with programmable logic, which allows accurate time synchronization between setting the desired light parameters (with temperature compensation) and triggering the image acquisition. In addition, the captured frames can be passed through the programmable logic and stamped with the light conditions set during frame capture. Antmicro has implemented all of the discussed approaches in numerous machine vision devices.
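The first, sequential approach could look roughly like the sketch below, which combines the I2C DAC write with an OpenCV capture; the settling delay, bus number, DAC address and device index are assumptions, and the frame-sync and programmable-logic variants would replace the explicit sleep with hardware signaling:

```python
# Sequential "set light, then grab frame" scheme (the first approach
# above). Bus/address/device values are illustrative; the settling time
# depends on the DC/DC converter's loop bandwidth and output filtering.
import time
import cv2
from smbus2 import SMBus

DAC_ADDR = 0x62
SETTLE_S = 0.005  # allow the LED current to settle after the DAC write

def capture_with_illumination(bus, cap, level):
    code = max(0, min(4095, int(level)))
    bus.write_i2c_block_data(DAC_ADDR, (code >> 8) & 0x0F, [code & 0xFF])
    time.sleep(SETTLE_S)    # blocking wait: this is the CPU-time cost
    ok, frame = cap.read()  # acquire a frame under the set lighting
    return frame if ok else None

with SMBus(1) as bus:
    cap = cv2.VideoCapture(0)
    frame = capture_with_illumination(bus, cap, 3000)
    cap.release()
```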

Improving machine vision with open source tools

As described above, there are many parameters that need to be set on the actual device to make the image optimal for various camera sensors in various environmental conditions. We believe that the best way to control those parameters is by using open source standards and common API integrations. For camera sensors, this can be implemented using the standard V4L2 (Video4Linux2) API. This standard API also allows us to use similar tooling across various projects, including our own open source tools, such as pyvidctrl or Raviewer.
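For example, sensor parameters can be adjusted from a script through the standard v4l2-ctl utility (part of v4l-utils); the control names below are driver-dependent and purely illustrative - v4l2-ctl --list-ctrls shows what a given sensor driver actually exposes:

```python
# Adjusting camera parameters via the standard V4L2 interface using the
# v4l2-ctl utility. Control names vary between sensor drivers - run
# `v4l2-ctl -d /dev/video0 --list-ctrls` to list the available ones.
import subprocess

DEVICE = "/dev/video0"

def set_controls(**controls):
    ctrl = ",".join(f"{name}={value}" for name, value in controls.items())
    subprocess.run(["v4l2-ctl", "-d", DEVICE, "--set-ctrl", ctrl],
                   check=True)

# Illustrative control names; the actual ones depend on the driver
set_controls(exposure=200, gain=32)
```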

Proper object illumination is, of course, only one part of a larger pipeline in machine vision systems. Antmicro is often involved in projects where experience with all steps of the pipeline is necessary, starting from hardware design, through initial video processing and Linux drivers, up to high-level software development including artificial intelligence and machine learning systems and frameworks. If your project requires multi-disciplinary competence and you want to benefit from the constantly growing open source ecosystem, reach out to us at contact@antmicro.com and find out how we can help.
