Vision 2018: Machine vision switches to edge platforms
The 2018 edition of VISION Stuttgart, the biennial leading event for machine vision in Europe, was a great opportunity to showcase Antmicro’s latest projects and research in that area. Back in 2016, our participation in the show saw us introduce the first working vision system based on a Xilinx UltraScale+ SoM. This year, we decided to demonstrate our capabilities in building comprehensive edge AI systems on almost every type of platform the industry has to offer.
To illustrate this, we filled up our booth (#1A07) - see the photo below - with a dazzling display of real-life demos, videos of in-field applications and a wide array of edge AI and embedded vision products - from the popular Jetson TX2/TX2i Deep Learning kits, through the FPGA MPSoC-based UltraScale+ Processing Module, to the more recent Zynq Video Board.
What was interesting in our discussions with the fair attendees was their readiness to switch from cloud- and PC-based algorithms to mobile embedded devices for on-board computations. Following the first wave of AI implementations in the field, it is becoming evident to customers that edge platforms are the best way to ensure reliable real-time data processing. What we were demonstrating at VISION was proof that platforms such as the Jetson Xavier are now enabling the PC-based machine vision market to move to embedded.
Antmicro helps build complete AI-enabled products by offering software services, sensor interfacing, data fusion and deep learning, as well as prototyping of customized hardware. Our expertise comes from thorough knowledge of state-of-the-art edge AI platforms, ranging from GPGPUs (the NVIDIA Jetson series), smart AI accelerators (Intel Movidius Myriad), heterogeneous multi-core SoCs (NXP i.MX6/7/8), hybrid FPGA MPSoC systems (Xilinx UltraScale+) and the LTE-enabled Qualcomm Snapdragon family, all the way to the newer, open processing platforms based on RISC-V.
Real-time AI with Jetson Xavier AGX
The latest NVIDIA Jetson Xavier AGX was definitely in the spotlight among our interactive demos. Antmicro developed an AI application for identifying and tracking people and objects in real time, taking advantage of the compute power of the latest Jetson and of the MIPI CSI-2 interface connecting our carrier board with Allied Vision’s ALVIUM cameras. An analogous demo ran on a dedicated Antmicro pod at Allied Vision’s booth (#1D30), which proved to be the center of attention at this year’s show.
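The tracking half of such a pipeline - associating per-frame detections with persistent object identities - can be sketched with a simple IoU-based tracker. The actual demo’s implementation is not public; the `IouTracker` class and its threshold below are purely illustrative assumptions, and a production tracker would also keep unmatched tracks alive for a few frames.

```python
# Minimal IoU-based multi-object tracker sketch (illustrative only; not the
# implementation used in the demo). Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class IouTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.next_id = 0
        self.tracks = {}  # track id -> last known box

    def update(self, detections):
        """Greedily match new detections to tracks; return {id: box}."""
        assigned = {}
        unmatched = list(detections)
        for tid, box in self.tracks.items():
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(box, d))
            if iou(box, best) >= self.iou_threshold:
                assigned[tid] = best
                unmatched.remove(best)
        for det in unmatched:  # each leftover detection starts a new track
            assigned[self.next_id] = det
            self.next_id += 1
        # Simplification: tracks with no match this frame are dropped.
        self.tracks = assigned
        return assigned
```

Feeding the tracker one list of detector output boxes per frame yields stable IDs for objects that overlap between consecutive frames, which is what makes per-person counting and trajectory display possible in a live demo.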
Allied Vision’s interesting concept of using a scalable SoC-like design with an integrated ISP in their ALVIUM line of cameras corresponds directly with Antmicro’s principles of creating elegant, platform-agnostic software. Enabling a broad range of cameras with built-in, standardized ISP capabilities together with Allied Vision will allow customers from different segments to rely on an open API.
The synergies that can be achieved thanks to this were also presented by Antmicro earlier in September at the successful Xavier AGX Developer Day at GTC Europe in Munich (read more on this topic in the original blog note).
X-MINE hybrid FPGA+GPGPU system
Also on display was the functional prototype of our hybrid FPGA+GPGPU stereovision system developed for the X-MINE smart mining project. The device, presented in a dedicated industrial casing, combines vision pre-processing on Antmicro’s UltraScale+ Processing Module, which creates 3D images in real time, with GPU-accelerated tracking on the Jetson TX2. It is provided as a stand-alone solution that can be further integrated with ore sorting systems in mining or other industrial setups.
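The 3D reconstruction step in any stereovision system of this kind rests on the standard pinhole relation between disparity and depth, Z = f·B/d. A minimal sketch of that relation follows; the function name and the calibration values in the usage comments are illustrative assumptions, not the X-MINE system’s actual parameters.

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras in metres, and d the
# disparity in pixels. Illustrative sketch only.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Return the depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example with assumed calibration: f = 1000 px, B = 0.1 m.
# A disparity of 100 px then corresponds to a depth of 1 m, and halving
# the disparity doubles the depth - which is why nearby objects (large
# disparity) are measured much more precisely than distant ones.
```

Computing this per pixel over rectified image pairs is what the FPGA fabric on the UltraScale+ Processing Module accelerates, freeing the Jetson TX2’s GPU for the tracking workload.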
Furthering the idea of open design, Antmicro is complementing its use of license-free computer architectures (early adoption of RISC-V), open software (working with Google on TensorFlow Lite) and open hardware (our Zynq Video Board) with, indeed, open AI. At the heart of this lies our drive to deliver modular and scalable designs that enable seamless data fusion, platform migration and integration with larger industrial systems for our customers and partners.
If you would like to discuss how your next edge AI / machine vision product could benefit from an open design approach, don’t hesitate to contact us at firstname.lastname@example.org.