December 27th, 2016
Android or Linux for embedded systems, some thoughts

While now hard to remember, there was a time when the term “embedded Linux” was viewed with suspicion. The proliferation of small, low-power, Linux-based devices – and readily available open source software – has since transformed the industry; nowadays it’s not “why would you run Linux here” but rather “why wouldn’t you”?

With embedded systems boasting more performance and graphics capabilities than ever before, and with GUI-enabled touchscreen devices taking hold in a wide range of applications, from small control panels to huge infotainment screens, Android – once viewed as a purely consumer OS – is becoming an interesting alternative.

In this post, drawing on our experience as a long-time partner and provider of software & product development services for mutual customers of Toradex, as well as the maintainer of Android images for Toradex modules, we explain the differences and commonalities between these two operating systems.

Over the years, we have developed numerous software solutions and complete devices, both industrial and consumer, running Linux and Android, and we believe there is no silver bullet – which OS is better for your use case depends on the use case itself and your device’s planned life cycle.

Linux is a great choice for the majority of embedded use cases. Linux build systems such as Buildroot and OpenEmbedded can be used to create customized BSPs tailored to almost any size, and a wide array of application software and SDKs is available, from gstreamer through Python to node.js. An OpenEmbedded/Yocto-based Linux is the default distribution supported by Toradex, and a vast development community supports a multitude of programming language environments and frameworks. Modern Graphical User Interfaces (GUIs) can be developed with many technologies, including Qt and HTML5 – to the point that it can be hard to choose between them.

But once you build your base OS image with the necessary software components, update capabilities and APIs – a task for which you can also use service providers like Antmicro – you have all the freedom in the world to build your application software, and it is not so hard to change your mind down the road if you need to.
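To give a feel for the workflow, here is a minimal sketch of building such an image with OpenEmbedded/Yocto – the branch, machine and image names below are illustrative assumptions; check the Toradex developer documentation for the exact values for your module:

git clone git://git.yoctoproject.org/poky -b morty
cd poky
source oe-init-build-env
# building for a Toradex module additionally requires the Toradex BSP layers;
# the machine and image names below are examples only
MACHINE=colibri-t30 bitbake core-image-minimal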

TAQ

Android, on the other hand, forgoes some of the OS-level freedom in favor of standardization: there is an “Android way” of doing things that you follow in order to benefit from the strengths of this OS. In return, you get a unified GUI framework, the Java programming paradigm* and a familiar developer experience (a natural consequence of Android’s smartphone/consumer origin), which can in fact be critical for your use case, especially if your device includes a touchscreen intended for regular use by various people.

For example, if you have an existing dedicated app for smartphones/tablets that your users are accustomed to – whether it’s a smart home control center or a mobile industrial measurement device – and you are building a dedicated device to replace or complement them, Android is the perfect choice. Without the need to rebuild your user interface from scratch, you save vast amounts of work and the numerous user studies needed to get the UI right – and people care most about what they see and interact with. You will need an industrialised Android image (with e.g. single-app lock-in, customised branding and OS abstractions for various interfaces) to accomplish this, but it will probably be a smaller investment than recreating the user experience in Linux.

Even if you do not have a pre-existing app, you may have an in-house team of Android application developers (or know a good Android application design studio) who could develop your UI for you. With a much broader application development community, tons of example apps, standardised application packaging and emulators, developing end user apps in Android is easy. The clear separation between the OS and application layers through standardised APIs (in Android you use different “API levels” to indicate compatibility) means that you can reuse existing mobile apps, or have a separate team work on them and carry on testing and adjusting the user interface with the target users while an embedded team ensures all the Android features you want are supported.
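As an illustration of API levels in practice, the levels an existing app declares can be checked without access to its sources using the standard aapt tool from the Android SDK (MyApp.apk is a placeholder name):

# print the minimum and target API levels declared by an app
aapt dump badging MyApp.apk | grep -i sdkVersion
# typical output:
#   sdkVersion:'21'
#   targetSdkVersion:'23'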

Good use cases for choosing Android especially include scenarios with a larger, varied community of users. That covers not only typical consumer devices such as wearables or smart home IoT, but also things such as corporate devices used by industrial professionals in larger numbers – from hand-held instruments to networks of machines situated on site. Even setting aside the application development experience, the familiar user interface components, gestures and interactions of Android may be enough to warrant picking it over Linux.

Android on TK1

As mentioned above, Android – although based on the Linux kernel – has its own way of doing things, including a fairly complicated build system (a consequence of its huge codebase) and the necessity to expose your kernel-level additions to the OS layer in order to make them work in your application. There are also more requirements on the hardware side – since Android requires graphics acceleration and memory for its virtual machine, you will typically not be able to run modern Android versions on a device with less than 512 MB of RAM or without a GPU. As with any other choice, using Android in your embedded device should be motivated by a need for the benefits it provides.
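To give a sense of the scale, a stock AOSP build follows a fixed ritual over a source tree tens of gigabytes in size; a rough sketch is shown below (the lunch target is a generic example – vendor trees such as those for Toradex modules define their own):

# from the root of an AOSP checkout
source build/envsetup.sh
lunch aosp_arm-eng   # pick a build target; vendor trees add their own
make -j8             # builds the whole OS image; takes hours the first time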

Antmicro has helped numerous customers choose the right operating system for their devices based on Toradex platforms, and provides Android demo images and development services for Android 5.1 or 6.0 for the Toradex T30/i.MX6 and TK1 modules to offer better time-to-market.

If you need assistance in deciding which option to pursue for your next embedded device, Antmicro will be happy to guide you further. For details please contact mgielda@antmicro.com or visit www.antmicro.com.

[*] you can also use the NDK to develop your applications in C/C++ (with C# possible via third-party runtimes), use Qt as your presentation framework or develop mobile apps using JavaScript in frameworks like Cordova or React Native – but we are focusing on the most popular scenario here.

September 28th, 2016
Antmicro joins partners NVIDIA and Toradex at GTC Europe 2016

GTC, NVIDIA and Toradex logos

We are excited to announce that Antmicro has been invited to participate in NVIDIA®’s upcoming GPU Technology Conference (GTC) Europe – the first European edition of the world’s biggest conference focused on GPU-accelerated systems and topics like VR, AI, autonomy and deep learning – hosted in Amsterdam, September 28th-29th. The European edition of the event is expected to gather a crowd of researchers, decision-makers and developers representing many established as well as emerging industries.

Building further on our expanding cooperation with strategic partners, Antmicro will be joining Swiss module vendor Toradex and NVIDIA with two official demonstrators of high-performance embedded applications based on the Tegra® line of CPUs, as well as showcasing its own NVIDIA TK1 and TX1 hardware.

Moped TK1 camera setup

At NVIDIA’s booth, Antmicro will be exploring deep learning in action by once again showcasing the well-received MOPED automotive computer vision platform with autonomous sign recognition running CUDA on Toradex’s Apalis TK1 SoM.

On Day 2, don’t miss out on Michael Gielda’s talk at 3:00 p.m. in track EMBEDDED 16: Jetson™ TK1/TX1 – Moving Parallel Computing Into The Field While Keeping The Comfort to learn more about Antmicro’s work with the TK1 and TX1 technology and the benefits of the well-established NVIDIA ecosystem.

Axiom Gamma

Not surprisingly, our partner Toradex will be exhibiting their exquisite line of Tegra® SoMs at booth #E6 on both days. Antmicro was invited to demonstrate the official prototype of the AXIOM 4K open source camera, featuring our custom camera board with Android 6.0 and CUDA® capabilities on Apalis TK1, alongside a 24-node TK1 data cluster demo from Brytlyt, Christmann and Bielefeld University.

Our participation in GTC Europe supports Antmicro’s market position as an official NVIDIA Jetson™ Ecosystem partner.

Needless to say, we are very excited to see what’s up next!

September 14th, 2016
CUDA® and Android on Apalis TK1 in advanced camera systems

In the first of a series of partnership guest posts on Toradex’s blog, we are putting CUDA®, Vision Processing and the new Toradex Apalis TK1 SoM, based on NVIDIA’s powerful Tegra® K1 SoC, in the spotlight. This is the highest-performance Toradex module yet, offering deep learning and embedded vision capabilities enabled by the built-in, programmable GPU with 192 CUDA-enabled cores connected to the quad-core Cortex-A15 CPU at 2.2 GHz.

A POWERFUL FOLLOW UP TO A SUCCESSFUL LINE OF CPUs

Antmicro has a long history of successful projects with Toradex modules based on the NVIDIA Tegra 2 and 3, many of which had a video processing and camera focus, involving drivers, gstreamer plugins/processing pipelines and multi-camera systems. Some of this work was described in Antmicro blog notes like the ones about the ADV7280 and Epson S2D13P04 chips. Related demos were also jointly exhibited by Toradex and Antmicro at Embedded World in 2014 and 2015.

Thus, by the time the Toradex Apalis TK1 module was released, there was already an established track record of successful vision/video projects with Toradex Tegra CPUs and appetites were high for a Tegra K1 CPU module coming from Toradex.

This is because TK1 is more than ‘just’ a successor to its older brothers. While both T20 and T30 included an embedded GPU, the one in TK1 is not only much bigger, with 192 CUDA cores, but also – more importantly – fully programmable. The TK1 makes general purpose high-performance parallel programming with CUDA (or OpenCL) available for the first time for embedded applications.

This capability is extremely interesting for various drone, defense and industrial video applications where local, low-latency processing of high volumes of data is the key differentiating factor. With local processing using CUDA, tasks such as deep neural network inference, 4K video processing or live object detection in video streams become viable in a small embedded chip for a broad range of possible use cases.

As an official NVIDIA Jetson ecosystem partner with a high interest in high-performance parallel processing for its own and its customers’ projects, Antmicro was deeply interested in pushing forward a powerful yet cost-effective platform like the Apalis TK1.

That is why Antmicro and Toradex worked closely around the new Apalis TK1 SoM even before its release. The work related to interfacing various cameras with the TK1 leading up to the module’s launch resulted in a blog note about cameras and the Jetson platform. Antmicro had also built the demonstrator for the module’s launch at Embedded World this year, the road sign-recognizing MOPED RC car, which is the first application presented in this post. At the same show, Antmicro – being Toradex’s partner for the Android operating system – also demonstrated Android running on the TK1 within days of receiving the first samples; Android on the TK1 is described further in the article.

MOPED AUTOMOTIVE RESEARCH PLATFORM WITH APALIS TK1 FOR TRAFFIC SIGN RECOGNITION

Autonomous vision systems are probably among the most exciting ways to demonstrate what the Apalis TK1 module is capable of, given their broad applications and growing market demand. Autonomy is perhaps best exemplified by the much-discussed automotive use case, where self-driving cars have been a motor for much of the development in autonomous machines in recent years – hence Toradex’s and Antmicro’s choice of the TK1 demonstrator for Embedded World.

MOPED

The MOPED project is an open source research platform originally developed by Antmicro’s partner research institute, SICS, for the automotive industry in the form of a classy RC car. It was rebuilt from the ground up by Antmicro to serve the machine learning/vision use case and runs CUDA for on-board traffic sign recognition on the Apalis TK1 module as its main processing unit. MOPED, which was designed to reflect the typical setup present in many modern cars today, is capable of detecting and recognizing traffic signs thanks to a pre-trained Deep Neural Network using the computing power of the module’s 192 Kepler cores.

The application is based on OpenCV; it performs camera configuration and frame capture, does the road sign recognition and encodes the images in H.264 before streaming them live over the network for demonstration purposes (in the demos, a tablet is used as the receiver).
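While the actual MOPED pipeline lives in application code, the capture-encode-stream part can be approximated with an off-the-shelf GStreamer pipeline on the Tegra. This is an illustrative sketch only – the encoder element name varies between BSPs (e.g. nv_omx_h264enc on gst-0.10-based images, omxh264enc on gst-1.0) and the receiver address is a placeholder:

# sketch of a capture -> H.264 encode -> RTP/UDP streaming pipeline
gst-launch v4l2src ! nvvidconv ! nv_omx_h264enc ! rtph264pay ! udpsink host=192.168.1.50 port=5000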

The setup includes signs of different shapes and sizes, displayed on an external monitor. The type of sign and its position are randomized. The classification thread has to solve two problems: finding signs in the surrounding environment and recognizing their type. The latter is done in CUDA® using the cuDNN and Caffe frameworks.

TK1-camera-board

Since object detection and classification present significant computational complexity, performing them in real time on embedded platforms used to be difficult, if not impossible – that is, before platforms like the NVIDIA Tegra K1 emerged, bringing the performance-per-watt for such applications to levels that allow packing them into a relatively small physical footprint. In fact, MOPED’s size owes mostly to its role as a physical demonstrator – the processing power is provided by Antmicro’s very small TK1 development kit, comprising a custom Apalis TK1 baseboard and dual camera board, so other, much smaller applications such as drones can easily be imagined (indeed, some autonomous vehicle/drone projects are currently under way for Antmicro’s customers).

MOPED has been on display around the world, most recently co-exhibited with NVIDIA at XPONENTIAL (New Orleans, USA), a world-class event for autonomous/unmanned vehicles. Further work is under way to bring the pedestrian detection use case to MOPED as well.

AXIOM GAMMA 4K OPEN CAMERA W/ CUDA® & ANDROID ON APALIS TK1

Another application that makes great use of the Apalis TK1 is the AXIOM Gamma 4K open camera. The AXIOM Gamma project was recognized by the EU H2020 funding scheme as a breakthrough for offering the world’s first fully modular and open source professional 4K camera. Thanks to its modularity and extensibility, AXIOM can serve as a computer vision platform to suit countless use cases: from filmmaking to industrial vision systems.

AXIOM-Gamma

The AXIOM is primarily FPGA-based, but enables extensions by means of a backplane exposing the video data to additional modules. In order to provide a widely available computer vision programming paradigm (CUDA) as well as a good-looking and familiar UI (Android) for the camera, an Apalis TK1 camera add-on board was developed by Antmicro with the accompanying software. Currently the TK1 add-on runs Android 5.1 but it will be upgraded to 6.0 in the coming weeks.

The Apalis TK1 module in the camera can be used to process the video data coming in from the camera over MIPI-CSI2. The Android OS enables the use of standard Android camera applications to show and manipulate the video stream, and the CUDA capability makes it possible to further process the video data in real time.


There are more Apalis TK1 projects under way at Antmicro, but we hope we have caught your attention with the above examples. This note is the first in a series by Antmicro concerning interesting topics and work related to Toradex products.

The next post will discuss the differences between Android and Linux on Toradex platforms, what their strengths and weaknesses are, and when it is best to use either for your next product.

As a partner of Toradex, Antmicro provides development services for Toradex customers from designing hardware for custom, small form-factor electronics with specialized I/Os, through software development, to implementing GPGPU processing, CUDA, deep learning and embedded vision algorithms.

To learn more about Antmicro’s development services around the Toradex Apalis TK1 SoM, please enquire at contact@antmicro.com.

July 12th, 2016
Antmicro’s smart MCU GUI library at Arduino Developer Summit

Antmicro’s smart GUI library for contained MCU devices has been drawing quite some attention recently. Following last year’s presentation at Designers of Things in San Jose, USA, a stand of its own at Embedded World in Nuremberg, Germany, earlier in 2016, and now an invitation to participate in the first Arduino Developer Summit – the library is making a strong appearance in the MCU space.

Developers, makers and influencers from the open source community gathered at the Skyway Montebianco centre, immersed in the clouds of icy Mont Blanc, Italy, June 30th through July 1st. Semiconductor vendors, including our partner STMicroelectronics as well as Microchip and Nordic Semiconductor, were also present, completing the MCU business chain.

arduino-summit

The event offered interesting presentations (including a highly enjoyable one from the creator of Snap4Arduino), good networking opportunities and a laid-back atmosphere that made the stunning view outside the panoramic window of the Skyway centre all the more enjoyable.

On the event’s second day, Michael gave a talk explaining the applications of the “Smart graphics display library for contained MCU devices like the Arduino STAR OTTO”. This fit in quite nicely, as the summit’s main focus was how to bridge the gap between maker-level platforms and professional products, accelerating innovation using small-sized electronic platforms in the growing IoT environment.

Antmicro’s graphical user interface library allows you to embed a good-looking, intuitive GUI even in lower-end, microcontroller-based devices with power and unit cost limitations – think smartwatch, portable instrument, vending machine or wall-mounted display. You design the GUI via a simple XML description, with the application logic coded separately. Easy to maintain and certify, the framework can be integrated with remote update and recovery mechanisms for a full interface solution.

To learn more about Antmicro’s smart GUI library and our development services please enquire at contact@antmicro.com.

June 26th, 2016
Axiom Gamma’s final stretch and demonstration at the 4th RISC-V Workshop, MIT

We are happy to report that the Axiom Gamma Horizon 2020 project has entered the final stretch on its way to making the world’s first truly open source and open hardware modular 4K camera a reality.

Axiom Gamma enclosure

On June 10th, 2016 our consortium partners – Apertus, AF Inventions, Denz and the University of Applied Arts Vienna – arrived at Antmicro’s premises in Poznan for the final project meeting. Budgeting and reporting before the EU’s conclusive review were discussed, followed by a lovely night out in the city centre. Spirits are high, as the project, despite all its complexity, is on schedule. As a follow-up to the meeting, the partners each received their own copy of the camera’s casing (see picture to the right), so you can now get a feel for the final look.

In more technical terms, Antmicro’s progress on the Axiom has been very good. One of the developments has been implementing the RISC-V ISA-compliant Z-Scale CPU in the final hardware configuration of the Axiom Gamma – which, being open source itself, nicely fits the character of the open camera project.

Written in the Scala-based Chisel hardware construction language, the 3-stage Z-Scale core is used to drive communication between the camera modules and control the image sensor. With the upcoming 4th RISC-V Workshop at MIT on July 12-13th, 2016, the Axiom Gamma will be presented as an example of how the RISC-V architecture can be successfully used in practical, high-profile applications. As both a Founding Member of the RISC-V Foundation and a co-creator of the Axiom Gamma, we are looking forward to the event next month.

Another group of important developments has been centered around our custom computer vision module for the camera, featuring the Toradex Apalis TK1 SoM with the NVIDIA Tegra K1 CPU and enabling CUDA GPGPU processing of the video data from the camera. More news about that, as well as our own Apalis TK1 carrier board and camera modules, soon!

May 23rd, 2016
Multi camera systems with Epson S2D13P04 and Toradex Nvidia Tegra T30

Seeing the growing interest in our series of posts on vision systems – spanning from low-cost analog video chips to high-end 4K filmmaking cameras we have been working with – we decided to showcase yet another interesting use case and open source the related code. This time we focus on the Epson S2D13P04 – an integrated chip designed for multi-camera systems. As in many previous projects, we have created a driver that allows you to use it with an Nvidia Tegra embedded CPU, the popular T30 with its quad-core Cortex-A9. To learn more about our partnership with Nvidia and our services for Nvidia embedded solutions, especially in the camera and vision field, see http://antmicro.com/nv/

Epson S2D13P04

The S2D13P04 allows you to connect up to 4 analog PAL/NTSC cameras, simultaneously decoding the image and outputting the stream in various modes:

  • all four input video streams on one output image (Merge Mode),
  • single stream mode (Fixed Mode),
  • automatically switching between input sources (Auto Scan Mode), and
  • streaming sources in pairs and switching to the next pair (Compression Mode).

The chip can also convert the image from interlaced to progressive (480p) mode, so you do not need to struggle with the CPU-consuming conversion yourself.

The setup we were working with includes an internally designed adapter board for the Colibri and Apalis development/evaluation boards for T30 modules from our partner Toradex. As in many cases in our long-term cooperation with Toradex, we have integrated the driver with their official Linux git repository.

To get the S2D13P04 to work, follow the instructions below:

1. First, clone the latest kernel sources from the Toradex git repository:

git clone git://git.toradex.com/linux-toradex.git -b tegra-next
cd linux-toradex/

2. Configure the kernel so that you can build the S2D13P04 driver as a module:

export ARCH=arm
# modify this according to your host system setup:
export CROSS_COMPILE=arm-linux-gnueabihf-
make colibri_t30_defconfig
make menuconfig

You have to manually enable SoC camera support, the Tegra soc_camera host driver and s2d13p04 support as modules. You can find them under Device Drivers -> Multimedia support -> Video Capture adapters.
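After saving the configuration, the .config file should contain entries along the following lines – note that the exact symbol names, in particular for the s2d13p04 driver added in this tree, are an assumption to be verified with menuconfig’s search function:

# illustrative .config entries - verify the symbol names in your tree
CONFIG_SOC_CAMERA=m
CONFIG_TEGRA_CAMERA=m
CONFIG_VIDEO_S2D13P04=m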

3. Now it’s time to compile the kernel and modules:

make uImage
make modules

4. Let us assume you have a running Colibri T30 board, connected to your local network. Copy your newly built modules and the uImage to the board’s internal memory using scp:

scp arch/arm/boot/uImage root@<board_IP_address_goes_here>:/media/boot/
scp drivers/media/video/*.ko root@<board_IP_address_goes_here>:

5. Finally, restart the board, connect all the cables, and load the modules:

insmod videobuf-core.ko
insmod videobuf2-core.ko
insmod videobuf2-memops.ko
insmod videobuf2-dma-nvmap.ko
insmod soc_mediabus.ko
insmod soc_camera.ko
insmod s2d13p04.ko
insmod tegra_v4l2_camera.ko

and check how it works:

gst-launch v4l2src ! nvvidconv ! nvxvimagesink

You should now see the video stream divided into four sub-streams showing the input feed from the cameras (“Merge Mode” is currently the only mode we support). More detailed information about the S2D13P04 chip can be found at the Epson website.

If you would like to seek our help in developing your next multi-camera product, or have any inquiries about our embedded software development or PCB design services, do not hesitate to contact us at contact@antmicro.com.

April 26th, 2016
Antmicro to demo MOPED with TK1 at NVIDIA booth at Xponential 2016

Things are getting really interesting! Our recently announced partnership with NVIDIA is now taking us to the US, where Antmicro will be co-exhibiting with NVIDIA at Xponential 2016 – a world-class event for the autonomous/unmanned vehicles industry, taking place in New Orleans on May 2-5.

Xponential 2016 logo

Following its successful debut at ECS in Stockholm last autumn, great press at Vehicle ICT Arena in Gothenburg and later at Embedded World in February, the automotive vision research platform MOPED will now be shown at XPONENTIAL as a great example of what can be achieved with modern embedded systems. Initially featuring the Jetson TK1 development kit with a single-camera setup, the vehicle has since been upgraded, and its autonomous sign recognition system now runs on Toradex’s brand-new Apalis TK1 SoM with our custom dual-camera board, showcasing the possibility of creating custom hardware with NVIDIA’s TK1 for a wide range of industrial applications.

Visit us at booth 1231 to see MOPED in action. Antmicro has also been invited to give a talk on deep learning and the company’s services around that subject in response to the growing demand for embedded intelligence in unmanned vehicles, so while at the NVIDIA booth, be sure to look out for our presentation!

March 31st, 2016
eCos with SMP support for Zynq FPGA SoC

The combination of a dual-core ARM CPU and FPGA found in the Zynq SoC is perfect for applications which can benefit from parallel execution. Using the FPGA fabric of Zynq, critical functionalities of the system can be accelerated using customized IP.

Mars ZX3 in a PM3 baseboard

Typically, Zynq users will run Linux on the ARM CPU, but in solutions with real-time constraints, or where code size and more fine-grained control over the behaviour of the system are important, an RTOS such as eCos is a good alternative.

Back in 2012, soon after the first Zynq System on Modules (SoMs) like the Enclustra Mars ZX3 appeared on the market, enabling broad adoption of this FPGA SoC in industrial applications, we ported eCos to Zynq and have been supporting it ever since.

Our original port of eCos utilized only a single core of the ARM CPU, as we typically ran it (and expected others to do so too) in AMP (Asymmetric Multiprocessing) mode, with the other core reserved for Linux, or in situations where the CPU side was mostly used for configuration and real-time control purposes.

If, however, you want to perform more processing on the ARM cores as well, it makes sense to use both of them in SMP (Symmetric Multiprocessing) mode. We therefore decided to add SMP support to our eCos port so that threads can be distributed across both ARM cores to optimize the runtime of multi-threaded applications.

Below you will find a short tutorial on how to download and run our eCos port with SMP support. As usual, we will be using our partner Enclustra’s Mars ZX3 module to that end.

Getting the sources and tools

Grab a copy of the eCos sources using Git:

git clone https://github.com/antmicro/ecos-mars-zx3.git ecos
cd ecos

Building the system

First of all, the ECOS_REPOSITORY environment variable needs to be set.
To do so, execute:

cd packages
export ECOS_REPOSITORY=`pwd`
cd ..

eCos is always built outside the source tree.
Create a new directory in which the system will be built:

mkdir build
cd build

Choose the configuration file to build upon using the ecosconfig tool:

ecosconfig --config=../mars_zx3_ecos_smp.ecc tree

Building the system is triggered using the Makefile system; simply run:

make -j9

Then build the basic tests, including one that verifies the actual multiprocessing:

make tests -j9

The output executable can be found in ./install/tests/kernel/current/tests/smp relative to your current directory.

Deploying the example application

The following instructions describe how to run the application on the Mars ZX3 module on a Mars PM3 baseboard.

First, get U-Boot running on the board.
For instructions on how to get U-Boot running on the Mars ZX3 module please refer to the user documentation.

Put the executable file either on the SD card or inside a TFTP server directory, and load it using U-Boot.

For SD cards run:

fatload mmc 0 0x0 smp

For the TFTP server, make sure you have the ipaddr and serverip environment variables set correctly (see below) and that the physical Ethernet cable connection is working, then run:

tftpboot 0x0 smp
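
If the network variables are not configured yet, they can be set from the U-Boot prompt – the addresses below are placeholders for your own network:

setenv ipaddr 192.168.1.100
setenv serverip 192.168.1.1
saveenv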

Once the file is loaded, it can be run using the bootelf command.

bootelf 0x0

The application runs a few tests in a loop to check whether all CPUs are active and whether the scheduling and rescheduling of specific threads works as expected.

Part of the application output:

INFO:<Timeslice Test: done>
INFO:<CPU Test: Check CPUs functional>
INFO:<CPU Test: done>
INFO:<Timeslice Test: Check timeslicing works>
 Thread    CPU  0   CPU  1   Total
      0         0    36308   36308
      1     41705    36570   78275
      2     42273    36491   78764
      3     42200    36560   78760
      4     42238       53   42291
      5         0        0       0
 Total     168416   145982
Threads         4        5
INFO:<Timeslice Test: done>
INFO:<CPU Test: Check CPUs functional>
INFO:<CPU Test: done>
INFO:<Timeslice Test: Check timeslicing works>
 Thread    CPU  0   CPU  1   Total
      0         0    36288   36288
      1     41682    36568   78250
      2     42271    36492   78763
      3     42200    36551   78751
      4     42237       52   42289
      5         0        0       0
 Total     168390   145951
Threads         4        5
INFO:<Timeslice Test: done>
PASS:<SMP tests OK>
EXIT:<done>

As you can see, basic SMP works: all of the CPUs are used, and new threads get scheduled to the least occupied CPU.
Good luck multithreading!

February 23rd, 2016
Antmicro becomes Nvidia® Jetson™ Ecosystem Partner

Nuremberg, 23 Feb 2016

Antmicro, an international research and development company known for its work with emerging technologies in embedded and cyber-physical systems, has announced today that it has become an official Jetson™ Ecosystem Partner, thus joining efforts with NVIDIA® to co-market the NVIDIA Jetson TK1 and TX1 Embedded Platform and Antmicro’s development services around this family of products.

nvidia-logo

As an NVIDIA Jetson Ecosystem Partner, Antmicro offers comprehensive services to help customers take full advantage of the NVIDIA Tegra® embedded CPU line. With advanced vision systems being one of the company’s highlights, the CUDA®-enabled Tegra K1 and its follow-up, the X1 – with 192 Kepler™ and 256 Maxwell™ GPU cores respectively – are a perfect match for designing state-of-the-art embedded devices requiring uncompromised graphics and video performance.

“Already since 2010 – when NVIDIA’s breakthrough high-performance embedded Tegra 2 CPU was made widely available for general embedded applications via a SoM from our mutual partner, Toradex – Antmicro has been helping customers to successfully implement high-performance applications taking advantage of Tegra’s multiple ARM cores and powerful integrated graphics.”, says Michael Gielda, Business Development Manager of Antmicro. “By becoming an official NVIDIA Jetson Ecosystem Partner and applying the Jetson TK1 and TX1 Embedded Platform’s capabilities, we can further strengthen the offering in this segment.”

antmicro-logo

Antmicro uses the opportunities brought by the Jetson Embedded Platform’s advanced computational capabilities through CUDA and other parallel processing paradigms for computer vision systems, deep learning and GPGPU programming, enabling new areas that require substantial computing power in a small form factor, such as robotics, drones, defense, ADAS, medical imaging or avionics.

For an example of Antmicro’s application of the NVIDIA® Jetson™ Tegra® K1 in an automotive set-up, see the MOPED experimental platform for computer vision research featured at Toradex’s booth 639 in Hall 1 at Embedded World 2016. At the same booth, Antmicro is also showing its industrial Android™ distribution demo featuring the NVIDIA® Tegra® K1.

About Nvidia

Since 1993, NVIDIA (NASDAQ: NVDA) has pioneered the art and science of visual computing. The company’s technologies are transforming a world of displays into a world of interactive discovery — for everyone from gamers to scientists, and consumers to enterprise customers. Recently, NVIDIA introduced the Jetson TX1, a credit card-sized module that harnesses the power of machine learning to enable a new generation of smart, autonomous machines that can learn. NVIDIA’s embedded solutions address the challenge of creating a new wave of millions of smart devices — drones that fly autonomously; compact security surveillance systems that don’t just scan crowds but identify suspicious activity; and robots that don’t just perform tasks but tailor them to individuals’ habits — by incorporating capabilities such as machine learning, computer vision, navigation and more.

About Antmicro

Antmicro Ltd is an industrial R&D company which combines the latest software and hardware technologies to create advanced products for its customers. Building on internal R&D and its own development frameworks, Antmicro provides know-how and guidance to customers looking to innovate by applying new technological developments to solve practical problems across various verticals. Antmicro’s projects often involve computer vision, FPGA SoCs, and heterogeneous, multi-core and/or multi-node systems.

For further information please contact: Antmicro Ltd, Michael Gielda, Business Development Manager, +48 504 631 956, mgielda@antmicro.com

Android is a trademark of Google Inc. Jetson, Maxwell and Kepler are trademarks of NVIDIA Corporation.

February 18th, 2016
Antmicro at Embedded World 2016 – sneak peek

With less than a week to go before Embedded World 2016, here is a quick sneak peek at Antmicro’s exhibition this year.

AXIOM Gamma

We have prepared 3 major demos to be shown at our main stand in Hall 4A (booth 121, side by side with our long-time partner, Enclustra), with a general focus on what has been the highlight of our recent projects – advanced vision systems.

This includes a high-speed Zynq-based stereovision system implementing state-of-the-art FPGA technology, and the very first taste of the prototype Axiom Gamma 4K open source camera hardware. The third demo is Antmicro’s dedicated smart IoT display running our own MCU GUI library, providing connectivity with Contiki and the very first no-gateway-required implementation of IPSO Smart Objects as defined by the IPSO Alliance, of which we are a proud member.

MOPED

Another partner of ours, Toradex, will be hosting three further – and just as exciting! – Antmicro demos in Hall 1, booth 639. Firstly, the TAQ self-balancing robot – developed in cooperation between Toradex (T), Antmicro (A) and the Qt Company (Q) – running Linux + RTOS on the new Colibri SoM with NXP’s brand new i.MX7 SoC (also available to see at the Qt and NXP booths).

Secondly, the much-awaited upgraded MOPED computer vision research platform for the automotive industry – which, thanks to CUDA/OpenCV and deep learning on the Nvidia Tegra K1, is now able to detect objects on the move.

And last but not least, Antmicro’s latest industrial Android 5.1.1 application for the Toradex Cortex-A module portfolio – including the brand-new Tegra K1!

How about that for a collection of new-tech demonstrators?

If you’d like to schedule an appointment or receive our dedicated EW 2016 leaflet – give us a shout. Either way, guests are always welcome!

 
