Scaling AI Hardware for Smart Vision Applications
July 05, 2021
The hardware components that drive smart vision applications go through a lot of changes during their development lifecycle.
But with the inclusion of neural networks, machine learning, and other artificial intelligence (AI) capabilities in these use cases, we are now finding that hardware requirements change during a product’s deployment lifecycle.
Perhaps the biggest benefit of AI technology is its ability to continuously learn over time, which allows a platform like a smart vision system to gradually improve inferencing accuracy. The more inferences the system makes, the more data the solution has to train a model, which makes it better at inferencing, and so on.
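The feedback loop described above can be sketched in a few lines. This is an illustrative toy, not congatec's actual software stack: a one-dimensional nearest-centroid classifier whose centroids improve as confirmed inferences are fed back in as new training samples.

```python
# Toy sketch of the learn-over-time loop: each confirmed inference
# becomes a new training sample, so the centroids (running means per
# label) track the deployed data better over time. All names and data
# here are illustrative, not part of any vendor's stack.
from collections import defaultdict

class ImprovingClassifier:
    def __init__(self):
        self.sums = defaultdict(float)    # running sum per label
        self.counts = defaultdict(int)    # sample count per label

    def train(self, value: float, label: str) -> None:
        self.sums[label] += value
        self.counts[label] += 1

    def infer(self, value: float) -> str:
        # Predict the label whose centroid (mean) is closest to the input.
        return min(self.counts,
                   key=lambda lbl: abs(value - self.sums[lbl] / self.counts[lbl]))

model = ImprovingClassifier()
model.train(1.0, "pedestrian")
model.train(9.0, "vehicle")
prediction = model.infer(2.0)        # "pedestrian" with this toy data
model.train(2.0, prediction)         # feedback: inference becomes training data
```

Real systems close this loop with labeled or human-verified data rather than raw predictions, but the structure is the same: more deployment data, better model.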
The challenge this upgradability presents to many designers and users of AI systems is that eventually the system may demand greater processing, memory, or connectivity performance than the hardware that was originally shipped can provide. And that is just the gradual increase in demand from a given application. It does not even consider scenarios in which one system may be asked to do different or additional tasks down the road.
Fortunately, a solution to this type of hardware scalability exists in the form of computer-on-modules (COMs). COMs are a two-board hardware architecture that rely on a base or carrier board that contains all of the application-specific I/O needed in a given deployment, and a separate, plug-in compute module that houses the processor, memory, and additional hardware resources.
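The two-board split can be modeled as a simple interface contract. The sketch below is an analogy in code, not real firmware: the carrier board's application-specific I/O stays fixed, while any compute module implementing the same interface can be plugged in, mirroring how a SMARC carrier accepts successive module generations.

```python
# Illustrative analogy (not firmware): the COM architecture as an
# interface contract. The carrier board owns application-specific I/O;
# the compute module is replaceable behind a stable interface.
from typing import Protocol

class ComputeModule(Protocol):
    name: str
    def infer(self, frame: bytes) -> str: ...

class CarrierBoard:
    """Application-specific I/O stays fixed; the module is swappable."""
    def __init__(self, module: ComputeModule) -> None:
        self.module = module

    def swap_module(self, new_module: ComputeModule) -> None:
        # Upgrading compute is a module swap, not a board redesign.
        self.module = new_module

    def process_camera_frame(self, frame: bytes) -> str:
        return self.module.infer(frame)
```

The design point is that the carrier board depends only on the interface, so a faster module drops in years later without touching the application-specific hardware.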
For a use case like the smart vision example above, the COM architecture lets users swap out the processor module to improve performance months, years, or even decades after a platform ships.
Congatec, an embedded hardware OEM, recently released the conga-SMX8-Plus COM dedicated to AI use cases like smart shopping carts and vehicle detection.
Image Credits: Congatec conga-SMX8-Plus
More Performance Offers a Vision of the Future
The 82 mm x 50 mm conga-SMX8-Plus is based on SGET’s SMARC 2.1 COM standard, and harnesses smart vision capabilities from the onboard NXP i.MX 8M Plus embedded applications processor. The i.MX 8M Plus SoC targets AI workloads and features dual/quad-core Arm Cortex-A53 CPU cores, two image signal processors (ISPs), and an integrated neural processing unit (NPU) that delivers 2.3 TOPS of performance.
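A TOPS figure like 2.3 TOPS translates to a rough upper bound on inference rate. The back-of-envelope calculation below is a sketch: the MobileNetV2 figure of roughly 0.6 GOPs per 224x224 frame (about 300M multiply-accumulates, counted as two ops each) is an approximate, commonly cited number, and real throughput is always lower due to memory traffic and unsupported layers falling back to the CPU.

```python
# Back-of-envelope bound: available ops/s divided by ops per inference.
# Real-world throughput is lower (memory bandwidth, layer fallbacks).
def max_inferences_per_second(npu_tops: float, ops_per_inference: float) -> float:
    """Theoretical ceiling on inference rate for a given NPU rating."""
    return (npu_tops * 1e12) / ops_per_inference

# Example: MobileNetV2 at ~0.6 GOPs per 224x224 frame (approximate figure)
# on a 2.3 TOPS NPU gives a ceiling of roughly 3,800 inferences/s.
ceiling = max_inferences_per_second(2.3, 0.6e9)
```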
The chip also features an Arm Cortex-M7 for real-time and control tasks, as well as a wide range of camera interfacing options to support intelligent vision applications. These include a dual-channel LVDS interface and a four-lane LVDS channel shared with the serial camera interface. It also supports three independent displays simultaneously.
All this pairs with the conga-SMX8-Plus’s onboard 6 GB of LPDDR4 DRAM, rated at data rates of up to 4000 MT/s (megatransfers per second), and a 128 GB eMMC for locally storing large ML datasets. In power consumption terms, all of that costs a mere 2 to 6 W.
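A data rate in MT/s converts directly into theoretical memory bandwidth. The arithmetic below is a sketch; the 32-bit (4-byte) bus width is an assumption typical of i.MX 8M Plus designs, so check the datasheet for the actual interface width.

```python
# Peak theoretical memory bandwidth = transfers/s * bytes per transfer.
# The 4-byte (32-bit) bus width below is an assumption, not a quoted spec.
def peak_bandwidth_gbs(transfers_per_s: float, bus_width_bytes: int) -> float:
    """Theoretical peak bandwidth in GB/s; sustained rates are lower."""
    return transfers_per_s * bus_width_bytes / 1e9

# LPDDR4 at 4000 MT/s on an assumed 32-bit bus -> 16 GB/s theoretical peak.
peak = peak_bandwidth_gbs(4000e6, 4)
```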
The conga-SMX8-Plus is compatible with Linux, Yocto Project, and Android operating systems. It is specified for use across a wide temperature range of -40°C to +85°C.
Image Credits: Block Diagram
As mentioned, the primary benefit of the conga-SMX8-Plus’s SMARC 2.1 COM architecture is that it reduces post-deployment hardware risk by allowing the processor module to be swapped out for a more performant device in the future. This virtually eliminates hardware obsolescence in that portion of the design and prevents the need for costly retrofits or system redesigns.
Congatec’s conga-SMX8-Plus gives engineers the ability to upgrade their hardware even after it has been deployed for a long period of time. The COM’s adaptability allows quick adjustments as they are needed, while the system continuously expands its inference database to build upon the algorithms already in place on the hardware.