Models Help Achieve Maximum Operational Efficiency

By Rich Nass

Executive Vice President

Embedded Computing Design

March 23, 2021

Blog

(Image courtesy of Pixabay)

A typical design goal is to maximize the operational efficiency of your system. In many cases, however, that means designing around the latest GPUs, and that can be a daunting task.

GPUs, such as those offered by NVIDIA, can be quite complex. While the vendor may supply a fair amount of documentation, sample code, and other details, the engineer is still left to deploy the chip, board, or system and get it operating at maximum efficiency.

One good course of action is to deploy a software development kit (SDK) or other tools provided by the GPU supplier. But if you need to venture outside what’s offered by those tools, it can get hairy. For example, NVIDIA is very good at assisting with the AI portion of the design. But when you need to make the proper connections, handle security, and ensure that the system can handle the rigors of a rugged environment, you may need to turn to other sources.

A good example is a system that's connected to multiple cameras and needs to use AI to perform some sort of image detection. Think of an airport, a train station, or even a manufacturing facility. As you know, images can be very data-intensive, and when you have multiple cameras connected, the amount of data that requires processing can grow very quickly.

Here, the business objective can be clearly outlined: are you checking luggage, inspecting products coming off an assembly line, and so on? To solve that problem, you turn those needs into a deployable solution that drives some action or outcome.

Identifying the Desired Outcome

The next step is to identify the desired action or outcome. Then, you choose what data is needed, which leads to the type of processing that’s required. Not until you answer these questions can you start the design and deployment process. While this path may seem obvious, you’d be surprised at how many people try to start at the last step, then end up redesigning their system because the objectives were not outlined at the start.

Take, for example, the “smart space.” Here, you want to detect objects, and if something seems out of place, an alarm will sound. That “alarm” may be something audible, or it could generate a message that's sent to an operator. This is a far better solution than filling a room with monitors and having one or more people constantly watch those monitors for some action that's out of place.

ADLINK Technology handles this situation through data capture, specifically by using the NVIDIA DeepStream SDK. That SDK is used to capture the video and write it to disk so that you have data to train on. Once you've collected enough data, you can move to the next phase: training.
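
DeepStream applications are built on GStreamer pipelines, but to give a rough sense of what that capture step involves, here is a minimal Python sketch that uses OpenCV rather than DeepStream itself to pull frames from a camera stream and write a subsampled set to disk as raw training data. The stream URL, output directory, and sampling rate are placeholders, not details from ADLINK or NVIDIA.

```python
import os
import cv2  # OpenCV stands in here for the DeepStream capture pipeline

# Placeholder camera stream and output directory (assumptions, not from the article)
STREAM_URL = "rtsp://camera.local/stream1"
OUT_DIR = "captured_frames"
SAVE_EVERY_N = 30  # at ~30 fps, roughly one saved frame per second

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(STREAM_URL)

frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream ended or dropped
    if frame_idx % SAVE_EVERY_N == 0:
        # Write a subsampled set of frames to disk for later labeling and training
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{frame_idx:06d}.jpg"), frame)
    frame_idx += 1

cap.release()
```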

Another assist from NVIDIA comes in the form of the Transfer Learning Toolkit, which gets you to a deployable processing element for your solution far faster than if you were to start from scratch, potentially (and hopefully) resulting in a greater ROI. To that end, ADLINK has put together a set of use cases with optimized, domain-specific models that, according to the company, should provide a great starting point for developers. In other cases, the models may come from NVIDIA, such as for public safety, smart cities, or product quality inspection. In instances where a model is not available, developers may be able to start with an existing model that's similar to their use case and adapt it to their specific needs.
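
The Transfer Learning Toolkit has its own workflow, so purely to illustrate the underlying idea of adapting a pre-trained model instead of training from scratch, here is a generic PyTorch sketch: it loads a pre-trained ResNet, freezes the backbone, and swaps in a new classifier head for a small set of domain-specific classes. The class count, learning rate, and training step are illustrative assumptions, not part of the TLT.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # placeholder count of domain-specific classes

# Start from a model pre-trained on ImageNet instead of training from scratch
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so only the new head is trained at first
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for the new task
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head's parameters are handed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch from the domain-specific dataset."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```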

To drive actions or outcomes, the unstructured data (the raw video, in this case) is run through a processing pipeline such as DeepStream, resulting in structured data. From there, the information can be sent out through a pipeline to other applications that take an action.
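
As a hypothetical illustration of that hand-off, the sketch below takes structured detection records of the kind an inference pipeline might emit, filters them against a list of expected object classes, and turns anything out of place into an alert message for a downstream application. The record format, class list, and confidence threshold are all assumptions made for illustration.

```python
import json
from dataclasses import dataclass

@dataclass
class Detection:
    """One structured record produced by the inference pipeline (assumed format)."""
    camera_id: str
    label: str
    confidence: float

# Objects considered "in place" for this smart space -- placeholder list
EXPECTED_LABELS = {"person", "luggage_cart"}
ALERT_THRESHOLD = 0.6

def to_alerts(detections: list[Detection]) -> list[dict]:
    """Turn out-of-place, high-confidence detections into alert messages."""
    alerts = []
    for det in detections:
        if det.label not in EXPECTED_LABELS and det.confidence >= ALERT_THRESHOLD:
            alerts.append({
                "camera": det.camera_id,
                "event": f"unexpected object: {det.label}",
                "confidence": round(det.confidence, 2),
            })
    return alerts

# Example: structured data from one frame, handed off to a downstream application
frame_detections = [
    Detection("gate_3", "person", 0.94),
    Detection("gate_3", "unattended_bag", 0.81),
]

for alert in to_alerts(frame_detections):
    # In a real deployment this might be published to a message queue or an operator console
    print(json.dumps(alert))
```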

To hear more about ADLINK's AI-enabled products and technologies, check out the talk delivered by Toby McClean, Vice President of AIoT Technology and Innovation, as part of NVIDIA's upcoming GTC.

Richard Nass’ key responsibilities include setting the direction for all aspects of OSM’s ECD portfolio, including digital, print, and live events. Previously, Nass was the Brand Director for Design News. Before that, he led the content team for UBM’s Medical Devices Group, as well as all custom properties and events. Nass has been in the engineering OEM industry for more than 30 years. In prior stints, he led the content teams at EE Times, Embedded.com, and TechOnLine. Nass holds a BSEE degree from NJIT.