Imagimob tinyML Platform Supports Quantization of LSTM and Other TensorFlow Layers

By Tiera Oliver

Associate Editor

Embedded Computing Design

December 14, 2021

News

Imagimob announced that its tinyML platform, Imagimob AI, supports quantization of Long Short-Term Memory (LSTM) layers and a number of other TensorFlow layers.

LSTM layers are well-suited to classify, process, and make predictions based on time series data, and are therefore of value when building tinyML applications.
Imagimob AI takes a TensorFlow/Keras H5 file and converts it, at the click of a button, into a single quantized, self-contained C source file and its accompanying header file. No external runtime library is needed.
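
For context, the H5 file in question is the standard Keras HDF5 export. A minimal sketch of producing one is shown below; the layer sizes, input shape, and filename are illustrative assumptions, not Imagimob specifics:

    import tensorflow as tf

    # Small time-series classifier with an LSTM layer:
    # 50 time steps of 3 sensor channels in, 4 classes out.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(50, 3)),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

    # Save in the HDF5 (.h5) format that Imagimob AI takes as input.
    model.save("model.h5")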

In tinyML applications, the main reason for quantization is that it reduces the memory footprint and lowers the performance requirements on the MCU. It also allows tinyML applications to run on MCUs without an FPU (Floating-Point Unit), which means customers can lower their device hardware costs.

Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. A quantized model executes some or all of its operations on tensors with integers rather than floating-point values, which allows for a more compact model representation and the use of high-performance vectorized operations on many hardware platforms. The technique is particularly useful at inference time, since it cuts inference computation cost substantially without sacrificing much accuracy. In essence, it is the process of converting floating-point models into integer ones, reducing numerical precision from 32 bits to 16 or 8 bits.
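
As a generic illustration of the idea (not Imagimob's proprietary algorithm), symmetric 8-bit quantization maps each float32 value to an integer plus a shared scale factor:

    import numpy as np

    def quantize_int8(x):
        # Symmetric per-tensor quantization: map the largest magnitude to 127.
        scale = max(float(np.abs(x).max()), 1e-8) / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover approximate float32 values from the int8 representation.
        return q.astype(np.float32) * scale

    x = np.array([-0.82, 0.13, 0.55, -0.07], dtype=np.float32)
    q, scale = quantize_int8(x)
    print(q)                      # [-127   20   85  -11]
    print(dequantize(q, scale))   # close to x, up to rounding error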

Per the company, initial benchmarking of an AI model that includes LSTM layers, comparing the non-quantized and quantized versions running on an MCU without an FPU, shows that inference for the quantized model is around six times faster and that its RAM requirements are reduced by 50% when a 16-bit integer representation is used.

Source: Imagimob

Further, the quantization algorithm is implemented with great care so that the error between the quantized and non-quantized neural networks is kept to a minimum, meaning that argmax errors (misclassifications caused by the quantization) rarely happen. This involves solving a difficult optimization problem.
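
One simple way to quantify such argmax errors (a generic sketch, not Imagimob's internal metric) is to compare the predicted classes of the float and quantized models over a test set:

    import numpy as np

    def argmax_error_rate(float_outputs, quant_outputs):
        # Fraction of samples where quantization changes the predicted class.
        float_pred = np.argmax(float_outputs, axis=1)
        quant_pred = np.argmax(quant_outputs, axis=1)
        return float(np.mean(float_pred != quant_pred))

    # Two samples, two classes: the second prediction flips after quantization.
    float_outputs = np.array([[0.2, 0.8], [0.6, 0.4]])
    quant_outputs = np.array([[0.25, 0.75], [0.45, 0.55]])
    print(argmax_error_rate(float_outputs, quant_outputs))  # 0.5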

Imagimob AI-supported TensorFlow layers for quantization

  • Batch Normalization (TensorFlow Class BatchNormalization)
  • Convolution 1D (TensorFlow Class Conv1D)
  • Dense (TensorFlow Class Dense)
  • Dropout (TensorFlow Class Dropout)
  • Flatten (TensorFlow Class Flatten)
  • Stateless Long Short-Term Memory (TensorFlow Class LSTM)
  • Max Pooling 1D (TensorFlow Class MaxPool1D)
  • Reshape (TensorFlow Class Reshape)
  • Time Distributed (TensorFlow Class TimeDistributed)
Imagimob AI-supported TensorFlow activation functions (lookup tables)

  • ReLU (TensorFlow Class ReLU)
  • Tanh
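
For illustration, a Keras model restricted to the supported layers and activations above might look as follows; the layer sizes and input shape are arbitrary assumptions:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, kernel_size=3, input_shape=(100, 3)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.MaxPool1D(pool_size=2),
        tf.keras.layers.LSTM(32, activation="tanh"),  # stateless LSTM, tanh activation
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(5),  # raw logits; only ReLU/tanh lookup tables are listed above
    ])
    model.summary()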

More layers and activation functions are added continuously.

According to the company, the Imagimob AI software with quantization was first shipped to a Fortune Global 500 customer in November and has been in production since then. Currently, few other machine learning frameworks and platforms support quantization of LSTM layers.

For more information, visit: www.mynewsdesk.com/imagimob/news/imagimob-tinyml-platform-supports-quantization-of-lstm-and-other-tensorflow-layers-439389

Tiera Oliver, Associate Editor for Embedded Computing Design, is responsible for web content edits, product news, and constructing stories. She also assists with newsletter updates as well as contributing and editing content for ECD podcasts and the ECD YouTube channel. Before working at ECD, Tiera graduated from Northern Arizona University, where she received her B.S. in journalism and political science and worked as a news reporter for the university’s student-led newspaper, The Lumberjack.

More from Tiera