Chances and challenges for machine learning in highly automated driving, part 3: Algorithm selection criteria for functional safety

By Sorin Mihai

Elektrobit

August 15, 2018

Criteria are needed for selecting the machine learning algorithm best suited to functionally safe systems. These criteria are described below.

In part one of this three-part series, the authors investigate the drivers behind and potential applications of machine learning technology in highly automated driving scenarios. Part two defines the theoretical background of machine learning technology, as well as the types of neural networks available to automotive developers. Part three evaluates these options in the context of functional safety requirements.

Deep learning has revolutionized machine learning systems and their capabilities, but it is not necessarily the most suitable approach for every task. For many applications, traditional pattern recognition methods such as logistic regression, naïve Bayes, or k-means clustering may be more appropriate. Criteria for selecting the right machine learning algorithm are therefore necessary. These criteria are described below.

A straightforward selection criterion is the complexity of the problem, which the complexity of the method must match. This criterion can be translated into the number of parameters that the algorithm has to learn. As an example, the logistic regression algorithm learns two parameters for the mapping function h_θ(x) in Figure 8. A deep neural network might have to learn millions of parameters to achieve, at best, results similar to those of logistic regression. Figure 12 shows an approximate ordering of machine learning algorithms by their complexity.

Figure 12: Classification of machine learning algorithms based on their complexity.
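As a rough illustration of this criterion, the following sketch (not from the article; feature and layer sizes are hypothetical) counts the learnable parameters of a logistic regression model and of a small fully connected network:

```python
# A minimal sketch of the parameter-count criterion. Feature and layer
# sizes are hypothetical; counts include one bias term per output unit.

def logistic_regression_params(n_features):
    # One weight per input feature plus a single bias term.
    return n_features + 1

def mlp_params(layer_sizes):
    # Each fully connected layer learns a weight matrix plus a bias vector.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(logistic_regression_params(1))   # 2 parameters, as in the Figure 8 example
print(mlp_params([1, 512, 512, 2]))    # 264,706 parameters
```

Even for this modest, hypothetical network, the parameter count is five orders of magnitude larger than for the linear model, which is exactly the gap the complexity criterion is meant to expose.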

The math behind each algorithm is the basis for this empirical finding. The bias-variance tradeoff is one important aspect when choosing and building a machine learning system. Bias is the error produced by erroneous assumptions made by the learning method; it is directly related to the issue of underfitting. High-bias algorithms fail to find the relevant relationships between the input features and the target labels. Variance, in contrast, is a measure of the sensitivity of the method to the random noise present in the input data. A high-variance system can overfit, modeling the random noise instead of the underlying structure of the input features. In practice, a tradeoff between bias and variance must be found, because reducing one typically increases the other. Another criterion that should be taken into account is the number of hyperparameters a data engineer has to tune when training a classifier.
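This tradeoff can be made concrete with a small experiment. In the sketch below (a hypothetical setup: synthetic one-dimensional data, with polynomial models standing in for methods of increasing complexity), a low-degree fit underfits while a high-degree fit models the noise:

```python
# A minimal, hypothetical bias-variance experiment: fit polynomials of
# increasing degree to noisy samples of a sine wave and compare errors.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)
x_test = np.linspace(0.0, 1.0, 200)
y_train = np.sin(2.0 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.shape)
y_true = np.sin(2.0 * np.pi * x_test)

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

In a typical run, the degree-1 model shows high bias (both errors are large), the degree-15 model shows high variance (the training error shrinks while the test error grows), and the intermediate model balances the two.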

Finally, the nature of the input data also needs to be considered. Real-world data is rarely linearly separable in the feature space. For some applications, however, linearity can reasonably be assumed. An example is the classification of car and non-car objects based on their size and speed, described at the beginning of section 3. This assumption is crucial when choosing a machine learning approach, since for linearly separable data a linear classifier is both faster and more effective than a non-linear one.
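Under such a linearity assumption, even a very simple linear model is sufficient. The sketch below (with hypothetical size and speed values, not taken from the article) trains a perceptron that separates car from non-car objects in this two-dimensional feature space:

```python
# A minimal sketch of a linear classifier on the car vs. non-car example.
# Feature values (size in m, speed in km/h) and labels are hypothetical.
import numpy as np

X = np.array([[4.5, 60.0], [4.2, 90.0], [3.9, 50.0],   # cars
              [0.5, 5.0],  [1.8, 15.0], [0.8, 4.0]])   # pedestrians, cyclists
y = np.array([1, 1, 1, 0, 0, 0])                       # 1 = car, 0 = non-car

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(100):                        # perceptron training epochs
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi               # update weights only on mistakes
        b += yi - pred

print([1 if xi @ w + b > 0 else 0 for xi in X])   # expected: [1, 1, 1, 0, 0, 0]
```

Because the classes here are linearly separable, the perceptron is guaranteed to converge; for data that is not linearly separable, a non-linear method such as a kernel machine or a neural network would be needed instead.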

Functional safety considerations

Functional safety is part of the overall safety of a system. ISO 26262 “Road vehicles - Functional safety” describes the development of electrical and electronic (E/E) systems in road vehicles. A system is made safe by various activities or technical solutions. These so-called safety measures are reflected in the process activities that specify requirements, create architecture and design, and perform verification and validation.

The avoidance of systematic failures is one aspect of ISO 26262. In traditionally engineered systems, systematic failures have been human failures: incomplete requirements and test cases, significant aspects of the design that are forgotten, or verifications that fail to discover issues. The same is true when machine learning is used. Moreover, the task to be learned and the corresponding test cases are still described by humans, so systematic failures can occur here as well. The development of machine learning models therefore requires the application of best practices or of an appropriate standard process. This alone is not enough, however. Since parts of the development of system elements will in future be accomplished by machine learning algorithms, additional safety measures are required to control systematic failures within those algorithms. Only if both a sound process and such safety measures can be guaranteed can these failures be eliminated.

More attention has been given to safety in the context of machine learning recently, due to its increased use in autonomous driving systems. Amodei et al. (2016) discuss research problems related to accident risk and possible approaches to solving them. In traditional software systems, the code has to meet specific requirements that are later checked by standardized tests. In machine learning, the computer can be thought of as taking over the task of “programming” the modules by means of the learning method. Considering the technical background presented in section 3, this “programming” amounts to learning the parameters, or weights, of the algorithm. The learning procedure is very often stochastic, which means that no hard requirements can be defined; the machine learning component is therefore a black-box system. As a result, it is difficult or even impossible to interpret the learned content, due to its high dimensionality and the enormous number of parameters.

Environmental sensors and the related processing play a decisive role that goes beyond the scope of functional safety, especially in highly automated driving. The safety of the intended functionality (SOTIF) is concerned with the methods and measures used to ensure that safety-critical aspects of the intended functionality perform properly, taking sensors and processing algorithms into account. This problem has to be addressed for traditionally engineered systems and machine learning systems alike, and it is still the subject of ongoing discussion.

Analysis within a virtual simulator is one approach to opening up such black-box algorithms. We used this approach for the experiments with self-learning systems presented in section 2.2. In such a simulated environment, a theoretically infinite number of driving situations can be learned and evaluated before the machine learning system is deployed in a real-world car.

Now that machine learning has progressed from gaming and simulation into real-world automotive applications, lives are at stake. As discussed, functional safety issues are therefore gaining importance, and this also affects the scientific community. One consequence is research into approaches for benchmarking different machine learning and artificial intelligence algorithms in simulation. One such simulator is OpenAI Gym (Brockman et al. 2016), a toolkit for developing and comparing reinforcement learning algorithms.
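For illustration, the following sketch shows the basic Gym interaction loop, with a random policy standing in for a learning agent (it assumes the classic Gym API from Brockman et al. 2016; later versions of the library changed the reset and step signatures):

```python
# A minimal sketch of the OpenAI Gym interaction loop with a random policy.
# Uses the classic (2016-era) Gym API; newer releases return different tuples.
import gym

env = gym.make("CartPole-v0")
for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()                 # random stand-in policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: return {total_reward}")
env.close()
```

Replacing the random action with the output of a learned policy, and feeding the reward back into the learning algorithm, turns this same loop into the benchmarking setup described above.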

Conclusions & Outlook

Recent achievements have motivated the application of machine-learning-based functionalities to highly automated driving. Initial prototypes have indeed produced promising results and have indicated the advantages of machine learning in addressing the related complex problems. Even where machine learning is suitable, however, a significant number of challenges remain. First of all, the right neural network type must be selected for the given task. This selection is related to the applied learning methodology, the necessary preprocessing, and the quantity of training data. How best to decompose the overall driving task into smaller sub-tasks is still under discussion. Deep-learning technologies enable end-to-end approaches without any need for decomposition, but these are currently considered less appropriate with regard to verification and validation. The machine-learning community needs to develop enhanced approaches, not least to address functional safety requirements, which are the foundation for the successful industrialization of the related functionalities.

Despite the remaining challenges, Elektrobit is convinced that machine learning has the potential to reshape the future automotive software and system landscape. For this reason, two lines of investigation have been started. The first is the application of machine-learning-based approaches to (selected subsets of) highly automated driving scenarios, such as the use cases mentioned above; the EB robinos reference architecture and the partnership with NVIDIA, among other things, contribute to the development environment. In the second, Elektrobit uses its expertise in functional safety and the industrialization of automotive software to bring these ideas and the products of its partners and customers to life.

Associate Professor Sorin Mihai Grigorescu received his Dipl.-Eng. degree in Control Engineering and Computer Science from the Transylvania University of Brasov, Romania, in 2006, and his Ph.D. in Robotics from the University of Bremen, Germany, in 2010. Between 2006 and 2010 he was a member of the Institute of Automation at the University of Bremen. Since June 2010, he has been affiliated with the Department of Automation at Transylvania University of Brasov, where he leads the Robust Vision and Control Laboratory. Since June 2013, he has also been affiliated with Elektrobit Automotive Romania, where he is the team manager of the Navigation Department. Sorin M. Grigorescu has been an exchange researcher at several institutes, including the Intelligent Autonomous Systems Group (Technical University Munich), the Korea Advanced Institute of Science and Technology (KAIST), and the Robotic Intelligence Lab at University Jaume I (Spain). Sorin won the EB innovation award 2013 for his work on machine learning algorithms used to build next-generation adaptable software for cars.

Markus Glaab is an expert in automotive software architectures at EB Automotive Consulting. He received his M.Sc. degree in Computer Science from the University of Applied Sciences Darmstadt, Germany, in 2010, while gaining professional experience as a software developer in the area of Car2X communication. That same year, he began research at both the In-Car-Multimedia Labs at his home university, where he worked on future service delivery platforms for vehicle-to-backend architectures, and the Centre for Security, Communications, and Network Research at Plymouth University, UK. Markus has been with EB since 2016 and works on future automotive E/E architectures and the integration of related technologies such as machine learning.

André Roßbach is a senior expert for functional safety at EB Automotive Consulting. In 2003, he received his degree in Business Information Technology from the University of Applied Sciences Hof. He then gained experience as a software developer in the biometric and distributed database development area. André has been with EB since 2004 and has developed software for medical systems, navigation software, and driver assistance systems. Currently, his focus is on functional safety, agile development, and machine learning.

Bibliography

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. “Concrete Problems in AI Safety.” arXiv Preprint arXiv:1606.06565. http://arxiv.org/abs/1606.06565.

Wymann, Bernhard, Eric Espie, Christophe Guionneau, Christos Dimitrakakis, Remi Coulom, and Andrew Sumner. 2014. “TORCS, The Open Racing Car Simulator.” http://www.torcs.org.

Bojarski, Mariusz, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, et al. 2016. “End to End Learning for Self-Driving Cars.” arXiv Preprint arXiv:1604.07316. http://arxiv.org/abs/1604.07316.

Brockman, Greg, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. “OpenAI Gym.” arXiv Preprint arXiv:1606.01540. http://arxiv.org/abs/1606.01540.

Jia, Yangqing, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. “Caffe: Convolutional Architecture for Fast Feature Embedding.” http://caffe.berkeleyvision.org.

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature, May 28.

Mnih, Volodymyr, et al. 2015. “Human-Level Control through Deep Reinforcement Learning.” Nature, February 26.

Silver, Andrew. 2016. “Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe.” IEEE Spectrum: Technology, Engineering, and Science News. October 7. http://spectrum.ieee.org/cars-that-think/transportation/self-driving/why-ai-makes-selfdriving-cars-hard-to-prove-safe.

Elektrobit

www.elektrobit.com

@EB_automotive

LinkedIn: www.linkedin.com/company/elektrobit-eb-automotive

Facebook: www.facebook.com/EBAutomotiveSoftware

YouTube: www.youtube.com/user/EBAutomotiveSoftware