Artificial Intelligence: The Future of Autonomous Driving
Artificial Intelligence has advanced rapidly over the last five years, enabling technologies previously imagined only in Hollywood films or best-selling novels. The advent of deep learning - high-performance computing combined with big data and sophisticated neural networks - has spurred an avalanche of AI development across a variety of product domains. As the technology improves, researchers design prototypes, new algorithms follow, and developers look for hardware optimizations to help bring these technologies to market.
This new sophistication has created a natural fit between AI and autonomous driving. For years the prevailing wisdom was that Advanced Driver Assistance Systems (ADAS) would gradually evolve into self-driving capability, but the industry has found that this incremental path only goes so far. To reach a truly self-driving world, vehicles must do more than operate independently - they must be able to think and reason much as the human mind does.
Artificial Intelligence for self-driving vehicles isn’t limited to a Knight Rider-like driving experience where your vehicle anticipates your needs and acts accordingly. For true Level 5 self-driving, the car will have to solve the door-to-door transportation task, which means the entire system will have to be integrated and supervised by AI. That AI will take many forms - everything from detection, to decision making, to supervision. In practice, AI is a critical component for two very important requirements: recognizing the world around the car and putting the car into motion.
Training AI to Recognize
The first step for AI technology in automobiles is detection and recognition. As with anything involving machine learning, the AI must be trained to put context and meaning behind the images, obstacles and scenarios one might encounter behind the wheel - this is the recognition layer, a necessary precursor to decision making. While it is straightforward enough to teach an AI the difference between a vehicle, a pedestrian, a bicycle and a building, it is far harder to train it for the very real possibility of inclement weather, adverse driving conditions, unexpected obstacles or accidents. A significant challenge lies in acquiring quality data that is representative, diverse and well labeled, and since most of these situations occur at random, it is nearly impossible to expose an AI, in the real world, to the full range of road scenarios that drivers encounter every day.
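The supervised-learning loop described above - labeled examples of classes like vehicle or pedestrian used to fit a recognizer - can be sketched minimally. This is an illustrative toy (a linear softmax classifier over synthetic feature vectors, with made-up class names and randomly generated data standing in for labeled camera images), not any production perception stack:

```python
import numpy as np

# Hypothetical class labels standing in for annotated road imagery.
CLASSES = ["vehicle", "pedestrian", "bicycle", "building"]

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-in for a labeled dataset: each class gets a distinct
# mean feature vector so there is actually something to learn.
n_per_class, dim = 50, 16
means = rng.normal(0.0, 3.0, size=(len(CLASSES), dim))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per_class, dim))
               for c in range(len(CLASSES))])
y = np.repeat(np.arange(len(CLASSES)), n_per_class)

# Gradient descent on the cross-entropy loss of a linear classifier.
W = np.zeros((dim, len(CLASSES)))
for _ in range(200):
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0        # gradient of loss w.r.t. logits
    W -= 0.01 * (X.T @ p) / len(y)

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

The point of the sketch is the dependency it makes explicit: the quality of `accuracy` is bounded by the quality and coverage of the labeled pairs `(X, y)`, which is exactly the data-acquisition bottleneck described above.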
There are solutions to these problems, and advances across a wide range of technologies have made training AI much easier. Rather than relying on real-world happenstance to expose the AI to varied driving conditions, at AImotive we use video games, an augmented simulation environment and the principles of game mechanics to run the AI through scenarios hundreds, thousands and even millions of times, so it learns to recognize everything a driver might find on the road and act accordingly. Today, most of the data used to train AI is manually labeled (annotated), which does not scale; AImotive has taken a different approach - generating photorealistic images and scenarios that augment the training data.
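One way to picture simulation-based training is as systematic scenario coverage: instead of waiting for a rare event on a real road, enumerate combinations of conditions and spawn each as a synthetic scene. The sketch below is a hypothetical scenario generator - the condition lists, parameter names and ranges are invented for illustration and are not AImotive's simulator API:

```python
import itertools
import random

# Hypothetical condition axes a simulator might cover.
WEATHER   = ["clear", "rain", "fog", "snow"]
LIGHTING  = ["day", "dusk", "night"]
OBSTACLES = ["none", "stalled_car", "debris", "jaywalker"]

def generate_scenarios(seed=0, variants=3):
    """Yield every combination of conditions, each with randomized variants."""
    rng = random.Random(seed)
    for weather, light, obstacle in itertools.product(WEATHER, LIGHTING, OBSTACLES):
        for _ in range(variants):
            yield {
                "weather": weather,
                "lighting": light,
                "obstacle": obstacle,
                # Per-variant randomization so the AI never sees the
                # exact same scene twice.
                "obstacle_distance_m": round(rng.uniform(5.0, 120.0), 1),
                "ego_speed_kmh": rng.randrange(20, 130, 10),
            }

scenarios = list(generate_scenarios())
# 4 weather x 3 lighting x 4 obstacles x 3 variants = 144 scenes
```

Because the scenes are generated, every frame arrives already labeled - the generator knows exactly what it placed where - which is what removes the manual-annotation bottleneck described above.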
Training the AI to Drive
As sophisticated as AI training has become, the truly advanced part of using AI in autonomous driving is motion. Far beyond simply helping the car drive, motion means using AI to maneuver and function in the real world and solve the dynamic driving task. This is no easy feat - it is one of the biggest questions facing the industry today. Beyond simply seeing and recognizing traffic scenarios and hazards on the road, in order to be truly autonomous, a vehicle must also decide and react immediately, much as a human would, yet with greater precision and agility.
AI has the power to recognize obstacles and events in context - meaning it is aware of where the car is in relation to what is happening around it, and can maneuver the car in concrete terms along its planned trajectory. The result is an AI that can see and respond to everything around it, even more reliably than a human driver, who is far more easily distracted. This precise vehicle movement, or plan, is based on three inputs: what the AI has recognized, the predicted behavior of each object, and free-space detection (detection of the drivable surface).
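The three planning inputs named above can be shown feeding one decision in a deliberately tiny sketch: recognized obstacles, a constant-velocity prediction of where each will be, and the extent of drivable free space, combined into a target speed. The one-dimensional road model, type names and parameters are illustrative simplifications, not a real motion planner:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str            # recognition result, e.g. "pedestrian"
    position_m: float    # current position ahead along our lane
    velocity_mps: float  # used to predict its future position

def plan_speed(obstacles, horizon_s=2.0, free_space_end_m=100.0,
               ego_speed_mps=15.0, safety_gap_m=10.0):
    """Pick a target speed so that over `horizon_s` seconds the car stays
    `safety_gap_m` behind every predicted obstacle and inside free space."""
    # Prediction: where each recognized obstacle will be at the horizon.
    predicted = [o.position_m + o.velocity_mps * horizon_s for o in obstacles]
    # Nearest constraint: a predicted obstacle or the end of drivable surface.
    limit_m = min(predicted + [free_space_end_m])
    reachable_m = max(0.0, limit_m - safety_gap_m)
    # Speed that covers the reachable distance, capped at the current speed.
    return min(ego_speed_mps, reachable_m / horizon_s)
```

For example, a stationary pedestrian recognized 30 m ahead yields `plan_speed([Obstacle("pedestrian", 30.0, 0.0)])` of 10.0 m/s: the plan slows the car so the 10 m safety gap still holds at the 2 s horizon, while an empty road leaves the current 15 m/s untouched.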
At AImotive, we believe the solution lies in striking the right balance between AI capabilities and traditional rule-making. While AI is a crucial part of motion planning and decision making, it must follow very strict rules that serve as guidelines for the AI. These rules cover everything from how to interpret red lights and stop signs, to managing speed limits and lane changes, to merging into traffic or slowing through a school zone or construction area. When deeply integrated into the autonomous system, the AI can then react and respond to any situation with a thorough understanding of the traffic rules that bound its behavior.
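One common way to realize this balance - sketched here as an assumption about the architecture, not a description of AImotive's actual rule engine - is to let the learned planner propose an action and then pass it through a hard rule layer that can only tighten, never loosen, the proposal. The rule names and state fields below are hypothetical examples drawn from the rules listed above:

```python
def apply_rules(proposed_speed_kmh, state):
    """Clamp an AI-proposed speed so it can never violate a hard traffic rule.

    `state` is a hypothetical dict of boolean flags and limits produced by
    the recognition layer, e.g. {"red_light": True} or {"school_zone": True}.
    """
    # Mandatory full stop: no learned proposal may override these.
    if state.get("red_light") or state.get("stop_sign"):
        return 0.0
    # Posted limit, with stricter local caps layered on top.
    limit = state.get("speed_limit_kmh", 50.0)
    if state.get("school_zone") or state.get("construction"):
        limit = min(limit, 30.0)
    return min(proposed_speed_kmh, limit)
```

The design choice this illustrates: the AI contributes the nuanced proposal (how fast feels right here), while the rule layer guarantees the invariants, so a recognition or planning error can reduce comfort but not legality.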
Solving the Hardware Problem
In the world of autonomous driving technology, the software is only one part of a difficult equation. A huge piece of the puzzle lies in finding a way to incorporate the hardware needed to run such massive computing applications while fitting inconspicuously into the form factor of a car.
When it comes to incorporating sophisticated AI recognition and motion planning, the problem grows even more complex. For driving solutions in particular, energy efficiency is a necessity, yet there has traditionally been a trade-off between high performance and low power consumption. The prevailing approach has been to run neural network (NN) operations on GPUs, which have proved inefficient for the demand because of the unique nature of advanced NN operations.
To make matters worse, there is currently no plug-and-play compatibility between NN software development tools and hardware. Development is done separately for each specific use case, on its target hardware, with only the software tools compatible with that hardware. There is an urgent need for an industry standard under which an entire framework can be designed and built once for any hardware that supports it.
AImotive has been at the forefront of this initiative, spearheading a working group to develop the Neural Network Exchange Format (NNEF) standard. The standard enables reliable import and export between network-creation tools, inference engines and other toolkits, reducing deployment friction and encouraging a richer mix of cross-platform deep learning tools, engines and applications.
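What an exchange format buys you can be illustrated with a toy round-trip: one tool exports a neutral description of a network, and any conforming tool imports it unchanged. The JSON schema below is a made-up stand-in chosen for brevity - it is NOT the real NNEF syntax, which defines its own textual graph format and binary tensor container:

```python
import json

def export_graph(layers):
    """Tool A: serialize a network description into the shared format."""
    return json.dumps({"version": "1.0", "layers": layers})

def import_graph(blob):
    """Tool B: reconstruct the same network from the shared format."""
    doc = json.loads(blob)
    assert doc["version"] == "1.0", "unsupported exchange version"
    return doc["layers"]

# A tiny hypothetical network, described tool-neutrally.
layers = [
    {"op": "conv", "filters": 32, "kernel": [3, 3]},
    {"op": "relu"},
    {"op": "max_pool", "kernel": [2, 2]},
]
roundtrip = import_graph(export_graph(layers))
# A faithful exchange format round-trips without loss: roundtrip == layers
```

The lossless round-trip is the entire value proposition: once exporter and importer agree only on the shared format, any training tool can feed any inference engine without a bespoke converter for each pair.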
As more software and hardware developers adopt a working standard, the development of AI-powered autonomous driving solutions delivered directly to the OEM market will progress rapidly.
The AI of Today will be the Self-Driving Car of Tomorrow
The future of autonomous driving is brighter than ever. While the field is still in its early stages, never before has imagination merged so seamlessly with real-world capability. OEMs are beginning to see themselves not just as automotive designers and manufacturers but also as technologists and developers, and this shift will help create a more natural environment for automotive technology to thrive.
Artificial Intelligence is the key catalyst to creating the all-inclusive autonomous driving experience. Far beyond the automotive industry, AI is now all around us. As we start to better trust the capabilities and sophistication of AI, we will find more valuable use cases. That trust, in turn, will spur development of all kinds of new technologies, and autonomous driving is at the top of the list.