Making Self-Driving Cars Safer


Aviation became a reality in the early 20th century, but it took 20 years before the proper safety precautions enabled widespread adoption of air travel. Today, the future of fully autonomous vehicles is similarly cloudy, due in large part to safety concerns. To accelerate that timeline, graduate student Heng “Hank” Yang and his collaborators have developed the first set of “certifiable perception” algorithms, which could help protect the next generation of self-driving vehicles — and the vehicles they share the road with.

Yang is a graduate student in the Laboratory for Information and Decision Systems (LIDS), where he works with Luca Carlone, the Leonardo Career Development Associate Professor in Engineering, on the challenge of certifiable perception. When robots sense their surroundings, they must use algorithms to estimate the state of the environment and their own location within it. “But these perception algorithms are designed to be fast, with little guarantee of whether the robot has succeeded in gaining a correct understanding of its surroundings,” says Yang. “That’s one of the biggest existing problems. Our lab is working to design ‘certified’ algorithms that can tell you if these estimations are correct.”

For example, robot perception begins with the robot capturing an image, such as a self-driving car taking a snapshot of an approaching car. The image goes through a machine-learning system called a neural network, which detects keypoints in the image corresponding to the approaching car’s mirrors, wheels, doors, and so on. From there, correspondences are drawn between the detected keypoints in the 2D image and the labeled keypoints on a 3D car model. “We must then solve an optimization problem to rotate and translate the 3D model to align with the keypoints on the image,” Yang says. “This 3D model will help the robot understand the real-world environment.”
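To make this alignment step concrete, here is a minimal sketch of estimating a car’s pose from 2D-3D keypoint correspondences. It uses OpenCV’s standard Perspective-n-Point solver as a stand-in, not the lab’s certifiable method, and all keypoint coordinates and camera intrinsics below are made-up illustrative values:

```python
import numpy as np
import cv2  # OpenCV; a stand-in PnP solver, not the lab's certifiable algorithm

# Hypothetical 3D keypoints on a car model (meters, model frame). In practice
# the 2D points below would come from a neural-network keypoint detector.
model_points_3d = np.array([
    [ 0.9,  0.5, 0.4],   # left mirror
    [-0.9,  0.5, 0.4],   # right mirror
    [ 1.0, -0.5, 0.0],   # front-left wheel
    [-1.0, -0.5, 0.0],   # front-right wheel
    [ 0.8,  0.0, 0.9],   # left door handle
    [-0.8,  0.0, 0.9],   # right door handle
], dtype=np.float64)

# Detected 2D keypoints in the image (pixels), matched one-to-one above.
image_points_2d = np.array([
    [320.0, 240.0],
    [400.0, 238.0],
    [310.0, 300.0],
    [415.0, 298.0],
    [330.0, 260.0],
    [395.0, 258.0],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (focal lengths and principal point).
K = np.array([[800.0,   0.0, 360.0],
              [  0.0, 800.0, 270.0],
              [  0.0,   0.0,   1.0]])

# Solve for the rotation and translation that align the 3D model
# with the detected 2D keypoints (the Perspective-n-Point problem).
success, rvec, tvec = cv2.solvePnP(
    model_points_3d, image_points_2d, K, distCoeffs=None
)

if success:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("Estimated rotation:\n", R)
    print("Estimated translation:", tvec.ravel())
```

A solver like this returns a pose quickly, but by itself it offers no guarantee that the pose is globally optimal, which is exactly the gap that certifiable algorithms address.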

Each correspondence must be checked to see whether it is a correct match. Since there are many keypoints that could be matched incorrectly (for example, the neural network could mistakenly recognize a mirror as a door handle), this problem is “non-convex” and hard to solve. Yang says that his team’s algorithm, which won the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA), relaxes the non-convex problem into a convex one and finds successful matches. “If the match isn’t correct, our algorithm will know how to continue trying until it finds the best solution, known as the global minimum. A certificate is given when there are no better solutions,” he explains. “These certifiable algorithms have a huge potential impact, because tools like self-driving cars must be robust and trustworthy. Our goal is to make it so a driver will receive an alert to take over the steering wheel if the perception system has failed.”
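The certificate idea can be illustrated on a toy problem. The sketch below, a simplified stand-in for the lab’s actual machinery, relaxes a small non-convex quadratic problem into a convex semidefinite program using the CVXPY library; because the relaxation’s optimal value is a lower bound on every feasible cost, a candidate solution whose cost matches that bound is certified globally optimal. The problem data and tolerance are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp  # convex optimization library; an assumption, not the lab's code

# Toy non-convex problem: minimize x^T A x subject to ||x||^2 = 1.
# (Pose estimation has the same flavor: a quadratic objective over a
# non-convex constraint set, such as the set of rotations.)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T  # symmetrize

# Candidate solution: here the minimum eigenvector happens to be the global
# optimum, but pretend x_hat came from a fast local solver of unknown quality.
eigvals, eigvecs = np.linalg.eigh(A)
x_hat = eigvecs[:, 0]
candidate_cost = float(x_hat @ A @ x_hat)

# Convex (semidefinite) relaxation: lift x x^T to a matrix X >= 0.
X = cp.Variable((4, 4), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                  [cp.trace(X) == 1, X >> 0])
relaxed_cost = prob.solve()

# Certificate check: the relaxed cost lower-bounds every feasible cost,
# so a matching candidate cost proves no better solution exists.
gap = candidate_cost - relaxed_cost
if gap < 1e-6:  # illustrative numerical tolerance
    print(f"Certified globally optimal (gap = {gap:.2e})")
else:
    print(f"No certificate: gap = {gap:.2e}, keep searching")
```

When the gap is not (numerically) zero, the candidate cannot be certified, which is the signal that would trigger the driver alert Yang describes.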

