Humans are innately able to adapt their behavior and actions according to the movements of other humans in their surroundings. For instance, human drivers may suddenly stop, slow down, steer or start their car based on the actions of other drivers, pedestrians or cyclists, as they have a sense of which maneuvers are risky in specific scenarios.
However, developing robots and autonomous vehicles that can similarly predict human movements and assess the risk of performing different actions in a given scenario has so far proved highly challenging. This has resulted in a number of accidents, including the tragic death of a pedestrian who was struck by a self-driving Uber vehicle in March 2018.
Researchers at Stanford University and Toyota Research Institute (TRI) have recently developed a framework that could prevent such accidents in the future, increasing the safety of autonomous vehicles and other robotic systems operating in crowded environments. This framework, presented in a paper pre-published on arXiv, combines two tools: a machine learning (ML) algorithm and a risk-sensitive control technique.
“The main goal of our work is to enable self-driving cars and other robots to operate safely among humans (i.e., human drivers, pedestrians, bicyclists, etc.), by being mindful of what these humans intend to do in the future,” Haruki Nishimura and Boris Ivanovic, lead authors of the paper, told TechXplore via email.
Nishimura, Ivanovic and their colleagues developed a machine learning model and trained it to predict the future actions of humans in a robot's surroundings. Using this model, they then created an algorithm that can estimate the risk of collision associated with each of the robot's potential maneuvers at a given time. This algorithm can automatically select the optimal maneuver for the robot, one that minimizes the risk of colliding with humans or other cars while still allowing the robot to make progress toward its mission or goal.
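The selection step described above can be illustrated with a minimal sketch. Everything here is a simplified assumption for illustration, not the authors' actual algorithm: collision risk is approximated as the fraction of sampled human futures that lead to a collision, and the chosen maneuver minimizes a cost that trades that risk off against mission progress. The function names (`collision_risk`, `choose_maneuver`) and the toy 1-D scenario are hypothetical.

```python
# Hypothetical sketch: pick the maneuver that balances estimated collision
# risk against progress toward the robot's goal. Not the authors' actual code.

def collision_risk(maneuver, sampled_futures, collides):
    """Estimate collision probability as the fraction of sampled human
    futures in which this maneuver leads to a collision."""
    hits = sum(collides(maneuver, future) for future in sampled_futures)
    return hits / len(sampled_futures)

def choose_maneuver(maneuvers, sampled_futures, collides, progress,
                    risk_weight=10.0):
    """Select the maneuver with the lowest risk-weighted cost:
    cost = risk_weight * collision_risk - progress."""
    def cost(m):
        return risk_weight * collision_risk(m, sampled_futures, collides) \
               - progress(m)
    return min(maneuvers, key=cost)

# Toy example: maneuvers are forward speeds; each sampled "future" is a
# predicted pedestrian position along the robot's path.
futures = [0.9, 1.1, 3.0, 3.2]                        # sampled pedestrian positions (m)
collides = lambda speed, ped: abs(speed - ped) < 0.5  # within 0.5 m counts as a hit
progress = lambda speed: speed                        # faster = more progress

best = choose_maneuver([0.0, 1.0, 2.0], futures, collides, progress)
# Speed 1.0 passes close to two sampled pedestrian positions, so the
# planner prefers 2.0, which is both collision-free and makes more progress.
```

The key design point is that risk is evaluated against many sampled futures, not a single predicted trajectory, so a maneuver that is safe in the average case but dangerous in plausible alternatives is penalized.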
“Existing methods for allowing autonomous cars and other robots to navigate among humans generally suffer from two important oversimplifications,” the researchers told TechXplore via email. “Firstly, they make simplistic assumptions about what the humans will do in the future; secondly, they do not consider a trade-off between collision risk and progress for the robot. In contrast, our method uses a rich, stochastic model of human motion that is learned from data of real human motion.”
The stochastic model that the researchers’ framework is based on does not offer a single prediction of future human movements, but rather a distribution of predictions. Moreover, the way in which the team used this model differs significantly from the way in which previously developed robot navigation techniques integrated stochastic models.
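To make the idea of a distribution of predictions concrete, here is a small sketch assuming the learned model can be summarized as a mixture of Gaussians over a pedestrian's future position; the mixture parameterization, the `sample_futures` helper, and the two-mode scenario (pedestrian keeps walking vs. stops) are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: a stochastic predictor returns many sampled futures
# drawn from a learned distribution, rather than one "best guess" trajectory.
import random

def sample_futures(modes, n=1000, seed=0):
    """Draw n future pedestrian positions from a Gaussian mixture.
    Each mode is a (weight, mean, std) tuple."""
    rng = random.Random(seed)
    weights = [w for w, _, _ in modes]
    samples = []
    for _ in range(n):
        _, mu, sigma = rng.choices(modes, weights=weights)[0]
        samples.append(rng.gauss(mu, sigma))
    return samples

# Two plausible futures: the pedestrian keeps walking (70%, ends up ~3 m
# ahead) or suddenly stops (30%, stays ~0.5 m ahead).
modes = [(0.7, 3.0, 0.3), (0.3, 0.5, 0.2)]
futures = sample_futures(modes)
```

A planner consuming these samples sees both outcomes, weighted by their likelihood, whereas a single-prediction model would commit to one of them and ignore the other entirely.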