For the fifth consecutive year, more millennials died on American roads than any other age group.
Roughly 42,915 people died in vehicle crashes or from crash injuries in 2021. That’s around 117 people every day, or roughly one traffic-related death every 12 minutes, according to the National Highway Traffic Safety Administration. One out of every five people killed is between 25 and 34 years old.
We’ve seen thousands of headlines about autonomous vehicles (AVs) since 2016. Tech companies say AVs will improve traffic flow, add convenience and, of course, save lives. Who wouldn’t want that? Between 2010 and 2020, a total of 387,674 people were killed in vehicle crashes and 27.5 million people were injured.
But the road to fully autonomous vehicles is long. The Society of Automotive Engineers defines six levels of vehicle autonomy, ranging from Level 0, the cars our grandparents drove, to Level 5, the cars our grandchildren might drive. At Level 0, the driver performs every driving function; at Level 5, the car has no need for a steering wheel and navigates without human interaction. Most vehicles today that offer self-driving features rely on advanced driver assistance systems (ADAS) and operate at Level 2 autonomy.
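For readers who think in code, the taxonomy is simple enough to write down directly. Here’s a minimal sketch, with level names and summaries of our own choosing (a loose paraphrase of SAE J3016, not its official wording):

```python
# A rough sketch of the six SAE driving-automation levels.
# Names and comments are our paraphrase, not official SAE text.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # Driver does all the driving
    DRIVER_ASSISTANCE = 1       # One assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # Steering + speed assist; driver must supervise (most ADAS today)
    CONDITIONAL_AUTOMATION = 3  # Car drives itself in limited conditions; driver takes over on request
    HIGH_AUTOMATION = 4         # No driver needed within a defined domain, e.g. a geofenced city
    FULL_AUTOMATION = 5         # Drives anywhere a human could; no steering wheel required

def human_driver_needed(level: SAELevel) -> bool:
    """A human must supervise at Level 2 and below, and stay ready to take over at Level 3."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_driver_needed(SAELevel.PARTIAL_AUTOMATION))  # True: today's "self-driving" features
print(human_driver_needed(SAELevel.FULL_AUTOMATION))     # False: the distant Level 5 goal
```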
Today’s newest cars offer smart features like sensors that monitor nearby vehicles, cameras that neutralize blind spots, adaptive cruise control with lane guidance and communications tech like 5G and Wi-Fi. But these systems only support human drivers. They don’t replace them.
Building safer transportation systems is the immediate objective; the long-term goal is to replace human drivers. To get there, AV systems can’t just be designed to drive as well as humans; they have to drive better. That means scientists and engineers have to solve a list of first-and-last-mile problems: issues that arise not on wide, typically pedestrian-free highways, but on smaller neighborhood streets crowded with parked vehicles and people.
Researchers like John Dolan, a scientist at the Argo AI Center for Autonomous Vehicle Research, part of Carnegie Mellon University’s prestigious Robotics Institute, believe there are plenty of first-and-last-mile problems to solve, and Level 5 AV systems won’t be here anytime soon.
“I guess I'm going to say not in my lifetime,” says Dolan. “But that's a binary answer whereas I really lean more toward saying we're going to gradually see increasing capabilities in different domains.”
So what exactly are these first-and-last-mile problems?
An Impromptu Game of Chicken
In Dolan’s lab, his team develops algorithms that allow machines to mimic human behavior. Their work routinely addresses tricky first-and-last-mile problems in the AV world.
“We had a conversation once,” says Dolan, recalling a moment in his lab. “I mentioned that when I drove home every day through some of these neighborhoods in Pittsburgh, there are fairly narrow streets, and they’re made more narrow by the fact that people park on both sides.”
Dolan is describing a common headache: finding yourself on a narrow road with cars parked on either side when an oncoming driver appears in front of you. What happens next isn’t typically covered in driver’s ed, and the standoff isn’t governed by any traffic rules.
“There’s this negotiation that goes on when you see a car at the other end of the corridor that’s created [and] there’s only width enough for one car,” says Dolan. “Somebody needs to wait, possibly as you’re going along you might have to duck into an open parking spot to let the car go by — and there aren’t any hard and fast rules that govern the whole situation.”
For Christoph Killing, a robotics PhD student visiting from Germany’s Technical University of Munich, those narrow Pittsburgh streets reminded him of roads in old European cities. “There is no safe behavior,” he says. “When you drive on the highway you just stay in your lane, don’t crash into the guy in front of you, and you are OK. Versus here, you have to actually make a call and either commit to the scenario and drive through or pull over.”
Can Machine Learning and Game Theory Fix These Problems?
Killing used Multi-Agent Reinforcement Learning to craft a solution. He created a game: two vehicles in a narrow-lane standoff, flanked by parked cars. He then taught the machine some basic rules, like ‘get to the other end of the street’ and ‘don’t crash.’ Lastly, he developed an algorithm to govern each driver’s behavior in a decentralized system, meaning the drivers in the game were unaware of what other drivers planned to do. Killing programmed some drivers to be aggressive and some defensive.
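To make that setup concrete, here is a heavily simplified, hypothetical sketch of the same kind of decentralized standoff. It is not Killing’s DASAC algorithm or his actual environment; it only illustrates the ingredients he describes: each car is rewarded for reaching the far end of a one-lane gap, penalized for crashing, and acts only on what it can see, never on what the other driver intends to do.

```python
# A toy, decentralized two-car narrow-street standoff (our own simplification,
# not Killing's DASAC). Cars A and B enter a one-lane corridor from opposite ends.
STREET_LENGTH = 7   # cells in the narrow corridor
PASSING_BAY = 4     # index of an open parking spot a car can duck into

class NarrowStreet:
    def __init__(self):
        self.pos = {"A": 0, "B": STREET_LENGTH - 1}   # A drives right, B drives left
        self.in_bay = {"A": False, "B": False}

    def finished(self, agent):
        # A is done at the right end of the street, B at the left end.
        return self.pos["A"] >= STREET_LENGTH - 1 if agent == "A" else self.pos["B"] <= 0

    def step(self, actions):
        """actions: dict agent -> 'go', 'wait' or 'pull_over'. Returns (rewards, done)."""
        prev = dict(self.pos)
        for agent, act in actions.items():
            if self.finished(agent):
                continue
            direction = 1 if agent == "A" else -1
            if act == "pull_over" and self.pos[agent] == PASSING_BAY:
                self.in_bay[agent] = True              # yield in the open spot
            elif act == "go":
                self.in_bay[agent] = False
                self.pos[agent] += direction
        # Crash if both cars are in the lane and meet head-on or drive through each other.
        in_lane = not self.in_bay["A"] and not self.in_bay["B"]
        met = self.pos["A"] == self.pos["B"]
        swapped = self.pos["A"] > self.pos["B"] and prev["A"] < prev["B"]
        crashed = in_lane and (met or swapped)
        rewards = {a: -10 if crashed else (1 if self.finished(a) else -0.1) for a in "AB"}
        return rewards, crashed or (self.finished("A") and self.finished("B"))

# Hand-written stand-ins for learned policies: each driver acts on what it can
# observe (positions), not on what the other driver plans to do.
def aggressive(env, agent):
    return "go"                                        # claim the lane and keep moving

def defensive(env, agent):
    other, direction = ("B", 1) if agent == "A" else ("A", -1)
    oncoming_passed = (env.pos[other] - env.pos[agent]) * direction < 0
    if oncoming_passed:
        return "go"                                    # lane ahead is clear
    if env.pos[agent] == PASSING_BAY:
        return "pull_over"                             # duck in and let the other car by
    bay_ahead = (PASSING_BAY - env.pos[agent]) * direction > 0
    return "go" if bay_ahead else "wait"               # head for the bay, else hold back

env, done, steps = NarrowStreet(), False, 0
while not done and steps < 50:
    rewards, done = env.step({"A": aggressive(env, "A"), "B": defensive(env, "B")})
    steps += 1
print(f"finished in {steps} steps, rewards: {rewards}")
```

In an actual multi-agent reinforcement learning setup, the hand-written aggressive and defensive policies above would be learned from experience, with the algorithm filling in every behavior between those two extremes.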
“We were looking into the very cooperative drivers that are essentially ambivalent to who goes first, and very aggressive drivers that are only looking for their own progress — and had the algorithm figure out everything between,” says Killing. “With that, we could go to a kind of game interaction where each driver is in turn reacting to what the other one is doing. I found it very interesting because it's new.”
Killing and his team believe he’s the first to tackle this narrow-lane standoff, a common headache for drivers. Limited to simple controls like steering and acceleration, Killing’s algorithm, called DASAC, succeeded in 99.09 percent of cases. The algorithm could unlock new roads for AVs, ones previously thought too challenging to navigate.
Systems That Predict Human Behavior
In Killing’s system, drivers reacted to each other and achieved their goal. But some scenarios require a different skill, like behavior prediction.
“Imagine you have a person walking along the sidewalk,” says Tarasha Khurana, a Ph.D. student working on machine vision applications at CMU’s Robotics Institute. “Imagine that person gets occluded by a car that’s parked next to the sidewalk, but there’s a pedestrian crossing where your AV’s about to reach. For this duration, where you cannot see the person behind this parked car, how do you still account for their motion in the future?”
Khurana’s talking about machine perception. Her 2021 research paper, Detecting Invisible People, describes a system that continues to track objects after the camera can no longer see them and predicts where they’re headed. Her approach blends motion tracking, using depth of field to estimate movement in three dimensions, with the concept of object permanence.
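Her published system is far more sophisticated, but the object-permanence idea at its core can be illustrated with a toy sketch. The code below is our own simplification, assuming a simple constant-velocity model: when a tracked pedestrian disappears behind a parked car, the track is coasted forward in 3D instead of being deleted.

```python
# A bare-bones illustration (not Khurana's actual model) of object permanence in
# tracking: keep a pedestrian track alive while it is occluded by extrapolating
# its last known 3D velocity, rather than dropping it when the camera loses sight.
from dataclasses import dataclass

@dataclass
class Track:
    x: float; y: float; z: float        # last estimated 3D position (meters)
    vx: float; vy: float; vz: float     # last estimated velocity (m/s)
    frames_occluded: int = 0

def update(track: Track, detection, dt: float, max_occluded: int = 30):
    """Advance a pedestrian track by one frame; `detection` is (x, y, z) or None."""
    if detection is not None:
        x, y, z = detection
        # Re-estimate velocity from the observed displacement.
        track.vx, track.vy, track.vz = (x - track.x) / dt, (y - track.y) / dt, (z - track.z) / dt
        track.x, track.y, track.z = x, y, z
        track.frames_occluded = 0
        return track
    # No detection: the pedestrian may be behind a parked car.
    # Coast the track forward at constant velocity instead of deleting it.
    track.frames_occluded += 1
    if track.frames_occluded > max_occluded:
        return None                      # give up only after a sustained gap
    track.x += track.vx * dt
    track.y += track.vy * dt
    track.z += track.vz * dt
    return track

# Example: a walker disappears behind a parked car for three frames.
t = Track(x=0.0, y=0.0, z=10.0, vx=1.4, vy=0.0, vz=0.0)   # ~1.4 m/s walking speed
for obs in [(0.14, 0.0, 10.0), None, None, None, (0.70, 0.0, 10.0)]:
    t = update(t, obs, dt=0.1)
    print(round(t.x, 2), "occluded" if t.frames_occluded else "visible")
```

A real perception stack would combine this with learned motion models and uncertainty estimates, but even this crude version keeps the pedestrian “alive” for the frames the camera can’t see them.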
“You have to think about where cars are going to go in the future and how they are perceiving different actors in the environment,” says Khurana. “Are they [AV systems] able to recognize, for example, every pixel that they see from the cameras or every data point that they scan?”
Safer Machine-Human Interactions
AV systems have to react to pedestrians even while they’re hidden behind other objects, like parked cars. Khurana’s and Killing’s research, though not available in today’s vehicles, scales beyond self-driving cars and could enable other machines, like service robots, to make predictions about the people moving around them. Improving an AV’s reaction time is a step toward safer machine-to-human interactions.
“This is very safety critical,” says Deva Ramanan, director of the Argo AI Center for Autonomous Vehicle Research. “That also poses immense challenges and stress and you know, at some level keeps you up at night making sure you’re doing responsible things.”
Dolan is also focused on safety. “The reduction, significant reduction, of the number of deaths that result from driving, over a million worldwide,” he says. “When you add up the 200,000 in China, 40,000 in the U.S., etc., there’s a lot of people that we’re losing.”