Autonomy for Whom?

By David Bruemmer, Chief Strategy Officer, NextDroid, USA

July 17, 2022

Large companies have been investing billions into self-driving cars. It is impressive work, and from an engineering perspective we can admire the increasingly advanced self-driving capabilities they have demonstrated. Recently, the State of California granted companies the right to operate autonomous vehicles without a safety driver on board. As of September 30, 2021, the DMV had authorized the deployment of driverless autonomous vehicles by three companies: Cruise LLC, Nuro Inc., and Waymo LLC.

When I considered a job as CTO at one of these companies, I looked deeply into the approach being used. One of my concerns about the general strategy was dependence on off-board servers. Although the software architecture varies from company to company, off-board servers are often used not only to plan routes at a high level, but also to perform ongoing tasks critical to operation. One of these tasks is localizing the vehicle in reference to high-definition maps stored off-board. Unlike the maps that you and I would use, the system relies on dense point clouds of laser data that have been specially processed by servers and by humans who validate and massage the data. Once the maps have been verified, the cars can use them to reference their own current location against the centralized map stored in the off-board database. In theory, this is optimal and allows all the cars to benefit from the best map possible. In practice, it inherits the many problems associated with centralized control and dependency on network connections and off-board computing.
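To make the localization step concrete, here is a deliberately simplified sketch (my own illustration, not any company's actual pipeline) of aligning an on-board lidar scan against a prior point-cloud map. Production systems use full 3D scan matching such as ICP or NDT against server-refined maps; this 2D, translation-only version just shows the basic idea of referencing your own sensor data against a stored map.

```python
# Hypothetical sketch: localize a vehicle by aligning an on-board scan
# against a prior point-cloud map. Assumes the scan is the map shifted
# by a pure translation, so one centroid-alignment step recovers it.
import numpy as np

def estimate_offset(map_points: np.ndarray, scan_points: np.ndarray) -> np.ndarray:
    """One crude alignment step: recover a pure translation between the
    stored map and the live scan from the difference of their centroids."""
    return scan_points.mean(axis=0) - map_points.mean(axis=0)

# A toy "HD map" of landmark points, and a scan taken 2.0 m east and
# 0.5 m north of the mapped pose.
rng = np.random.default_rng(0)
map_pts = rng.uniform(-50, 50, size=(200, 2))
true_offset = np.array([2.0, 0.5])
scan_pts = map_pts + true_offset

offset = estimate_offset(map_pts, scan_pts)
print(offset)  # close to [2.0, 0.5]
```

The point of the sketch is the dependency it hides: if the map lives on an off-board server, this alignment step stalls the moment connectivity drops.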

This is not just an engineering issue. We stand at a crossroads and need to understand what is at stake. The question is not just where control lies, but with whom: with you, or with the corporations? You may think that question is one for the future, but you'd be wrong. It's a question for today. In San Francisco, self-driving vehicles have been creating quite a nuisance, remaining stock-still while they try to communicate with the remote servers they depend on. From an ethical perspective it is interesting that we as citizens are now expected to surrender our individual rights because a corporation wants to exercise its new business model. Does this sound familiar?

The marketing says these vehicles will prevent accidents and frustration; the reality is that confused human drivers get sandwiched between autonomous cars that won't move for long periods of time. The right kind of autonomy could make driving effortless and fun, but instead we might get stuck waiting for large corporations to get their act together. In the past, you could direct your road rage at that smug Porsche driver who just cut you off. Now you're not even sure who to get upset with. Indeed, one might wonder whether the relentless, multi-billion dollar push for autonomy will grant greater autonomy to the individual human or to corporations. In the first case you get to control a robot. In the second case you become the robot.

We should be clear about what the word autonomy means: the ability to perform effectively even without input or help from the outside world. Let's look at an example. A toddler is highly intelligent but not very autonomous, whereas a washing machine is highly autonomous but not intelligent. Many believe autonomy means eliminating the human. That is one axis along which we can look at autonomy. Another is the independence of on-board systems: their ability to function without help from off-board servers. It is not wrong to use off-board servers, just as it is not wrong to use human input. However, we should be honest about how dependence impacts the reliability of the system.

When I was sending robots deep into caves and tunnels or behind thick concrete walls that blocked radiation, we could not rely on remote human operation or remote “servers.” Likewise, we could not count on the system receiving the comforting pulses of GPS. Autonomy is what you can do when left to your own devices. Those robots were truly autonomous because they could not get help from anyone or anything. 

If you think about autonomy in terms of on-board versus off-board intelligence, self-driving cars that require a connection to an off-board server have lower overall autonomy than an old-school car with a human driver. From this view, the human is part of the on-board system and adds to its reliability, whereas software running on off-board servers detracts from autonomy. A Wired magazine article vividly highlights how this plays out in San Francisco.

Large corporations are trying to convince you that the operative issue is the car’s independence from you. I want to convince you that the most important thing is the safety, efficiency, and reliability of the on-board human-robot team. Those focused on off-board servers really want to have an AI sales associate in the backseat of your car. They want to own that ecosystem and to capitalize on the bubble of buying power represented by you and your passengers while in their vehicle. You may buy the vehicle, but it will be controlled by their servers. 

They nailed the business model, but the engineering may not work well. If current approaches were working, we would already see a reduction in deaths and congestion. What we see instead is a growing fight for control between human drivers and their advanced driver assistance systems. In June 2021, the US National Highway Traffic Safety Administration (NHTSA) issued a Standing General Order requiring manufacturers and operators to track and report crashes related to SAE Level 2 Advanced Driver Assistance Systems (ADAS). When the crash data was released in June 2022, journalists suddenly began reporting how the lack of performance-based government regulation has created a wild west for self-driving. The report showed nearly 400 crashes in the previous 10 months directly related to advanced driver assistance systems.

The worst part is that there's growing evidence the vehicles are being programmed not so much to keep you safe as to escape blame. Right when you need help the most, the self-driving system gives up and tells you to take control. That way it's your fault, and the car company does not technically have to report an ADAS crash. In that last moment before the crash, you must stop watching Netflix and steer a safe path. In the early 2000s I worked with the largest group of human factors experts in the DOE, enlisting their help to deploy autonomous systems for the DOE and the military. They found it takes a minimum of several seconds to mode switch… and that's too late.

If we are not careful, our intelligent driving capabilities may cause more problems than they solve. Unfortunately, the problem is not just a question of engineering proficiency. The real question is whether we are even on the right development track. The right development track encourages independence, fault tolerance, and reliability. We are on a track toward increased autonomy, but it is worth asking whether the self-driving we inherit will increase our individual autonomy or wedge us into a tighter space within which corporations can control us.

So what's the solution? Perhaps control should derive neither from the individual car nor from off-board computers, but rather from the smart roadway. In this model there is a balance of control that includes: 1) the reactive responses of the individual car; 2) the optimizations possible via centralized systems that coordinate map-based route planning; and 3) the swarm intelligence that modulates peer-to-peer interactions to control spacing and flow. The first two have received the most attention thus far, but the third could be a key to progress. The catch is that for option three to pay dividends, we need more reliable connectivity and more accurate positioning.
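The third ingredient, the peer-to-peer piece, can be sketched with a toy platoon controller in which each car acts only on local information – the gap to the car ahead – with no central server involved. The target gap and gain below are illustrative assumptions, not values from any deployed system.

```python
# Hedged sketch of peer-to-peer "swarm" spacing control: each follower
# sets its speed from the gap to the car directly ahead. Constants are
# illustrative assumptions.
import numpy as np

TARGET_GAP = 20.0   # desired spacing, meters (assumed)
K_GAP = 0.5         # proportional gain on gap error (assumed)
DT = 0.1            # simulation timestep, seconds

def step(positions, speeds, lead_speed):
    """Advance a single-lane platoon one timestep. Car 0 is the leader."""
    speeds = speeds.copy()
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        # Speed up when the gap is too large, slow down when too small.
        speeds[i] = speeds[i - 1] + K_GAP * (gap - TARGET_GAP)
    speeds[0] = lead_speed
    return positions + speeds * DT, speeds

# Five cars with uneven spacing behind a leader cruising at 25 m/s.
pos = np.array([100.0, 70.0, 55.0, 45.0, 10.0])
spd = np.full(5, 25.0)
for _ in range(600):   # simulate 60 seconds
    pos, spd = step(pos, spd, lead_speed=25.0)

gaps = pos[:-1] - pos[1:]
print(gaps)  # each gap settles near TARGET_GAP
```

Note that nothing here requires an off-board server: each gap error decays locally, which is exactly why the peer-to-peer layer degrades gracefully when connectivity is poor – provided the position measurements themselves are accurate.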

Another key element of a safer, brighter future is an emphasis on measurable safety – a key ingredient in increasing performance and holding corporations and individuals accountable. If autonomy is much safer than human drivers, then we absolutely should employ it, but to know this we need ground truth and better metrics. We would need to measure relative motion in millimeters rather than meters, and in milliseconds rather than seconds. Most of all, we need to demand that you, the individual human, remain front and center – not necessarily in control of everything, but at least the focus of attention.
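A toy calculation illustrates why resolution matters for a safety metric like time-to-collision. The numbers are made up for illustration: two vehicles closing at 0.8 m/s, with the relative speed quantized at meter-level versus millimeter-level resolution.

```python
# Illustrative only: how measurement resolution distorts a safety metric.
closing_speed = 0.8   # true relative speed, m/s (assumed scenario)
gap = 4.0             # current separation, meters (assumed scenario)

coarse_speed = round(closing_speed)              # meter-level sensing reads 1 m/s
fine_speed = round(closing_speed * 1000) / 1000  # mm-level sensing reads 0.8 m/s

true_ttc = gap / fine_speed     # 5.0 s to contact
coarse_ttc = gap / coarse_speed # 4.0 s: a 20% error from quantization alone
print(true_ttc, coarse_ttc)
```

A metric that is off by 20% before the vehicles even maneuver cannot settle an argument about whether autonomy is safer than a human driver, which is why ground truth at fine resolution matters.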
