Sunday, April 28, 2024

I support self-driving cars, but Tesla is not ready

Should Tesla be allowed to beta-test its full self-driving feature on our public roads? I don’t think so, and I believe the company is in violation of state and local regulations in Massachusetts and Cambridge. I fully believe in the promise of self-driving technology, but Tesla is jumping the gun and putting our residents’ lives at risk. This week the City Council passed a policy order I introduced asking the city manager to enforce the law. Until we have federally certified and truly safe autonomous vehicles, Tesla should not be allowed to let its cars loose on our streets!

In November 2022, Tesla chief executive Elon Musk released the full self-driving capability in North America as a “beta” program. In software engineering, a beta version is not ready for mission-critical use and is released to volunteer customers who want to try it and report problems (called “bugs”) so the software company can fix them. While this is perhaps an okay way to fine-tune a new word processor or spreadsheet application, it is a horrible way to test self-driving cars. Cars on our public roads are by definition engaged in mission-critical operations; failure could mean serious injury or death, not only to the driver but especially to the vulnerable road users not surrounded by steel armor and deployable airbags. 

While I’ve never worked on self-driving vehicles, as a software engineer and technology enthusiast I’ve been observing their development since joining the electrical engineering program as a graduate student at MIT in 1992. Back then the pinnacle of self-driving vehicles was a van stuffed full of PCs driving around campus at 5 mph. In the early 2000s I had the opportunity to observe modified Toyota RAV4 EVs navigating a California parking lot at 2 mph, guided by a massive LiDAR system mounted on the roof. Nobody was in the vehicle, which had been turned into a life-size toy, but the radio controller had a large red emergency “stop” button installed at the center allowing the operator to halt the vehicle immediately if necessary.

As the technology kept improving, it became clear to me that eventually self-driving vehicles would become safer than human drivers. But that point hasn’t been reached, especially not for Tesla, which stubbornly refuses to use industry-standard LiDAR technology that offers three-dimensional vision and the best hope of operating more safely than a human driver. Musk’s drive-first, ask-questions-later approach puts a lot of people in danger, including right here in Cambridge.

I recently had the opportunity to experience the Tesla full self-driving beta in Cambridge, and it confirmed my intuition that the technology is not safe. While the car’s ability to navigate complex urban traffic situations autonomously while relying only on computer vision is very impressive (especially at night!), it is of course the moments when it failed that stick out and create a potential safety hazard – especially because its default failure mode does not appear to be stopping the car or disengaging the self-driving system. The driver was extremely competent and clearly an expert in monitoring and overriding the system, but therein lies the irony, and the proof that the Tesla self-driving system is simply not ready for wide-scale deployment. Without a highly attentive and expert driver behind the wheel, it is bound to cause accidents.

The driver had to disengage the system several times, including – most harrowingly – when the car got confused by a Jersey barrier on Massachusetts Avenue separating the car lane from the bike lane. The bike lane juts out into the car lane at this point to navigate around outdoor on-street dining that was set up in response to Covid. The car correctly recognized the Jersey barrier as an obstacle in front of it, but it could not decide whether to go around it on the left (correctly, staying in the car lane, but blocked by passing traffic) or on the right (incorrectly, driving in the bike lane, which is in any case too narrow for the car). It turned the wheels left, then right, then left and right again as we were heading, on average, for a head-on collision with the Jersey barrier, forcing the driver to disengage the system. It made several smaller errors, including misreading the speed limit and failing to obey “no turn on red” signs at traffic lights (because the software had not yet been updated to recognize them). It did, impressively, recognize and stop for pedestrians and cross traffic, including stopping for a car that made a highly illegal and surprising left turn without signaling or yielding.

In August 2017, Massachusetts issued regulations for safely testing highly automated vehicles on our roads. Under the regulations, municipalities could opt in to allowing testing on local roads and issue further local regulations. I worked with then-city manager Louis A. DePasquale and then-head of Traffic and Transportation Joe Barr to develop regulations allowing such testing to take place on Cambridge’s busy roads, with their many vulnerable pedestrian and bicycle road users. My requirements included two safety drivers in the vehicle at all times and strict adherence to our then recently lowered speed limits.

For reasons that continue to elude me, federal regulators have done little more than complain about Tesla’s false advertising of its self-driving feature. In my opinion, unlocking the full promise of improved safety offered by self-driving cars requires a level of regulatory oversight similar to that provided for pharmaceuticals. (Which is not perfect, but regulatory oversight never is.) In particular, the thorny issue of liability in case of a malfunction is easily resolved if the federal government takes on the responsibility of certifying self-driving systems in the same way it approves novel medicines.

The basic problem is that if a human is not operating the vehicle, who is responsible for the crash? If this isn’t sorted out, manufacturers of self-driving cars could be on the hook for many multimillion-dollar liability lawsuits, suppressing innovation and potentially denying us the improved safety and sustainability of self-driving cars. If, as with pharmaceuticals, the federal government takes on the liability if it has approved a particular self-driving system, the liability problem goes away and innovation can proceed. As we saw with the rapid development of novel Covid-19 vaccines during the pandemic, this can confer great societal benefits.

In the long run, fully autonomous vehicles will be much safer than most human drivers and will create opportunities for more convenient and equitable public transportation, leading to a much smaller fleet of private vehicles and reducing the environmental footprint of our current automobile manufacturing and transportation systems. But we’re not there yet, and as we’re seeing with other areas of artificial intelligence, letting these systems run wild among us is a dangerous and irresponsible approach. We need our local, state and federal governments to regulate this technology to ensure a safe and orderly rollout.


The writer is a Cambridge city councillor.