The Tesla crash and the future of autonomous cars

The inevitable has happened: a car driving in autonomous mode has caused a fatal accident. Details have only just emerged of an accident that happened back in May, and the National Highway Traffic Safety Administration (NHTSA) has opened an inquiry. As you are probably aware, the car in question was a Tesla Model S. Of all the companies testing autonomous systems or vehicles on the road, Tesla may be one of the best placed to weather this tragic incident, but their belated response to what has happened will not do them any favours.

The willingness of Tesla drivers to test experimental features like Autopilot has given the company an edge over much of the competition. Tesla gets a lot of data about how Autopilot operates in the real world with real people. This is invaluable for understanding how the technology performs in the wild and how people use and react to it more broadly. But this doesn't come without risks, for both the company and its drivers.

The question was never if something like this would happen, but when. What really matters now is how NHTSA, Tesla, their drivers and the public react. How Tesla or any other car company (or tech company) deals with the fallout will have a big impact on public trust, and possibly on the future of autonomous vehicles (AVs).

It is not perfect

This incident is an important reminder of two things: firstly, that it is still really important for drivers to pay attention even when the car is in autonomous mode; and secondly, that the technology has limitations. While the technology can perform as well as a human driver (if not a bit better) in many respects, 'it is not perfect' (as the Tesla blog points out). The car's inability to distinguish the white tractor trailer from the sky shows we still have some way to go before we can completely rely on systems like machine vision.

The systems used to drive the car and recognise objects on the road (primarily machine learning algorithms) rely on example data or images to build their understanding of the world. No matter how many tests you perform, you will never be able to capture every possible situation. There will always be 'extremely rare circumstances' in which a situation arises that the computer cannot deal with (and perhaps a human driver couldn't either).

We need to be realistic about what is possible and where the limitations of this technology lie. The AV narrative has always relied heavily on the safety argument. For car companies, the important thing is to make sure this and future incidents don't overshadow the real benefits of driverless cars. Reiterating their own faith and trust in these systems will be an important step in rebuilding wider trust.

Could anything have prevented this death?

Very little, I expect. Had the incident involved a head-on collision, Tesla believe it would not have been fatal (the Model S is one of the safest cars ever tested). In the incident itself, the top of the car was severely damaged as it drove underneath the trailer, with most of the car and its systems left intact. As neither the driver nor the car apparently saw the trailer, no action was taken to avoid the accident. Had Autopilot not been on, would the driver have seen the truck? It's impossible to know for sure, but from what we've seen on YouTube it is entirely possible the driver was not focused on the road, or not in a position to take over from the car. It only takes a split second for something to go wrong.

If this was the case, does the driver hold ultimate responsibility for what happened? I think this is a really important question. When does the car or the car company take over responsibility from the human? Who will be brave enough to take this step? The CEO of Volvo, Håkan Samuelsson, believes that when the car is driving autonomously it should be the manufacturer that is liable: 'If you're not ready to make such a statement, you're not ready to develop autonomous solutions'.

It is a little disappointing that Tesla have not been brave enough to come out and accept some explicit responsibility for what happened, rather than simply reiterating that the technology is experimental and that the driver should always be ready to take over. Doing so would not only have shown real leadership but also faith in their system and in the general capabilities of AVs. The fact that it took so long for a statement to come out, and seemingly only in response to the inquiry, does not engender any more trust in the way they have handled the situation.

I think there is a real question over whether Tesla are doing enough to ensure drivers use Autopilot responsibly. They are fully aware of how people use it, which suggests that the approaches to preventing complacency described in their blog are not really adequate. Effective human-computer interaction is vital. Until we can rely on completely autonomous cars, we face very complex relationships between people and technology, both inside and outside the car. Attention-assist cameras are finding their way into cars today, and this sort of technology, and the data it captures, is only going to become more important.

Will it damage customer trust?

I expect the response may do more harm than the incident itself. Depending on the nature of the inquiry, it could also damage trust. But in reality I don't think much will change for Tesla's Autopilot users. They have always been very willing to try out experimental technology; this incident may make them reconsider how they use it, but I don't think it will stop them using it. No matter what companies do, more accidents will happen. Other people will probably be injured and perhaps even lose their lives. The real test is still going to be how those companies react and what they can do to show leadership, build trust and illustrate the real value of autonomous cars.

Photo Credit: benjie castillo via Flickr Creative Commons 2.0 license 

Author

Harry Armstrong

Head of Technology Futures

Harry led Nesta’s futures and emerging technology work.
