Are we there yet? The journey towards driverless cars

Trials are underway around the world to test driverless cars and how the world around them will react. In the last blog I talked about the dangers of not dealing with the growing cyber security threat. Here we are going to look at three other pieces of the puzzle: technology readiness, people's relationship with the technology, and the stickier issues of regulation and legislation.

Technology

Fully autonomous cars are on their way and much of the important technology will be a standard feature of every car in the next few years. But to reach level 5, or full autonomy, we still need to make a few key advances. Two of the car's most critical systems, machine vision and decision making, rely on machine learning algorithms. While the capabilities of learning algorithms have come on in leaps and bounds in recent years, they are not yet at the point of replacing a human behind the wheel.

Most of the driverless cars being trialled rely primarily on Light Detection And Ranging (LiDAR) to get information about their environment. Both the Google car and the LUTZ Pathfinder pods (pictured) use LiDAR as their primary sensor but need machine vision to see colour and recognise road signs. LiDAR also isn't particularly good in bad weather like snow or fog, but probably its biggest issue is cost. The hardware is very expensive and will push up the price of any car that uses it. Mass production will drive down this expense, but it's still going to be a significant part of the overall cost.
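As a rough illustration of what 'ranging' means in practice, below is a minimal sketch of how software might scan a LiDAR point cloud for the nearest obstacle in the car's path. The array layout, the corridor width and the height threshold are all assumptions made for this example, not details of any of the vehicles mentioned above.

```python
import numpy as np

def nearest_obstacle_ahead(points, max_lateral=1.5, min_height=0.3):
    """Return the distance in metres to the closest LiDAR return that sits
    roughly in the car's forward path.

    points: (N, 3) array of x (forward), y (left/right), z (up) in metres.
    max_lateral: half-width of the corridor treated as 'ahead'.
    min_height: ignore returns below this height (road surface).
    """
    ahead = points[
        (points[:, 0] > 0)                      # in front of the car
        & (np.abs(points[:, 1]) < max_lateral)  # within the driving corridor
        & (points[:, 2] > min_height)           # above the road surface
    ]
    if ahead.size == 0:
        return None
    return float(np.min(np.linalg.norm(ahead[:, :2], axis=1)))

# Toy frame: one return about 12 m ahead, plus clutter outside the corridor.
cloud = np.array([[12.0, 0.4, 0.9], [5.0, 3.0, 0.5], [8.0, 0.1, 0.05]])
print(nearest_obstacle_ahead(cloud))  # -> roughly 12.0
```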

Machine vision, on the other hand, has the opposite problem. The hardware is cheap and easily adaptable (it could use visible light or even infrared cameras), but the analytics behind it are not yet accurate enough. The machine learning algorithms used to identify objects from the camera are already very powerful, but not as good as a human. Some groups are focusing on creating a car that relies only on machine vision. It's not clear how long it will be before the right advances make this possible.
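For a sense of how a machine vision pipeline recognises objects, here is a minimal sketch using an off-the-shelf, pretrained object detector. The choice of model, the random stand-in camera frame and the confidence threshold are illustrative assumptions; this is not the software used in any of the trials.

```python
import torch
import torchvision

# Load a generic object detector pretrained on everyday photos (COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A stand-in for one camera frame: 3 colour channels, 480x640 pixels, values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Keep only detections the model is reasonably confident about.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(f"object class {int(label)} at {box.tolist()} (confidence {score:.2f})")
```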

Safety will always be paramount. Knowing, or at least trusting, that the machine will accurately recognise people on the road and react in the right way is vital if people are to use driverless cars. Learning in noisy real-world environments is extremely difficult. Where machines really fall down is their (current) inability to apply common sense reasoning or prior learning to problems. John Leonard at MIT used a dashboard camera to illustrate how many of the events experienced every day are unusual by an algorithm's standards and would be difficult for an autonomous car to cope with.

Google's solution so far has been to try and capture as many of these rare events as possible by doing lots and lots of on-road testing. Using this data, algorithms can then come up with responses which can be tested in simulations. It's hoped that with enough testing the software will be at least as safe as a human, but there will always be rare events that get missed. Here people will need to be the fail-safe system, but making the relationship between car and driver work might not be so easy.
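Below is a very rough sketch of what that kind of replay-and-check loop could look like. The scenario format, the plan_response placeholder and the safety margin are hypothetical stand-ins for illustration, not Google's actual tooling.

```python
# Hypothetical regression loop: replay logged rare events against the driving
# software and flag any scenario where it gets too close to another road user.
SAFE_GAP_METRES = 2.0

def plan_response(scenario):
    """Placeholder for the planner under test: given a logged scenario,
    return the closest gap (in metres) the simulated car leaves."""
    # A real planner would simulate steering and braking; here we just
    # read a pre-computed value so the example runs.
    return scenario["closest_gap"]

logged_scenarios = [
    {"name": "cyclist swerves into lane", "closest_gap": 3.1},
    {"name": "child chases ball into road", "closest_gap": 1.4},
]

failures = [s["name"] for s in logged_scenarios if plan_response(s) < SAFE_GAP_METRES]
print("scenarios needing attention:", failures)
```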

Coping with people and helping people cope

It is more difficult to design technology that keeps people in the loop than to automate them out. This is the challenge before we reach fully autonomous cars, and it will require better communication and understanding from both the car and the driver. The driver will need to trust the autonomous functions of the car, which may require a greater level of transparency in how it operates and what it is about to do next. But equally the car will have to understand the driver's actions. With the introduction of Tesla's new Autopilot option, the importance of this relationship is becoming clearer than ever.

Cameras and sensors are not just being stuck on the outside of the car. It feels a little unnerving, but they are increasingly being used to watch the person behind the wheel. This is not some ploy by spy agencies but comes from a genuine need to understand what the driver is doing, and their level of concentration, so the car knows whether they are ready to take over if needed. Companies are already starting to introduce features like Mercedes-Benz's Attention Assist, which detects when the driver is drowsy and suggests they might need to take a break.
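One common camera-based signal such systems can watch is how long the driver's eyes stay closed. The sketch below uses the 'eye aspect ratio' idea with made-up landmark data and thresholds; it is only an illustration of the general approach, not how any named manufacturer's feature actually works.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark points around one eye, in the common
    layout used by face-landmark libraries. Small values mean the eyelids
    are nearly closed."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

CLOSED_THRESHOLD = 0.20   # below this the eye counts as closed
DROWSY_FRAMES = 3         # consecutive closed frames before warning the driver

def check_drowsiness(frames_of_landmarks):
    """Return True if the eyes stay closed for several frames in a row."""
    closed_in_a_row = 0
    for eye in frames_of_landmarks:
        if eye_aspect_ratio(eye) < CLOSED_THRESHOLD:
            closed_in_a_row += 1
            if closed_in_a_row >= DROWSY_FRAMES:
                return True
        else:
            closed_in_a_row = 0
    return False

# Toy data: an open eye followed by three nearly closed frames.
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], float)
print(check_drowsiness([open_eye, closed_eye, closed_eye, closed_eye]))  # -> True
```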

As control begins to shift more between the car and the driver, knowing when to intervene, or what the car is capable of, may become a bigger issue for whoever is behind the wheel. These kinds of problems have already resulted in accidents. You only have to look at the way people are pushing Tesla's Autopilot to its limits, or the semi-autonomous Volvo accident earlier in the year.

The better the car can anticipate what the driver is about to do, the better it can help the driver avoid making a mistake. Researchers are already training computers to recognise driver behaviour so they can predict what the driver might do next. In one study, the system was able to predict with an accuracy of about 90% what manoeuvre the driver was about to make, just from patterns of facial expression and movement. In response, the car could prevent the manoeuvre if it thought it might cause an accident. Getting this interaction between the car and driver right is crucial. For many this will be their first interaction with a 'driverless' car, and if it doesn't go well, the industry will have a bumpy ride.
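To give a flavour of how that kind of prediction can work, here is a toy sketch in which simple features of driver behaviour (glances and indicator use) are fed to a standard classifier that guesses the next manoeuvre. The features, labels and data are invented for illustration and are not taken from the research described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row summarises a few seconds of observing the driver:
# [glances to the left, glances to the right, indicator on (0/1)]
observations = np.array([
    [3, 0, 1],   # repeated glances left with the indicator on
    [2, 0, 1],
    [0, 3, 1],   # repeated glances right with the indicator on
    [0, 2, 1],
    [0, 0, 0],   # eyes ahead, no indicator
    [1, 1, 0],
])
manoeuvres = ["left turn", "left turn", "right turn", "right turn",
              "keep straight", "keep straight"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(observations, manoeuvres)

# A new window of observation: the driver keeps glancing left with the indicator on.
print(model.predict([[2, 0, 1]]))  # -> ['left turn']
```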

Regulation, policy and legislation

This is potentially the biggest challenge of the three. Regulators will need to know driverless cars are safe before they are allowed on the road. New ways of measuring the abilities and safety of these cars are an important part of this, but will not be easy to develop. Learning algorithms might need to be put through testing simulations to prove they will act appropriately on roads, but, as we have learnt from the Volkswagen emissions scandal, such tests can be gamed.

Understanding who is liable for any collisions when the car is driving is also a big question. The CEO of Volvo, Håkan Samuelsson, believes that when the car is driving autonomously it should be the manufacturer that is liable: 'If you're not ready to make such a statement, you're not ready to develop autonomous solutions.' Whether other car manufacturers or regulators share the same sentiment remains to be seen.

The LUTZ Pathfinder project (pictured), led by the Transport Systems Catapult, will be evaluating possible risks to help inform future legislation, regulation and the thorny issue of accountability. Work has started here and across the pond to establish how the law could change so that people who are not competent drivers could use public roads in driverless cars. This is one of the important promises of the driverless car revolution.

The technology for full autonomy will be ready soon, probably far sooner than people or the law are ready for it. This is why trials like those in Greenwich, Milton Keynes and Bristol are so important. They will help us understand these issues in much greater depth and find the right solutions.

This is the second in a series of blogs on driverless cars; the next will explore their exciting and mundane future. The first in the series looked at the growing issue of cyber security.

Pictures courtesy of Transport Systems Catapult

 

Author

Harry Armstrong

Head of Technology Futures

Harry led Nesta’s futures and emerging technology work.
