Google has been working on self-driving car technology for some time. I jokingly mentioned to a colleague that they were probably doing this to automate the capture of Google Street View images instead of hiring fleets of drivers whose sole job is to drive around in specially equipped cars. At the end of May, though, the company announced its intention to bring an actual self-driving car prototype to the streets of Silicon Valley this summer. The cars are very different from past conversions in that Google intends the vehicles to be fully autonomous, with nothing but a start button and some form of voice input so the passenger can state their destination. The vehicles are limited to 25 mph for use on city streets, not unlike the special city cars based on golf carts. But is this something that consumers will want to use?
The Benefits
As Google stated in announcing the new prototype, there are a number of potential applications. People who lack the physical ability to drive a car could gain improved mobility. A friend of mine who is legally blind could potentially take a self-driving car places rather than relying on others or public transportation. The elderly who have difficulty driving could get around without worrying about their slowing reaction times. Those who have been out on the town drinking could get home safely without endangering the lives of others.
I think one of the best potential uses, though, is an automated taxi service. I'm reminded of the Johnny Cab from Total Recall back in 1990. No more having to deal with drivers or services such as Uber and Lyft. Instead, you call up a car through a smartphone application, which automatically notifies an available car of your location. You get in, tell it your destination, and it automatically bills you for the ride when you arrive. If you thought taxi drivers were opposed to ride-sharing services, just wait until someone tries to introduce something like this in New York City.
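To make the idea a little more concrete, here is a rough sketch of what such a dispatch-and-billing flow might look like. Everything in it is hypothetical: the class names, the nearest-idle-car rule, and the simple distance-based fare are my own assumptions for illustration, not anything Google or the ride-sharing companies have actually described.

    # A hypothetical sketch of the automated-taxi flow described above. The
    # names, the flat fare rate, and the nearest-idle-car dispatch rule are
    # all illustrative assumptions, not any real service's API.
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Car:
        car_id: str
        location: tuple  # (x, y) in arbitrary map units
        busy: bool = False

    @dataclass
    class RideRequest:
        rider_id: str
        pickup: tuple
        destination: tuple

    def dispatch(cars, request):
        """Send the nearest idle car to the rider's location."""
        idle = [c for c in cars if not c.busy]
        if not idle:
            return None
        nearest = min(idle, key=lambda c: hypot(c.location[0] - request.pickup[0],
                                                c.location[1] - request.pickup[1]))
        nearest.busy = True
        return nearest

    def bill(request, rate_per_unit=2.50):
        """Charge automatically on arrival, based on straight-line trip distance."""
        distance = hypot(request.destination[0] - request.pickup[0],
                         request.destination[1] - request.pickup[1])
        return round(distance * rate_per_unit, 2)

    if __name__ == "__main__":
        fleet = [Car("car-1", (0.0, 0.0)), Car("car-2", (5.0, 5.0))]
        ride = RideRequest("rider-42", pickup=(1.0, 1.0), destination=(4.0, 3.0))
        car = dispatch(fleet, ride)
        print(f"Dispatched {car.car_id}; fare on arrival: ${bill(ride)}")

A real system would obviously involve routing, availability, and payment processing far beyond this, but the basic loop of request, dispatch, ride, and automatic billing is the whole pitch.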
The Legal Issues
Of course, there are a lot of issues to work out before these cars even reach the roads in many states. For instance, must there be a licensed driver in the self-driving car when it is operated? The idea is that if the car fails in some manner, the licensed driver would be able to take over manual control of the vehicle. The problem here is that Google's proposed car would have no controls; the computer would be entirely in control. This leads to an even larger legal issue.
Drivers on the road are required to carry liability insurance in case of an accident. This is because the majority of accidents are caused by human error, and insurance helps protect both the victim and the driver from the resulting costs. If the car is entirely autonomous, who becomes liable in the event of an accident? The passengers in the proposed car would have no way to wrest control from the vehicle other than perhaps shutting it down, which could cause problems of its own. Does that mean the manufacturer of the vehicle is at fault? The company that makes the software? The passengers?
These and other potential legal issues must be dealt with on a state-by-state basis before a self-driving vehicle can even be on the road. Only California and a couple of other states have passed laws regarding these vehicles, and those laws are primarily designed around testing rather than consumer use. It may be some time before self-driving cars are legally available to consumers.
The Human Factor
The biggest obstacle to adoption of self-driving vehicles, though, is going to be the human factor. Most people want to have some control in their lives, and that extends to driving a car. With Google's proposed vehicle, that control is completely removed; instead, you are entirely reliant upon the vehicle. While the car may well handle the technical aspects of driving just fine, there are still matters of judgment and reaction where the passenger may want some control.
Take, for instance, the ethical dilemma of a potential accident. Say the vehicle senses that it is about to be involved in a serious crash, and that it could avoid the crash only at the risk of hitting a pedestrian. If it doesn't avoid the crash, the passengers could be killed; if it does, it may seriously hurt or kill someone else. With a human driver, that ethical dilemma is decided by the person in control. If it is up to the car, the outcome will depend upon how it was programmed, which once again brings up the issue of liability.
Then there is the input system itself. The computer is completely reliant upon some form of input, whether keyboard or voice, and its map database to know where to go. But maybe you know your destination without knowing its street address or GPS coordinates. How does the car know where to take you? And what if it simply can't understand the input you give it? You will be stuck where you are unless you have some way to communicate properly with the car.
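As a small thought experiment, here is what that failure mode looks like in code: if the car only acts on input it can confidently match against its map database, anything it can't resolve leaves the rider stuck asking again. The place names, confidence scores, and threshold below are invented purely for illustration and bear no relation to how Google's software actually works.

    # A hypothetical sketch of the destination-resolution problem. The places,
    # confidence values, and cutoff are made up; a real geocoder would be far
    # more involved.
    HYPOTHETICAL_PLACES = {
        "123 main st": ((44.97, -93.26), 0.95),     # (lat, lon), geocoder confidence
        "main street cafe": ((44.96, -93.27), 0.60),
    }

    def resolve_destination(spoken_input, min_confidence=0.8):
        """Return coordinates only if the input maps to a place we're confident about."""
        key = spoken_input.strip().lower()
        match = HYPOTHETICAL_PLACES.get(key)
        if match is None:
            return None, "I couldn't find that destination. Can you try again?"
        coords, confidence = match
        if confidence < min_confidence:
            return None, "I'm not sure I understood. Did you mean somewhere else?"
        return coords, None

    if __name__ == "__main__":
        for request in ["123 Main St", "Main Street Cafe", "that place by the lake"]:
            coords, problem = resolve_destination(request)
            print(request, "->", coords if coords else problem)

The particulars don't matter; the point is that the entire trip hinges on that one resolution step, with no steering wheel to fall back on when it fails.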
Finally, there is the fact that Google Maps is not always right. I know of many cases where I live in which you give it the correct street address and Google is still unable to locate it properly; in some cases, it shows results in a completely different part of the county, roughly 20 miles away. It would certainly be frustrating to get into a self-driving car, tell it your destination, and find that it brought you to where it thinks you want to be, with no way to direct it to the actual location. Now take that thought and imagine someone with a disability using such a vehicle, arriving at the wrong destination, and becoming so disoriented and confused that they don't know what to do.