Self-driving cars: Regulate or free market?
Artificial intelligence will become the ultimate reflection of who we are, both individually and as a society. With this in mind, how should it be managed?
- Should there be government oversight within a legal structure?
- Will industry self-regulation work?
- Should the free-market decide the boundaries of this technology?
While there is no easy answer, and no one-size-fits-all solution, the topic needs to be discussed widely now, and by all parties, to stay ahead of the curve.
The decisions that need to be made as this technology is built and implemented depend in part on how we want the technology to function. While it will not be possible to place an embargo on how software can be used, we do have the collective power to influence how decisions are made and at what cost.
When AI solutions are offered, we should always ask what the range of possibilities is so we can think about how to make any outcome more humane.
Humanity can be baked into any software if such an approach is considered important. In critical or life-affecting situations, I would argue that it is.
What is currently built depends almost entirely upon two inputs.
1. What is the ultimate aim of the software: what should it achieve, and what are the decisions that ultimately lead to that outcome?
2. What is the data or information used to power software and what decisions were made when it was collected?
These decisions are usually considered the primary drivers of the final outcome, more so than any legal, ethical, or humanistic considerations.
AI is built by engineers, not lawyers.
Taking the example of the self-driving car, a simple question can be asked in a crash scenario where there are only two possible outcomes.
Outcome A. The self-driving car crashes into a group of people by the side of the road, causing fatalities in that group but no lasting harm to the car's occupant.
Outcome B. The self-driving car avoids the group by the side of the road, resulting instead in the fatality of the occupant.
The decision behind either outcome will be designed into the car's software. It will be a human-made choice as to which outcome is preferable. In this sense, the outcome is, from a practical standpoint, binary and should be considered in terms of overall impact. As the car is now also the driver, at what point is all responsibility ceded to the software?
While such events will be rare over hundreds of thousands of miles driven, they will occur as the number of self-driving cars on the road increases. What course of action should be taken? Is Outcome A or B preferable?
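The point that this outcome is a designed, human-made choice can be made concrete with a deliberately simplified sketch. Everything below is hypothetical: the names, the outcome values, and the policy rule are illustrative assumptions, not how any real vehicle's software works.

```python
from enum import Enum


class Outcome(Enum):
    PROTECT_OCCUPANT = "A"    # Outcome A: occupant unharmed, bystanders struck
    PROTECT_BYSTANDERS = "B"  # Outcome B: bystanders spared, occupant harmed


def crash_policy(bystander_count: int, occupant_count: int) -> Outcome:
    """A hypothetical, hard-coded crash policy.

    Whatever rule appears here -- minimize total fatalities, always
    protect the occupant, or anything else -- was chosen by an engineer
    long before the crash occurs. The ethics live in this branch.
    """
    # Illustrative rule: minimize the number of people harmed.
    if bystander_count > occupant_count:
        return Outcome.PROTECT_BYSTANDERS
    return Outcome.PROTECT_OCCUPANT
```

Under this illustrative rule, a car with one occupant approaching a group of three pedestrians would choose Outcome B; invert the comparison and the opposite choice is just as easy to ship. The debate in this article is over who gets to write that line, and under whose oversight.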
Government regulation will naturally play a part in self-driving cars, as will the self-interest of automobile manufacturers. Currently, the US has modern automobile safety standards in place, mandated by the government. On the whole, these standards are weaker than those of Canada and Europe, but stronger than those of developing countries. The US automobile industry has lobbied hard for less restrictive safety standards. This is just one of the factors contributing to a road fatality rate in the US of 10.6 per 100,000 inhabitants. (WHO Report 2015)
The free-market approach discourages restrictions on trade or business. To give an example, the metal bull bars that are legally sold to SUV owners to protect their cars from damage have the side effect of causing greater injury or death to any pedestrian who is struck. The SUV driver, however, is now better protected and less likely to suffer harm. This free-market approach puts company profits ahead of pedestrian safety. What will be the outcome if a similar model is used for all self-driving cars, and by extension all artificial intelligence software?
The conversation around the type of driving environment that we would like in the future can and should be held by everyone. If we do not engage then the decisions around safety will continue to be largely made by the automobile manufacturers.
"Adults drank too much and got behind the wheel about 112 million times in 2010." - CDC Report 2011
"Each day, people drive drunk almost 300,000 times, but fewer than 4,000 are arrested." - FBI Report 2010
An area with long-established laws is drunk driving. However, existing drunk-driving laws are often flouted and hard to enforce. A large section of the general public continues to drive while inebriated, and the police do not have the manpower to enforce the law fully.
This is an area where self-driving cars will immediately reduce the effects of drunk driving. More broadly, when the public decides to act outside established rules or laws, it can be a challenge to enforce those laws.
How will artificial intelligence systems be policed when they are out in the world, in the hands of the general public? Without debate, there is a risk that rules, regulations, and laws will come too late to effectively manage an environment where man and machine need to co-exist. The time is right to discuss how we want AI to act on our roads and in the wider world.
This article was originally posted on LinkedIn Pulse as "Self Driving Cars: Regulate or Free-market?". Accessed 18 Aug. 2017. It is reproduced with permission from the author.
Oliver Christie is an artificial intelligence advisor and expert.