Is the principle of ethics in AI asleep at the wheel?

Ken Mulvany
09/18/2017

In an age where you could soon be legally (and safely) asleep at the wheel thanks to driverless cars, will morality in AI go the same way?

Photo by Hermes Rivera on Unsplash

The Institute of Electrical and Electronics Engineers (IEEE) recently published the first draft of a framework report exploring the ethical challenges posed by AI. In particular, this statement from the report caught my attention:

“We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems… By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age."[1]

Ethics and morality have dominated much of what has been written about AI in 2016, often negatively, given some recent exposures of AI-generated bigotry and bias. What makes this 136-page report interesting is that it marks the beginning of large-scale ‘collective’[2] thinking on how to build benevolent and societally beneficial artificial intelligence: the principle we founded BenevolentAI on when we established the company in 2013.

So two immediate thoughts crossed my mind when I read the report and the quote above. 1. Yes, I couldn’t agree more! 2. More importantly… but how? How do you ‘build morality’ and altruism into AI? How do you qualify, benchmark or even define what morality in AI is? Is AI morally programmable with, as the IEEE describes, “the values of its users and society”?


It is incredibly complex to build ethics and morality into AI. Here is how we as a company have begun to answer some of the questions posed above and approached building ‘benevolent AI’ at BenevolentAI. It rests on two ideas:

'You are what you eat': Morality as a data precept

The information used, its source, and how you teach the algorithms are vital. What you feed the AI, and how morally nutritious that content is, dictates the morality of the technology and the knowledge you create.

We are feeding our self-learning system with information that benefits humanity: scientific data that focuses on curing disease, improving energy efficiency, better food development and so on. The result is that, as the technology’s intelligence expands, it is denied access to the kinds of information that would instruct it to behave immorally.
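To make the ‘you are what you eat’ idea concrete, here is a minimal sketch of what such a curation gate could look like. It is purely illustrative: the source types, domains and function names below are my own assumptions, not BenevolentAI’s actual pipeline.

    # Hypothetical data-curation gate: admit training records by
    # provenance and domain, rather than filtering behaviour after
    # the fact. All names and categories here are illustrative.

    ALLOWED_DOMAINS = {"disease_biology", "energy_efficiency", "food_science"}

    def is_admissible(record: dict) -> bool:
        """Admit a record only if it comes from a vetted scientific
        source and falls within a domain chosen to benefit humanity."""
        return (record.get("source_type") == "peer_reviewed"
                and record.get("domain") in ALLOWED_DOMAINS)

    def curate(records):
        """Yield only the records the learning system may ingest."""
        return (r for r in records if is_admissible(r))

    corpus = [
        {"id": 1, "source_type": "peer_reviewed", "domain": "disease_biology"},
        {"id": 2, "source_type": "forum_scrape", "domain": "politics"},
    ]
    print([r["id"] for r in curate(corpus)])  # prints [1]

The design choice the sketch captures is that curation happens at ingestion: the system never sees the inadmissible record, rather than being taught to ignore it later.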

'Eureka AI': Morality in the context of existential risk 

We are not trying to create a superhuman intelligence or “solve intelligence”. This is an important distinction. We are trying to save human lives by enhancing human intelligence using machine intelligence. We are not trying to supplant human intelligence altogether by creating a sentient machine.

Our AI mission is to recreate the process of finding that seminal inventive step (the Eureka moment) in scientific discovery: looking at how a particular scientist made a discovery based on information that wasn’t known before, and then creating a replicable process for it.


So we are trying to teach our AI to track inventive steps: what the information environment was for that inventive step; what information and data allowed the scientist to put two and two together; the hypotheses that derive from that; and how those hypotheses were broken down, tested and a new discovery made.
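One way to picture this is as a provenance record for each inventive step. The sketch below is my own illustration of the elements just listed; the field names are hypothetical, not BenevolentAI’s actual schema.

    # Hypothetical record of an inventive step, mirroring the elements
    # listed above. Field names are illustrative, not a real schema.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InventiveStep:
        information_environment: List[str]  # what was known at the time
        connected_evidence: List[str]       # data that let the scientist put two and two together
        hypotheses: List[str]               # conjectures derived from that evidence
        tests: List[str]                    # how the hypotheses were broken down and tested
        discovery: str                      # the resulting new finding

    step = InventiveStep(
        information_environment=["compound X binds receptor Y",
                                 "receptor Y is overactive in disease Z"],
        connected_evidence=["binding assays", "expression profiles"],
        hypotheses=["compound X may slow progression of disease Z"],
        tests=["in vitro assay", "animal model"],
        discovery="X is a candidate therapeutic for Z",
    )
    print(step.hypotheses[0])

Capturing each step in a structured form like this is what would make the Eureka moment replicable rather than anecdotal.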

Understanding this process, creating technology for it, and then applying it in tandem with humans means it is possible to advance the scientific method and bring it into the 21st century. By doing so we can transition from a chaotic, highly variable process of scientific discovery to one with greater predictability and greater scale, allowing humanity to see things we may not otherwise have been looking at.

Let’s finish where we started, with the IEEE report, and where I hope the AI industry is going next. Konstantinos Karachalios, IEEE managing director, commented that by building ethical AI…

“…we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future.”

Exactly!

Ken Mulvany is a serial entrepreneur, experienced business leader, and director at BenevolentAI.

This article was originally posted on the BenevolentAI blog and is reproduced with permission from the author.


[1] ‘Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems (AI/AS)’ http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf

[2] Around 100 thought leaders in AI, law, ethics, philosophy and policy contributed to the report.
