Ethical AI: “How do we go forward without screwing things up?” [Podcast]

Katie Sadler
07/03/2018

“Instead of ethics being a constraint, I want people to see it as an opportunity for innovation,” believes Aimee van Wynsberghe

“I think what we're seeing around the world is a rush to invest in AI because we know that it's powerful. We know that it's going to be a game changer, but we just don't know exactly how yet. It's been compared with the PC, it's been compared with the smartphone; it's going to be monumental,” said Aimee van Wynsberghe, president and co-founder of The Foundation for Responsible Robotics and AIIA Network advisory board member.

“We are trying to bring academics together with NGOs, industry, and the people who are making and using the robots, to talk about the real-world issues,” she explains to podcast host Seth Adler.

“Part of my role as an ethicist is dedicated to shifting the rhetoric. So, instead of ethics being a hassle or being a constraint, i.e., ‘Ah, why do we have to think about ethics?’, I want people to see it as an opportunity for innovation.”

“This is where ethics plays an important and crucial role”

However, van Wynsberghe believes it is important to consider how we should be developing AI: “We should have standards for the training data. To be fair to the industry, we're still trying to figure out what we're going to do with AI, how we're going to develop it, but this is where ethics plays an important and crucial role: looking at how to reduce things like bias, and how to protect the dignity of individuals so that they don't become part of the bias. To do that, you need rigorous standards for the training data.”
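To make the idea of a training-data standard a little more concrete, here is a minimal sketch in Python of one check such a standard might require: a representation audit run before any model is trained. The records, group names, and labels are entirely hypothetical and are not drawn from the podcast; a real audit would cover far more than per-group counts and label rates.

```python
from collections import Counter

# Hypothetical training records: (sensitive_group, label).
# In practice these would come from the actual training set.
records = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

# Count how often each group appears and how often it carries
# the positive label -- a first, crude representation check.
totals = Counter(group for group, _ in records)
positives = Counter(group for group, label in records if label == 1)

for group in sorted(totals):
    rate = positives[group] / totals[group]
    print(f"{group}: n={totals[group]}, positive rate={rate:.2f}")

# A wide gap in positive rates between groups is a signal to
# investigate the data before training, not proof of bias on its own.
```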

See also: The bottom line of ethics and governance in AI

“The fascinating thing about AI is that we don't actually know how it comes to an answer,” said van Wynsberghe. “This is powerful, because if we did know, then we would probably be able to do it ourselves. The AI is doing something much more complicated than we are capable of doing right now. However, this means that the AI becomes a kind of black box. We input and we get outputs, but we don't know what happens in the middle.”

“We input and we get outputs, but we don't know what happens in the middle”

“This is a problem, because it also means that there's no explanation. You can't say, ‘Alright, well, we have this output, but how exactly did we arrive at it?’ We would never accept this from people or a company, but for some reason we are willing to accept it from the technology, from the AI,” she added.

Join Adler as van Wynsberghe explains why it is essential that the public is educated about AI's future uses and what this might yield.

[Listen]

Key takeaway from this week’s AIIA Network Podcast

Guiding best practice

“We're in this situation where we don't have adequate testing or standards. By standards I'm not talking about regulations; I'm talking about guidelines for best practice. It's not about how we slow things down, but about how we go forward without screwing things up at an enormous level.

“It might not be the best idea to try to enforce opening up this black box. What might be more productive is understanding more about the inputs and the outputs and the relationship between them.”
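One established technique in this input–output spirit is permutation importance, which probes a trained model without opening it up: shuffle one input feature at a time and watch how the output quality changes. The Python sketch below uses scikit-learn on synthetic data; the model and dataset are illustrative assumptions, not anything discussed in the episode.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a deployed "black box" model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance treats the model purely as input -> output:
# permute one feature at a time and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Methods like this don't explain what happens inside the model, but they do make the relationship between inputs and outputs legible enough to question.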

Aimee van Wynsberghe was a key speaker at AI LIVE 2018. Watch the presentation:


