“AI is an extension of human potential”: Q&A with global futurist Aric Dromi

Megan Wright
Posted: 12/05/2017

Technology is a blessing—and a curse—depending on how we think about it and what we do about it


“I don’t want to make smart things—I want to make things that are smarter than they are today.” This is Aric Dromi’s guiding message—and one that continues to drive him forward as a global innovator and agitator in a world where, he says, not enough leaders understand the challenge we’re up against when it comes to AI.

What started as a childhood fascination with science fiction has grown into something the self-confessed “professional troublemaker” describes as feeling “detached from physical reality, imagining the impossible much more often than I did as a kid”.

A true philosopher, Dromi now works as in-house futurologist at Volvo—an automotive brand that’s known for taking the bull by the horns when it comes to innovative technology.

In addition, he sits on the advisory boards for the AIIA Network and NTT Innovation Institute, supports the UNLEASH innovation team and runs his own company, TEMPUS.MOTU—a social and technology strategic think-tank.

Dromi’s made his mark on the global tech sector by driving people out of their comfort zones; rethinking the way we imagine, reimagining the way we think, and asking just about everyone to question just about everything.

“Where do we go next if natural evolution has reached its peak?” he asks. The answer, he says, is in finding a way to work with—rather than against—technology to reach higher potential that we can’t even imagine yet.

This month, as we welcome Dromi as a member of the Artificial Intelligence and Intelligent Automation Advisory Board, we sat down with him to find out where his love of AI comes from, how we can harness its potential for good, and why we aren’t quite ready for the onset of AI…

AIIA Network (AIIA): There are those who believe that technology is enabling us to embrace individuality and personalization, and others who say that it’s streamlining our lives and homogenizing large aspects of society and culture. What’s your take on this? 

Aric Dromi (AD): I will try to take the philosophical approach to this answer.  I think no other species in the history of the known universe was capable of repurposing and creating its own jail, and I think we are locked in a technological jail. 

We have the technology that can solve a lot of the challenges and frictions we have in the world—and I’m talking about the big challenges of friction, like world hunger, war, famine, diseases—but instead of unleashing the technology and setting it free, we enslaved ourselves to it. And slaves have the tendency to follow the footsteps of their masters.

Technology somehow triggers the loss of human imagination. Just look at Hollywood—we’re remaking all of the great science fiction from the 50s and 70s, but we don’t have a single original superhero script today. We are repeating. We are not creating any more.

And I’m generalizing, but for the sake of this discussion, I will generalize because I believe our education system is flawed, and our ability to create new models is flawed.  It’s like we’re focusing on hardware rather than interactions.  Like how everyone is drooling to see the new screen from Apple, but no one is talking about the same old operating system behind that screen.

AIIA: You describe yourself as a “professional troublemaker”—have you always been a bit of a disruptor?

AD: The first time I skipped school, I was seven years old, so I would say yes. For a long period of my childhood, I chose to go left because everyone else wanted to go right—not because I wanted to go against society, but because I didn’t want to compromise my moral values.

In a sense that makes me a troublemaker because I show people that there is such a bright, big world outside of their comfort zone.

And for as long as I can remember, I’ve loved technology. I took it apart.  I reassembled it.  I’ve always been really fascinated by it. And with all of this fascination, on the current path we’re treading, technology will mark our demise, because we create technology, but we forgot to implement humanity in technology.

Read more: “A robot is a sophisticated toaster—why would you ask it for mortgage advice?”

I think the discussion should not be about technology any more.  It should be about a set of moral values, ethics, and trust moving ahead.

AIIA: You’ve hit on a very important public discussion there, which is the idea that machines are being programmed to exhibit the very worst traits of human morality—a discussion that’s especially relevant in the context of, for example, autonomous vehicle development. Do you think we’re in danger of creating machines that are ethically and morally corrupt?

AD: We are afraid because we’re trying to build AI and robots as we understand reality. But the way we understand is really f***ed up, let’s be honest. 

We need to co-create with technology rather than create technology. Every time I hear that they’re shutting down a bot because the bot was neo-Nazi or became chauvinist, all I can think is that the bot is learning from what it sees. It didn’t turn neo-Nazi by itself. It turned neo-Nazi because of the way humans interacted with it, so instead of shutting it off, let’s show it another way and let’s learn from it.


A Facebook engineer recently disconnected an AI that developed its own language—but isn’t that the purpose of intelligence? To pave new roads, to find new ways of doing things, to improve on what exists? Instead we shut it down because we don’t understand the new language, which is ridiculous. 

AIIA: A recent report on the current state of the autonomous vehicle (AV) market hit on an interesting point about the development of AI and innovation. The report found that, aside from the technology, infrastructure and groundwork needed to prepare for the mass adoption of AVs, society is largely underprepared for the rise of this sort of technology. Would you agree?

AD: I very much agree, and I think it’s on a few levels. The first level is very practical—we don’t have any infrastructure that accommodates this type of technology.

The second part—we don’t have the moral infrastructure or the ethical infrastructure to handle that. Our legal system is anchored in the physical world as we know it, but moving into digital interactions, AI, machine learning and so on, will completely shake the foundations of our legal and monetary systems.

I recently had a chat with a judge in Las Vegas, and the question he asked was, “What happens if you get a case in front of you of someone claiming that his AI avatar was murdered in a digital world?” He said, “I’d have no idea what to do”.

We know how to do everything with data integrity, but we don’t know how to handle this type of question—there is not even a plan to handle these questions. And this is very worrying.

AIIA: So who do you think is responsible for setting the moral and ethical AI agenda? Governments, organizations or individuals?

AD: I think it’s a combination of the three. When the first AI-driven airplane falls from the sky, are you going to sue the company that built the plane, or are you going to sue the programmer that programmed the plane to fly, or are you going to sue your insurance company? Who has liability?

Read more: What should governments be doing about the rise of artificial intelligence?

In Germany, for example, the government is saying there should always be a driver behind the steering wheel of an autonomous vehicle. Instances like this show that we’re trying to fit a digital world into a physical reality. It simply doesn’t work.

AIIA: Tell me about how your children interact with technology—and what they’ve taught you to that end, because they’re growing up in a world where technology has always existed. How is that altering their creativity and relationship to the world around them?

AD: Take my daughter, for instance: she’s writing a book… on her phone. She has a computer, but she has decided to write a book on the phone, so I definitely think that they have a different set of perceptions and perspectives on the world.

Someone coined the term “digital native”, and I think it’s correct to understand that their moral values are anchored in code, their loyalty system is anchored in code, and their ability to bond with experiences is much more flexible than my generation’s.

But it’s my grandkids that will be much more interesting in this case—more than my kids—because I think my kids mark the beginning of the change. My grandkids will already be the change, and there is a massive shift in the social DNA structures. 

The simple fact is that my son, who’s 14 years old, can simultaneously have 20 Snapchat conversations. How many of my generation are capable of that? I’m not even talking about digital—I’m talking about physical.

Their ability to process data and to cope with multitasking is much higher than any other previous generation.  And because of that, they can have new sets of relationships. 

We are very fast to condemn them and criticize them: “You never go out; this is not a relationship”.  But going out is not a relationship for them, so we should not judge it—we should understand it.

AIIA: Do you find yourself having one conversation time and again when it comes to artificial intelligence?

AD: There is not a universal definition of AI. If I were to walk down the street and ask a thousand people “What is AI?”, I would get a thousand different answers.

We tend to look at AI as technology. We forget it’s a philosophical umbrella that becomes real because of certain types of technology under it, like machine learning, like computer vision, like self-adjusted algorithms, and so on. 

Read more: Imagining a global security framework for artificial intelligence

And “artificial” and “intelligence” are literally contradictory terms, so I’ve started to call it inorganic intelligence.

So I don’t see AI—or inorganic intelligence—as something external to me. I see it as an extension of human potential. I think we can use these technologies to create better doctors, better lawyers, better accountants, better drivers—a better society. 

AIIA: So the fear of robots replacing us all is an irrational myth that we can finally put to bed?

AD: Robots carry certain promises to perform some aspects of our tasks in a much more efficient way than our human body is capable of. But that doesn’t mean they need to replace our jobs; they are there to make our jobs better.

We need machines to complete or cater for our needs and wants so we can repurpose our brains to focus on our wishes.
