A Conversation With Our 2018 Keynote Speaker: Michael Rogers, MSNBC’s “The Practical Futurist” (Part 1)
What are some of the major, world-changing challenges and opportunities associated with intelligent automation, AI and other cognitive technologies? And considering many of these innovations are still in their infancy, why do we need to start talking about them now?
I think what's interesting is that AI and automation are already deeply entrenched in our lives. Whether we're aware of it or not, we use them every day. Waze, the traffic navigation app, relies on AI to find the fastest possible route. Another example is mobile banking check deposits; there's a powerful piece of AI behind that. I received a paper check yesterday that I could barely read. Yet using my smartphone, I uploaded it and the app read it perfectly. That really is an amazing example of AI at work.
AI is also changing credit card fraud detection. False declines used to be a huge, costly problem, but they have recently dropped sharply, again because of artificial intelligence.
And then there's routing: UPS, FedEx. One of the reasons shipping remains a reasonably priced service, despite rising operating costs, is the enormous efficiency AI has introduced into the supply chain.
So AI is already around us, and it's simply going to extend into more and more areas of business. What's unusual about this technology is that, despite being relatively new, it's already widely available even to smaller businesses because it's cloud-based. The barrier to entry is much lower than it was for sophisticated enterprise technologies in the past. This is why virtually every business has to start thinking about cognitive technology: AI is going to be the next spreadsheet, in a sense. It's going to be that prevalent.
Elon Musk’s opinions on AI (that it poses an existential risk, needs to be regulated, etc.) have generated a significant amount of press lately. Do you agree with him? Why or why not?
I think Elon Musk's concerns are very grand, but they give AI too much credit at this point. We have a lot to worry about in how AI gets implemented in society, but the notion of a general evil intelligence is far in the future, if it's possible at all. We truly do not understand the nature of either general intelligence or human consciousness.
But digital technologies have led people to believe that everything improves exponentially, as Moore's Law suggested long ago. Technologists like to extrapolate that pace of evolution and say, "Well then, AI will reach effectively human intelligence in the not-too-distant future." But I think that understates how complex general intelligence and consciousness are. So it's a little early to focus on the evil-AI-consciousness problem. We have many other issues to deal with in AI before we get there.
We recently conducted a survey of over 100 IA leaders and found that two of the top reasons IA initiatives fail are change resistance and lack of leadership buy-in. What advice do you have for the forward-thinking executive who may be struggling with these issues? What do companies risk by falling behind on IA adoption?
This is a traditional pattern with new technology, I think, and it's a kind of resistance that often turns out badly. I do a lot of work now in cybersecurity, an area where technologists have been telling their boards and senior management for well over a decade that companies need to spend more money. But cybersecurity is a hard sell; it's an abstract idea. Money tends to go toward things that senior management better understands and has more experience with. And of course, as we've seen in the case of cybersecurity, this has turned out poorly for a lot of big companies.
I think AI is in the same position. The first thing to do when you're pitching to senior leadership or the board is to include an educational element. Make sure decision makers understand the technology, and then use stories to illustrate exactly how powerful it has been at other companies. AI is reality, not hype. As for what companies risk by falling behind, well, they're potentially risking everything. As I said, I think AI, automation and cognitive computing are going to be so fundamental to business that for a company to say, "Well, we really don't need to do this. We don't see how relevant it is," is rather like a student saying, "Well, I don't really understand this reading thing. I'm not positive that I want to learn how to read." It's that fundamental. I'm not trying to use scare tactics here, but I do think this is a watershed moment in society and business, and certainly not a time to hold back.
*This Q&A was originally promoted for the 2018 Intelligent Automation Week.