The Value of AI

Flawed Reality: A monthly column by Tyrone Grandison

The definition of artificial intelligence (AI)

Everyone uses the term artificial intelligence (AI) generically, mixing it up with machine learning, data science, and predictive analytics. The definitions have shifted over time, yet everyone seems to assume they inherently know what they're talking about when they say AI.

From my perspective, artificial intelligence is essentially the ability of machines to perform targeted functions that you would normally associate with a human being. These are things that humans do naturally, like reasoning, learning, and interacting with the environment. Anything that falls in that realm right now is AI. If you had asked me 20 years ago, AI would have been anything human-like that had to do with software. Now, AI solutions run the gamut: everything from computer vision to chatbots to autonomous vehicles to robots. All of it falls under the AI umbrella right now because these systems are either interacting with the environment, reasoning, or learning.

The value of artificial intelligence (AI)

The value of AI is essentially the potential of AI. In other words, what is the hope? What do we want or intend for AI to do, or how do we want it to benefit society generally? The well-known cases where AI is applied focus on business scenarios in order to improve operational efficiency. The New York City Fire Department figured out how to use machine learning techniques to predict where fires were going to break out, using FireCast. That is a great use of AI.

The city of Pittsburgh did something similar, implementing a machine learning algorithm that allowed it to optimize and automate traffic flow. The United States Citizenship and Immigration Services (USCIS) uses chatbots to reduce the workload on its existing staff because it faces resource constraints. The chatbots are able to answer the majority of incoming questions and requests. This is the value of AI: the promise of making things easier, safer, more convenient, and more efficient.

The promise of artificial intelligence (AI)

The value of AI is the promise of AI improving your everyday work life and, hopefully, your everyday social life. AI walks the tightrope of both enabling and taking away work-life balance. The current driving forces for applying artificial intelligence solutions, i.e., the current AI value propositions, tend to focus solely on the optimization of business functions. How do I either maximize revenue or reduce costs? Or manipulate some factor or indicator that influences one of those things? In the case of Pittsburgh, the question was how to optimize traffic flow. That issue had budgetary implications with regard to the city's inventory and backend systems. It also made people's commutes easier. Thus, the primary driving force is an optimization function built on business objectives.

The emerging paradigm

The emerging paradigm is creating more work-life balance, which will, in effect, make for better lives. The current paradigm is focused on doing something for business that has the offshoot of making your life easier. I'm hoping that over time we get to a point where we have AI that is actually focused on social good and on work-life balance. That idea is slowly gaining uptake, and at the very end of it, you basically have to figure out what the financial models underpinning AI construction and deployment are. Currently, those financial models are very difficult to create and defend. They are also not often seen in the wild when you look at social enterprises and nonprofits.

Democratized artificial intelligence (AI)

As a final thought, how can we get to the point where we democratize the value of AI, where the value is not just for businesses but for everyone in society? Can we actually get to the point where we have AI frameworks that lead us to applications for social good? Applications that improve our lives as their first objective, not merely as a happy consequence?