The nature of artificial intelligence with Rajeshwari Ganesan: Part I

Seth Adler
10/11/2018

Rajeshwari Ganesan is an Associate Vice President in charge of engineering for Nia, EdgeVerve’s AI platform. The industry expert shares her views on the oncoming right-brain ability of machines, language processing, compensatory networks, parallel neural pathways and the relationship between programming machine intelligence and understanding the human brain and mind.

It’s Raji’s history with technology that has informed her ability to take us all to an AI-informed future. “I've seen technology evolve from mainframes to web systems in 1999. Prior to that it was client-server systems — then it became distributed computing, and now we have AI and machine learning, which is dominating quite a bit.”

Each of these step-change technologies has shown her the vastness of what is possible.

“AI is going to be there everywhere. It's going to sit quietly inside our applications, constantly watch what we're doing and make our lives easier — so it's a purposeful AI platform.”

 

MACHINES HAVE ALWAYS BEEN LEFT BRAIN

And so, Raji explains to us how AI actually works: “Today, if you look at machines, they were extremely good at doing logical operations, which are left brain activities, right? Fast computations. But imagine machines being able to understand the language we speak. A lot of times, if you talk to machines, they don't understand the language; they don't have the context of what you're speaking about. Something which is very artistic.”

Left brain functionality is how society has benefited from computing since its industrial revolution. But language processing, image recognition, intent, nuance and the like are new, “which is entirely a right brain activity, machines could never fathom it. Perception in terms of vision, in terms of language, machines could never do it. You teach machines with lots and lots of data and the machines program themselves. So I think it's a totally, fundamentally, very different concept.”

EVERY WORD YOU SPEAK BECOMES A MULTI-DIMENSIONAL NUMBER

Totally, fundamentally a very different concept from the programmer herself. Diving back into how it works, Raji explains, “the way your human brain processes language, the way your neocortex understands language in context, versus how the machines understand the language, is entirely different. So under the bonnet of the machine, what you see is a lot of numbers.”

“Every word you speak becomes a number, a multi-dimensional number — the cosine. So, if I use a word — your name [Seth], for example — it would not just be a four-letter first name. It could actually map to a multi-dimensional vector space which describes who you are and what you do.”
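To make that concrete, here is a minimal sketch of the idea: each word maps to a multi-dimensional vector, and the cosine of the angle between two vectors measures how related the words are. The words, the four-dimensional vectors and the values below are made up purely for illustration; real systems learn embeddings from large text corpora.

```python
import numpy as np

# Hypothetical 4-dimensional word vectors; real systems learn these from large corpora.
embeddings = {
    "contract": np.array([0.9, 0.1, 0.3, 0.0]),
    "clause":   np.array([0.8, 0.2, 0.4, 0.1]),
    "banana":   np.array([0.0, 0.9, 0.1, 0.7]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two word vectors: close to 1 means related meaning."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["contract"], embeddings["clause"]))  # high (~0.98)
print(cosine_similarity(embeddings["contract"], embeddings["banana"]))  # low (~0.11)
```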

All of which is very helpful in having machine learning understand contentious clauses in legal documents — currently highlighting them for a human reviewer so that contracting, or perhaps discovery, moves lightning-fast compared with today’s belabored human-based process.

THE HUMAN MIND HAS COMPENSATORY NETWORKS

Saving time, cutting cost, improving efficiency — all great stuff. But with these abilities, AI can take the next step.

“Take us, as humans,” says Raji. “Assume that you see a person with disabilities. The human mind has compensatory networks, right? His hearing becomes very high, which compensates for what he doesn't have.”

That compensatory network is in place to counter the deficiency.

“If you really look at the language processing of machine learning, it's not where it exactly ought to be. There are deficiencies, but the deficiencies can be made up with a compensatory network. What I'm saying is, today when I'm reading a legal document, just as in your human mind, your vision precedes your understanding of the semantics. I'm eyeballing the document. You see it first, and then what you see, you follow up with the text. Here too, in a machine, the visual perception precedes the semantic understanding.”

ML IS 'EYEBALLING THE DOCUMENT'

Artificial intelligence is accomplishing high-value goals with the potential to provide much higher value in return.

“It's been a challenge but today, if you really see the way humans work, and the way the human brain works, there's a lot of it that can inspire us — such as in the design of that organ — to assimilate or mimic in our architecture.”

NETWORKS WILL PROVIDE PARALLEL NEURAL PATHWAYS OF DIFFERENT ABILITIES

Raji and her team are taking inspiration from the human brain, itself something not yet well understood by even the most talented neuroscientists.

“It's a bio-inspired architecture. So vision precedes the text understanding. With time, there would be more and more networks which provide parallel neural pathways, per se, of different abilities which humans currently have — the sum total can achieve a certain objective.”
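One way to picture the “vision precedes text” pipeline she describes is as two stages: a fast visual pass that ranks regions of a page, followed by a closer semantic pass over only the most salient regions. The sketch below is a rough illustration under that assumption; the region types, risky terms and functions are placeholders, not Nia’s actual architecture.

```python
from dataclasses import dataclass

@dataclass
class Region:
    kind: str        # e.g. "heading", "clause", "table" (hypothetical region types)
    text: str
    salience: float  # how strongly the visual pass flags this region

def visual_pass(regions):
    """First pathway: rank regions by visual salience, like eyeballing a page."""
    return sorted(regions, key=lambda r: r.salience, reverse=True)

def semantic_pass(region):
    """Second pathway: a stand-in for an NLP model scoring contentious language."""
    risky_terms = ("indemnify", "unlimited liability", "terminate without notice")
    return any(term in region.text.lower() for term in risky_terms)

def review(regions, top_k=3):
    """Vision first, then semantics: only the most salient regions get a close read."""
    return [r for r in visual_pass(regions)[:top_k] if semantic_pass(r)]

flagged = review([
    Region("clause", "The supplier shall indemnify the buyer against all claims.", 0.9),
    Region("heading", "Schedule A", 0.2),
])
print([r.text for r in flagged])  # only the salient, contentious clause is surfaced
```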

Which is all to say that our understanding of the human brain sits right about where AI’s understanding currently is, and it points to how AI should improve.

“I'll give you another example that I encountered a few days back. I had a conversation with my daughter and I just blew up. I stepped back and said, ‘Can I explain myself as to what biases I had in order to respond in a certain way?’ At the same time, I was trying to make my machine learning models explain themselves.

"Sometimes, when they make a certain prediction, it cannot be a black box. The machines need to explain themselves and I was trying to put this through as architecture. I realized then that it's much easier for me to write those algorithms in machines to explain themselves rather than trying to reason why I was behaving in a certain way!”

Did you catch that? Raji feels it’s easier to teach a machine to explain itself than it is to explain her own behavior.
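For a sense of what “machines explaining themselves” can look like in practice, here is a small, hedged sketch: with a linear model, each feature’s weight multiplied by its value gives a per-feature contribution to the prediction, which can be read back as an explanation. The feature names and weights are invented for illustration and this is not the explainability architecture Raji describes building.

```python
import numpy as np

# Hypothetical features and learned weights for a "contentious clause" score.
feature_names = ["clause_length", "mentions_liability", "has_termination_term"]
weights = np.array([0.02, 1.5, 0.8])
bias = -1.0

def predict_with_explanation(x):
    """Return a score plus each feature's contribution, largest first."""
    contributions = weights * x
    score = float(contributions.sum() + bias)
    explanation = sorted(zip(feature_names, contributions),
                         key=lambda pair: abs(pair[1]), reverse=True)
    return score, explanation

score, explanation = predict_with_explanation(np.array([120, 1, 0]))
print(f"score: {score:.2f}")
for name, contribution in explanation:
    print(f"{name}: {contribution:+.2f}")
```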


