The Nature of Artificial Intelligence: A Three-Part Interview

How AI gives us insight into the design of the human brain

Seth Adler
11/15/2018

Rajeshwari Ganesan
Associate Vice President
Infosys

Rajeshwari Ganesan is the Associate Vice President and Head of Engineering of Nia Knowledge, part of Infosys Nia, Infosys’ AI platform. She shares advancements in machine learning across language processing, vision-based computing, and bio-inspired architectures with compensatory networks and parallel neural pathways that mimic the human brain and mind.

It’s Raji’s history with technology that has enhanced her ability to guide us all toward an AI-informed future.

She says, “I’ve seen technology evolve from mainframes to client-server systems prior to 1999. Post that it was web-based systems - then it became distributed computing and now we have AI and machine learning, both of which are dominating every aspect of computing.”

Each of these step-change technologies in her past has essentially educated her about the vastness of capabilities and possibilities.

“AI and ML are going to be ubiquitous,” she adds. “With machine learning, computers can program themselves instead of having to be programmed by us. ML methods look at the data we generate and figure out what we do in order to set their goals.”

Therefore, setting purposeful goals is the key. 

Computing in the mainstream has been leveraged primarily for analytical and logic-based operations 

Raji, as she is fondly called, explains how AI actually works.

“Today, if you look at computing systems, they are extremely good at logical operations and repetitive tasks at high speed, which works well and augments what humans aren’t naturally good at. But imagine a scenario where machine intelligence could mimic biological human intelligence. It could be language learning, perception or the process of learning, or even the fuzziness we demonstrate, which lets us survive and evolve in adverse and new environments. Even a simple objective like making machines understand the context and semantics of natural language, or generate it, is a non-trivial task – machine intelligence still cannot fathom it. So instead of being disappointed, the key question is how we can still leverage machine learning as it continues to evolve.”

Every word we speak becomes a multi-dimensional vector

How do machines understand language? Diving back into how it works, Raji explains, “The way the human brain processes language, the way our neocortex understands context and semantics, is entirely different from the way machines understand it. So, under the bonnet of the machine, what we see is a lot of numbers for each word.

“A famous scientist once said that, for machines, a word is worth a thousand vectors. Every word we speak becomes a number, a multi-dimensional vector – a word embedding. This form of representation makes language computable. We can use it to find the similarity between words or to understand context. The technology is applied in a contract-analysis system, where it enables machines to understand contentious clauses in legal documents and redline or highlight them. In this way, the machine helps humans get contracts done quickly.”
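
To make the word-embedding idea concrete, here is a minimal sketch that trains embeddings on a toy corpus and compares words by similarity. It assumes Python and the gensim library, which the interview does not name, and the “contract” sentences are purely illustrative.

```python
# Minimal word-embedding sketch, assuming gensim (not named in the interview).
from gensim.models import Word2Vec

# Toy, hypothetical corpus of tokenized contract-style sentences.
sentences = [
    ["the", "party", "shall", "indemnify", "the", "supplier"],
    ["the", "supplier", "shall", "deliver", "the", "goods"],
    ["the", "party", "may", "terminate", "the", "agreement"],
    ["the", "agreement", "shall", "remain", "in", "force"],
]

# Each word becomes a dense multi-dimensional vector (here, 50 dimensions).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=200)

print(model.wv["agreement"][:5])                 # first components of one embedding
print(model.wv.similarity("party", "supplier"))  # cosine similarity of two words
```

A production system would train on, or start from, a far larger corpus, but the principle is the same: similarity between vectors stands in for similarity between words.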


The human mind has compensatory networks

Saving time, cutting costs, and improving efficiency are some of the benefits. With these abilities, AI can take the leap.

“For instance, consider a person with a visual disability. Because the human mind has compensatory cognitive abilities, this person’s hearing becomes highly developed, compensating for the vision he or she doesn’t have. This capacity for compensatory perception is the way to counter the deficiency.”

Today, if we really look at machine learning’s language processing, it’s not yet where it ought to be. There are deficiencies, but the question is whether these deficiencies can be offset.

Raji and her team are taking inspiration from the human brain to create their architecture.

“Today when we read a document, our vision precedes our understanding of the text,” she says. “So, we see it first, then read and comprehend. In the case of a machine too, adding visual perception prior to language can help it better understand the context of what is being read. With time, there will be more and more networks providing parallel neural pathways with different abilities, like those humans currently have – the sum total of which can achieve a certain objective.”
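
As a rough illustration of the parallel-pathways idea, here is a minimal sketch in which a visual encoder and a language encoder run side by side and their outputs are fused for a downstream decision. This is not Infosys’ actual architecture; the PyTorch choice, layer sizes, and class name are all assumptions.

```python
# Parallel-pathways sketch, assuming PyTorch; all shapes and names are illustrative.
import torch
import torch.nn as nn

class ParallelPathways(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64):
        super().__init__()
        # Visual pathway: a tiny convolutional encoder over page images.
        self.vision = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, embed_dim),
        )
        # Language pathway: token embeddings mean-pooled over the sequence.
        self.text = nn.EmbeddingBag(vocab_size, embed_dim)
        # Fusion: concatenate both pathways and classify.
        self.head = nn.Linear(2 * embed_dim, 2)

    def forward(self, image, tokens):
        v = self.vision(image)   # (batch, embed_dim) from the visual pathway
        t = self.text(tokens)    # (batch, embed_dim) from the language pathway
        return self.head(torch.cat([v, t], dim=-1))

model = ParallelPathways()
image = torch.randn(4, 1, 32, 32)         # batch of grayscale page crops
tokens = torch.randint(0, 1000, (4, 12))  # batch of token-id sequences
print(model(image, tokens).shape)         # torch.Size([4, 2])
```

The design point is the fusion step: neither pathway decides alone; the “sum total” of both feeds the objective, echoing the compensatory-networks idea.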

The functioning of the brain is so complex that even the most talented neuroscientists don’t fully understand it

“I’ll give you another example,” Raji says, “which I encountered a few days back. I had a conversation with my daughter and I just blew up. I stepped back and asked myself, ‘Can I explain why I responded in a certain way?’ Simultaneously, I was trying to make my machine learning models explain themselves. When they make a certain prediction, it cannot be a black box; the machines need to explain themselves, and I was trying to express that as an architecture. And I realized it’s a lot easier to write algorithms that make machines explain themselves than to reason about why I was behaving in a certain way.”
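
In the same spirit, here is a minimal sketch of a model “explaining itself” through permutation importance: shuffle one feature at a time and see how much prediction quality drops. It assumes scikit-learn and synthetic data; none of it comes from the interview or from Infosys Nia.

```python
# Explainability sketch via permutation importance, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic, illustrative data: 500 samples, 5 features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling an important feature hurts accuracy; the per-feature drop is
# one simple way a black-box model can account for its predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```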

