A short history of artificial intelligence


As the field of artificial intelligence evolves, so too does the definition of this powerful breed of thought-wielding machines

Photo by Samuel Zeller on Unsplash

When we speak about artificial intelligence, it is critical to distinguish this field from the evolution of ‘computers’, which has its own history and timeline.

Moreover, the definition that John McCarthy presented in 1973 is critical to understanding the field. The phrase “distinct from the study of how human and animal brains work” is significant: we are not attempting to emulate the human mind, but rather to build a system that we, as human beings, would recognize as ‘intelligent’.

Here is a short, but by no means definitive, history of the evolution of this fascinating field: 

1673

The history of AI dates back to Gottfried W. Leibniz, one of the inventors of calculus. In 1673 Leibniz designed and built the world’s first four-function numerical calculator. This inspired his vision of a universal language, the “Characteristica Universalis,” and a machine that would automate reasoning, the “Calculus Ratiocinator,” much as a numerical calculator automates arithmetical operations.

1929

The famous logician Kurt Gödel enters the picture. Gödel spent many years studying the research papers left behind by Leibniz, and he became convinced that Leibniz’s vision of a universal language and a reasoning machine was a viable project.


Kurt Gödel was a friend of Albert Einstein at the Institute for Advanced Study in Princeton. Einstein once mentioned to a friend that his own work no longer meant much, and that he only travelled to the Institute so that he would have “the privilege of being able to walk home with Gödel”. This quotation reveals the intellectual calibre of this quiet, reclusive man.

1936

Alan Turing enters the scene. According to biographer Andrew Hodges, Turing began thinking about AI while he was working on his famous paper On Computable Numbers, with an Application to the Entscheidungsproblem, the paper in which he introduced the concept of a “Turing machine”.

The earliest evidence we have of Turing’s involvement in AI comes from the recollections of those who worked with him at Bletchley Park. For example, Donald Michie recalls:

"Arriving at Bletchley Park in 1942 I formed a friendship with Alan Turing, and in April 1943 with Jack Good. The three of us formed a sort of discussion club focused around Turing's astonishing "child machine" concept. His proposal was to use our knowledge of how the brain acquires its intelligence as a model for designing a teachable intelligent machine." [1]

1950

Turing publishes his seminal paper entitled Computing Machinery and Intelligence [2]. In this paper Turing states:

“Most of the programmes which we can put into the machine will result in its doing something that we cannot make sense of at all, or which we regard as completely random behaviour. Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation.”

Moreover:

“It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child.”

This last passage is significant, because it indicates that Turing was not thinking in terms of intelligent algorithms or intelligent software, but of intelligent machines that, at the very least, possessed “sensors” they could use to “observe” their environment.


1956

John McCarthy, together with Marvin Minsky, Claude Shannon and Nathaniel Rochester, organizes the now-famous Dartmouth Conference; it is in the proposal for this conference that McCarthy coins the term “Artificial Intelligence”. Here is the summary from the proposal:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

1973

During the Lighthill Debate [3], John McCarthy provides the following definition of the term “Artificial Intelligence”:

"Artificial Intelligence [AI] is a science; it is the study of problem solving and goal achieving processes in complex situations." 

McCarthy further states,

“It [AI] is a basic science, like mathematics or physics, and has problems distinct from applications and distinct from the study of how human and animal brains work.”

2017

Eric Horvitz, the current managing director of Microsoft Research’s main Redmond lab, provides a more modern definition of AI:

“… the scientific study of the computational principles behind thought and intelligent behaviour” [4]

Moreover, Horvitz identifies the four main pillars of AI:

  1. Perception
  2. Learning
  3. Natural language processing
  4. Reasoning

While it is very common to find articles and papers that mention the first three of these pillars, popular articles that mention reasoning are quite rare, despite the fact that reasoning is now considered the next big challenge.


Felix’s latest book, The 4th Industrial Revolution: Responding to the Impact of Artificial Intelligence on Business, is due out in late 2017. Pre-order your copy today.

References:

[1] http://www.aiai.ed.ac.uk/events/ccs2002/CCS-early-british-ai-dmichie.pdf

[2] A. M. Turing (1950), Computing Machinery and Intelligence, Mind, New Series, Vol. 59, No. 236, pp. 433-460

[3] The Lighthill Debate (1973) - part 3 of 6 https://youtu.be/RnZghm0rRlI?t=10m8s

[4] Great Debate - Artificial Intelligence: Who is in control? https://youtu.be/rZe-A2aDOgA

