Artificial intelligence glossary

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Here are the key terms that will help you navigate the AI world.

Click on the letter to jump to that section:

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z


A


Affective computing
Affective computing is a computer’s ability to detect and appropriately respond to human emotions and other stimuli.


Agents
These might be the droids you are looking for. Also known as bots or intelligent agents, they are autonomous software programs that “sense” their environment through sensors and “act” on it through actuators, according to their target function. They are often used to mimic human behaviour as assistants in a variety of roles. According to Russell & Norvig, there are five classes of agents (a sketch of the first follows the list):

  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
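
To make the idea concrete, here is a minimal Python sketch (all names hypothetical) of a simple reflex agent: a thermostat whose percepts map directly to actions through condition-action rules, with no internal model of the world.

    # A hypothetical thermostat as a simple reflex agent: each percept
    # (a temperature reading) maps straight to an action via rules.
    def thermostat_agent(temperature_celsius: float) -> str:
        if temperature_celsius < 18:
            return "heat_on"
        if temperature_celsius > 24:
            return "cool_on"
        return "idle"

    # Percepts arriving from the agent's (simulated) temperature sensor
    for reading in [15.0, 21.0, 27.0]:
        print(reading, "->", thermostat_agent(reading))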

Algorithm
Based on Algoritmi, the latinised name of the 9th-century Persian polymath Muhammad ibn Musa al-Khwarizmi, an algorithm is a step-by-step procedure for calculations. Formalised by Alan Turing in his 1936 paper on computable numbers, algorithms are the instructions from which computers take their orders. In the AI realm, algorithms can be designed to facilitate machine learning and acquire knowledge by themselves, rather than relying on strict pre-programming.
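
By way of illustration, one of the oldest known algorithms is Euclid's procedure for finding the greatest common divisor, shown here as a short Python sketch:

    # Euclid's algorithm: a step-by-step procedure for calculation.
    def gcd(a: int, b: int) -> int:
        while b:                 # repeat until the remainder is zero
            a, b = b, a % b      # replace (a, b) with (b, a mod b)
        return a

    print(gcd(48, 18))  # -> 6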

See more: How algorithms and human journalists will need to work together


Artificial intelligence (AI)
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.

See more: A short history of artificial intelligence


Analogical reasoning
Analogical reasoning is the process of drawing conclusions about a new situation on the basis of its similarity to cases whose outcomes are already known. It is applied to forecast the results of a process or experiment from comparable past findings.


Artificial neural networks (ANN)
Artificial neural networks are computing systems loosely modelled on the neural networks of the brain. A cornerstone of deep-learning technologies, they are used to solve complex signal-processing and pattern-recognition problems.
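
As a structural illustration only, here is a minimal, untrained network in Python with NumPy: one hidden layer, random weights and sigmoid activations. Training would adjust W1 and W2, typically by backpropagation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = rng.normal(size=3)         # input vector of 3 features
    W1 = rng.normal(size=(4, 3))   # input -> hidden weights
    W2 = rng.normal(size=(1, 4))   # hidden -> output weights

    hidden = sigmoid(W1 @ x)       # hidden-layer activations
    output = sigmoid(W2 @ hidden)  # network output in (0, 1)
    print(output)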

See more: Was that a question? Neural networks need to know


Augmented intelligence
Augmented Intelligence is an alternative conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing the fact that it is designed to enhance human intelligence rather than replace it.

See more: "AI is an extension of human potential": Q&A with global futurologist Aric Dromi


B


Black box
A black box describes the methodology used for testing software in which the inner workings of the program being tested are unknown or “opaque”. Generally, the only parts of the black box process that can be identified are its so-called transfer characteristics, that is to say, its input stimuli and output reaction(s). In AI, a black box can also refer to the section of the program environment that is constantly in flux and is, therefore, usually too opaque to be tested by programmers.


Bayesian optimisation
Put simply, Bayesian optimisation is an algorithm that is able to learn and predict the path to take for the best chance of improvement in any given scenario. Named after the 18th-century statistician, Reverend Thomas Bayes, and pioneered by applied mathematician Harold J. Kushner in 1964, this is a model that can be trained on few data points for maximum effectiveness. Bayesian statistics yield models that provide not just a prediction but also an estimate of their uncertainty: in essence, a model that knows what it does not know.
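
A minimal sketch, assuming the third-party scikit-optimize library is installed: gp_minimize fits a Gaussian process (a model with built-in uncertainty) to the points evaluated so far and uses it to pick the most promising point to try next.

    from skopt import gp_minimize

    def objective(params):
        x = params[0]
        return (x - 2.0) ** 2   # hidden from the optimiser; minimum at x = 2

    # 20 evaluations of the objective, guided by the Gaussian process
    result = gp_minimize(objective, [(-10.0, 10.0)], n_calls=20, random_state=0)
    print(result.x, result.fun)  # best input found and its value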


C


Chatbots (bots)
Chatbots are programs that interact directly with customers via natural language processing (NLP). Chatbots can be text-based or voice-based and use machine-learning algorithms to improve the accuracy of their natural language and voice recognition capabilities.

See more: Intelligent chatbots: How AI Is revolutionising contact centers


Cognitive computing (CC)
Cognitive computing involves self-learning systems that use data mining, pattern recognition, natural language processing (NLP) and speech recognition to mimic the way the human brain works.

See more: The case for computers, creativity and natural language generation


Cluster analysis
Cluster analysis, or clustering, aims to divide a set of objects into groups (clusters) on the basis of a set of measured variables, so that objects in the same cluster are more similar to one another than to objects in other clusters. It is a main task of exploratory data mining, used in many fields, including machine learning.
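
A minimal sketch, assuming scikit-learn is installed: k-means clustering recovers two groups from six unlabelled points.

    import numpy as np
    from sklearn.cluster import KMeans

    # Six points that visibly fall into two groups
    X = np.array([[1, 2], [1, 4], [1, 0],
                  [10, 2], [10, 4], [10, 0]])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)           # cluster assignment for each point
    print(kmeans.cluster_centers_)  # the two discovered cluster centres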


D


Data mining
The process by which patterns are discovered within large sets of data, with the goal of extracting useful information from them.

See more: The future of CX: Advanced analytics and artificial intelligence


Deep learning
Deep learning is a subset of machine learning that uses layered neural networks capable of imitating the workings of the human brain in processing data and creating patterns for use in decision making.


Data science
The analysis of large amounts of complex structured and unstructured data to identify patterns for the purpose of optimised decision making. An upshot of the big data revolution, data science is a multidisciplinary field incorporating elements of:

  • Analytics
  • Data mining
  • Machine learning
  • Programming
  • Statistics

Digital ethics
If ethics can be defined as “the guiding moral principles that govern the behaviour of a person/group in a particular activity”, digital ethics is how that moral ideology applies to conduct across digital mediums and the promulgation of novel digital technologies. Digital ethics consists of two main branches: Roboethics (the morality around how humans design, construct, use and treat robots and AI applications) and Machine ethics (the moral principles that govern the way that artificially intelligent programs and beings act).

See more: Is the principle of ethics in AI asleep at the wheel?


E


Ensemble learning
In machine learning, ensemble methods use multiple learning algorithms working in concert to obtain better predictive performance than any of those algorithms working in isolation. Ensemble learning consists of two strands (both sketched in code after the list):

  • Bagging (Bootstrap Aggregation) – Each model is built independently in a parallel ensemble with the aim to decrease variance. This methodology is suitable for high variance, low bias models.
  • Boosting – Arranged in a sequential ensemble, new models are added where previous ones are deficient. This methodology aims to decrease bias and is suitable for low variance, high bias models.
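
A minimal sketch of both strands, assuming scikit-learn is installed; the data set is synthetic and illustrative only.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, random_state=0)

    # Bagging: independent trees trained in parallel on bootstrap samples
    bagging = BaggingClassifier(DecisionTreeClassifier(),
                                n_estimators=50, random_state=0).fit(X, y)

    # Boosting: models added sequentially, each focusing on earlier errors
    boosting = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

    print(bagging.score(X, y), boosting.score(X, y))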

F


Forward chaining
A process in which a machine works forward from a given starting point, using a sequence of if-then rules to reach the required goal. The aim is to determine what conclusions can be drawn from a given set of facts.
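
A minimal sketch in plain Python: if-then rules are applied to the known facts until no new fact can be derived.

    facts = {"socrates_is_a_man"}
    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),  # if man, then mortal
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # assert the newly inferred fact
                changed = True

    print(facts)  # now includes "socrates_is_mortal"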


Fuzzy logic
Studied since the 1920s under the term “infinite-valued logic”, fuzzy logic was coined by Azeri-born mathematician Lotfi Aliasker Zadeh in 1965. The term “fuzzy” refers to the fact that, unlike classical logic – which permits only the values of true or false – fuzzy logic also admits partial truths. It uses degrees of truth as a mathematical model of vagueness and allows for all the intermediate possibilities between the digital values of YES and NO, much as a human will assess a situation in a full-colour, multi-polar fashion rather than a bi-polar, monochrome way.
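
A minimal sketch of the idea in Python: truth values are degrees between 0 (NO) and 1 (YES), with the standard fuzzy counterparts of AND, OR and NOT.

    def fuzzy_and(a, b): return min(a, b)
    def fuzzy_or(a, b):  return max(a, b)
    def fuzzy_not(a):    return 1.0 - a

    warm = 0.7    # "the room is warm" is 70 per cent true
    bright = 0.4  # "the room is bright" is 40 per cent true

    print(fuzzy_and(warm, bright))  # 0.4 -> warm AND bright
    print(fuzzy_or(warm, bright))   # 0.7 -> warm OR bright
    print(fuzzy_not(warm))          # 0.3 -> NOT warm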


Frame language
First used by American cognitive scientist, Marvin Minsky, to understand visual reasoning and natural language processing, this is the term for the underlying representation of knowledge in artificial intelligence. A frame is an ontology of sets that orders the variables and relationships between them to limit complexity and to organise information for the ease of problem solving.


G


Glowworm swarm optimization (GSO)
This somewhat sinister-sounding algorithm was introduced by K.N. Krishnanand and Debasish Ghose in 2006 and is an example of an artificial intelligence system based on behaviour observed in the natural world. Agents of differing values are represented by brighter or weaker “glows”, much as fireflies control the intensity of their bioluminescence through the release of the chemical luciferin in their abdomens. Glowworms in the algorithm attract one another and then automatically subdivide into subgroups.


Google DeepMind
Founded in 2010 and acquired by Google in 2014 for $500m, DeepMind Technologies is attempting to “solve intelligence” by collating “the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms”. DeepMind has been learning how to play early computer games more efficiently than a human using only raw pixels as data input. In 2017, DeepMind’s AlphaGo program beat the world’s highest ranked player at the traditional Chinese game of Go.


H


Heuristics
From the ancient Greek εὑρίσκω, meaning “I find”, a heuristic is a function designed to solve a problem more quickly when classic methods are too slow or prove to be inexact. The solution provided by a heuristic process may not be the best or even the most accurate solution to a particular problem, but it fills the knowledge gap with an approximation that fits the needs of the situation in an acceptable timeframe. Heuristics are widely used in artificial intelligence for their ability to bridge the void where no exact algorithm is available or practical.


Human-in-the-loop (HITL) machine learning
According to some, machine learning can be greatly enhanced and better guided with the interaction of humans, as opposed to just relying on the algorithms developed for it. This is where HITL comes in. Many tech companies are now adopting an 80:20 rule when it comes to machine learning. The human 20 per cent provides a semi-supervised element which a program’s learning algorithm can put questions to, in order to recalibrate its goals and gather new data points.


I


IBM Watson
Watson is an intelligent computer system capable of answering questions posed in natural language. Developed in IBM’s DeepQA project by a research team led by principal investigator David Ferrucci, Watson is named after IBM’s first CEO, industrialist Thomas J. Watson. The computer was initially developed to answer questions on the quiz show Jeopardy! but has been used in commercial applications since 2013.

See more: IBM launches blockchain partnership for improved cross-border payments

Inference engine
This is a component of an AI system that applies logical rules to the knowledge base in order to infer new information. Inference engines can work in one of two ways:

  • Forward chaining – the engine starts with the gamut of known facts and asserts new facts based on these.
  • Backward chaining – the engine begins with a raft of end goals and works its way backward to decipher the facts that must be asserted to achieve those goals.
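
Forward chaining is sketched under its own entry above; here is a minimal backward-chaining sketch in plain Python, with hypothetical facts and rules: the engine starts from a goal and recurses over the rules to see whether the known facts support it.

    facts = {"has_feathers", "lays_eggs"}
    rules = {
        "is_bird": [{"has_feathers", "lays_eggs"}],  # goal: supporting premises
        "can_fly": [{"is_bird", "has_wings"}],
    }

    def prove(goal):
        if goal in facts:
            return True
        # try every rule that concludes this goal
        return any(all(prove(p) for p in premises)
                   for premises in rules.get(goal, []))

    print(prove("is_bird"))  # True: both premises are known facts
    print(prove("can_fly"))  # False: "has_wings" cannot be proved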


Inductive reasoning
Inductive reasoning is the ability to derive key generalized conclusions or theories by analyzing patterns in a large data set. It uses evidence and data to create statements and rules.


K


KL-ONE
This was a type of frame language that was first used in 1978 at Bolt, Beranek and Newman, the US R&D company behind such innovations as ARPANET, the earliest incarnation of the internet. KL-ONE pioneered the use of a deductive classifier, an automated reasoning engine that could validate a frame ontology and infer new information about it based on initial information provided to it. KL-ONE spawned a whole family of frame languages that grew in sophistication across the years.


L


Lisp
Lisp was a language for algorithms suitable for any programmable computer and spawned a family of computer programming languages. It was invented in 1958 by American computer scientist John McCarthy, the man who coined the phrase “artificial intelligence”. Taking its name from the contraction of LISt Processor, Lisp was inextricably linked to the early efforts in AI, and was the language in which SHRDLU, one of the first programs with natural language understanding capabilities, was programmed.


M


Machine learning
Machine learning refers to computers performing tasks without being explicitly programmed for them. Machines “learn” from patterns they recognise in data and adjust their behaviour accordingly.

See more: Machine Learning reaches into the unknown


N


Natural language processing (NLP)
Algorithms that understand and process human language, converting it into representations a machine can work with.
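
One of the simplest such representations is a bag-of-words vector; a minimal sketch, assuming scikit-learn is installed:

    from sklearn.feature_extraction.text import CountVectorizer

    corpus = ["the cat sat on the mat", "the dog sat on the log"]
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(corpus)

    print(vectorizer.get_feature_names_out())  # the learned vocabulary
    print(X.toarray())                         # word counts per sentence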

See more: The case for computers, creativity and natural language generation


P


Pruning
Pruning is the process of trimming a search or decision tree so that unwanted or unpromising branches are eliminated. Cutting the tree down in this way reduces the work the machine must do, but also restricts the number of decisions it can consider.
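
A classic instance is alpha-beta pruning in game search; here is a minimal Python sketch over a toy game tree, in which branches that cannot change the final minimax decision are never explored.

    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):  # leaf: the position's score
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:  # prune: the opponent will avoid this branch
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:      # prune
                break
        return value

    tree = [[3, 5], [6, 9], [1, 2]]  # two plies of moves, leaf scores
    print(alphabeta(tree, True))     # -> 6, without visiting the final leaf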


R


Reinforcement learning
Reinforcement learning is a type of machine learning in which machines are “taught” to achieve their target function through a process of experimentation and reward. The machine receives positive reinforcement when its processes produce the desired result and negative reinforcement when they do not.
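
A minimal sketch in plain Python: an epsilon-greedy agent learns by trial, reward and error which of two hypothetical slot machines pays out more often.

    import random

    random.seed(0)
    payout_probs = [0.3, 0.7]  # hidden from the agent
    values = [0.0, 0.0]        # the agent's running value estimates
    counts = [0, 0]

    for step in range(1000):
        if random.random() < 0.1:       # explore 10% of the time
            arm = random.randrange(2)
        else:                           # otherwise exploit the best estimate
            arm = values.index(max(values))
        reward = 1.0 if random.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean

    print(values)  # estimates approach the true payout probabilities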


Robotic process automation
The automation of repetitive tasks and common processes such as IT support, customer service and sales without the need to transform existing IT system maps.

See more: The future of robotic process automation and artificial intelligence


S


Strong AI
An area of AI development working toward the goal of AI systems that are as broadly capable and skilled as the human mind.


Supervised learning
Supervised learning is a type of machine learning in which human input and supervision are an integral part of the machine learning process on an ongoing basis; there is a clear outcome to the machine’s data mining and its target function is to achieve this outcome.
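
A minimal sketch, assuming scikit-learn is installed, with a hypothetical labelled data set: the clear outcome is the species label, and the target function is to predict it for new animals.

    from sklearn.tree import DecisionTreeClassifier

    # Labelled examples: [height_cm, weight_kg] -> species
    X = [[25, 4], [30, 6], [70, 30], [80, 35]]
    y = ["cat", "cat", "dog", "dog"]

    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[28, 5], [75, 32]]))  # -> ['cat' 'dog']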


Swarm intelligence
Swarm Intelligence is based on the idea that when individual agents come together, the interactions between them lead to the emergence of a more ‘intelligent’ collective behavior – such as a swarm of bees.


T


Turing test
A test devised by Alan Turing to assess a machine’s ability to mimic human behaviour. It involves a human evaluator who holds natural language conversations with both another human and a machine, and rates the conversations to judge which is which.

See more: MacGyver challenges Turing with a new benchmark for 'creativity' in AI


U


Unsupervised learning
Unsupervised learning is a type of machine learning in which human input and supervision are extremely limited, or absent altogether, throughout the process. The machine is left to identify patterns and draw its own conclusions from the data sets it is given.
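
A minimal sketch, assuming scikit-learn is installed: with no labels supplied, principal component analysis discovers on its own the direction along which synthetic data varies most.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    X = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=100)])

    pca = PCA(n_components=1).fit(X)
    print(pca.components_)                # direction found, close to (1, 2) normalised
    print(pca.explained_variance_ratio_)  # nearly all variance lies along it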


V


Validation data set
In machine learning, the validation data set is the data given to the machine after the initial learning phase has been completed. The validation data is used to identify which of the relationships identified during the learning phase will be the most effective in predicting future performance.
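
A minimal sketch, assuming scikit-learn is installed: a quarter of a synthetic data set is held back as validation data the model never sees during fitting.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=400, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(model.score(X_val, y_val))  # accuracy on unseen validation data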


W


Weak AI
Also known as narrow AI, weak AI refers to a non-sentient computer system that operates within a predetermined range of skills and usually focuses on a singular task or small set of tasks. Most AI in use today is weak AI.


