A Quick Guide to Deep Learning


What is Deep Learning?

Deep learning is an approach to machine learning (ML), which in turn falls under artificial intelligence (AI), and it is most commonly used to label complex data. Machine learning can use neural networks, computer systems loosely modeled after the human brain, to process information. A neural network is an algorithm designed to recognize patterns, calculate the probability of a particular outcome occurring, and “learn” from errors and successes through a feedback loop.

The algorithms used in deep learning are best suited to problems where the target categories are stable. For example, a deep learning algorithm can be trained to classify cats with a high success rate, because “cat” is a fixed category. The strength of deep learning lies in its ability to extract features from data and use them to classify. Artificial neural networks are less successful at predictive analytics, in part because the targets of prediction keep shifting. Predictive analytics draws insight from historical data to forecast future outcomes. This is why deep learning is best deployed in computer vision, speech recognition, medical diagnostics, and other categorization applications.

The concept of deep learning—albeit not the term, which came later—originated in the 1950s when artificial neural networks took shape. These algorithms were simple and populated by hand. As computing power grew, the field of big data began to feed these deep learning algorithms with larger and larger datasets, leading to faster and more accurate results.


Cloud Computing

The immense amount of data, or big data, that goes into deep learning has outgrown the capacity of personal computers and even local servers. Cloud computing remotely connects multiple servers, pooling their storage and processing power. Advancements in artificial intelligence have stalled several times in history; these periods are known as AI Winters, and a lack of processing power was one of their main causes. The power of cloud computing helped kickstart the next round of artificial intelligence advancements, and some predict a long AI Spring.




What is an Artificial Neural Network (ANN)?

Deep learning relies on artificial neural networks (ANNs) to receive, process, and deliver data. An artificial neural network is a framework modeled after the human brain, consisting of an input layer, hidden layers, and an output layer. Its artificial neurons, or nodes, weigh input data, categorize aspects of that data, connect to other nodes, and feed the result to the next hidden layer until an output is reached.

  • Input Layer - The input layer of an artificial neural network is simply the information provided to the network. If these inputs are labeled, the learning model is considered supervised. If the inputs are unlabeled, the algorithm must categorize them through pattern analysis, which is called unsupervised learning.

 Preprocessing input data is a critical step that should not be undervalued. When training a deep network, accuracy is sacrificed if the input data isn’t first cleaned and curated. If an image of a pig is fed into the input layer while training the algorithm to distinguish between cats and dogs, the supervised learning model is polluted with unexpected data. Similarly, if hairless cats aren’t accounted for in the input data, the cat-classification algorithm is missing inputs that could help it correctly classify hairless cats in the future.

  • Hidden Layers - It is in the hidden layers that all the “thinking” happens. Hidden layers can be complex and plentiful, and, thanks to modern-day computing power, they are able to process large amounts of data. Each connection between nodes in a hidden layer carries a numeric weight; the heavier the weight, the stronger the connection to the next node. For example, if an ANN is classifying cats, a connection representing the feature “curly tail” garners a lower weight than one representing a long, fuzzy tail.
  • Output Layer - The output layer in an ANN delivers the conclusion drawn from the data. By this point the algorithm has taken the input data, weighed it and dispersed it to nodes throughout the hidden layers, and condensed the information into output nodes.
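The layer-by-layer flow above can be sketched with a single artificial neuron. The inputs, weights, and bias below are made-up illustrative values, not from any trained model:

```python
import math

def neuron(inputs, weights, bias):
    """Weigh each input, sum the results, and squash through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps any sum into (0, 1)

# Three input features feeding one node (illustrative values).
features = [0.5, 0.9, 0.1]
weights = [0.4, 0.7, -0.2]
output = neuron(features, weights, bias=0.1)
print(round(output, 3))  # a value near 1 means a strong activation
```

In a full network, many such neurons run in parallel in each hidden layer, and their outputs become the inputs of the next layer.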

There are several different types of artificial neural networks, each with specific use cases. Additionally, ANNs can work on top of one another and feed into one another, correcting mistakes and reweighing outputs on a large scale and to a granular level.


Feedforward Neural Network

The feedforward neural network (FNN) is the simplest neural network: data flows through connected nodes in one direction only, with no loops, dead ends, or backflow. The feedforward neural network is the foundation for other neural network frameworks. Due to their simple architecture, FNNs work well for general classification of datasets. While the science behind FNNs still applies, with current computing and data-analytics power it is now commonplace to layer them with other neural networks or add features to the algorithm.
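A minimal feedforward pass can be sketched as two fully connected layers chained together, so data moves strictly forward. All weights below are arbitrary illustrative numbers:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weight_rows, biases):
    """One fully connected layer: each node weighs every input, then activates."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weight_rows, biases)]

# Data flows strictly forward: input -> hidden -> output. No loops, no backflow.
x = [1.0, 0.5]
hidden = layer(x, [[0.2, -0.4], [0.6, 0.1]], [0.0, -0.1])
output = layer(hidden, [[0.5, -0.3]], [0.2])
print(output)  # a single value in (0, 1), e.g. the probability of one class
```

Training would adjust the weight rows and biases; the forward pass itself never changes shape.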


Convolutional Neural Network

Just as neural networks are loosely modeled after the brain, convolutional neural networks (CNNs) are inspired by the visual cortex. A CNN sees an image as RGB pixel values represented as numbers and processes its width, height, and depth. Hidden layers examine features and create a feature map, which is then passed on to the next hidden layer. These hidden layers eventually merge their newfound knowledge to create an output, or conclusion.
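The feature-map idea can be sketched as a plain 2D convolution. The tiny “image” and the vertical-edge kernel below are illustrative assumptions, not part of any real CNN:

```python
def convolve2d(image, kernel):
    """Slide the kernel across the image, recording each weighted sum (a feature map)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

# A 4x4 "image": dark left half (0), bright right half (1).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# A vertical-edge kernel: it responds where brightness changes left to right.
kernel = [[-1, 1],
          [-1, 1]]
fmap = convolve2d(image, kernel)
print(fmap)  # strong responses in the middle column, where the edge sits
```

A real CNN learns many such kernels per layer rather than hand-coding them, and stacks the resulting feature maps as the input to the next layer.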

CNNs require large amounts of data in order to avoid overfitting. Overfitting occurs when a model memorizes its training inputs instead of learning patterns that generalize to new data. The goal is to provide a CNN with enough data to allow it to uncover, recognize, and apply general patterns. For example, if just a few cat and dog images are fed into a CNN, overfitting can leave the model able to tell apart only those particular cats and dogs. With enough data, a CNN can discriminate between any cat and dog based on unique and highly specific features.
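Overfitting is easiest to see on a toy problem. The sketch below uses simple polynomial regression rather than a CNN, with made-up data, but the principle is the same: a model with enough capacity to memorize every training point generalizes worse than a simpler one.

```python
import numpy as np

# Ten noisy samples of a simple linear trend (fixed "noise" for reproducibility).
x = np.linspace(0.0, 1.0, 10)
noise = np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03, 0.02, 0.01, -0.01, 0.02])
y = 2.0 * x + noise

simple = np.polyfit(x, y, 1)     # enough capacity to capture the trend
complex_ = np.polyfit(x, y, 9)   # enough capacity to memorize every point

# The degree-9 fit hits the training points almost exactly...
train_err_simple = np.abs(np.polyval(simple, x) - y).mean()
train_err_complex = np.abs(np.polyval(complex_, x) - y).mean()

# ...but falls apart on a point it has never seen.
x_new, y_new = 1.2, 2.4
test_err_simple = abs(np.polyval(simple, x_new) - y_new)
test_err_complex = abs(np.polyval(complex_, x_new) - y_new)
print(train_err_complex < train_err_simple, test_err_complex > test_err_simple)
```

More training data is the CNN analogue of adding sample points here: the extra examples make pure memorization impossible and force the model toward the general pattern.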

Convolutional neural networks power image recognition, video analysis, and even drug discovery through the creation of 3D models of molecules and treatments. Where humans have difficulty with granular classifications, such as dog breeds, CNNs excel. However, CNNs struggle to process small or thin images in a way humans don’t.


Recurrent Neural Network

As the name implies, recurrent neural networks (RNNs) take output data and loop it back into the network as input data. Because of this memory, RNNs are able to see the bigger picture and make generalizations about data in a more predictable way.
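The loop can be sketched as a one-number recurrence: each step’s hidden state mixes the current input with the state carried over from the previous step. The weights below are illustrative:

```python
import math

def rnn_step(x, h_prev, w_in, w_rec, bias):
    """One recurrence: blend the current input with the remembered state."""
    return math.tanh(w_in * x + w_rec * h_prev + bias)

sequence = [0.5, -0.2, 0.9]
h = 0.0  # the network's "memory" starts empty
for x in sequence:
    h = rnn_step(x, h, w_in=0.8, w_rec=0.5, bias=0.0)
    print(round(h, 3))  # each state depends on the whole sequence so far
```

In a real RNN the scalars are learned weight matrices and the hidden state is a vector, but the feedback loop is exactly this shape.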

Long short-term memory networks (LSTMs) are a type of RNN whose memory is efficiently directed and looped throughout the hidden layers. Simply put, an LSTM opens and closes gates between nodes to let relevant information from previous nodes in and filter out unnecessary noise, allowing an RNN to extend its memory over longer periods of time.
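The gating idea can be sketched with scalar, made-up weights. A real LSTM uses learned weight matrices, but the roles of the gates are the same:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def lstm_step(x, h_prev, c_prev):
    """One heavily simplified LSTM step: gates (values in 0..1)
    decide what to forget, what to admit, and what to emit."""
    forget = sigmoid(0.9 * x + 0.1 * h_prev)    # how much old memory to keep
    input_ = sigmoid(0.8 * x + 0.2 * h_prev)    # how much new signal to let in
    candidate = math.tanh(0.7 * x + 0.3 * h_prev)
    c = forget * c_prev + input_ * candidate    # updated long-term cell state
    output = sigmoid(0.6 * x + 0.4 * h_prev)    # how much memory to expose
    h = output * math.tanh(c)
    return h, c

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c)
    print(round(h, 3), round(c, 3))
```

Because the forget gate can stay near 1 for many steps, the cell state `c` can carry information much further than a plain RNN’s hidden state.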

Language modeling and prediction is one use case for RNNs. The input is usually a sequence of words, and the output is a predicted sequence of words that matches the style of the input. For example, RNNs fed large sets of Shakespeare’s works as input have generated convincing Shakespeare-style text.

Speech recognition is another use of RNNs, with audio as the input. Just as the human brain learns and memorizes rules, such as the letter q being nearly always followed by the letter u, an RNN working on speech recognition can learn syntax and use that knowledge to recognize and predict speech.
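The “q is followed by u” intuition can be illustrated with a simple character-bigram counter. This is not an RNN, just a frequency model over a made-up corpus, but it shows how such rules emerge from data:

```python
from collections import Counter, defaultdict

# Count which character follows each character in a tiny illustrative corpus.
corpus = "the queen asked a quick question about the quiet quartet"
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(char):
    """Return the character most frequently observed after `char`."""
    return follows[char].most_common(1)[0][0]

print(predict_next("q"))  # the counts show 'q' is always followed by 'u' here
```

An RNN learns far richer, longer-range versions of the same statistical regularities, conditioning each prediction on the whole sequence so far rather than a single preceding character.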




Activation Functions

In order to process information quickly and accurately, an artificial neural network needs to decide whether a particular feature of its input is relevant. Activation functions work as on/off switches for a node, deciding whether to weigh it and connect it to another node or to ignore it completely. Continuing with the cat/dog example, an activation function would turn off a node that categorizes inputs as “four-legged”: since both cats and dogs have four legs, progressing that node through the ANN’s layers would dilute the network’s processing power.
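A common activation function that behaves exactly like such an on/off switch is the rectified linear unit (ReLU):

```python
def relu(x):
    """Rectified linear unit: pass positive signals through, switch the rest off."""
    return max(0.0, x)

# A node whose weighted sum came out negative contributes nothing downstream...
print(relu(-0.8))  # 0.0: the node is effectively "off"
# ...while a positive sum passes straight through.
print(relu(1.3))   # 1.3
```

Smoother alternatives such as the sigmoid or tanh squash values into a fixed range instead of cutting them off, but serve the same purpose of deciding how strongly a node fires.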


Backprop Algorithm

To account for errors discovered within an ANN, the backward propagation of errors, or backprop algorithm, was born. It is used in supervised learning: the difference between the network’s output and the correct answer is pushed back through the previous layers, and the connection weights are adjusted to reduce it. For example, if an ANN weighed a dog-like tail too heavily when working to classify cats, backprop would lower that weight step by step until the correct output is achieved.
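The weight-adjustment loop can be sketched on a single connection. The input, target, and learning rate below are illustrative numbers:

```python
# Minimal backprop-style update on one weight:
# prediction = w * x, error = (prediction - target) ** 2.
x, target = 2.0, 6.0
w, learning_rate = 0.0, 0.1

for _ in range(25):
    prediction = w * x
    gradient = 2 * (prediction - target) * x  # d(error)/dw, propagated backward
    w -= learning_rate * gradient             # nudge the weight against the error

print(round(w, 3))  # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

In a full network the same chain-rule step is applied layer by layer from the output back to the input, updating every weight at once.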


The Future of Deep Learning

Deep learning is evolving rapidly. Smart healthcare, education, smart cities, IoT, and autonomous vehicles are all fields that the application of deep learning could change as we know them.

The Internet of Things (IoT) connects the world around us to the internet through sensors, communication devices, and the cloud. Autonomous cars, medical innovations like ingestible sensors, and advanced robotics are slowly becoming a reality, in part thanks to deep learning. If artificial neural networks are modeled after the human neural network, it can be said that sensor technology will become the eyes and ears that feed inputs to those networks.



Deep learning is highly complex, both conceptually and mathematically. Technological advancements and software as a service (SaaS) have opened up the world of deep learning to businesses and laypeople alike. As computing power and cloud computing advance, the only limit to the power of deep learning, and of artificial intelligence in general, will be the imagination of the human mind.


