
Digging into Deep Learning

Everyone is talking about deep learning.

Wondering why?

Well, it’s because artificial intelligence, which conjures up images of driverless cars, virtual assistants like Amazon’s Alexa, text translation, and chatbots, is everywhere and enjoying a resurgence of sorts thanks to the subfield of machine learning known as deep learning.

It is deep learning that is responsible for advancing natural language processing and machine perception, the areas behind image classification, computer vision, speech recognition, and more. It's also making strides in medical image analysis, powering computer-aided analysis along with the detection, segmentation, and registration of medical images.

And then there’s robotics, where deep learning is helping robots understand and react to their surroundings without requiring step-by-step instructions. And of course, thanks to deep learning, DeepMind’s AlphaGo program defeated a professional human Go player, then a Go world champion, and finally the world’s top-ranked Go player.

While artificial intelligence, machine learning, and deep learning are not new, it was not until 2012 that deep learning truly flourished. This explosion in capability is the result of fast, parallel-processing GPUs and better methods of collecting massive amounts of data. Two well-known datasets fueling the advancement of deep learning are the MNIST (Modified National Institute of Standards and Technology) database of handwritten digits and ImageNet, a database of millions of labeled images.
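
Both datasets are easy to get your hands on today. As a minimal sketch, assuming you have TensorFlow installed, the Keras API bundled with it can fetch MNIST in a couple of lines:

    from tensorflow.keras.datasets import mnist

    # Downloads MNIST on first use: 60,000 training and 10,000 test images,
    # each a 28x28 grayscale handwritten digit paired with its label (0-9)
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)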

The basic components of deep learning

Deep learning serves many purposes. It is often used for classification, such as labeling what an image, text, or sound is. A variety of neural network architectures exist for implementing deep learning models: convolutional neural networks (CNNs), recurrent neural networks (RNNs), deep belief networks (DBNs), and others.

The common theme of these neural network architectures is the idea of sandwiching hidden layers between an input layer that takes in the data and an output layer that provides a prediction. You can think of each hidden layer, which passes its output to the next layer’s input, as magically extracting the information the next layer needs. Deep learning works much like supervised learning in machine learning, which trains on known data sets to develop a best mapping function, except that deep learning uses many layers, thereby making it “deep.” Each successive layer fine-tunes its inputs to get closer to an accurate final output. The additional complexity lies in finding the optimal weights and other parameters for each layer to minimize the loss, which measures how far the predicted output diverges from the actual label.
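
To make that concrete, here is a minimal sketch of such a layered model in Keras, trained on the MNIST digits loaded above. The layer sizes and epoch count are illustrative choices, not recommendations:

    from tensorflow.keras import layers, models
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), _ = mnist.load_data()

    # Input layer -> two hidden layers -> output layer
    model = models.Sequential([
        layers.Flatten(input_shape=(28, 28)),    # input layer: flatten each 28x28 image
        layers.Dense(128, activation="relu"),    # hidden layer
        layers.Dense(64, activation="relu"),     # hidden layer
        layers.Dense(10, activation="softmax"),  # output layer: one probability per digit
    ])

    # The loss measures how far predictions diverge from the actual labels;
    # training searches for the weights that minimize it.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train / 255.0, y_train, epochs=5)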

For more discussion about the inner workings and implementation of deep learning models, I recommend checking out any of the following books:

  • Deep Learning with Python: A Hands-on Introduction
  • MATLAB Deep Learning: With Machine Learning, Neural Networks and Artificial Intelligence
  • Introduction to Deep Learning Using R: A Step-by-Step Guide to Learning and Implementing Deep Learning Models Using R
  • Deep Learning for Medical Image Analysis

What does the future hold for deep learning?

A vast number of ever-evolving deep learning architectures have been put forth, with new, innovative models continuing to emerge and join the party. Many focus on the goal of classification. The LeNet architecture pioneered deep convolutional neural networks in the late 1980s and culminated in LeNet-5 in 1998, and the eight-layer AlexNet architecture extended LeNet and used GPUs to win the ImageNet competition in 2012. Today, Python libraries such as Keras ship with several modern models which, according to the Keras documentation, include Xception, VGG16 and VGG19, ResNet50, InceptionV3, InceptionResNetV2, MobileNet, DenseNet, and NASNet.
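
All of these come with ImageNet-pretrained weights, so classifying an image takes only a few lines. A minimal sketch using ResNet50 (the file name "photo.jpg" is a hypothetical placeholder):

    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, decode_predictions, preprocess_input)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")  # downloads the pretrained weights on first use

    # "photo.jpg" is a placeholder; ResNet50 expects 224x224 RGB input
    img = image.load_img("photo.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    # Print the top-3 ImageNet classes with their confidence scores
    print(decode_predictions(model.predict(x), top=3)[0])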

Researchers also have dozens of other deep learning tools available to them. The Data Incubator set out to rank some of the open source libraries on offer based on Stack Overflow, GitHub, and Google search traffic. They found that eight libraries ranked above average, with TensorFlow, Keras, and Caffe as the top three: TensorFlow is by far the most popular, while Keras and Caffe follow closely together. I also want to call attention to Facebook’s PyTorch because of its growing adoption since launching in 2016. PyTorch, a Python library based on Torch, ranks high at #5 on The Data Incubator list.
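
For comparison, the layered classifier sketched earlier in Keras looks like this as a minimal PyTorch sketch (again with illustrative layer sizes):

    import torch
    import torch.nn as nn

    # The same input -> hidden -> output stack as the Keras sketch above
    model = nn.Sequential(
        nn.Linear(784, 128),  # hidden layer (a 28x28 image flattened to 784 inputs)
        nn.ReLU(),
        nn.Linear(128, 64),   # hidden layer
        nn.ReLU(),
        nn.Linear(64, 10),    # output layer: raw scores for 10 classes
    )
    loss_fn = nn.CrossEntropyLoss()  # applies softmax and measures the loss

    x = torch.randn(1, 784)  # one dummy flattened "image"
    print(model(x).shape)    # torch.Size([1, 10])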

For a deeper, pun intended, understanding of these libraries, I highly recommend:

  • Reinforcement Learning: With Open AI, TensorFlow and Keras Using Python
  • Pro Deep Learning with TensorFlow: A Mathematical Approach to Advanced Artificial Intelligence in Python
  • TensorFlow for Machine Intelligence: A Hands-On Introduction to Learning Algorithms
  • TensorFlow For Dummies

Want to get started but not sure how or where?

Sign up for a free trial of Percipio, our award-winning intelligent learning platform, and choose to watch, listen to, or read from over 3,000 pieces of content on deep learning.


Kimberly Lin is an IT Product Manager at Skillsoft.
