10 Deep Learning Algorithms for Beginners

Convolutional Neural Networks

CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection.
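A minimal pure-Python sketch of the operation at a CNN's core: a single filter slides across a 2-D input and produces a feature map. The function name and all values here are illustrative, not taken from any library.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation of a 2-D list with a 2-D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the patch under the kernel at (i, j).
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

# A toy 4x4 image with a vertical edge in the middle, and a kernel
# that responds wherever neighbouring columns differ.
image = [[0, 0, 1, 1]] * 4
kernel = [[1, -1],
          [1, -1]]
feature_map = conv2d_valid(image, kernel)
```

The nonzero column in the feature map marks exactly where the edge sits; in a full CNN, many such learned filters are stacked and followed by pooling and dense layers.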

Long Short-Term Memory Networks

LSTMs are a type of Recurrent Neural Network (RNN) that can learn and memorize long-term dependencies.
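A sketch of a single LSTM cell with scalar states, to make the gating visible; the parameter values are arbitrary, chosen only for illustration. The forget, input, and output gates decide what the cell state keeps, absorbs, and emits at each step.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a scalar LSTM cell; p maps each gate to (w_x, w_h, b)."""
    f = sigmoid(p['f'][0] * x + p['f'][1] * h_prev + p['f'][2])   # forget gate
    i = sigmoid(p['i'][0] * x + p['i'][1] * h_prev + p['i'][2])   # input gate
    o = sigmoid(p['o'][0] * x + p['o'][1] * h_prev + p['o'][2])   # output gate
    g = math.tanh(p['g'][0] * x + p['g'][1] * h_prev + p['g'][2]) # candidate
    c = f * c_prev + i * g     # cell state: the long-term memory track
    h = o * math.tanh(c)       # hidden state: the step's output
    return h, c

# Illustrative weights: every gate uses (w_x, w_h, b) = (0.5, 0.5, 0.0).
params = {k: (0.5, 0.5, 0.0) for k in 'fiog'}
h = c = 0.0
for x in [1.0, -1.0, 0.5]:     # run the cell over a short sequence
    h, c = lstm_step(x, h, c, params)
```

Because the cell state `c` is updated additively (scaled by the forget gate) rather than squashed at every step, gradients survive over many steps, which is what lets LSTMs memorize long-term dependencies.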

Recurrent Neural Networks

RNNs have connections that form directed cycles, which allow the output from each step to be fed back as input to the next.
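The feedback loop can be sketched in a few lines: the hidden state computed at one step is fed back in at the next, which is the directed cycle described above. Weights here are illustrative constants rather than learned values.

```python
import math

def rnn_forward(xs, w_x=0.8, w_h=0.5, b=0.0):
    """h_t = tanh(w_x * x_t + w_h * h_{t-1} + b) for each input x_t."""
    h = 0.0          # initial hidden state
    hs = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)  # previous h feeds back in
        hs.append(h)
    return hs

# Only the first input is nonzero, yet later states stay nonzero:
# the hidden state carries information forward through the cycle.
states = rnn_forward([1.0, 0.0, 0.0])
```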

Generative Adversarial Networks

GANs are generative deep learning algorithms that create new data instances that resemble the training data.
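A deliberately tiny sketch of the adversarial idea on 1-D data, with gradients written out by hand for this linear/logistic special case (every value and learning rate here is an assumption for illustration): a generator `g(z) = a*z + b` tries to produce samples near the real data's mean, while a discriminator `D(x) = sigmoid(w*x + c)` tries to tell real from generated.

```python
import math, random

random.seed(1)
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))

a, b = 1.0, 0.0      # generator parameters
w, c = 1.0, 0.0      # discriminator parameters
lr = 0.05

for _ in range(500):
    x_real = 3.0 + random.gauss(0, 0.1)   # real data clusters near 3.0
    z = random.gauss(0, 1)                # noise input to the generator
    x_fake = a * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_r = sigmoid(w * x_real + c) - 1.0   # gradient w.r.t. real logit
    d_f = sigmoid(w * x_fake + c)         # gradient w.r.t. fake logit
    w -= lr * (d_r * x_real + d_f * x_fake)
    c -= lr * (d_r + d_f)

    # Generator step: minimize -log D(fake), i.e. try to fool D.
    g_f = sigmoid(w * x_fake + c) - 1.0
    a -= lr * g_f * w * z
    b -= lr * g_f * w
```

The two updates pull in opposite directions, and the generator's offset `b` drifts toward the real data's location; real GANs play the same game with deep networks and automatic differentiation.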

Radial Basis Function Networks

RBFNs are a special type of feedforward neural network that uses radial basis functions as activation functions.
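A forward pass through a small RBFN can be sketched directly: each hidden unit is a Gaussian bump centred on a point, and the output is a weighted sum of those bumps. The centres, widths, and weights below are illustrative, not fitted.

```python
import math

def rbf_network(x, centers, widths, weights, bias):
    """Hidden unit j computes phi_j(x) = exp(-(x - c_j)^2 / (2 s_j^2));
    the output is the weighted sum of the phis plus a bias."""
    phis = [math.exp(-(x - c) ** 2 / (2.0 * s ** 2))
            for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phis)) + bias

# Three Gaussian units centred at -1, 0 and 1, all with width 1.
y = rbf_network(0.0,
                centers=[-1.0, 0.0, 1.0],
                widths=[1.0, 1.0, 1.0],
                weights=[0.2, 1.0, 0.2],
                bias=0.0)
```

An input activates only the units whose centres it is close to, which is why RBFNs behave like smooth local interpolators.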

Multilayer Perceptrons

MLPs belong to the class of feedforward neural networks and consist of multiple layers of perceptrons, each with an activation function.
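An MLP forward pass is just repeated "weighted sum, then activation". A minimal sketch with hand-picked (not learned) weights:

```python
import math

def dense(xs, weights, biases, act):
    """One fully connected layer: act(W x + b), with W as a list of rows."""
    return [act(sum(w * x for w, x in zip(row, xs)) + b)
            for row, b in zip(weights, biases)]

relu = lambda v: max(0.0, v)
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

x = [1.0, -2.0]
hidden = dense(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.5], relu)   # 2 -> 2
output = dense(hidden, [[1.0, -1.0]], [0.0], sigmoid)            # 2 -> 1
```

Stacking layers with nonlinear activations is exactly what lets an MLP represent functions a single perceptron cannot.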

Self Organizing Maps

Professor Teuvo Kohonen invented SOMs, self-organizing artificial neural networks that reduce the dimensionality of data so it can be visualized.
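One SOM training step on a toy 1-D grid of nodes, as a sketch (learning rate and neighbourhood radius are illustrative): find the best matching unit for an input, then pull it and its grid neighbours toward that input.

```python
def som_step(nodes, x, lr=0.5, radius=1):
    """Update a 1-D grid of scalar node weights toward input x."""
    # Best matching unit: the node whose weight is closest to x.
    bmu = min(range(len(nodes)), key=lambda i: abs(nodes[i] - x))
    for i in range(len(nodes)):
        if abs(i - bmu) <= radius:            # BMU and its neighbours
            nodes[i] += lr * (x - nodes[i])   # move toward the input
    return bmu

nodes = [0.0, 0.5, 1.0, 1.5]
bmu = som_step(nodes, 1.1)
```

Because neighbours on the grid move together, nearby inputs end up mapped to nearby nodes, which is what makes the trained map useful for visualization.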

Deep Belief Networks

DBNs are generative models that consist of multiple layers of stochastic, latent variables. The latent variables have binary values and are often called hidden units.

Restricted Boltzmann Machines

Developed by Geoffrey Hinton, RBMs are stochastic neural networks that can learn from a probability distribution over a set of inputs.
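The stochastic part can be shown concretely with one half of a Gibbs sampling step: given visible units, each hidden unit turns on with probability sigmoid of its weighted input, then binary states are sampled from those probabilities. Weights below are illustrative.

```python
import math, random

random.seed(0)

def sample_hidden(v, W, b):
    """p(h_j = 1 | v) = sigmoid(W_j . v + b_j); sample binary h from it."""
    probs = [1.0 / (1.0 + math.exp(-(sum(w * vi for w, vi in zip(row, v)) + bj)))
             for row, bj in zip(W, b)]
    h = [1 if random.random() < p else 0 for p in probs]
    return probs, h

v = [1, 0, 1]                                   # binary visible units
W = [[0.5, -0.2, 0.3],                          # one row per hidden unit
     [-0.4, 0.1, 0.2]]
b = [0.0, 0.0]                                  # hidden biases
probs, h = sample_hidden(v, W, b)
```

Training (e.g. contrastive divergence) alternates this step with the symmetric visible-given-hidden step; stacking trained RBMs is also how classic DBNs are built layer by layer.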

Autoencoders

Autoencoders are a specific type of feedforward neural network trained to reconstruct their input at the output.
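A toy sketch of the encode-bottleneck-decode shape with hand-picked (not trained) weights: a 3-dimensional input is squeezed through a 1-dimensional code and expanded back, and the reconstruction error is what training would minimize.

```python
import math

def encode(x, W_e):
    """Compress the input into a smaller code (here, 3 values -> 1)."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_e]

def decode(code, W_d):
    """Expand the code back to the input's size (here, 1 value -> 3)."""
    return [sum(w * ci for w, ci in zip(row, code)) for row in W_d]

x = [1.0, 0.0, -1.0]
W_e = [[0.5, 0.1, -0.5]]          # encoder: 3 -> 1 bottleneck
W_d = [[1.0], [0.0], [-1.0]]      # decoder: 1 -> 3
recon = decode(encode(x, W_e), W_d)
loss = sum((a - r) ** 2 for a, r in zip(x, recon))  # reconstruction error
```

Because the bottleneck is too small to copy the input verbatim, the network is forced to learn a compressed representation, which is why autoencoders are used for dimensionality reduction and denoising.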



Thanks For Reading!
