CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection.
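The core operation of a CNN layer is sliding a small kernel over an image to produce a feature map. A minimal sketch in numpy (toy image and kernel values are illustrative, not from any real network):

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" cross-correlation: slide the kernel over every position
    # of the image and take the elementwise-product sum at each stop.
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# A 3x3 vertical-edge kernel applied to a 6x6 toy image.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = conv2d(image, kernel)  # shape (4, 4)
```

A real CNN stacks many such convolutions with learned kernels, interleaved with pooling and nonlinearities.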
LSTMs are a type of Recurrent Neural Network (RNN) that can learn and memorize long-term dependencies.
RNNs have connections that form directed cycles, which allow the outputs from the previous step to be fed as inputs to the current step.
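The long-term memory comes from the LSTM cell state, which gates decide to erase, update, or expose at each step. A minimal single-cell forward pass in numpy (dimensions and random weights are illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # One LSTM step: gates decide what to forget, write, and output.
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    f = sigmoid(z[:n])           # forget gate
    i = sigmoid(z[n:2 * n])      # input gate
    o = sigmoid(z[2 * n:3 * n])  # output gate
    g = np.tanh(z[3 * n:])       # candidate cell update
    c = f * c_prev + i * g       # new cell state (long-term memory)
    h = o * np.tanh(c)           # new hidden state (output)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
# Feed a short sequence; the hidden and cell states carry
# information forward from step to step.
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
```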
GANs are generative deep learning algorithms that create new data instances that resemble the training data.
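A GAN pits two networks against each other: a generator that turns noise into fake samples, and a discriminator that scores samples as real or fake. A bare-bones sketch of the two models and their adversarial losses (single-layer toy networks, untrained random weights, no gradient updates):

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, Wg):
    # Maps random noise vectors to fake data samples.
    return np.tanh(z @ Wg)

def discriminator(x, Wd):
    # Scores each sample: estimated probability that x is real.
    return 1.0 / (1.0 + np.exp(-(x @ Wd)))

Wg = rng.normal(size=(2, 3))   # noise dim 2 -> data dim 3
Wd = rng.normal(size=(3,))     # data dim 3 -> scalar score

real = rng.normal(loc=2.0, size=(8, 3))         # stand-in training data
fake = generator(rng.normal(size=(8, 2)), Wg)   # generated samples

# Adversarial objectives: D wants real -> 1 and fake -> 0,
# while G wants the discriminator to score fake -> 1.
eps = 1e-7
d_real = discriminator(real, Wd)
d_fake = discriminator(fake, Wd)
d_loss = -np.mean(np.log(d_real + eps) + np.log(1 - d_fake + eps))
g_loss = -np.mean(np.log(d_fake + eps))
```

Training alternates gradient steps on `d_loss` and `g_loss` until the generated samples become hard to tell apart from the data.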
RBFNs are special types of feedforward neural networks that use radial basis functions as activation functions.
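Each RBF unit responds most strongly to inputs near its center, with a Gaussian fall-off; a linear layer then combines those responses. A small numpy sketch (centers and weights random for illustration; in practice they are fit to data):

```python
import numpy as np

def rbf_layer(x, centers, gamma):
    # Gaussian radial basis activations: the response decays with
    # squared distance from each unit's center.
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 2))          # 5 inputs, 2 features each
centers = rng.normal(size=(4, 2))    # 4 RBF units
phi = rbf_layer(X, centers, gamma=1.0)  # activations, shape (5, 4)
W = rng.normal(size=(4,))
y = phi @ W                          # linear output layer
```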
MLPs belong to the class of feedforward neural networks with multiple layers of perceptrons that have activation functions.
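An MLP is simply a stack of affine transforms, each followed by a nonlinear activation, ending in a linear output layer. A minimal forward pass in numpy (layer sizes and random weights are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, params):
    # Each hidden layer: affine transform followed by a nonlinearity.
    for W, b in params[:-1]:
        x = relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b  # linear output layer

rng = np.random.default_rng(3)
sizes = [4, 8, 8, 2]  # input 4 -> two hidden layers of 8 -> output 2
params = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
out = mlp_forward(rng.normal(size=(10, 4)), params)  # shape (10, 2)
```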
Professor Teuvo Kohonen invented SOMs, self-organizing artificial neural networks that reduce the dimensionality of data to enable its visualization, typically on a two-dimensional map.
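Each training step finds the map unit whose weight vector best matches the input, then pulls that unit and its grid neighbours toward the input, so nearby units come to represent similar data. A toy numpy sketch (5x5 map, fixed learning rate and neighbourhood width for simplicity):

```python
import numpy as np

rng = np.random.default_rng(4)
grid = rng.normal(size=(5, 5, 3))  # 5x5 map of 3-D weight vectors

def som_update(grid, x, lr=0.5, sigma=1.0):
    # Find the best-matching unit (BMU), then move it and its
    # grid neighbours toward the input vector.
    d = np.sum((grid - x) ** 2, axis=-1)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    ii, jj = np.indices(d.shape)
    dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-dist2 / (2 * sigma ** 2))  # neighbourhood kernel
    return grid + lr * h[..., None] * (x - grid)

for x in rng.normal(size=(100, 3)):
    grid = som_update(grid, x)
```

Real implementations also shrink the learning rate and neighbourhood width over time so the map stabilizes.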
DBNs are generative models that consist of multiple layers of stochastic, latent variables. The latent variables have binary values and are often called hidden units.
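Structurally, a DBN is a stack of layers of binary stochastic hidden units; an input propagates upward by sampling each layer given the one below it. A forward-sampling sketch in numpy (random untrained weights; in practice each layer is pre-trained greedily as an RBM):

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, b):
    # Binary hidden units sampled from their Bernoulli probabilities.
    p = sigmoid(v @ W + b)
    return (rng.random(p.shape) < p).astype(float)

# Layer sizes of a toy DBN: 6 visible -> 4 hidden -> 3 hidden.
sizes = [6, 4, 3]
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

v = (rng.random((2, 6)) < 0.5).astype(float)  # two binary inputs
for W, b in layers:
    v = sample_hidden(v, W, b)  # propagate samples upward
```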
Popularized by Geoffrey Hinton, RBMs are stochastic neural networks that can learn a probability distribution over their set of inputs.
Autoencoders are a specific type of feedforward neural network trained to reproduce their input at the output, typically by forcing it through a lower-dimensional bottleneck.
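The encoder compresses the input to a small latent code and the decoder reconstructs the input from it; training minimizes the reconstruction error. A minimal numpy sketch (untrained random weights, toy dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)

def autoencoder(x, We, Wd):
    # Encoder compresses to a bottleneck; decoder reconstructs.
    z = np.tanh(x @ We)   # latent code, lower-dimensional than x
    return z @ Wd, z      # (reconstruction, code)

X = rng.normal(size=(10, 8))
We = rng.normal(scale=0.1, size=(8, 3))  # encoder: 8 -> 3 bottleneck
Wd = rng.normal(scale=0.1, size=(3, 8))  # decoder: 3 -> 8
recon, code = autoencoder(X, We, Wd)
loss = np.mean((X - recon) ** 2)  # reconstruction error to minimize
```

Because the bottleneck is narrower than the input, the network must learn a compressed representation rather than copying the input directly.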
Thanks For Reading!