Introduction to Autoencoders

Khushilyadav
3 min read · Jun 10, 2021


Autoencoders are a type of neural network that attempts to reproduce its input at its output as closely as possible. They are used to learn data representations in an unsupervised manner: the input is transformed into a reduced representation called a code or embedding, and this code is then transformed back into (an approximation of) the original input. The code is also called the latent-space representation.

Formally, we can say that an autoencoder describes a non-linear mapping from an input to an output through an intermediate code or embedding.

Some Applications of Autoencoders

1. Data Compression: Since compression is one of the main advantages of autoencoders, they can be used in data-transmission problems: rather than sending the whole input, we can send just its latent-space representation, which is decoded back at the receiver's end.
[Figure: a representation of the basic autoencoder architecture]

2. Denoising the data: Owing to connection quality or bandwidth limits, data such as images and audio can lose quality in transmission, which gives rise to the problem of denoising.

Ideally, we would like an autoencoder that is sensitive enough to its inputs to reconstruct them accurately, yet insensitive enough that it generalizes instead of simply memorizing the training data. One way to achieve this is to corrupt the input by adding random noise, feed the corrupted data to the encoder, and train the network to reconstruct the original, clean input, as sketched below.
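Here is a minimal sketch of such a denoising training step in PyTorch (the `model`, `optimizer`, and `noise_std` are illustrative assumptions, not details from this post): the input is corrupted with Gaussian noise, but the reconstruction loss is computed against the clean input.

```python
import torch
import torch.nn.functional as F

def denoising_step(model, optimizer, x, noise_std=0.1):
    """One training step of a denoising autoencoder."""
    noisy_x = x + noise_std * torch.randn_like(x)  # corrupt the input
    reconstruction = model(noisy_x)                # encode, then decode
    loss = F.mse_loss(reconstruction, x)           # target is the CLEAN input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```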

How do autoencoders work?

Autoencoders are closely related to principal component analysis (PCA). If the autoencoder uses only linear activations (and is trained with mean-squared error), the latent representation it learns spans the same subspace as the one produced by PCA. Generally, however, the activation functions used in autoencoders are non-linear (ReLU, sigmoid).
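This equivalence is easy to check empirically. The sketch below (assuming NumPy, scikit-learn, and PyTorch are available; the data and dimensions are arbitrary) fits a purely linear autoencoder with MSE loss and compares its reconstruction error with PCA's: the two converge to nearly the same value.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype(np.float32)
X -= X.mean(axis=0)  # centre the data, as PCA does

# PCA reconstruction error with a 3-dimensional code
pca = PCA(n_components=3).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
pca_err = np.mean((X - X_pca) ** 2)

# Linear autoencoder: encoder phi and decoder psi with no non-linearity
phi, psi = nn.Linear(10, 3, bias=False), nn.Linear(3, 10, bias=False)
opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()), lr=1e-2)
Xt = torch.from_numpy(X)
for _ in range(2000):
    opt.zero_grad()
    loss = ((Xt - psi(phi(Xt))) ** 2).mean()
    loss.backward()
    opt.step()

print(pca_err, loss.item())  # the two errors end up (nearly) equal
```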

The network is split into two parts: an encoder and a decoder.

The original data X is mapped to the latent space F by the encoder function ϕ. The decoder function, denoted ψ, then takes the latent representation and outputs a reconstruction of the original input.
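In symbols (a standard textbook formulation, added here for concreteness rather than taken from the original post):

```latex
\phi : \mathcal{X} \rightarrow \mathcal{F}, \qquad
\psi : \mathcal{F} \rightarrow \mathcal{X}, \qquad
\phi, \psi \,=\, \underset{\phi,\,\psi}{\operatorname{arg\,min}} \; \lVert X - (\psi \circ \phi)\, X \rVert^{2}
```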

The loss function can then be written in terms of these two functions, and it is this loss that we minimize to train the network through the standard backpropagation procedure.
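As a concrete illustration, here is a minimal autoencoder in PyTorch trained with mean-squared-error loss and backpropagation. The layer sizes (784-dimensional inputs, as for flattened 28×28 images, with a 32-dimensional code) and the random stand-in batch are assumptions for this sketch, not details from the post.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder phi: input -> latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder psi: latent code -> reconstruction
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),  # assumes inputs in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)        # stand-in batch; real data would go here
optimizer.zero_grad()
loss = loss_fn(model(x), x)    # reconstruction error against the input itself
loss.backward()                # standard backpropagation
optimizer.step()
```

The final Sigmoid assumes the inputs are scaled to [0, 1]; for other data ranges it would simply be dropped.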

If the dimensionality of the latent space is too small, we have limited information with which to regenerate the input, and the output may be blurry. On the other hand, if the latent space is very high-dimensional, there is little point in compressing at all.


Written by Khushilyadav

Computer Vision and Deep Learning Enthusiast
