Your Guide to Autoencoders

A brief introduction to Autoencoders: uses and architectures

Diego Lopez Yse
8 min read · May 21, 2023
Photo by Tamanna Rumee on Unsplash

Our minds extract and compress knowledge from the world, which we reuse to face other similar situations. One of the critical aspects of that process is that we don’t store all the details of the actual event: just the essential information that allows us to recreate it.

What if you could use Machine Learning to do the same thing? Could you boil knowledge down into a reduced data space to be reused later? That is what Autoencoders do.

An autoencoder is an Artificial Neural Network algorithm capable of discovering structure within data to develop a compressed representation of some input. It does this, in simple terms, by learning to copy its input to its output.

Autoencoders were designed to encode a data input into a compressed and meaningful representation and then decode it back such that the reconstructed output is as similar as possible to the original input. An autoencoder aims to learn a lower-dimensional representation of higher-dimensional data while maintaining the most crucial information from the initial input.

The Anatomy of Autoencoders

Autoencoders consist of three components:

1. Encoder: A module that compresses the input data into an encoded representation, typically much smaller than the original input.

2. Bottleneck or Latent Representation: A module that contains the compressed knowledge representations and is, therefore, the most important part of the network.

3. Decoder: A module that helps the network “decompress” the knowledge representations and reconstruct the data from its encoded form. The output is then compared with the ground truth.

The anatomy of an autoencoder looks like this:

Autoencoders output a reconstruction of the input. The autoencoder consists of two smaller networks: an encoder and a decoder. During training, the encoder learns a set of features, known as a latent representation, from the data input. At the same time, the decoder is trained to reconstruct the data based on these features. The autoencoder can then be applied to predict inputs not previously seen. Source: MathWorks

This way, the encoder generates a reduced feature representation of an initial data input (e.g., an image), and the decoder is used to reconstruct that initial input from the encoder’s output. During this process, the dimensionality of the data input is reduced (you can see that the middle layers have fewer units compared to the input and output layers). These middle layers hold the compressed representation of the input, and the output is reconstructed from this reduced representation.

Autoencoders are trained by minimizing a reconstruction loss function, which measures how well the autoencoder can reconstruct the input data from the hidden representation.
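
To make this anatomy concrete, here is a minimal sketch of such a network in Keras. The layer sizes are illustrative assumptions (784-dimensional inputs, i.e. flattened 28×28 images scaled to [0, 1], with a 32-unit bottleneck), not fixed requirements.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

input_dim = 784   # assumed: flattened 28x28 images, scaled to [0, 1]
latent_dim = 32   # assumed: size of the bottleneck

# Encoder: compresses the input into the latent representation
inputs = layers.Input(shape=(input_dim,))
encoded = layers.Dense(128, activation="relu")(inputs)
bottleneck = layers.Dense(latent_dim, activation="relu")(encoded)

# Decoder: reconstructs the input from the bottleneck
decoded = layers.Dense(128, activation="relu")(bottleneck)
outputs = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = Model(inputs, outputs)

# Reconstruction loss: how far the output is from the original input
autoencoder.compile(optimizer="adam", loss="mse")

# Training: the input is also the target
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=256,
#                 validation_data=(x_test, x_test))
```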

In practical terms, autoencoders are used for:

Denoising images using the Fashion MNIST dataset. Source: TensorFlow
  • Anomaly detection: Through encoding and decoding, you’ll know how well you can generally reconstruct your data. If an autoencoder is presented with unusual data that shows something the model has never seen before, the error when reconstructing the input after the bottleneck will be much higher (see the sketch after this list).
  • Dimensionality reduction: after training, the decoder can be discarded, and the output from the encoder can be used directly as the reduced dimensionality of the input. This output serves as a type of projection, and like other projection methods, there is no direct relationship between the bottleneck and the original input variables, making them challenging to interpret.
  • Data generation: Autoencoders can be used to generate both image and time series data. The distribution learned over the autoencoder’s latent code can be sampled at random to produce latent vectors, which are then passed to the decoder to generate new data.
Creation of a deepfake using an autoencoder and decoder. The same encoder-decoder pair is used to learn the latent features of the faces during training, while during generation, Decoders are swapped, such that latent face A is subjected to decoder B to generate face A with the features of face B. Source: ResearchGate
  • Recommendation tasks: the input and output vectors are typically a representation of the user. For example, in the case of video recommendation, each element of the vector refers to a video, and its value could be 1 if the user has played the video, and 0 otherwise. Besides binary vectors, continuous-valued ones may also be used, for example, to capture the time duration a user watched a video.
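
As a rough illustration of the anomaly detection use case, the sketch below scores samples by their reconstruction error and flags those above a simple threshold. It assumes the trained `autoencoder` from the earlier sketch and hypothetical arrays `x_train` and `x_new`; the mean-plus-three-standard-deviations rule is just one common heuristic.

```python
import numpy as np

# Reconstruction error per sample: mean squared error between input and output
def reconstruction_error(model, x):
    x_hat = model.predict(x, verbose=0)
    return np.mean(np.square(x - x_hat), axis=1)

# Calibrate a threshold on normal (training) data
train_errors = reconstruction_error(autoencoder, x_train)
threshold = train_errors.mean() + 3 * train_errors.std()  # assumed heuristic

# Samples the model reconstructs poorly are flagged as anomalies
new_errors = reconstruction_error(autoencoder, x_new)
anomalies = new_errors > threshold
print(f"{anomalies.sum()} of {len(x_new)} samples flagged as anomalous")
```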

Autoencoders must deal with an intrinsic trade-off: they should reconstruct the input well enough (reducing the reconstruction error) while generalizing the low-dimensional representation to something meaningful (so that the model doesn’t simply memorize or overfit the training data). Let’s see next how this is done.

Types of Autoencoders

Some popular architectures are undercomplete, sparse, denoising, and variational autoencoders.

Undercomplete Autoencoders

The simplest architecture for constructing an autoencoder is to constrain the number of nodes present in the hidden layer(s) of the network, limiting the amount of information that can flow through it.

Undercomplete autoencoders have a smaller dimension for the middle layers compared to the input layer, which helps to obtain essential features from the data. By penalizing the network according to the reconstruction error, the model can learn the most important attributes of the input data and how to best reconstruct the original input from an “encoded” state.

Undercomplete autoencoders work by limiting the capacity of the model as much as possible, minimizing the amount of information that flows through the network. Consequently, they are not versatile and tend to overfit since they are a simple model with limited capacity and reduced flexibility.

The architecture of an undercomplete autoencoder with a single encoding layer and a single decoding layer. Source: ResearchGate

Sparse Autoencoders

Sparse autoencoders represent an alternative method for introducing a bottleneck. Instead of constraining the number of nodes, they force sparsity on the hidden layers: a sparse autoencoder has only a small number of simultaneously active nodes.

This type of autoencoder penalizes the activation of hidden units, regularizing the model and keeping it from overfitting the data: only a reduced number of hidden units are allowed to be active simultaneously.

This way, even when the number of hidden units is large (perhaps even greater than the number of input units), we can still discover interesting structures by imposing sparsity constraints on them.

Simple schema of a single-layer sparse autoencoder. The hidden nodes in bright yellow are activated, while the light yellow ones are inactive. The activation depends on the input. Source: Wikiwand

On the downside, neuron activation depends on the input data, which means that even slight variations in the data will activate different nodes throughout the network.
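
One simple way to impose such a sparsity constraint in Keras is an L1 activity regularizer on the hidden layer, which penalizes large (and therefore many) activations; KL-divergence-based sparsity penalties are another common choice. The layer sizes and the penalty weight of 1e-4 below are illustrative assumptions.

```python
from tensorflow.keras import layers, regularizers, Model

input_dim = 784   # assumed: flattened 28x28 images

inputs = layers.Input(shape=(input_dim,))
# The hidden layer can even be wider than the input; the L1 penalty on its
# activations keeps only a few units active for any given sample
hidden = layers.Dense(1024, activation="relu",
                      activity_regularizer=regularizers.l1(1e-4))(inputs)
outputs = layers.Dense(input_dim, activation="sigmoid")(hidden)

sparse_autoencoder = Model(inputs, outputs)
sparse_autoencoder.compile(optimizer="adam", loss="mse")
```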

Denoising Autoencoders

Approaches like undercomplete or sparse autoencoders rely on penalizing the network’s output for being different from the original input. But another way to design an autoencoder is to perturb the input data while keeping the clean data as the target output. With this approach, the model cannot simply copy the input to the output, because the two are no longer identical.

Denoising autoencoders take a partially corrupted input while training to recover the original undistorted input. The model learns a vector field for mapping the input data towards a lower dimensional manifold which describes the natural data to cancel out the added noise. Source: OpenGenusIQ

The goal of a denoising autoencoder is to remove that noise and yield a noise-free output. In doing so, the output of the autoencoder is meant to be denoised and, therefore, different from the input. Noise removal is performed by mapping the input data onto a lower-dimensional manifold (as in an undercomplete autoencoder), where filtering out the noise becomes easier.

Denoising autoencoders are great at learning the latent representation in corrupted data while creating a robust representation, allowing the model to recover true features.

Unlike the previous models, a denoising autoencoder cannot simply learn to copy its input to its output, because input and target are no longer identical.
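
In code, the only change from the basic setup is the training pair: the model receives a corrupted version of each sample as input but is asked to reproduce the clean original. The sketch below assumes the `autoencoder` and hypothetical `x_train` array from the earlier example, with additive Gaussian noise at an assumed scale of 0.2.

```python
import numpy as np

noise_factor = 0.2   # assumed noise level

# Corrupt the inputs with Gaussian noise, keeping values in [0, 1]
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)

# Input: noisy data. Target: the original, clean data.
autoencoder.fit(x_train_noisy, x_train, epochs=20, batch_size=256)

# At inference time, the model maps a noisy sample to a denoised reconstruction
# denoised = autoencoder.predict(x_test_noisy)
```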

Variational Autoencoders

Variational autoencoders (VAE) provide a probabilistic way of describing latent space observations. Rather than an encoder that outputs a single value to describe each latent state attribute, a VAE describes a probability distribution for each latent attribute.

Look at the example below. While the image attributes (smile, skin tone, etc.) obtained after training a standard autoencoder can be used to reconstruct the image from the compressed latent space, they are not continuous and, in effect, might not be easy to interpolate.

While these attributes explain the image and can be used in reconstructing the image from the compressed latent space, they do not allow the latent attributes to be expressed in a probabilistic fashion. Source: V7 Labs

VAEs address this by expressing each latent attribute as a probability distribution, forming a continuous latent space that can be easily sampled and interpolated. When decoding from the latent space, VAEs randomly sample from each latent distribution to feed the decoder.

In a VAE, the latent attributes are sampled from the latent distribution and fed to the decoder, reconstructing the input. Source: V7 Labs

VAEs enforce a continuous, smooth latent space representation. For any sampling of the latent distributions, we expect the decoder model to reconstruct the input accurately. This way, values that are close to one another in the latent space should correspond to very similar reconstructions.

A VAE’s continuous latent space representation and sampling. Source: Jeremy Jordan

By sampling from the latent space, VAEs can be used as generative models capable of creating new data similar to what was observed during training.
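
The sketch below shows the core of this idea: the encoder predicts a mean and log-variance per latent attribute, a sample is drawn with the reparameterization trick, and the loss adds a KL-divergence term that pulls each latent distribution towards a standard normal. Layer sizes and the two-dimensional latent space are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential, optimizers

latent_dim = 2      # assumed: size of the latent space
input_dim = 784     # assumed: flattened 28x28 images, scaled to [0, 1]

# Encoder outputs a mean and a log-variance for each latent attribute
encoder = Sequential([
    layers.Dense(256, activation="relu"),
    layers.Dense(2 * latent_dim),   # first half: means, second half: log-variances
])
# Decoder maps a latent sample back to the input space
decoder = Sequential([
    layers.Dense(256, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),
])
optimizer = optimizers.Adam()

@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        stats = encoder(x)
        z_mean, z_log_var = tf.split(stats, 2, axis=-1)

        # Reparameterization trick: sample z = mean + sigma * epsilon so that
        # gradients can flow back through the random sampling step
        eps = tf.random.normal(shape=tf.shape(z_mean))
        z = z_mean + tf.exp(0.5 * z_log_var) * eps

        x_hat = decoder(z)

        # Reconstruction error plus a KL divergence towards a standard normal
        # prior, which is what keeps the latent space continuous and samplable
        recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=-1))
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                          axis=-1))
        loss = recon + kl

    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

# Generating new data: sample from the prior and decode
# new_samples = decoder(tf.random.normal(shape=(16, latent_dim)))
```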

In summary

Whether to create embeddings, reduce data dimensionality, or detect anomalies, Autoencoders can serve multiple purposes. They are not only powerful tools for data compression and analysis but also for data generation.

Different types of autoencoders. Source: The AI Dream

Besides this versatility, you should always note that:

  1. Autoencoders are data-specific, meaning they will only be able to compress data similar to what they have been trained on. An autoencoder trained on pictures of faces would do a poor job compressing pictures of trees because the features it would learn would be face-specific.
  2. Autoencoders are lossy, which means the decompressed outputs will be degraded compared to the original inputs (similar to MP3 or JPEG compression). This differs from lossless arithmetic compression.
  3. Autoencoders are learned automatically from data examples, which is a valuable property: it is easy to train specialized algorithm instances that will perform well on a specific type of input. It doesn’t require any new engineering, just appropriate training data.

Finally, remember that the ultimate goal of working with autoencoders is getting the model to learn a meaningful latent space representation.

Interested in these topics? Follow me on LinkedIn or Twitter
