Diffusion models are a new class of state-of-the-art generative models that generate diverse high-resolution images. They have already attracted a lot of attention after OpenAI, Nvidia and Google managed to train large-scale models. Example architectures that are based on diffusion models are GLIDE, DALL-E 2, Imagen, and the fully open-source Stable Diffusion.
But what is the main principle behind them?
In this blog post, we will dig our way up from the basic principles. There are already a bunch of different diffusion-based architectures. We will focus on the most prominent one, which is the Denoising Diffusion Probabilistic Model (DDPM) as initialized by Sohl-Dickstein et al. and then proposed by Ho et al. 2020. Various other approaches will be discussed to a smaller extent, such as stable diffusion and score-based models.
Diffusion models are fundamentally different from all the previous generative methods. Intuitively, they aim to decompose the image generation process (sampling) in many small “denoising” steps.
The intuition behind this is that the model can correct itself over these small steps and gradually produce a good sample. To some extent, this idea of refining the representation has already been used in models like AlphaFold. But hey, nothing comes at zero cost. This iterative process makes them slow at sampling, at least compared to GANs.
Diffusion process
The basic idea behind diffusion models is rather simple. They take the input image $\mathbf{x}_0$ and gradually add Gaussian noise to it through a series of $T$ steps. We will call this the forward process. Notably, this is unrelated to the forward pass of a neural network. If you'd like, this part is necessary to generate the targets for our neural network (the image after applying $t<T$ noise steps).
Afterward, a neural network is trained to recover the original data by reversing the noising process. By being able to model the reverse process, we can generate new data. This is the so-called reverse diffusion process or, in general, the sampling process of a generative model.
How? Let’s dive into the math to make it crystal clear.
Forward diffusion
Diffusion models can be seen as latent variable models. Latent means that we are referring to a hidden continuous feature space. In such a way, they may look similar to variational autoencoders (VAEs).
In practice, they are formulated using a Markov chain of $T$ steps. Here, a Markov chain means that each step only depends on the previous one, which is a mild assumption. Importantly, we are not constrained to using a specific type of neural network, unlike flow-based models.
Given a data point $\mathbf{x}_0$ sampled from the real data distribution $q(x)$ ($\mathbf{x}_0 \sim q(x)$), one can define a forward diffusion process by adding noise. Specifically, at each step of the Markov chain we add Gaussian noise with variance $\beta_t$ to $\mathbf{x}_{t-1}$, producing a new latent variable $\mathbf{x}_t$ with distribution $q(\mathbf{x}_t \vert \mathbf{x}_{t-1})$. This diffusion process can be formulated as:

$$q(\mathbf{x}_t \vert \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{1-\beta_t}\,\mathbf{x}_{t-1}, \beta_t\mathbf{I})$$
Forward diffusion process. Image modified from Ho et al. 2020
Since we are in the multi-dimensional scenario, $\mathbf{I}$ is the identity matrix, indicating that each dimension has the same variance $\beta_t$. Note that $q(\mathbf{x}_t \vert \mathbf{x}_{t-1})$ is still a normal distribution, defined by the mean $\boldsymbol{\mu}$ and the variance $\boldsymbol{\Sigma}$, where $\boldsymbol{\mu}_t = \sqrt{1-\beta_t}\,\mathbf{x}_{t-1}$ and $\boldsymbol{\Sigma}_t = \beta_t\mathbf{I}$. $\boldsymbol{\Sigma}$ will always be a diagonal matrix of variances (here $\beta_t$).
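In code, one forward step is just a scaled copy of the previous latent plus scaled Gaussian noise. Here is a minimal NumPy sketch; the 8-pixel toy "image" and the tiny 10-step schedule are illustrative choices, not the paper's setup:

```python
import numpy as np

def forward_step(x_prev, beta_t, rng):
    """One forward diffusion step:
    q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)."""
    eps = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * eps

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                  # a toy 8-pixel "image"
for beta in np.linspace(1e-4, 0.02, 10):    # tiny 10-step schedule
    x = forward_step(x, beta, rng)
```

Note that with $\beta_t = 0$ the step returns the input unchanged, which is a quick sanity check on the scaling.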
Thus, we can go in a closed form from the input data $\mathbf{x}_0$ to $\mathbf{x}_{T}$ in a tractable way. Mathematically, this is the posterior probability and is defined as:

$$q(\mathbf{x}_{1:T} \vert \mathbf{x}_0) = \prod_{t=1}^T q(\mathbf{x}_t \vert \mathbf{x}_{t-1})$$

The symbol $:$ in $q(\mathbf{x}_{1:T})$ states that we apply $q$ repeatedly from timestep $1$ to $T$. It's also called the trajectory.
So far, so good? Well, nah! For timestep $t=500 < T$, we would need to apply $q$ 500 times in order to sample $\mathbf{x}_t$.
The reparameterization trick provides a magic remedy to this.
The reparameterization trick: tractable closed-form sampling at any timestep
If we define $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=0}^t \alpha_s$, where $\boldsymbol{\epsilon}_{0}, \dots, \boldsymbol{\epsilon}_{t-2}, \boldsymbol{\epsilon}_{t-1} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$, one can use the reparameterization trick in a recursive manner to show that:

$$\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}$$
Note: Since all timesteps have the same Gaussian noise, we will only use the symbol $\boldsymbol{\epsilon}$ from now on.
Thus to produce a sample $\mathbf{x}_t$ we can use the following distribution:

$$\mathbf{x}_t \sim q(\mathbf{x}_t \vert \mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t; \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0, (1-\bar{\alpha}_t)\mathbf{I})$$

Since $\beta_t$ is a hyperparameter, we can precompute $\alpha_t$ and $\bar{\alpha}_t$ for all timesteps. This means we can sample noise at any timestep $t$ and get $\mathbf{x}_t$ in one go. Hence, we can sample our latent variable $\mathbf{x}_t$ at any arbitrary timestep. This will be our target later on to calculate our tractable objective loss $L_t$.
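This closed-form property is easy to verify in code: precompute the cumulative products once, then jump straight to any timestep. A minimal NumPy sketch (the toy 8-pixel input is an illustrative choice):

```python
import numpy as np

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)
    in one shot, without iterating over t forward steps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear schedule from the DDPM paper
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # precomputed once, reused for every sample

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
xt, eps = q_sample(x0, 500, alpha_bar, rng)  # x_500 without 500 sequential steps
```

Returning the sampled noise `eps` alongside `xt` is convenient later, since the training objective regresses the network output onto exactly this noise.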
Variance schedule
The variance parameter $\beta_t$ can be fixed to a constant or chosen as a schedule over the $T$ timesteps. In fact, one can define a variance schedule, which can be linear, quadratic, cosine, etc. The original DDPM authors utilized a linear schedule increasing from $\beta_1 = 10^{-4}$ to $\beta_T = 0.02$. Nichol & Dhariwal 2021 showed that employing a cosine schedule works even better.
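Both schedules fit in a few lines. The sketch below follows the linear schedule of the DDPM paper and the cosine schedule of Nichol & Dhariwal 2021, which defines $\bar{\alpha}_t$ directly and recovers the per-step betas from consecutive ratios (the offset `s=0.008` and the 0.999 clip are the values reported in that paper):

```python
import numpy as np

def linear_betas(T, beta_1=1e-4, beta_T=0.02):
    """Linear variance schedule used in the original DDPM paper."""
    return np.linspace(beta_1, beta_T, T)

def cosine_betas(T, s=0.008):
    """Cosine schedule (Nichol & Dhariwal 2021): define alpha_bar directly,
    then recover betas via beta_t = 1 - alpha_bar_t / alpha_bar_{t-1}."""
    steps = np.arange(T + 1)
    f = np.cos((steps / T + s) / (1 + s) * np.pi / 2) ** 2
    alpha_bar = f / f[0]
    betas = 1 - alpha_bar[1:] / alpha_bar[:-1]
    return np.clip(betas, 0.0, 0.999)   # clip to avoid singularities near t = T
```

The cosine schedule destroys information more slowly at the start and end of the trajectory, which is exactly what the figure below illustrates.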
Latent samples from linear (top) and cosine (bottom)
schedules respectively. Source: Nichol & Dhariwal 2021
Reverse diffusion
As $T \to \infty$, the latent $\mathbf{x}_T$ is nearly an isotropic Gaussian distribution. Therefore, if we manage to learn the reverse distribution $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$, we can sample $\mathbf{x}_T$ from $\mathcal{N}(\mathbf{0},\mathbf{I})$, run the reverse process and acquire a sample from $q(x_0)$, generating a novel data point from the original data distribution.
The question is how we can model the reverse diffusion process.
Approximating the reverse process with a neural network
In practical terms, we don't know $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$. It's intractable, since statistical estimates of $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$ would require computations involving the whole data distribution. Instead, we approximate $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$ with a parameterized model $p_{\theta}$ (e.g. a neural network). Since $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$ will also be Gaussian for small enough $\beta_t$, we can choose $p_{\theta}$ to be Gaussian and just parameterize the mean and variance:

$$p_\theta(\mathbf{x}_{t-1} \vert \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \boldsymbol{\mu}_\theta(\mathbf{x}_t, t), \boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t))$$
Reverse diffusion process. Image modified from Ho et al. 2020
If we apply the reverse formula for all timesteps ($p_\theta(\mathbf{x}_{0:T})$, also called the trajectory), we can go from $\mathbf{x}_T$ to the data distribution:

$$p_\theta(\mathbf{x}_{0:T}) = p_\theta(\mathbf{x}_T)\prod_{t=1}^T p_\theta(\mathbf{x}_{t-1} \vert \mathbf{x}_t)$$

By additionally conditioning the model on timestep $t$, it will learn to predict the Gaussian parameters (meaning the mean $\boldsymbol{\mu}_\theta(\mathbf{x}_t, t)$ and the covariance matrix $\boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t)$) for each timestep.
But how do we train such a model?
Training a diffusion model
If we take a step back, we can notice that the combination of $q$ and $p$ is very similar to a variational autoencoder (VAE). Thus, we can train it by optimizing the negative log-likelihood of the training data. After a series of calculations, which we won't analyze here, we can write the evidence lower bound (ELBO) as follows:

$$\log p(\mathbf{x}) \geq \mathbb{E}_{q(\mathbf{x}_1 \vert \mathbf{x}_0)}[\log p_{\theta}(\mathbf{x}_0 \vert \mathbf{x}_1)] - D_{KL}(q(\mathbf{x}_T \vert \mathbf{x}_0) \,\vert\vert\, p(\mathbf{x}_T)) - \sum_{t=2}^T \mathbb{E}_{q(\mathbf{x}_t \vert \mathbf{x}_0)}\big[D_{KL}(q(\mathbf{x}_{t-1} \vert \mathbf{x}_t, \mathbf{x}_0) \,\vert\vert\, p_{\theta}(\mathbf{x}_{t-1} \vert \mathbf{x}_t))\big]$$
Let’s analyze these terms:

1. The $\mathbb{E}_{q(\mathbf{x}_1 \vert \mathbf{x}_0)} [\log p_{\theta} (\mathbf{x}_0 \vert \mathbf{x}_1)]$ term can be seen as a reconstruction term, similar to the one in the ELBO of a variational autoencoder.

2. $D_{KL}(q(\mathbf{x}_T \vert \mathbf{x}_0) \,\vert\vert\, p(\mathbf{x}_T))$ shows how close $\mathbf{x}_T$ is to the standard Gaussian. Note that the entire term has no trainable parameters, so it can be ignored during training.

3. The third term $\sum_{t=2}^T L_{t-1}$, also referred to as $L_t$, formulates the difference between the desired denoising steps $p_{\theta}(\mathbf{x}_{t-1} \vert \mathbf{x}_t)$ and the approximated ones $q(\mathbf{x}_{t-1} \vert \mathbf{x}_t, \mathbf{x}_0)$.
It is evident that through the ELBO, maximizing the likelihood boils down to learning the denoising steps $L_t$.
Important note: Even though $q(\mathbf{x}_{t-1} \vert \mathbf{x}_{t})$ is intractable, additionally conditioning on $\mathbf{x}_0$ makes it tractable.

Intuitively, a painter (our generative model) needs a reference image ($\mathbf{x}_0$) to slowly draw (reverse diffusion step $q(\mathbf{x}_{t-1} \vert \mathbf{x}_t, \mathbf{x}_0)$) an image. Thus, we can take a small step backwards, meaning from noise to generated image, if and only if we have $\mathbf{x}_0$ as a reference.

In other words, we can sample $\mathbf{x}_t$ at noise level $t$ conditioned on $\mathbf{x}_0$. Since $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=0}^t \alpha_s$, we can prove that:

$$q(\mathbf{x}_{t-1} \vert \mathbf{x}_t, \mathbf{x}_0) = \mathcal{N}(\mathbf{x}_{t-1}; \tilde{\boldsymbol{\mu}}_t(\mathbf{x}_t, \mathbf{x}_0), \tilde{\beta}_t\mathbf{I})$$

where $\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$. Note that $\alpha_t$ and $\bar{\alpha}_t$ depend only on $\beta_t$, so they can be precomputed.
This little trick provides us with a fully tractable ELBO. The above property has one more important side effect: as we already saw in the reparameterization trick, we can represent $\mathbf{x}_0$ as

$$\mathbf{x}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(\mathbf{x}_t - \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}\right)$$

where $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$.
By combining the last two equations, each timestep will now have a mean $\tilde{\boldsymbol{\mu}}_t$ (our target) that only depends on $\mathbf{x}_t$:

$$\tilde{\boldsymbol{\mu}}_t(\mathbf{x}_t) = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\epsilon}\right)$$

Therefore we can use a neural network $\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_t, t)$ to approximate $\boldsymbol{\epsilon}$ and consequently the mean:

$$\tilde{\boldsymbol{\mu}}_\theta(\mathbf{x}_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)\right)$$
Thus, the loss function (the denoising term in the ELBO) can be expressed as:

$$L_t = \mathbb{E}_{\mathbf{x}_0, t, \boldsymbol{\epsilon}}\left[\frac{\beta_t^2}{2\alpha_t(1-\bar{\alpha}_t)\|\boldsymbol{\Sigma}_\theta\|_2^2}\,\big\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\big(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon},\, t\big)\big\|^2\right]$$

This effectively shows us that instead of predicting the mean of the distribution, the model will predict the noise $\boldsymbol{\epsilon}$ at each timestep $t$.
Ho et al. 2020 made a few simplifications to the actual loss term, as they ignore a weighting term. The simplified version outperforms the full objective:

$$L_t^\text{simple} = \mathbb{E}_{\mathbf{x}_0, t, \boldsymbol{\epsilon}}\left[\big\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\big(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon},\, t\big)\big\|^2\right]$$

The authors found that optimizing the above objective works better than optimizing the original ELBO. The proof for both equations can be found in this excellent post by Lilian Weng or in Luo et al. 2022.
Additionally, Ho et al. 2020 decided to keep the variance fixed and have the network learn only the mean. This was later improved by Nichol et al. 2021, who let the network learn the covariance matrix $\boldsymbol{\Sigma}$ as well (by modifying $L_t^\text{simple}$), achieving better results.
Training and sampling algorithms of DDPMs. Source: Ho et al. 2020
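The two algorithms in that figure map almost line for line to code. Below is a minimal NumPy sketch of one training-loss evaluation and the ancestral sampling loop, with a dummy zero predictor standing in for the trained U-Net (`eps_model`, the 100-step schedule and the toy shapes are placeholders, not the paper's setup; the variance is kept fixed at $\sigma_t^2 = \beta_t$ as in Ho et al. 2020):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(x, t):
    """Stand-in for the trained network eps_theta(x_t, t).
    A real model would be a U-Net; here it just predicts zero noise."""
    return np.zeros_like(x)

def training_loss(x0):
    """Training: sample t and eps, form x_t in closed form,
    regress the model output onto the true noise (L_t^simple)."""
    t = rng.integers(0, T)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
    return np.mean((eps - eps_model(xt, t)) ** 2)

def sample(shape):
    """Sampling: start from pure noise and denoise step by step."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, t)
        mean = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        z = rng.standard_normal(shape) if t > 0 else 0.0  # no noise at the last step
        x = mean + np.sqrt(betas[t]) * z
    return x
```

Swapping `eps_model` for a trained network is the only change needed to turn this sketch into real DDPM sampling.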
Architecture
One thing that we haven’t mentioned so far is what the model’s architecture looks like. Notice that the model’s input and output should be of the same size.
To this end, Ho et al. employed a UNet. If you are unfamiliar with UNets, feel free to check out our past article on the major UNet architectures. In a few words, a UNet is a symmetric architecture with input and output of the same spatial size that uses skip connections between encoder and decoder blocks of corresponding feature dimension. Usually, the input image is first downsampled and then upsampled until reaching its initial size.
In the original implementation of DDPMs, the UNet consists of Wide ResNet blocks, group normalization, as well as self-attention blocks.
The diffusion timestep $t$ is specified by adding a sinusoidal position embedding into each residual block. For more details, feel free to visit the official GitHub repository. For a detailed implementation of the diffusion model, check out this awesome post by Hugging Face.
The UNet architecture. Source: Ronneberger et al.
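The sinusoidal timestep embedding mentioned above can be computed in a few lines. This sketch assumes the Transformer-style formulation (half cosines, half sines over geometrically spaced frequencies); exact details such as the frequency base and the cos/sin ordering vary between implementations:

```python
import numpy as np

def timestep_embedding(t, dim, max_period=10000):
    """Embed a scalar timestep t into a `dim`-dimensional vector of
    sinusoids at geometrically spaced frequencies, analogous to
    Transformer positional encodings."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.cos(args), np.sin(args)])

emb = timestep_embedding(500, 128)  # fed to each residual block (after an MLP in practice)
```

Because every component lies in [-1, 1] and distinct timesteps map to distinct vectors, the network gets a smooth, unambiguous signal for which noise level it is denoising.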
Conditional Image Generation: Guided Diffusion
A crucial aspect of image generation is conditioning the sampling process to manipulate the generated samples. Here, this is also referred to as guided diffusion.
There have even been methods that incorporate image embeddings into the diffusion in order to "guide" the generation. Mathematically, guidance refers to conditioning a prior data distribution $p(\mathbf{x})$ on a condition $y$, i.e. a class label or an image/text embedding, resulting in $p(\mathbf{x} \vert y)$.
To turn a diffusion model $p_\theta$ into a conditional diffusion model, we can add conditioning information $y$ at each diffusion step:

$$p_\theta(\mathbf{x}_{0:T} \vert y) = p_\theta(\mathbf{x}_T)\prod_{t=1}^T p_\theta(\mathbf{x}_{t-1} \vert \mathbf{x}_t, y)$$
The fact that the conditioning is being seen at each timestep may be a good justification for the excellent samples from a text prompt.
In general, guided diffusion models aim to learn $\nabla \log p_\theta(\mathbf{x}_t \vert y)$. Using Bayes' rule, we can write:

$$\nabla_{\mathbf{x}_t} \log p_\theta(\mathbf{x}_t \vert y) = \nabla_{\mathbf{x}_t} \log p_\theta(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p_\theta(y \vert \mathbf{x}_t)$$

The $p_\theta(y)$ term is dropped because the gradient operator $\nabla_{\mathbf{x}_t}$ refers only to $\mathbf{x}_t$, so there is no gradient for $y$. Moreover, remember that $\log(ab) = \log(a) + \log(b)$.
And by adding a guidance scalar term $s$, we have:

$$\nabla \log p_\theta(\mathbf{x}_t \vert y) = \nabla \log p_\theta(\mathbf{x}_t) + s\,\nabla \log p_\theta(y \vert \mathbf{x}_t)$$
Using this formulation, let's make a distinction between classifier and classifier-free guidance. Next, we will present two families of methods that aim to inject label information.
Classifier guidance
Sohl-Dickstein et al. and later Dhariwal and Nichol showed that we can use a second model, a classifier $f_\phi(y \vert \mathbf{x}_t, t)$, to guide the diffusion toward the target class $y$ during training. To achieve that, we can train a classifier $f_\phi(y \vert \mathbf{x}_t, t)$ on a noisy image $\mathbf{x}_t$ to predict its class $y$. Then we can use the gradients $\nabla \log f_\phi(y \vert \mathbf{x}_t)$ to guide the diffusion. How?
We can build a class-conditional diffusion model with mean $\boldsymbol{\mu}_\theta(\mathbf{x}_t \vert y)$ and variance $\boldsymbol{\Sigma}_\theta(\mathbf{x}_t \vert y)$. Since $p_\theta \sim \mathcal{N}(\boldsymbol{\mu}_{\theta}, \boldsymbol{\Sigma}_{\theta})$, we can show using the guidance formulation from the previous section that the mean is perturbed by the gradients of $\log f_\phi(y \vert \mathbf{x}_t)$ of class $y$, resulting in:

$$\hat{\boldsymbol{\mu}}(\mathbf{x}_t \vert y) = \boldsymbol{\mu}_\theta(\mathbf{x}_t \vert y) + s\,\boldsymbol{\Sigma}_\theta(\mathbf{x}_t \vert y)\,\nabla_{\mathbf{x}_t} \log f_\phi(y \vert \mathbf{x}_t)$$
In the famous GLIDE paper, Nichol et al. expanded on this idea and used CLIP embeddings to guide the diffusion. CLIP, as proposed by Radford et al., consists of an image encoder $g$ and a text encoder $h$. It produces image and text embeddings $g(\mathbf{x}_t)$ and $h(c)$, respectively, where $c$ is the text caption.
Therefore, we can perturb the gradients with their dot product:

$$\hat{\boldsymbol{\mu}}(\mathbf{x}_t \vert c) = \boldsymbol{\mu}(\mathbf{x}_t \vert c) + s\,\boldsymbol{\Sigma}_\theta(\mathbf{x}_t \vert c)\,\nabla_{\mathbf{x}_t}\big(g(\mathbf{x}_t)\cdot h(c)\big)$$
As a result, they manage to “steer” the generation process toward a userdefined text caption.
Algorithm of classifier-guided diffusion sampling. Source: Dhariwal & Nichol 2021
Classifier-free guidance
Using the same formulation as before, we can define a classifier-free guided diffusion model as:

$$\bar{\boldsymbol{\epsilon}}_\theta(\mathbf{x}_t \vert y) = s\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t \vert y) + (1-s)\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t \vert 0)$$
Guidance can be achieved without a second classifier model, as proposed by Ho & Salimans. Instead of training a separate classifier, the authors trained a conditional diffusion model $\boldsymbol{\epsilon}_\theta (\mathbf{x}_t \vert y)$ together with an unconditional model $\boldsymbol{\epsilon}_\theta (\mathbf{x}_t \vert 0)$. In fact, they use the exact same neural network: during training, the class $y$ is randomly set to $0$, so that the model is exposed to both the conditional and unconditional setup.
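At sampling time, the guided prediction is just a linear combination of the two forward passes, following the formula above. A minimal sketch with placeholder predictions standing in for the two network outputs:

```python
import numpy as np

def cfg_eps(eps_cond, eps_uncond, s):
    """Classifier-free guidance: combine the conditional and unconditional
    noise predictions, eps_hat = s * eps_cond + (1 - s) * eps_uncond.
    With s > 1 this extrapolates away from the unconditional prediction."""
    return s * eps_cond + (1.0 - s) * eps_uncond

eps_c = np.array([1.0, 0.0])   # placeholder eps_theta(x_t | y)
eps_u = np.array([0.0, 0.0])   # placeholder eps_theta(x_t | 0)
eps_hat = cfg_eps(eps_c, eps_u, s=3.0)
```

Note that $s=1$ recovers the plain conditional model, so the guidance scale smoothly interpolates between (and extrapolates beyond) the two predictions.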
Note that this can also be used to “inject” text embeddings as we showed in classifier guidance.
This admittedly "weird" process has two major advantages:

- It uses only a single model to guide the diffusion.
- It simplifies guidance when conditioning on information that is difficult to predict with a classifier (such as text embeddings).
Imagen, as proposed by Saharia et al., relies heavily on classifier-free guidance, as they find that it is a key contributor to generating samples with strong image-text alignment. For more info on the approach of Imagen, check out this video from AI Coffee Break with Letitia:
Scaling up diffusion models
You might be asking what the problem with these models is. Well, it's computationally very expensive to scale these UNets to high-resolution images. This brings us to two methods for scaling up diffusion models to higher resolutions: cascade diffusion models and latent diffusion models.
Cascade diffusion models
Ho et al. 2021 introduced cascade diffusion models in an effort to produce high-fidelity images. A cascade diffusion model consists of a pipeline of many sequential diffusion models that generate images of increasing resolution. Each model generates a sample of higher quality than the previous one by successively upsampling the image and adding higher-resolution details. To generate an image, we sample sequentially from each diffusion model.
Cascade diffusion model pipeline. Source: Ho & Saharia et al.
To acquire good results with cascaded architectures, strong data augmentations on the input of each super-resolution model are crucial. Why? Because they alleviate compounding error from the previous cascaded models and address the train-test mismatch.
It was found that Gaussian blurring is a critical transformation toward achieving high fidelity. They refer to this technique as conditioning augmentation.
Stable diffusion: Latent diffusion models
Latent diffusion models are based on a rather simple idea: instead of applying the diffusion process directly on a highdimensional input, we project the input into a smaller latent space and apply the diffusion there.
In more detail, Rombach et al. proposed to use an encoder network to encode the input into a latent representation, i.e. $\mathbf{z}_t = g(\mathbf{x}_t)$.
If the loss for a typical diffusion model (DM) is formulated as:

$$L_{DM} = \mathbb{E}_{\mathbf{x}, \boldsymbol{\epsilon} \sim \mathcal{N}(0,1), t}\left[\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)\|_2^2\right]$$

then given an encoder $\mathcal{E}$ and a latent representation $z$, the loss for a latent diffusion model (LDM) is:

$$L_{LDM} = \mathbb{E}_{\mathcal{E}(\mathbf{x}), \boldsymbol{\epsilon} \sim \mathcal{N}(0,1), t}\left[\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta(\mathbf{z}_t, t)\|_2^2\right]$$
Latent diffusion models. Source: Rombach et al
For more information check out this video:
Score-based generative models
Around the same time as the DDPM paper, Song and Ermon proposed a different type of generative model that appears to have many similarities with diffusion models. Score-based models tackle generative learning using score matching and Langevin dynamics.
Score matching refers to the process of modeling the gradient of the log probability density function, also known as the score function. Langevin dynamics is an iterative process that can draw samples from a distribution using only its score function:

$$\mathbf{x}_{i+1} \leftarrow \mathbf{x}_i + \delta\,\nabla_{\mathbf{x}} \log p(\mathbf{x}_i) + \sqrt{2\delta}\,\boldsymbol{\epsilon}_i, \quad i = 0, 1, \dots, K$$

where $\delta$ is the step size and $\boldsymbol{\epsilon}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
Suppose that we have a probability density $p(x)$ and that we define the score function to be $\nabla_x \log p(x)$. We can then train a neural network $s_{\theta}$ to estimate $\nabla_x \log p(x)$ without estimating $p(x)$ first. The training objective can be formulated as follows:

$$\mathbb{E}_{p(x)}\left[\|\nabla_x \log p(x) - s_\theta(x)\|_2^2\right]$$
Then by using Langevin dynamics, we can directly sample from $p(x)$ using the approximated score function.
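To see Langevin dynamics in action without a learned network, we can pick a distribution whose score is known exactly: for a standard Gaussian, $\nabla_x \log p(x) = -x$. The sketch below starts every chain far from the mode at $x=10$ (an illustrative choice) and lets the iteration above pull the samples to the target:

```python
import numpy as np

def langevin_sample(score, x0, delta=0.01, n_steps=2000, rng=None):
    """Langevin dynamics: x <- x + delta * score(x) + sqrt(2 * delta) * z,
    where z ~ N(0, I) is fresh noise at every iteration."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0
    for _ in range(n_steps):
        z = rng.standard_normal(x.shape)
        x = x + delta * score(x) + np.sqrt(2.0 * delta) * z
    return x

# Standard Gaussian: score(x) = grad log p(x) = -x, so 5000 independent
# chains started at x = 10 should settle around zero with unit spread.
samples = langevin_sample(lambda x: -x, np.full(5000, 10.0))
```

This also illustrates the weakness discussed in the next section: the score only needs to be accurate along the path the chain actually travels, and in low-density regions a learned score can be badly wrong.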
In case you missed it, guided diffusion models use this formulation of score-based models, as they learn $\nabla_x \log p(x)$ directly.
Adding noise to score-based models: Noise Conditional Score Networks (NCSN)
The problem so far: the estimated score functions are usually inaccurate in low-density regions, where few data points are available. As a result, the quality of data sampled using Langevin dynamics is not good.
Their solution was to perturb the data points with noise and train scorebased models on the noisy data points instead. As a matter of fact, they used multiple scales of Gaussian noise perturbations.
Thus, adding noise is the key to making both DDPMs and score-based models work.
Score-based generative modeling with score matching + Langevin dynamics. Source: Generative Modeling by Estimating Gradients of the Data Distribution
Mathematically, given the data distribution $p(x)$, we perturb it with Gaussian noise $\mathcal{N}(\mathbf{0}, \sigma_i^2 I)$, where $i = 1, 2, \cdots, L$, to obtain a noise-perturbed distribution:

$$p_{\sigma_i}(\mathbf{x}) = \int p(\mathbf{y})\,\mathcal{N}(\mathbf{x}; \mathbf{y}, \sigma_i^2 I)\,d\mathbf{y}$$

Then we train a network $s_\theta(\mathbf{x}, i)$, known as a Noise Conditional Score Network (NCSN), to estimate the score function $\nabla_\mathbf{x} \log p_{\sigma_i}(\mathbf{x})$.
Score-based generative modeling through stochastic differential equations (SDE)
Song et al. 2021 explored the connection of score-based models with diffusion models. In an effort to encapsulate both NCSNs and DDPMs under the same umbrella, they proposed the following:
Instead of perturbing data with a finite number of noise distributions, we use a continuum of distributions that evolve over time according to a diffusion process. This process is modeled by a prescribed stochastic differential equation (SDE) that does not depend on the data and has no trainable parameters. By reversing the process, we can generate new samples.
Score-based generative modeling through stochastic differential equations (SDE). Source: Song et al. 2021
We can define the diffusion process $\{\mathbf{x}(t)\}_{t \in [0, T]}$ as an SDE of the form:

$$d\mathbf{x} = \mathbf{f}(\mathbf{x}, t)\,dt + g(t)\,d\mathbf{w}$$

where $\mathbf{f}$ is the drift coefficient, $g$ is the diffusion coefficient, and $\mathbf{w}$ is the standard Wiener process.
Overview of score-based generative modeling through SDEs. Source: Song et al. 2021
Summary
Let’s do a quick sumup of the main points we learned in this blogpost:

- Diffusion models work by gradually adding Gaussian noise to the original image through a series of $T$ steps, a process known as diffusion.
- To sample new data, we approximate the reverse diffusion process using a neural network.
- The training of the model is based on maximizing the evidence lower bound (ELBO).
- We can condition the diffusion models on image labels or text embeddings in order to "guide" the diffusion process.
- Cascade and latent diffusion are two approaches to scale up models to high resolutions.
- Cascade diffusion models are sequential diffusion models that generate images of increasing resolution.
- Latent diffusion models (like stable diffusion) apply the diffusion process on a smaller latent space for computational efficiency, using a variational autoencoder for the up- and downsampling.
- Score-based models also apply a sequence of noise perturbations to the original image, but they are trained using score matching and Langevin dynamics. Nonetheless, they end up with a similar objective.
- The diffusion process can be formulated as an SDE. Solving the reverse SDE allows us to generate new samples.
Finally, for more connections between diffusion models and VAEs or AEs, check out these really nice blogs.
Cite as
@article{karagiannakos2022diffusionmodels,
  title   = "Diffusion models: toward state-of-the-art image generation",
  author  = "Karagiannakos, Sergios and Adaloglou, Nikolaos",
  journal = "https://theaisummer.com/",
  year    = "2022",
  howpublished = {https://theaisummer.com/diffusion-models/},
}
References
[1] Sohl-Dickstein, Jascha, et al. Deep Unsupervised Learning Using Nonequilibrium Thermodynamics. arXiv:1503.03585, arXiv, 18 Nov. 2015
[2] Ho, Jonathan, et al. Denoising Diffusion Probabilistic Models. arXiv:2006.11239, arXiv, 16 Dec. 2020
[3] Nichol, Alex, and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. arXiv:2102.09672, arXiv, 18 Feb. 2021
[4] Dhariwal, Prafulla, and Alex Nichol. Diffusion Models Beat GANs on Image Synthesis. arXiv:2105.05233, arXiv, 1 June 2021
[5] Nichol, Alex, et al. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. arXiv:2112.10741, arXiv, 8 Mar. 2022
[6] Ho, Jonathan, and Tim Salimans. Classifier-Free Diffusion Guidance. 2021. openreview.net
[7] Ramesh, Aditya, et al. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv:2204.06125, arXiv, 12 Apr. 2022
[8] Saharia, Chitwan, et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. arXiv:2205.11487, arXiv, 23 May 2022
[9] Rombach, Robin, et al. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv:2112.10752, arXiv, 13 Apr. 2022
[10] Ho, Jonathan, et al. Cascaded Diffusion Models for High Fidelity Image Generation. arXiv:2106.15282, arXiv, 17 Dec. 2021
[11] Weng, Lilian. What Are Diffusion Models? 11 July 2021
[12] O'Connor, Ryan. Introduction to Diffusion Models for Machine Learning. AssemblyAI Blog, 12 May 2022
[13] Rogge, Niels and Rasul, Kashif. The Annotated Diffusion Model. Hugging Face Blog, 7 June 2022
[14] Das, Ayan. “An Introduction to Diffusion Probabilistic Models.” Ayan Das, 4 Dec. 2021
[15] Song, Yang, and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. arXiv:1907.05600, arXiv, 10 Oct. 2020
[16] Song, Yang, and Stefano Ermon. Improved Techniques for Training Score-Based Generative Models. arXiv:2006.09011, arXiv, 23 Oct. 2020
[17] Song, Yang, et al. Score-Based Generative Modeling through Stochastic Differential Equations. arXiv:2011.13456, arXiv, 10 Feb. 2021
[18] Song, Yang. Generative Modeling by Estimating Gradients of the Data Distribution, 5 May 2021
[19] Luo, Calvin. Understanding Diffusion Models: A Unified Perspective. 25 Aug. 2022