Statistical whitening transformations


Statistical “whitening” is a family of procedures for standardizing and decorrelating a set of variables. Here, we’ll review this concept in a general sense, and see two specific examples.

Introduction

Suppose we have a data matrix $X \in \mathbb{R}^{p \times n}$, whose columns are $n$ centered observations of $p$ variables, with covariance matrix $\Sigma = \frac{1}{n} XX^\top$. A whitening transformation is one that transforms the data such that the transformed data’s covariance is the identity matrix $I_p$.

In particular, the whitening matrix $W$ satisfies

\[W^\top W = \Sigma^{-1}.\]

We can whiten the data as $Y = WX$. Notice that the covariance of the whitened data $Y$ is then

\begin{align} \text{cov}(Y) &= \frac{1}{n} YY^\top \\ &= \frac{1}{n} (WX)(WX)^\top \\ &= W \left( \frac{1}{n} XX^\top \right) W^\top \\ &= W \Sigma W^\top \\ &= W (W^\top W)^{-1} W^\top \\ &= W W^{-1} W^{-\top} W^\top \\ &= I_p \\ \end{align}

However, the constraint above that $W^\top W = \Sigma^{-1}$ does not uniquely pin down a way to find $W$ when presented with data $X$. Various statistical whitening procedures have been proposed, and the central choice in all of these is how to compute $W$.
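
As a quick numerical sanity check of the covariance calculation above, here is a minimal NumPy sketch (with made-up data and variable names of my own; this is not from the original derivation). It builds one valid $W$, the inverse of the Cholesky factor of the sample covariance, and verifies both the constraint and the identity covariance of the whitened data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated data; columns of X are n centered observations of p variables.
p, n = 3, 10_000
A = rng.normal(size=(p, p))
X = A @ rng.normal(size=(p, n))
X -= X.mean(axis=1, keepdims=True)

Sigma = X @ X.T / n                      # sample covariance, p x p

# One valid whitening matrix (of many): if Sigma = L L^T (Cholesky),
# then W = L^{-1} satisfies W^T W = Sigma^{-1}.
L = np.linalg.cholesky(Sigma)
W = np.linalg.inv(L)

Y = W @ X                                # whitened data
print(np.allclose(W.T @ W, np.linalg.inv(Sigma)))  # True: the constraint holds
print(np.allclose(Y @ Y.T / n, np.eye(p)))         # True: identity covariance
```

Any other matrix satisfying $W^\top W = \Sigma^{-1}$ would pass the same checks, which is exactly the non-uniqueness discussed next.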

To see that $W$ is not unique, first note that one natural solution, obtained by taking the symmetric square root of $\Sigma^{-1} = W^\top W$, is

\[W = \Sigma^{-1/2}.\]

However, this is not the only solution. In fact, there is a whole family of solutions:

\[W = Q\Sigma^{-1/2}\]

where $Q$ is an arbitrary orthogonal matrix ($Q^\top Q = I$). To see this, note that

\begin{align} W^\top W &= (Q\Sigma^{-1/2})^\top Q\Sigma^{-1/2} \\ &= \Sigma^{-1/2\top} Q^\top Q\Sigma^{-1/2} \\ &= \Sigma^{-1/2\top} I \Sigma^{-1/2} \\ &= \Sigma^{-1/2\top} \Sigma^{-1/2} \\ &= \Sigma^{-1}. \\ \end{align}

Thus, any choice of orthogonal matrix $Q$ satisfies our original constraint, so deciding on a value of $W$ really boils down to deciding on a value for $Q$.
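
To make this non-uniqueness concrete, here is a small sketch (NumPy, with an arbitrary covariance matrix of my own choosing) showing that $W = Q\Sigma^{-1/2}$ satisfies $W^\top W = \Sigma^{-1}$ for a randomly drawn orthogonal $Q$:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary (well-conditioned) covariance matrix.
p = 4
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)

# Symmetric inverse square root of Sigma, built from its eigendecomposition.
evals, U = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = U @ np.diag(evals ** -0.5) @ U.T

# A random orthogonal matrix Q (from the QR decomposition of a random matrix).
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))

W = Q @ Sigma_inv_sqrt
print(np.allclose(W.T @ W, np.linalg.inv(Sigma)))  # True for any orthogonal Q
```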

To dive more deeply, consider the eigendecomposition of $\Sigma$:

\[\Sigma = U \Lambda U^\top\]

where the columns of $U$ are $\Sigma$’s eigenvectors, and $\Lambda$ is a diagonal matrix containing $\Sigma$’s eigenvalues. Its inverse square root is then

\[\Sigma^{-1/2} = U \Lambda^{-1/2} U^\top.\]

This implies that

\[W = Q U \Lambda^{-1/2} U^\top.\]

Thus, the final whitening procedure can be written as

\[WX = Q U \Lambda^{-1/2} U^\top X.\]

Notice that $U^\top$ rotates the data to align with the eigenbasis, $\Lambda^{-1/2}$ scales each direction to unit variance, and $QU$ applies one final rotation.

The transformation $\Lambda^{-1/2} U^\top X$ already “whitens” the data, so the final rotation by $QU$ is what distinguishes whitening procedures.
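
Putting the pieces together, the following sketch (again NumPy with synthetic data; the names are mine) runs the three steps — rotation by $U^\top$, scaling by $\Lambda^{-1/2}$, and a final rotation by $QU$ — and checks that the data is already white after the scaling step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic correlated data; columns are centered observations.
p, n = 3, 50_000
A = rng.normal(size=(p, p))
X = A @ rng.normal(size=(p, n))
X -= X.mean(axis=1, keepdims=True)

Sigma = X @ X.T / n
evals, U = np.linalg.eigh(Sigma)             # Sigma = U diag(evals) U^T
Lambda_inv_sqrt = np.diag(evals ** -0.5)

rotated = U.T @ X                            # step 1: align with the eigenbasis
scaled = Lambda_inv_sqrt @ rotated           # step 2: unit variance in every direction

# The data is already white at this point...
print(np.allclose(scaled @ scaled.T / n, np.eye(p)))  # True

# ...so the remaining rotation by QU (for any orthogonal Q) keeps it white.
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))
Y = (Q @ U) @ scaled
print(np.allclose(Y @ Y.T / n, np.eye(p)))            # True
```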

Here, we’ll discuss two of the simplest whitening procedures: ZCA whitening and PCA whitening.

ZCA whitening

Even though whitening standardizes the variance of each dimension, one may still want the whitened data to resemble the original data as closely as possible in each dimension. This is what ZCA whitening seeks to achieve (ZCA stands for zero-phase component analysis).

ZCA whitening seeks the transformation $W$ for which the whitened data stays as close as possible to the original data. Consider a centered data vector $x \in \mathbb{R}^p$ and its whitened counterpart $y = Wx \in \mathbb{R}^p$. ZCA whitening minimizes the expected squared distance

\begin{align} \mathbb{E}\left[(y - x)^\top (y - x)\right] &= \textbf{tr}(\mathbb{E}[yy^\top]) - 2\, \textbf{tr}(\mathbb{E}[yx^\top]) + \textbf{tr}(\mathbb{E}[xx^\top]) \\ &= \textbf{tr}(I_p) - 2\, \textbf{tr}(\mathbb{E}[yx^\top]) + \textbf{tr}(\Sigma). \\ \end{align}

Since $\textbf{tr}(\Sigma)$ and $\textbf{tr}(I_p)$ don’t depend on $W$, minimizing the expression above is equivalent to maximizing

\[\textbf{tr}(\mathbb{E}[yx^\top]) = \sum\limits_{j = 1}^p \mathbb{E}[y_{(j)} x_{(j)}] = \sum\limits_{j = 1}^p \text{cov}(x_{(j)}, y_{(j)}),\]

where $x_{(j)}$ and $y_{(j)}$ denote the $j$th elements of $x$ and $y$.

Recalling that $W = Q\Sigma^{-1/2}$, we have that

\begin{align} \textbf{tr}(\mathbb{E}[yx^\top]) &= \textbf{tr}(W\,\mathbb{E}[xx^\top]) \\ &= \textbf{tr}(Q\Sigma^{-1/2}\Sigma) \\ &= \textbf{tr}(Q\Sigma^{1/2}). \\ \end{align}

It turns out (see Proposition 1 in Kessy et al. for a simple proof) that this is maximized when $Q$ is the identity matrix $I_p$. This means that the whitening transformation is simply the inverse square root of the covariance matrix:

\[W = \Sigma^{-1/2}.\]

Putting this all together, the transformation is

\begin{align} WX &= Q\Sigma^{-1/2} X \\ &= I_p \Sigma^{-1/2} X \\ &= \Sigma^{-1/2} X. \\ \end{align}

As we can see, ZCA whitening uses the symmetric inverse square root $\Sigma^{-1/2} = U \Lambda^{-1/2} U^\top$: it rotates the data into the eigenbasis, scales each direction to unit variance, and rotates back into the original basis, keeping the whitened data as close as possible to the original data.
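
As a concrete illustration (a NumPy sketch on synthetic data, not taken from the original derivation), ZCA whitening amounts to applying $\Sigma^{-1/2} = U \Lambda^{-1/2} U^\top$, and each whitened variable stays positively correlated with its original counterpart:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic correlated data; columns are centered observations.
p, n = 3, 50_000
A = rng.normal(size=(p, p))
X = A @ rng.normal(size=(p, n))
X -= X.mean(axis=1, keepdims=True)

Sigma = X @ X.T / n
evals, U = np.linalg.eigh(Sigma)
W_zca = U @ np.diag(evals ** -0.5) @ U.T     # Q = I_p, so W = Sigma^{-1/2}

Y = W_zca @ X
print(np.allclose(Y @ Y.T / n, np.eye(p)))   # True: identity covariance

# Each whitened variable remains (positively) correlated with its original counterpart.
for j in range(p):
    print(f"corr(x_{j}, y_{j}) = {np.corrcoef(X[j], Y[j])[0, 1]:.3f}")
```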

The series of transformations can be seen in the figure below:

[Figure: zca_transformations — the sequence of ZCA whitening transformations.]

PCA whitening

PCA (principal component analysis) whitening has a similar flavor to ZCA whitening, but its objective is slightly different. Rather than maximizing the cross-covariance between each whitened dimension and the corresponding original dimension, PCA whitening maximizes the total squared cross-covariance between each whitened dimension and all of the original dimensions.

In particular, for a whitened variable $y_{(i)}$, PCA whitening seeks to maximize $\sum\limits_{j = 1}^p \text{cov}(y_{(i)}, x_{(j)})^2$.

Collecting these objectives across $i$, note that $\sum_{j} \text{cov}(y_{(i)}, x_{(j)})^2$ is the $i$th diagonal element of $\Phi\Phi^\top$, where $\Phi = \mathbb{E}[yx^\top] = W\Sigma$ is the cross-covariance matrix. PCA whitening therefore maximizes, element-wise,

\begin{align} \textbf{diag}(\Phi\Phi^\top) &= \textbf{diag}(W \Sigma \Sigma W^\top) \\ &= \textbf{diag}(Q \Sigma^{-1/2} \Sigma \Sigma \Sigma^{-1/2\top} Q^\top) \\ &= \textbf{diag}(Q \Sigma Q^\top). \\ \end{align}

It turns out (again, see Proposition 1 in Kessy et al. for a proof) that this is maximized when $Q = U^\top$, i.e., when the rows of $Q$ are the eigenvectors of $\Sigma$. The whitening matrix then simplifies to $W = U^\top U \Lambda^{-1/2} U^\top = \Lambda^{-1/2} U^\top$. Thus, PCA whitening projects the data onto the principal axes (the eigenvectors of $\Sigma$) and rescales each projection to unit variance; unlike ZCA whitening, it leaves the data expressed in the eigenbasis rather than rotating it back to the original basis.
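
A matching sketch for PCA whitening (same synthetic setup and assumptions as the ZCA example above): with $Q = U^\top$, the whitening matrix collapses to $\Lambda^{-1/2} U^\top$, i.e., the principal-component scores rescaled to unit variance:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic correlated data; columns are centered observations.
p, n = 3, 50_000
A = rng.normal(size=(p, p))
X = A @ rng.normal(size=(p, n))
X -= X.mean(axis=1, keepdims=True)

Sigma = X @ X.T / n
evals, U = np.linalg.eigh(Sigma)

# Q = U^T collapses Q U Lambda^{-1/2} U^T down to Lambda^{-1/2} U^T.
W_pca = np.diag(evals ** -0.5) @ U.T

Y = W_pca @ X
print(np.allclose(Y @ Y.T / n, np.eye(p)))   # True: identity covariance

# Each row of Y is a principal-component score scaled to unit variance.
scores = U.T @ X                             # projections onto the eigenvectors
print(np.allclose(Y, scores / np.sqrt(evals)[:, None]))  # True
```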

References

  • Kessy, Agnan, Alex Lewin, and Korbinian Strimmer. “Optimal whitening and decorrelation.” The American Statistician 72.4 (2018): 309-314.
  • Joe Marino’s blog post on statistical whitening.
  • PCA whitening tutorial from Stanford.