Fast Updating for Optimal Linear Model and Generalized Linear Model Experimental Designs

Experimental Design
Optimal Design
Optimization
Linear Algebra
GLMs

When things get big, refitting a GLM becomes too expensive, and instead you need to rely on update formulas that live in an expensive out-of-print Springer book or are strewn across paywalled papers. Here, I derive (or motivate the derivation of) some useful linear-algebra updating identities for linear and generalized linear models.

Published

January 22, 2017

When I was doing some work on optimal design of experiments, I frequently found references to matrix update schemes that gave an almost magical-looking form for the update but offered no motivation and sometimes didn’t even give a reference. There were some cool linear algebra tricks being used for sure, but since the bulk of this work was done in the ’70s it was difficult to figure out what was going on.

I had talked with my friend Sarah about this as she had just read a paper that said update methods were trivially simple to apply to GLMs, and we both laughed about it. I started trying to pull together the papers that explain how the updating works for experimental design, and decided to make a post about it in the hopes that some poor soul will be helped by it. If nothing else I think the linear algebra used by the folks that derived these results is pretty cool.

What follows is a description of the problem of finding the determinants and inverses of the expected Fisher information matrix (or a related quantity). Next, the updating schemes are discussed: rank-one updates of the inverse and determinant, followed by rank-two updates of the inverse and determinant. The post concludes with a discussion of how these schemes get applied, followed by an empty comments section because no one ever reads or comments on my web site. Enjoy!

Problem Setting

In linear modeling and generalized linear modeling we often speak of the design matrix \(\mathbf{X}\) composed of \(d\)-length rows \(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\) each of which represents the conditions under which we’re going to run an experiment and get an observation. Obviously, we can make good or bad choices of \(\mathbf{X}\) which will affect our ability to estimate parameters and predict outcomes using our model.

Let there be a \(p\)-length parameter vector \(\mathbf{\beta}\). Traditionally we define a function \(\mathbf{f} : \mathbb{R}^d \rightarrow \mathbb{R}^p\) for which our model fit for experiment \(i\) will be some function of \(\mathbf{f}(\mathbf{x}_i)^\mathrm{T} \mathbf{\beta}\). Define the model matrix \(\mathbf{F}\) as being composed of rows \(\mathbf{f}(\mathbf{x}_1), \mathbf{f}(\mathbf{x}_2), \ldots, \mathbf{f}(\mathbf{x}_n)\). This matrix is important in describing the variance of our parameter estimates.

The \(D\)-criterion for optimal design concerns itself with the determinant of the expected Fisher information matrix \(\mathbf{M}\). By optimizing with respect to the \(D\)-criterion we help to minimize the variance of our parameter estimates. For the linear model, \(\mathbf{M}\) is proportional to \(\mathbf{F}^\mathrm{T} \mathbf{F}\) and does not vary based on \(\mathbf{\beta}\). For generalized linear models (with independent runs) the matrix \(\mathbf{M}\) is proportional to \(\mathbf{F}^\mathrm{T} \mathbf{D} \mathbf{F}\) where \(\mathbf{D}\) is a diagonal matrix which is a function of the design matrix and \(\mathbf{\beta}\).

Numerical procedures exist to help speed along updates to \(\mathbf{M}\), \(|\mathbf{M}|\), and \(\mathbf{M}^{-1}\), which helps make computer search for designs feasible and efficient. The following is an excerpt from (Goos 2002):

The low computational cost of updating the information matrix, its inverse and its determinant after a design change is a direct consequence of the fact that the information matrix can be written as a sum of outer products:

\[ \mathbf{F}^\mathrm{T} \mathbf{F} = \sum_{i=1}^n \mathbf{f}(\mathbf{x}_i)\mathbf{f}(\mathbf{x}_i)^{\mathrm{T}} \]

It’s somewhat quirky of statistics to define \(\mathbf{F}\) by row, but it is easy to see that the identity given by Goos holds up. Note that the \(ij\)th element of \(\mathbf{F}^\mathrm{T} \mathbf{F}\) denoted \([\mathbf{F}^\mathrm{T} \mathbf{F}]_{ij}\) is

\[ [\mathbf{F}^\mathrm{T} \mathbf{F}]_{ij} = \sum_{k=1}^n [\mathbf{f}(\mathbf{x}_k)]_i [\mathbf{f}(\mathbf{x}_k)]_j = \sum_{k=1}^n [\mathbf{f}(\mathbf{x}_k) \mathbf{f}(\mathbf{x}_k)^\mathrm{T} ]_{ij}. \]

Most people remember from linear algebra the fact that for square matrices \(\mathbf{A}\) and \(\mathbf{B}\) the determinant

\[ |\mathbf{AB}| = |\mathbf{A}|~|\mathbf{B}|. \]

This product rule gets used repeatedly in the derivations below. Now recall that the information matrix for a GLM with independent runs can always be expressed as \(\mathbf{F}^\mathrm{T} \mathbf{D} \mathbf{F}\) where \(\mathbf{D} = \operatorname{diag}(d_1, \ldots, d_n)\) is a diagonal matrix of weights. Writing \(\mathbf{D} = \mathbf{D}^{1/2}\mathbf{D}^{1/2}\) gives \[\begin{align*} \mathbf{F}^\mathrm{T} \mathbf{D} \mathbf{F} &= \bigl(\mathbf{D}^{1/2}\mathbf{F}\bigr)^\mathrm{T} \bigl(\mathbf{D}^{1/2}\mathbf{F}\bigr) \\ &= \sum_{i=1}^n d_i \, \mathbf{f}(\mathbf{x}_i)\mathbf{f}(\mathbf{x}_i)^{\mathrm{T}}, \end{align*}\] which is still a sum of outer products, just with each row of the model matrix scaled by \(\sqrt{d_i}\). Thus, for GLMs we can use the updating schemes for linear models, which focus on updating \(\mathbf{F}^\mathrm{T} \mathbf{F}\), simply by replacing \(\mathbf{f}(\mathbf{x}_i)\) with \(\sqrt{d_i}\,\mathbf{f}(\mathbf{x}_i)\) everywhere. Updating \(\mathbf{D}\) itself is usually trivially simple, so we’re in good shape to proceed.
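If you want to convince yourself of this numerically, here is a tiny numpy sketch (a made-up model matrix and weights, nothing from the references) showing that the quadratic form, the weighted sum of outer products, and the square-root-weighted cross product all agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3                          # hypothetical: 8 runs, 3 model terms
F = rng.normal(size=(n, p))          # model matrix, rows f(x_i)^T
d = rng.uniform(0.1, 1.0, size=n)    # GLM weights (diagonal of D)

M_quadratic = F.T @ np.diag(d) @ F
M_outer = sum(d[i] * np.outer(F[i], F[i]) for i in range(n))
M_sqrt = (np.sqrt(d)[:, None] * F).T @ (np.sqrt(d)[:, None] * F)

print(np.allclose(M_quadratic, M_outer))   # True
print(np.allclose(M_quadratic, M_sqrt))    # True
```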

Updating Schemes for \(\mathbf{F}^\mathrm{T} \mathbf{F}\)

I’m going to discuss rank 1 and rank 2 updating schemes for \((\mathbf{F}^\mathrm{T} \mathbf{F})^{-1}\) and \(|\mathbf{F}^\mathrm{T} \mathbf{F}|\). Here, a rank 1 update means adding or removing a point, and a rank 2 update means swapping a point for a new one. Most of the results can be found on Wikipedia, but you can also look at (Brodlie, Gourlay, and Greenstadt 1973) for much of this, and (Pearson 1969) gives the last result in his Appendix B.

Let’s say we’re removing a point \(\mathbf{x}\) from the design and adding a point \(\mathbf{y}\) to the design. Then the new information matrix \(\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star\) is

\[ \mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star = \mathbf{F}^\mathrm{T} \mathbf{F} - \mathbf{f}(\mathbf{x}) \mathbf{f}(\mathbf{x})^\mathrm{T} + \mathbf{f}(\mathbf{y}) \mathbf{f}(\mathbf{y})^\mathrm{T}. \]

Next I’ll present rank 1 updates for the inverse and determinant, followed by rank 2 updates for the inverse and determinant.

Rank 1 Inverse Update

We will show that the inverse can be updated using the equality

\[ (\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T})^{-1} = \mathbf{I} - \frac{\mathbf{c} \mathbf{d}^\mathrm{T}}{1+\mathbf{d}^\mathrm{T} \mathbf{c}}. \]

To see this, posit an inverse of the form \(\mathbf{I}-\alpha \mathbf{c} \mathbf{d}^\mathrm{T}\) and solve for \(\alpha\): \[\begin{align*} \mathbf{I} &= (\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T})(\mathbf{I}-\alpha \mathbf{c} \mathbf{d}^\mathrm{T}) \\ &= (\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T})-\alpha (\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T}) \mathbf{c} \mathbf{d}^\mathrm{T} \\ &= \mathbf{I}+(1-\alpha) \mathbf{c} \mathbf{d}^\mathrm{T} - \alpha \mathbf{c} \mathbf{d}^\mathrm{T} \mathbf{c} \mathbf{d}^\mathrm{T} \\ &= \mathbf{I}+(1-\alpha) \mathbf{c} \mathbf{d}^\mathrm{T} - \alpha (\mathbf{d}^\mathrm{T} \mathbf{c}) \mathbf{c} \mathbf{d}^\mathrm{T} \\ &= \mathbf{I} + (1-\alpha-\alpha \mathbf{d}^\mathrm{T} \mathbf{c}) \mathbf{c} \mathbf{d}^\mathrm{T}, \end{align*}\] which equals the identity matrix only when \(1 = \alpha (1+\mathbf{d}^\mathrm{T} \mathbf{c})\), so that

\[ \alpha = \frac{1}{1+\mathbf{d}^\mathrm{T} \mathbf{c}}. \]

In reality we’re concerned not with updating an identity matrix, but with updating some other matrix \(\mathbf{A}\). But it is easy to see that when \(\mathbf{A}\) is invertible then \[\begin{align*} (\mathbf{A}+\mathbf{c}\mathbf{d}^\mathrm{T})^{-1} &= \bigl(\mathbf{I}+(\mathbf{A}^{-1}\mathbf{c})\mathbf{d}^\mathrm{T} \bigr)^{-1}\mathbf{A}^{-1} \\ &= \left(\mathbf{I}-\frac{(\mathbf{A}^{-1}\mathbf{c})\mathbf{d}^\mathrm{T}}{1+\mathbf{d}^\mathrm{T} (\mathbf{A}^{-1}\mathbf{c})}\right) \mathbf{A}^{-1} \\ &= \mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{c}\mathbf{d}^\mathrm{T} \mathbf{A}^{-1}}{1+\mathbf{d}^\mathrm{T} \mathbf{A}^{-1}\mathbf{c}}. \end{align*}\]

Applying this to the problem at hand, removing the point \(\mathbf{x}\) gives

\[ (\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star)^{-1} = (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} + \frac{ (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{x})\mathbf{f}(\mathbf{x})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} }{ 1-\mathbf{f}(\mathbf{x})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{x}) } \]

and adding the point \(\mathbf{y}\) gives

\[ (\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star)^{-1} = (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} - \frac{ (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{y})\mathbf{f}(\mathbf{y})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} }{ 1+\mathbf{f}(\mathbf{y})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{y}) }. \]
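Here is a minimal numpy sketch of both rank-one inverse updates, checked against a direct inverse; the function names and random data are just mine for illustration:

```python
import numpy as np

def add_point_inv(M_inv, f):
    """Update (F'F)^{-1} after adding the row f^T to F."""
    Mf = M_inv @ f
    return M_inv - np.outer(Mf, Mf) / (1.0 + f @ Mf)

def remove_point_inv(M_inv, f):
    """Update (F'F)^{-1} after removing the row f^T from F."""
    Mf = M_inv @ f
    return M_inv + np.outer(Mf, Mf) / (1.0 - f @ Mf)

rng = np.random.default_rng(1)
F = rng.normal(size=(10, 3))
M_inv = np.linalg.inv(F.T @ F)

y = rng.normal(size=3)                       # row being added
F_new = np.vstack([F, y])
print(np.allclose(add_point_inv(M_inv, y),
                  np.linalg.inv(F_new.T @ F_new)))    # True

x = F[0]                                     # row being removed
F_drop = F[1:]
print(np.allclose(remove_point_inv(M_inv, x),
                  np.linalg.inv(F_drop.T @ F_drop)))  # True
```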

One can chain these operations to generate rank 2 updates of the inverse, but there is a direct rank 2 formula we’ll get to later.

Rank 1 Determinant Update

The rank-one update of the determinant is slightly less intuitive, but still fun. Begin by noting that we can express a block matrix

\[ \left[ \begin{array}{cc} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{array} \right] = \left[ \begin{array}{cc} \mathbf{I} & \mathbf{0} \\ \mathbf{C} \mathbf{A}^{-1} & \mathbf{I} \end{array} \right] \left[ \begin{array}{cc} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B} \end{array} \right] \]

when the inverse \(\mathbf{A}^{-1}\) exists.

We can use this to show that the determinant of a block matrix is

\[ \det\left( \left[ \begin{array}{cc} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{array} \right] \right) = \left\{ \begin{array}{cc} \det(\mathbf{A}) \det(\mathbf{D}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B}) & \text{when}~\mathbf{A}^{-1}~\text{exists,}\\ \det(\mathbf{D}) \det(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}) & \text{when}~\mathbf{D}^{-1}~\text{exists.} \end{array} \right. \]

In particular, setting \(\mathbf{C} = \mathbf{0}\), the determinant of a block-triangular matrix is

\[ \det\left( \left[ \begin{array}{cc} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{D} \end{array} \right] \right) = \det(\mathbf{A}) \det(\mathbf{D}). \]

Now, consider the left hand side of the following equation

\[ \left[ \begin{array}{cc} \mathbf{I} & \mathbf{0} \\ \mathbf{d}^\mathrm{T} & 1 \end{array} \right] \left[ \begin{array}{cc} \mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T} & \mathbf{c} \\ \mathbf{0}^\mathrm{T} & 1 \end{array} \right] \left[ \begin{array}{cc} \mathbf{I} & \mathbf{0} \\ -\mathbf{d}^\mathrm{T} & 1 \end{array} \right] = \left[ \begin{array}{cc} \mathbf{I} & \mathbf{c} \\ \mathbf{0}^\mathrm{T} & 1+\mathbf{d}^\mathrm{T} \mathbf{c} \end{array} \right]. \]

The determinants of the three factors on the left are 1, \(\det(\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T})\), and 1 respectively, so the determinant of the left-hand side is \(\det(\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T})\). But the right-hand side is block triangular, so its determinant is \(1 + \mathbf{d}^\mathrm{T} \mathbf{c}\). It must then be that

\[ \det(\mathbf{I}+\mathbf{c} \mathbf{d}^\mathrm{T}) = 1 + \mathbf{d}^\mathrm{T} \mathbf{c}. \]

Again, we find by factoring out an \(\mathbf{A}\) that

\[ \det(\mathbf{A}+\mathbf{c} \mathbf{d}^\mathrm{T}) = \det(\mathbf{A})(1+\mathbf{d}^\mathrm{T} \mathbf{A}^{-1} \mathbf{c}). \]

Applying this to the problem at hand we find that removing a point \(\mathbf{x}\) gives us

\[ |\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star| = |\mathbf{F}^\mathrm{T} \mathbf{F}| \Bigl(1-\mathbf{f}(\mathbf{x})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{x}) \Bigr), \]

and adding a point \(\mathbf{y}\) gives us

\[ |\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star| = |\mathbf{F}^\mathrm{T} \mathbf{F}| \Bigl(1+\mathbf{f}(\mathbf{y})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{y}) \Bigr). \]
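Again, a quick numerical sanity check of both rank-one determinant updates (random data, my own variable names):

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.normal(size=(10, 3))
M = F.T @ F
det_M, M_inv = np.linalg.det(M), np.linalg.inv(M)

y = rng.normal(size=3)                      # point being added
det_add = det_M * (1.0 + y @ M_inv @ y)
print(np.isclose(det_add, np.linalg.det(M + np.outer(y, y))))    # True

x = F[0]                                    # point being removed
det_remove = det_M * (1.0 - x @ M_inv @ x)
print(np.isclose(det_remove, np.linalg.det(M - np.outer(x, x)))) # True
```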

Rank 2 Inverse Update

We can directly apply the Sherman-Morrison-Woodbury theorem to get this update. (Sidebar: when I took a course in linear statistical models we called this the Woody Harrelson theorem because we couldn’t remember all the names). The result states that:

\[ (\mathbf{A} + \mathbf{U} \mathbf{C} \mathbf{V})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U}(\mathbf{C}^{-1} + \mathbf{V} \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V} \mathbf{A}^{-1} \]

where of course \(\mathbf{A}\) and \(\mathbf{C}\) must be invertible. There are several good proofs on Wikipedia so I won’t reproduce them here, but let’s spell out the application in our context.

We want to find the matrix resulting from moving \(\mathbf{x}_i\) to \(\mathbf{y}_i\) in the design matrix, so in the model matrix we want:

\[ \mathbf{F}^\mathrm{T} \mathbf{F} + \bigl(\mathbf{f}(\mathbf{y}_i)\mathbf{f}(\mathbf{y}_i)^\mathrm{T} - \mathbf{f}(\mathbf{x}_i)\mathbf{f}(\mathbf{x}_i)^\mathrm{T}\bigr). \]

It’s easy to show that

\[ \mathbf{f}(\mathbf{y}_i)\mathbf{f}(\mathbf{y}_i)^\mathrm{T} - \mathbf{f}(\mathbf{x}_i)\mathbf{f}(\mathbf{x}_i)^\mathrm{T} = \left[ \begin{array}{c|c} \mathbf{f}(\mathbf{y}_i) & \mathbf{f}(\mathbf{x}_i) \end{array} \right] \underbrace{ \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right] }_{ \mathbf{C} } \left[ \begin{array}{c} \mathbf{f}(\mathbf{y}_i)^\mathrm{T} \\ \hline \mathbf{f}(\mathbf{x}_i)^\mathrm{T} \end{array} \right] \]

and that \(\mathbf{C}^{-1} = \mathbf{C}\). That makes the rank-2 update of the inverse very simple indeed, only requiring the calculation of a \(2 \times 2\) inverse: \[\begin{align*} (\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star)^{-1} &= (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} - (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \left[\begin{array}{c|c} \mathbf{f}(\mathbf{y}) & \mathbf{f}(\mathbf{x}) \end{array}\right] \Biggl( \\ &\phantom{=}\quad \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right] + \left[\begin{array}{c} \mathbf{f}(\mathbf{y})^\mathrm{T} \\ \hline \mathbf{f}(\mathbf{x})^\mathrm{T} \end{array}\right] (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \left[\begin{array}{c|c} \mathbf{f}(\mathbf{y}) & \mathbf{f}(\mathbf{x}) \end{array}\right] \\ &\phantom{=} \Biggr)^{-1} \left[\begin{array}{c} \mathbf{f}(\mathbf{y})^\mathrm{T} \\ \hline \mathbf{f}(\mathbf{x})^\mathrm{T} \end{array}\right] (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1}. \end{align*}\]
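A minimal numpy sketch of this rank-two inverse update, checked against refitting from scratch (the helper name is mine):

```python
import numpy as np

def swap_point_inv(M_inv, f_x, f_y):
    """Update (F'F)^{-1} after replacing row f_x^T with f_y^T (rank-2 Woodbury)."""
    U = np.column_stack([f_y, f_x])              # p x 2
    C = np.diag([1.0, -1.0])                     # C = C^{-1}
    inner = np.linalg.inv(C + U.T @ M_inv @ U)   # only a 2 x 2 inverse needed
    return M_inv - M_inv @ U @ inner @ U.T @ M_inv

rng = np.random.default_rng(3)
F = rng.normal(size=(10, 3))
M_inv = np.linalg.inv(F.T @ F)

f_x, f_y = F[4].copy(), rng.normal(size=3)       # swap row 4 for a new point
F_new = F.copy()
F_new[4] = f_y
print(np.allclose(swap_point_inv(M_inv, f_x, f_y),
                  np.linalg.inv(F_new.T @ F_new)))   # True
```

Only a \(2 \times 2\) matrix ever gets inverted, which is the whole point.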

Rank 2 Determinant Update

A rank 2 update of the determinant is given by (Goos 2002) as \[\begin{align*} \det(\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star) &= \det(\mathbf{F}^\mathrm{T} \mathbf{F})\biggl( \\ &\phantom{=}\qquad \bigl(1-\mathbf{f}(\mathbf{x})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{x}) \bigr) \bigl(1+\mathbf{f}(\mathbf{y})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{y}) \bigr) \\ &\phantom{=}\qquad+ \bigl(\mathbf{f}(\mathbf{x})^\mathrm{T} (\mathbf{F}^\mathrm{T} \mathbf{F})^{-1} \mathbf{f}(\mathbf{y})\bigr)^2 \\ &\phantom{=}\biggr). \end{align*}\] To demonstrate this we need a few intermediate results. We begin by showing the existence of a particular linear transformation. Next, we use this to find the determinant of a simpler form. Finally, we show that by factoring out \(\mathbf{F}^\mathrm{T} \mathbf{F}\) we can restate the problem in this form and arrive at the above equality.
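Before working through the derivation, here is a quick numerical check of Goos’s formula on a random model matrix (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.normal(size=(10, 3))
M = F.T @ F
det_M, M_inv = np.linalg.det(M), np.linalg.inv(M)

f_x, f_y = F[2], rng.normal(size=3)          # swap row 2 for a new point
det_swap = det_M * ((1.0 - f_x @ M_inv @ f_x) * (1.0 + f_y @ M_inv @ f_y)
                    + (f_x @ M_inv @ f_y) ** 2)

M_new = M - np.outer(f_x, f_x) + np.outer(f_y, f_y)
print(np.isclose(det_swap, np.linalg.det(M_new)))    # True
```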

Existence of Transformation Matrix \(\mathbf{T}\)

Demonstrating this first requires that we establish a specific matrix \(\mathbf{T}\) representing a change of coordinate basis. We need \(\mathbf{T}\) to be full rank and thus invertible. We want a very specific \(\mathbf{T}\) satisfying the following properties: \[\begin{align*} \mathbf{T} \mathbf{x} &= \mathbf{e}_1,\quad\text{and} \\ \mathbf{T} \mathbf{y} &= \mathbf{e}_2 \end{align*}\] where \(\mathbf{e}_i\) is \(1\) in the \(i\)th coordinate and \(0\) everywhere else. (Pearson 1969) just states that such a matrix exists, and it clearly does, which is sufficient for the argument, but let’s construct it anyway. Define \[\begin{align*} \mathbf{H}_{\mathbf{x}}(\mathbf{z}) &= \frac{\mathbf{y}^\mathrm{T} \mathbf{y}}{\mathbf{x}^\mathrm{T} \mathbf{x}\, \mathbf{y}^\mathrm{T} \mathbf{y} - (\mathbf{x}^\mathrm{T} \mathbf{y})^2 } \mathbf{z} \mathbf{x}^\mathrm{T} - \frac{\mathbf{x}^\mathrm{T} \mathbf{y}}{\mathbf{x}^\mathrm{T} \mathbf{x}\, \mathbf{y}^\mathrm{T} \mathbf{y} - (\mathbf{x}^\mathrm{T} \mathbf{y})^2 } \mathbf{z} \mathbf{y}^\mathrm{T}, \quad\text{and}\\ \mathbf{H}_{\mathbf{y}}(\mathbf{z}) &= \frac{\mathbf{x}^\mathrm{T} \mathbf{x}}{\mathbf{x}^\mathrm{T} \mathbf{x}\, \mathbf{y}^\mathrm{T} \mathbf{y} - (\mathbf{x}^\mathrm{T} \mathbf{y})^2 } \mathbf{z} \mathbf{y}^\mathrm{T} - \frac{\mathbf{x}^\mathrm{T} \mathbf{y}}{\mathbf{x}^\mathrm{T} \mathbf{x}\, \mathbf{y}^\mathrm{T} \mathbf{y} - (\mathbf{x}^\mathrm{T} \mathbf{y})^2 } \mathbf{z} \mathbf{x}^\mathrm{T}, \end{align*}\] where the denominators are nonzero whenever \(\mathbf{x}\) and \(\mathbf{y}\) are linearly independent. Note that

\[ \begin{array}{ll} \mathbf{H}_\mathbf{x}(\mathbf{e}_1)\mathbf{x} = \mathbf{e}_1, & \mathbf{H}_\mathbf{x}(\mathbf{e}_1)\mathbf{y} = \mathbf{0}, \\ \mathbf{H}_\mathbf{y}(\mathbf{e}_2)\mathbf{x} = \mathbf{0}, & \mathbf{H}_\mathbf{y}(\mathbf{e}_2)\mathbf{y} = \mathbf{e}_2. \\ \end{array} \]

The transformation \(\mathbf{T}\) can then be assembled:

\[ \mathbf{T} = \mathbf{H}_\mathbf{x}(\mathbf{e}_1) + \mathbf{H}_\mathbf{y}(\mathbf{e}_2) + \Bigl( \mathbf{I} - \mathbf{H}_\mathbf{x}(\mathbf{x}) - \mathbf{H}_\mathbf{y}(\mathbf{y}) \Bigr) \]

which is (usually) full rank and thus \(\mathbf{T}^{-1}\) exists and \(|\mathbf{T}| \ne 0\).

We don’t need to worry about the specific form of \(\mathbf{T}^{-1}\) except to note that, necessarily, \(\mathbf{T}^{-1} \mathbf{e}_1 = \mathbf{x}\) and \(\mathbf{T}^{-1} \mathbf{e}_2 = \mathbf{y}\). That’s all we need to continue with the argument from (Pearson 1969).
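If you’d like to see the construction concretely, here is a small numpy sketch of \(\mathbf{T}\) built exactly as above (the function name make_T and the random test vectors are my own):

```python
import numpy as np

def make_T(x, y):
    """Build T with T @ x = e1 and T @ y = e2 (x, y linearly independent)."""
    n = len(x)
    denom = (x @ x) * (y @ y) - (x @ y) ** 2
    g_x = ((y @ y) * x - (x @ y) * y) / denom    # g_x @ x = 1, g_x @ y = 0
    g_y = ((x @ x) * y - (x @ y) * x) / denom    # g_y @ x = 0, g_y @ y = 1
    e1, e2 = np.eye(n)[0], np.eye(n)[1]
    # T = H_x(e1) + H_y(e2) + (I - H_x(x) - H_y(y)), with H_x(z) = z g_x^T, etc.
    return (np.outer(e1, g_x) + np.outer(e2, g_y)
            + np.eye(n) - np.outer(x, g_x) - np.outer(y, g_y))

rng = np.random.default_rng(5)
x, y = rng.normal(size=4), rng.normal(size=4)
T = make_T(x, y)
print(np.allclose(T @ x, np.eye(4)[0]))   # True
print(np.allclose(T @ y, np.eye(4)[1]))   # True
print(abs(np.linalg.det(T)) > 1e-12)      # full rank here
```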

Determinant of \(|\mathbf{I} + \mathbf{x} \mathbf{y}^\mathrm{T} + \mathbf{u} \mathbf{v}^\mathrm{T}|\)

Let \(\mathbf{T}\) be a non-singular transformation and recall that, in general, the determinant of an inverse is the reciprocal of the determinant. Then note that

\[ |\mathbf{T}^{-1} \mathbf{A} \mathbf{T}| = |\mathbf{T}^{-1}|| \mathbf{A} ||\mathbf{T}| = | \mathbf{A} |. \]

For any linearly independent vectors \(\mathbf{y}\) and \(\mathbf{v}\) we have

\[ |\mathbf{I} + \mathbf{x} \mathbf{y}^\mathrm{T} + \mathbf{u} \mathbf{v}^\mathrm{T}| = \Bigl|\mathbf{T}^{-1}\bigl( \mathbf{I} + \mathbf{x} \mathbf{y}^\mathrm{T} + \mathbf{u} \mathbf{v}^\mathrm{T} \bigr) \mathbf{T} \Bigr| = |\mathbf{I} + \mathbf{T}^{-1} \mathbf{x} \mathbf{y}^\mathrm{T} \mathbf{T} + \mathbf{T}^{-1} \mathbf{u} \mathbf{v}^\mathrm{T} \mathbf{T}|. \]

Here we choose \(\mathbf{T}\) so that the following hold true: \[\begin{align*} \mathbf{y}^\mathrm{T} \mathbf{T} = \mathbf{e}_1^\mathrm{T}, \quad\text{and}\quad \mathbf{v}^\mathrm{T} \mathbf{T} = \mathbf{e}_2^\mathrm{T}. \end{align*}\] (Such a \(\mathbf{T}\) exists by the previous section: construct the matrix that sends \(\mathbf{y}\) to \(\mathbf{e}_1\) and \(\mathbf{v}\) to \(\mathbf{e}_2\), then take its transpose.) Now let \(\mathbf{T}^{-1} \mathbf{x} = \mathbf{a}\) and \(\mathbf{T}^{-1} \mathbf{u} = \mathbf{b}\). Looking at the determinant by cofactor expansion, its value becomes

\[ \biggl| \left[\begin{array}{c|c|c|c|c|c} \mathbf{e}_1+\mathbf{a} & \mathbf{e}_2 + \mathbf{b} & \mathbf{e}_3 & \mathbf{e}_4 & \ldots & \mathbf{e}_n \end{array}\right]\biggr| = \bigl( (1+a_1)(1+b_2) - a_2 b_1 \bigr) \]

where \[\begin{align*} a_1 &= \mathbf{e}_1^\mathrm{T} \mathbf{a} = \mathbf{e}_1^\mathrm{T} \mathbf{T}^{-1} \mathbf{x} = \mathbf{y}^\mathrm{T} \mathbf{x}, \\ a_2 &= \mathbf{e}_2^\mathrm{T} \mathbf{a} = \mathbf{e}_2^\mathrm{T} \mathbf{T}^{-1} \mathbf{x} = \mathbf{v}^\mathrm{T} \mathbf{x}, \\ b_1 &= \mathbf{e}_1^\mathrm{T} \mathbf{b} = \mathbf{e}_1^\mathrm{T} \mathbf{T}^{-1} \mathbf{u} = \mathbf{y}^\mathrm{T} \mathbf{u}, \quad\text{and}\\ b_2 &= \mathbf{e}_2^\mathrm{T} \mathbf{b} = \mathbf{e}_2^\mathrm{T} \mathbf{T}^{-1} \mathbf{u} = \mathbf{v}^\mathrm{T} \mathbf{u}. \end{align*}\]

Thus,

\[ |\mathbf{I} + \mathbf{x} \mathbf{y}^\mathrm{T} + \mathbf{u} \mathbf{v}^\mathrm{T}| = (1+\mathbf{x}^\mathrm{T} \mathbf{y}) (1+\mathbf{u}^\mathrm{T} \mathbf{v}) - (\mathbf{v}^\mathrm{T} \mathbf{x})(\mathbf{y}^\mathrm{T} \mathbf{u}). \]
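A quick numerical check of this identity (random vectors, nothing special about the dimension):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
x, y, u, v = (rng.normal(size=n) for _ in range(4))

lhs = np.linalg.det(np.eye(n) + np.outer(x, y) + np.outer(u, v))
rhs = (1.0 + x @ y) * (1.0 + u @ v) - (v @ x) * (y @ u)
print(np.isclose(lhs, rhs))   # True
```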

Applying to Update Determinant

Returning to our original labeling scheme, note that \[\begin{align*} \mathbf{F}^\mathrm{T} \mathbf{F} - \mathbf{f}(\mathbf{x})\mathbf{f}(\mathbf{x})^\mathrm{T} + \mathbf{f}(\mathbf{y})\mathbf{f}(\mathbf{y})^\mathrm{T} &= \bigl(\mathbf{F}^\mathrm{T} \mathbf{F}\bigr) \Bigl( \mathbf{I} \underbrace{ - \bigl(\mathbf{F}^\mathrm{T} \mathbf{F}\bigr)^{-1} \mathbf{f}(\mathbf{x}) }_{\mathbf{a}} \underbrace{ \mathbf{f}(\mathbf{x})^\mathrm{T} }_{\mathbf{b}^\mathrm{T}} + \underbrace{ \bigl(\mathbf{F}^\mathrm{T} \mathbf{F}\bigr)^{-1} \mathbf{f}(\mathbf{y}) }_{\mathbf{c}} \underbrace{ \mathbf{f}(\mathbf{y})^\mathrm{T} }_{\mathbf{d}^\mathrm{T}} \Bigr) \\ &= \bigl(\mathbf{F}^\mathrm{T} \mathbf{F}\bigr) \Bigl( \mathbf{I} + \mathbf{a} \mathbf{b}^\mathrm{T} + \mathbf{c} \mathbf{d}^\mathrm{T} \Bigr). \end{align*}\] Taking determinants and using \(|\mathbf{AB}| = |\mathbf{A}|~|\mathbf{B}|\) gives \(\det(\mathbf{F}_\star^\mathrm{T} \mathbf{F}_\star) = \det(\mathbf{F}^\mathrm{T} \mathbf{F}) \, \bigl|\mathbf{I} + \mathbf{a}\mathbf{b}^\mathrm{T} + \mathbf{c}\mathbf{d}^\mathrm{T}\bigr|\). Applying Pearson’s result to the second factor, the minus sign carried by \(\mathbf{a}\) cancels the minus sign on the cross term and produces the squared term, which is exactly the expression given by Goos.

How is this even used?

One caution I’ve read frequently about using updating formulas involves the accumulation of round-off error in the result. Authors have advised recalculating the inverse and determinant from scratch periodically to keep things sane. I’ve always done this, so I’m not sure when, or even whether, it’s strictly necessary.

In designing experiments using the \(D\)-optimality criterion we want to search across the design space to optimize the determinant. Recent literature prefers the coordinate-exchange algorithm of (Meyer and Nachtsheim 1995), which optimizes a single coordinate of the design matrix at a time, precisely because of the fast updating schemes available. This does not, however, tell you how to do such updating.

I’ve had luck using Brent’s method for these one-dimensional minimizations. The method alternates between fitting a quadratic through three local points to guess the new minimum and falling back to regular golden-section search when the parabolic interpolation looks like it will fail. It can be very fast at finding an optimum. However, such a method is very single-threaded in terms of computational requirements. A simple grid search is embarrassingly parallelizable and may be easy to offload onto a GPU for really fast performance.
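To make the pieces concrete, here is a minimal sketch of a single coordinate-exchange step that scores candidate values with the rank-two determinant update and uses scipy’s bounded scalar minimizer (a Brent-style method) for the one-dimensional search. The toy model (intercept, two main effects, and their interaction), the helper names, and the cuboidal design region are all my own choices, not anything prescribed by the references:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_row(x):
    """Hypothetical model: intercept, two main effects, and their interaction."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2])

def det_after_swap(det_M, M_inv, f_old, f_new):
    """Rank-2 determinant update for swapping one model-matrix row (Goos 2002)."""
    return det_M * ((1.0 - f_old @ M_inv @ f_old) * (1.0 + f_new @ M_inv @ f_new)
                    + (f_old @ M_inv @ f_new) ** 2)

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(8, 2))          # starting design: 8 runs, 2 factors
F = np.array([model_row(x) for x in X])
M = F.T @ F
det_M, M_inv = np.linalg.det(M), np.linalg.inv(M)

# One coordinate-exchange step: re-optimize coordinate j of run i.
i, j = 0, 1
f_old = model_row(X[i])

def neg_det(t):
    x_new = X[i].copy()
    x_new[j] = t
    return -det_after_swap(det_M, M_inv, f_old, model_row(x_new))

res = minimize_scalar(neg_det, bounds=(-1.0, 1.0), method="bounded")
X[i, j] = res.x          # accept (a real search would accept only improvements)

# Recompute from scratch afterwards to keep round-off error in check.
F = np.array([model_row(x) for x in X])
M = F.T @ F
det_M, M_inv = np.linalg.det(M), np.linalg.inv(M)
print(det_M)
```

A full implementation would sweep over every run and every coordinate, accept an exchange only when the determinant improves, and recompute \(\mathbf{M}\), its inverse, and its determinant from scratch every so often, as cautioned above.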

References

Brodlie, Ken W., A. R. Gourlay, and John Greenstadt. 1973. “Rank-One and Rank-Two Corrections to Positive Definite Matrices Expressed in Product Form.” IMA Journal of Applied Mathematics 11 (1): 73–82.
Goos, P. 2002. The Optimal Design of Blocked and Split-Plot Experiments. Springer Verlag.
Meyer, Ruth K., and Christopher J. Nachtsheim. 1995. “The Coordinate-Exchange Algorithm for Constructing Exact Optimal Experimental Designs.” Technometrics 37 (1): 60–69.
Pearson, John D. 1969. “Variable Metric Methods of Minimisation.” The Computer Journal 12 (2): 171–78.