Derivation of the optimal linear estimator

 

The input data coming from FLIRT, which is to be used as the basis for the reconstruction, is incomplete; it contains only some of the dimensions of a complete head vector. In order to project this incomplete vector onto an eigenspace, we must estimate its unknown dimensions. We restrict the estimator to be linear, and here derive the form of the optimal linear estimator. The following notation will be used:

$\mathbf{o}$ : observation data (known)
$\mathbf{m}$ : missing data (unknown, but we have this in the example scanned heads)
$\mathbf{c}$ : complete input vector (i.e., a head)
$\mathbf{R}$ : covariance matrix of the data
$\mathbf{R} = \mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^{T}$ : eigenvector decomposition of $\mathbf{R}$ ($\mathbf{U}$ holds the eigenvectors as columns, $\boldsymbol{\Lambda}$ the eigenvalues on its diagonal)
$\mathbf{R}_{mo}$ or $\mathbf{R}_{om}$ : cross-covariance matrix
First, we show that, without any assumptions on the probability density function (PDF) of the data, the minimum mean squared error criterion

\[
\min_{\hat{\mathbf{m}}}\; E\left\{ \|\mathbf{m} - \hat{\mathbf{m}}\|^{2} \;\big|\; \mathbf{o} \right\}
\]

leads to the conditional expectation $E\{\mathbf{m} \,|\, \mathbf{o}\}$ as the optimal estimator. $\mathbf{o}$ is given (observed). $\hat{\mathbf{m}}$ is a function of $\mathbf{o}$, and is the output of the estimator. We want to find the value of $\hat{\mathbf{m}}$ which minimizes the squared error over all choices of $\hat{\mathbf{m}}$ for a given $\mathbf{o}$.

Proof:
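A sketch of the standard argument, in the notation above (no assumption on the PDF is needed beyond the existence of the required expectations): write $\mathbf{g}(\mathbf{o}) = E\{\mathbf{m} \,|\, \mathbf{o}\}$. For any estimate $\hat{\mathbf{m}}$, which is a function of $\mathbf{o}$ alone,

\[
E\left\{ \|\mathbf{m} - \hat{\mathbf{m}}\|^{2} \;\big|\; \mathbf{o} \right\}
= E\left\{ \|\mathbf{m} - \mathbf{g}\|^{2} \;\big|\; \mathbf{o} \right\}
+ 2\,(\mathbf{g} - \hat{\mathbf{m}})^{T}\, E\left\{ \mathbf{m} - \mathbf{g} \;\big|\; \mathbf{o} \right\}
+ \|\mathbf{g} - \hat{\mathbf{m}}\|^{2} .
\]

The middle term vanishes because $E\{\mathbf{m} - \mathbf{g} \,|\, \mathbf{o}\} = \mathbf{0}$, and the first term does not depend on $\hat{\mathbf{m}}$; the criterion is therefore minimized exactly when the last term is zero, i.e. when $\hat{\mathbf{m}} = \mathbf{g}(\mathbf{o}) = E\{\mathbf{m} \,|\, \mathbf{o}\}$.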

Now we derive the scalar form of the optimal linear estimator for $m$ given $o$. We seek a scalar $a$ to minimize the new criterion $E\{(m - a\,o)^{2}\}$.
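Carrying the minimization through explicitly (a short worked version in the notation above; $a$ is the scalar gain being sought):

\[
E\{(m - a\,o)^{2}\} = E\{m^{2}\} - 2\,a\,E\{m\,o\} + a^{2}\,E\{o^{2}\} ,
\qquad
\frac{d}{da}\, E\{(m - a\,o)^{2}\} = -2\,E\{m\,o\} + 2\,a\,E\{o^{2}\} = 0 ,
\]

which gives $a = E\{m\,o\}\,/\,E\{o^{2}\} = R_{mo}\,R_{oo}^{-1}$, the scalar analogue of the matrix result derived next.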

Now we consider the vector case, where $\mathbf{m}$ and $\mathbf{o}$ are vectors, and $\mathbf{A}$ is a matrix. We seek to estimate the missing part of the vector, $\mathbf{m}$, given $\mathbf{o}$. The new criterion to be minimized is $E\{\|\mathbf{m} - \mathbf{A}\,\mathbf{o}\|^{2}\} = \operatorname{tr}\, E\left\{(\mathbf{m} - \mathbf{A}\,\mathbf{o})(\mathbf{m} - \mathbf{A}\,\mathbf{o})^{T}\right\}$. Note that $\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}(\mathbf{A}\,\mathbf{B}) = \mathbf{B}^{T}$; this derivative rule is used repeatedly below. Setting the derivative of the criterion with respect to $\mathbf{A}$ to zero,

\[
\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\, E\left\{(\mathbf{m} - \mathbf{A}\,\mathbf{o})(\mathbf{m} - \mathbf{A}\,\mathbf{o})^{T}\right\} = \mathbf{0} . \qquad (4.1)
\]

Expanding the product and differentiating term by term,

\[
\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\, E\{\mathbf{m}\,\mathbf{m}^{T}\}
\;-\; 2\,\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\!\left(\mathbf{A}\, E\{\mathbf{o}\,\mathbf{m}^{T}\}\right)
\;+\; \frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\!\left(\mathbf{A}\, E\{\mathbf{o}\,\mathbf{o}^{T}\}\,\mathbf{A}^{T}\right)
= \mathbf{0} . \qquad (4.2)
\]
First we will simplify the last term using the product rule, grouping the $\mathbf{A}$ with $E\{\mathbf{o}\,\mathbf{o}^{T}\}\,\mathbf{A}^{T}$ and the $\mathbf{A}\,E\{\mathbf{o}\,\mathbf{o}^{T}\}$ with $\mathbf{A}^{T}$, using the derivative rule above, and using the identity $\operatorname{tr}(\mathbf{X}) = \operatorname{tr}(\mathbf{X}^{T})$. Substituting $\mathbf{R}_{oo}$ for $E\{\mathbf{o}\,\mathbf{o}^{T}\}$,

\[
\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\!\left(\mathbf{A}\,\mathbf{R}_{oo}\,\mathbf{A}^{T}\right)
= \mathbf{A}\,\mathbf{R}_{oo}^{T} + \mathbf{A}\,\mathbf{R}_{oo}
= 2\,\mathbf{A}\,\mathbf{R}_{oo} ,
\]

since $\mathbf{R}_{oo}$ is symmetric.
It is obvious that the first term in Equation 4.2 reduces to $\mathbf{0}$, since it does not depend on $\mathbf{A}$. Simplifying the second term,

\[
\frac{\partial}{\partial \mathbf{A}}\, \operatorname{tr}\!\left(\mathbf{A}\, E\{\mathbf{o}\,\mathbf{m}^{T}\}\right)
= \left(E\{\mathbf{o}\,\mathbf{m}^{T}\}\right)^{T}
= E\{\mathbf{m}\,\mathbf{o}^{T}\}
= \mathbf{R}_{mo} .
\]
Substituting back into Equation 4.1,

\[
-2\,\mathbf{R}_{mo} + 2\,\mathbf{A}\,\mathbf{R}_{oo} = \mathbf{0}
\qquad\Longrightarrow\qquad
\mathbf{A} = \mathbf{R}_{mo}\,\mathbf{R}_{oo}^{-1} , \qquad (4.3)
\]

so the optimal linear estimate of the missing data is

\[
\hat{\mathbf{m}} = \mathbf{R}_{mo}\,\mathbf{R}_{oo}^{-1}\,\mathbf{o} . \qquad (4.4)
\]
We can simply replace m with c in the above equations to find the matrix which takes in the observed data and outputs the entire (missing plus observed data) head vector.
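Written out explicitly in the notation above, this substitution gives the complete-vector estimator

\[
\hat{\mathbf{c}} = \mathbf{R}_{co}\,\mathbf{R}_{oo}^{-1}\,\mathbf{o} ,
\]

where $\mathbf{R}_{co}$ is the cross-covariance between the complete vector and the observed dimensions.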

Now we will rewrite this equation to simplify it, by using the eigenvector decomposition of the covariance matrix $\mathbf{R}$. Our goal is to represent it in terms of the eigenvalues and eigenvectors of the covariance matrix (already found in the eigenspace computations) and the eigenvectors of an on-diagonal block of $\mathbf{R}$. Recall from the definition of a covariance matrix and the ordering we imposed on our head vectors $\mathbf{c}$ at the start (observed dimensions first, then missing dimensions):

\[
\mathbf{R} =
\begin{bmatrix}
\mathbf{R}_{oo} & \mathbf{R}_{om} \\
\mathbf{R}_{mo} & \mathbf{R}_{mm}
\end{bmatrix} .
\]
In the equations below, $\mathbf{S}$ is a selector matrix, which selects particular rows or columns of $\mathbf{R}$ (the rows or columns corresponding to the observed dimensions, so that $\mathbf{o} = \mathbf{S}\,\mathbf{c}$). The selector matrix has the form of a subset of the rows of the identity matrix; with the ordering above,

\[
\mathbf{S} = \begin{bmatrix} \mathbf{I} & \mathbf{0} \end{bmatrix} .
\]
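As a small illustration (a hypothetical four-dimensional case, not taken from the head data): if only the first two of four dimensions are observed, then

\[
\mathbf{S} =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix},
\qquad
\mathbf{o} = \mathbf{S}\,\mathbf{c} =
\begin{bmatrix} c_{1} \\ c_{2} \end{bmatrix},
\qquad
\mathbf{R}_{oo} = \mathbf{S}\,\mathbf{R}\,\mathbf{S}^{T}
\]

is the upper-left $2 \times 2$ block of $\mathbf{R}$.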

Now we consider the cross-covariance matrix $\mathbf{R}_{co}$ (replacing $\mathbf{R}_{mo}$ in equation 4.4). Since $\mathbf{o} = \mathbf{S}\,\mathbf{c}$, both $\mathbf{R}_{co}$ and $\mathbf{R}_{oo}$ are simply selections from $\mathbf{R}$:

\[
\mathbf{R}_{co} = \mathbf{R}\,\mathbf{S}^{T} = \mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^{T}\,\mathbf{S}^{T} ,
\qquad
\mathbf{R}_{oo} = \mathbf{S}\,\mathbf{R}\,\mathbf{S}^{T} = \mathbf{S}\,\mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^{T}\,\mathbf{S}^{T} .
\]

Combining the decompositions of $\mathbf{R}_{co}$ and $\mathbf{R}_{oo}$,

\[
\hat{\mathbf{c}} = \mathbf{R}_{co}\,\mathbf{R}_{oo}^{-1}\,\mathbf{o}
= \mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^{T}\,\mathbf{S}^{T}
\left(\mathbf{S}\,\mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^{T}\,\mathbf{S}^{T}\right)^{-1}\mathbf{o} .
\]

Performing a change of variables, letting $\mathbf{U}_{o} = \mathbf{S}\,\mathbf{U}$ denote the reduced eigenvector matrix (the rows of $\mathbf{U}$ corresponding to the observed dimensions),

\[
\hat{\mathbf{c}} = \mathbf{U}\left[\,\boldsymbol{\Lambda}\,\mathbf{U}_{o}^{T}
\left(\mathbf{U}_{o}\,\boldsymbol{\Lambda}\,\mathbf{U}_{o}^{T}\right)^{-1}\right]\mathbf{o} .
\]
The optimal linear estimator can thus be thought of as separate encoding and decoding phases. The decoding phase is, as before, simply taking the sum of scaled versions of the eigenvectors of the head data set. The encoding phase involves taking the product of the reduced input vector (containing only those dimensions actually observed by the tracking system) with the bracketed matrix above, which involves the pseudoinverse of the reduced eigenvector matrix $\mathbf{U}_{o} = \mathbf{S}\,\mathbf{U}$.

Note that if the observed dimensions of the input head change over time, this pseudoinverse must be recomputed. However, FLIRT does not store "memory" of past observations, but always provides texture data at the same coordinates in the texture map, so this computation may be done off line. Even if it had to be moved into the reconstruction loop, it would not slow down the system unacceptably: the reconstruction step takes roughly ten seconds, which is about the time needed to compute this pseudoinverse.
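To make the encode/decode structure concrete, the following sketch (hypothetical variable names; NumPy is used for the linear algebra, the plain Moore-Penrose pseudoinverse stands in for the bracketed encoding matrix above, and head vectors are assumed to be mean-centered as in the eigenspace computation) precomputes the pseudoinverse of the reduced eigenvector matrix once, off line, and then applies the estimator to each incoming observation:

    import numpy as np

    def build_encoder(U, observed_idx):
        """Precompute the encoding matrix, a pseudoinverse of the reduced
        eigenvector matrix S U. Done off line, since the observed texture
        coordinates do not change from frame to frame."""
        U_o = U[observed_idx, :]      # reduced eigenvector matrix S U, shape (n_o, k)
        return np.linalg.pinv(U_o)    # shape (k, n_o)

    def reconstruct(o, U, encoder):
        """Encode the observed dimensions, then decode to a complete head vector."""
        alpha = encoder @ o           # encoding: eigenspace coefficients
        return U @ alpha              # decoding: sum of scaled eigenvectors of the head data

Here U is the (n x k) matrix whose columns are the k leading eigenvectors of $\mathbf{R}$, observed_idx lists the dimensions the tracker actually supplies, and o is the reduced input vector. Only build_encoder needs to be rerun if the set of observed dimensions ever changes.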



Kenneth B Russell
Mon May 5 14:33:03 EDT 1997