
Details of the modular eigenspace algorithm

 

There are many ways to look at an eigenvector or eigenfunction analysis of a data set or linear system: it decorrelates the data as much as possible; it finds the fundamental modes of the system; it minimizes the mean-squared error of reconstructing the original data points or vectors. However, since our goal is to code the input data, we will treat it here as a coding problem.

All of the elements in our data set, the head models, can be considered points or vectors in a high-dimensional space by concatenating the rows of the range and texture maps to form one long vector per head; each vector contains roughly 65,000 dimensions for a subsampled head. We perform an eigenvector decomposition on this data set (a matrix containing these vectors as its columns) by computing the covariance matrix of the data set and then finding the eigenvectors of that covariance matrix. (In practice, we perform a well-known equivalent operation which avoids computing the full 65,000 by 65,000 covariance matrix; it is described in, for example, [2], and is equivalent to the singular value decomposition.)
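
To illustrate that equivalent computation, the following is a minimal numpy sketch of the Gram-matrix ("snapshot") approach: the eigenvectors of the small N-by-N matrix formed from the centered data (N = number of heads) are lifted back into head space, instead of diagonalizing the full 65,000 by 65,000 covariance matrix. The function and variable names here are illustrative, not part of the system described in this thesis.

    import numpy as np

    def eigenheads(data):
        """Eigenvectors of the covariance of `data` (one ~65,000-dimensional
        head vector per column), computed from the small Gram matrix rather
        than the full 65,000 x 65,000 covariance matrix."""
        mean = data.mean(axis=1, keepdims=True)
        centered = data - mean                     # center the data set
        gram = centered.T @ centered               # N x N, N = number of heads
        evals, V = np.linalg.eigh(gram)            # small eigenproblem
        order = np.argsort(evals)[::-1]            # decreasing eigenvalue order
        evals, V = evals[order], V[:, order]
        keep = evals > 1e-10 * evals[0]            # drop numerically zero modes
        U = centered @ V[:, keep]                  # lift back into head space
        U /= np.linalg.norm(U, axis=0)             # orthonormal eigenvector columns
        return U, mean

The columns of the returned matrix correspond to the rows of the eigenvector matrix used in the discussion below.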

The eigenvectors of a covariance matrix can be chosen orthonormal, because a covariance matrix is by definition symmetric [7, p. 273]. Let $E$ be the matrix of normalized eigenvectors of the covariance matrix of a data set, where each row is one eigenvector. Then we can consider the projection of a new column vector $x$ onto the subspace spanned by these eigenvectors, $E^T E x$, as a coding operation (multiplication by $E$) combined with a decoding operation (multiplication by $E^T$). Assuming we have used all of the eigenvectors of the covariance matrix, the following is true for all $x$, where $x$ is an element of the original data set:

$$E^T E x = x \qquad (5.1)$$

That is, the eigenspace projection of $x$ is an identity operation (modulo roundoff error). We want to verify that the modular eigenspace operation holds in a similar case.
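
Viewed as code, the coding and decoding operations and the identity in Equation 5.1 look roughly like the sketch below. The names are again illustrative, and we assume the data set mean is subtracted before coding and added back after decoding, as is standard for this kind of analysis.

    import numpy as np

    def code(E, mean, x):
        """Coding operation: project a head vector onto the eigenvectors
        (rows of E), after subtracting the data set mean."""
        return E @ (x - mean)

    def decode(E, mean, c):
        """Decoding operation: multiply by E^T and add the mean back."""
        return E.T @ c + mean

    # Equation 5.1 as a numerical check: for a head x drawn from the original
    # data set, with E containing every nonzero-eigenvalue eigenvector, the
    # coded-then-decoded vector equals x up to roundoff error:
    #     np.allclose(decode(E, mean, code(E, mean, x)), x)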

Let $M_1$ be a diagonal matrix representing mask 1 (for example, the mask highlighting the region around the eyes), $E_1$ be the matrix of eigenvectors (as rows) of the data set of input heads multiplied by $M_1$, and $M_2$ and $E_2$ be another mask and its eigenvectors (for example, the mouth mask). Then, assuming we have kept all the eigenvectors of the two modular eigenspaces, the equation which we wish to verify for all $x$ in the original data set is

$$(M_1 + M_2)^{-1} \left( E_1^T E_1 M_1 x + E_2^T E_2 M_2 x \right) = x \qquad (5.2)$$

Because of the identity in Equation 5.1, the projection of $M_1 x$ into eigenspace 1 is an identity operation; the same is true for $M_2 x$ into eigenspace 2. Therefore this equation simplifies in this restricted case to

$$(M_1 + M_2)^{-1} (M_1 x + M_2 x) = (M_1 + M_2)^{-1} (M_1 + M_2) x = x.$$

We have implemented the formula of Equation 5.2: the sum of the projections of an input vector onto the modular eigenspaces is divided by the sum of the contributions of the masks. However, when $x$ is not in the original data set, or if we have dropped higher-frequency eigenvectors from the eigenspaces, the projection of $M_i x$ will not be an identity operation. Since the pseudoinverse of $M_1 + M_2$ is in general not equal to a true inverse of $M_1 + M_2$, Equation 5.2 does not hold in general. It does hold when the modular eigenspaces are orthogonal to each other and the nonzero entries on the diagonal of $M_1 + M_2$ are unity, because it then reduces to the projection of $x$ onto an orthonormal basis by taking its dot product with the basis vectors. We can enforce this orthogonality constraint by making the masks orthogonal (i.e., no diffusion and no overlapping regions).
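
The combination rule can be sketched in numpy as follows. This is our illustrative rendering of the formula above, not the exact implementation: each modular eigenspace is a (mask, eigenvector-matrix) pair, the per-eigenspace reconstructions of $M_i x$ are summed, and the sum is divided elementwise by the summed mask weights (a pseudoinverse wherever the summed weight is zero). Mean subtraction is omitted to keep the correspondence with Equation 5.2 direct.

    import numpy as np

    def modular_reconstruct(eigenspaces, x):
        """Blend the reconstructions from several masked eigenspaces.

        `eigenspaces` is a list of (mask, E) pairs: `mask` is the diagonal
        of M_i stored as a 1-D weight vector, and E holds the eigenvectors
        (as rows) of the training heads multiplied by M_i.  The result is
        the sum of the per-eigenspace projections of M_i x, divided by the
        sum of the mask contributions."""
        numerator = np.zeros_like(x, dtype=float)
        total_weight = np.zeros_like(x, dtype=float)
        for mask, E in eigenspaces:
            masked = mask * x                   # M_i x
            numerator += E.T @ (E @ masked)     # projection into eigenspace i
            total_weight += mask                # contribution of mask i
        covered = total_weight > 0              # entries covered by some mask
        out = np.zeros_like(numerator)
        out[covered] = numerator[covered] / total_weight[covered]
        return out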

The rationale behind using modular eigenspaces is that they let us manually delineate which regions of the face (eyes, nose, mouth) are approximately decorrelated. Separate, specialized eigenspaces can code high-resolution versions of these various features, and they can be combined independently, providing more reconstruction parameters. However, we found that when the modular eigenspaces were not orthogonal, the reconstructions sometimes had errors, including large variations in the head shape (Figure 6.3). From the experiments described in Chapter 6, we also found that forcing the modular eigenspaces to be orthogonal caused other stability problems. We are considering techniques to reduce the modular eigenspace reconstruction error by interpolating nonlinearly among the modular eigenspaces' projections. Our current work involves using a search technique to automatically find masks which minimize cross-validation error on a given data set.





