4 Data
    This chapter is organized around the issue of deriving compact representations of input data. Two basic approaches are considered: projections and representation by prototype points. Most of the chapter is, however, a preparation for these two topics, covering coordinate systems, eigenvalues and eigenvectors, and random vectors in an introductory fashion, with the aid of some examples. In addition, an introduction to Linear Algebra is outlined in the respective Appendix.
 
    Regarding dimension reduction by projection, for some unknown reason Ballard does not identify the outlined technique by its usual names of "Hotelling transform" or "Karhunen-Loève transform". This concept is illustrated in terms of the interesting application to face analysis developed in [Turk and Pentland]. However, some important remarks about this elegant statistical approach are missing, including the fact that it is computationally demanding, in the sense that it requires the estimation of the covariance matrix and the determination of its eigenvalues and eigenvectors. Although the transformation becomes trivial once these steps are performed, the resulting basis is also restricted to the statistical situation used for the estimation of the covariance matrix. In other words, unlike the Fourier transform, which is general, the Karhunen-Loève approach is statistically optimal (in the sense of energy concentration in the first spectral components) only for the specific situation being modeled. The point is that the cosine transform, which presents a series of advantages, often yields performance very close to that of the Karhunen-Loève approach [Pratt].
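    To make the computational remark concrete, the following sketch spells out the steps of the Karhunen-Loève (principal components) transform; the toy data and the use of Python with NumPy are illustrative assumptions of this review, not material from the book:

        import numpy as np

        # Toy data: 200 samples of 5-dimensional vectors (values are arbitrary).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))

        # Step 1: estimate the mean and the covariance matrix from the data.
        mu = X.mean(axis=0)
        C = np.cov(X, rowvar=False)

        # Step 2: eigendecomposition of the covariance matrix -- the
        # computationally demanding step referred to above.
        eigvals, eigvecs = np.linalg.eigh(C)    # eigh, since C is symmetric
        order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        # Step 3: once the basis is available, the transform itself is a
        # trivial projection onto the leading k eigenvectors.
        k = 2
        Y = (X - mu) @ eigvecs[:, :k]

    Once the eigenvectors are available, transforming any new vector costs only a matrix multiplication, which is the sense in which the transform "becomes trivial" after the expensive estimation and decomposition steps.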
    The second alternative for data simplification is only briefly outlined in Section 4.6, under the name of clustering. However, this section ends abruptly without discussing or illustrating the respective mathematical developments.
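    Since the book leaves this section unillustrated, the following sketch of the standard k-means (Lloyd) algorithm indicates what representation by prototype points amounts to in practice; k-means is only one possible instance of clustering, and its use here is an assumption of this review rather than material from the book:

        import numpy as np

        def kmeans(X, k, n_iter=100, seed=0):
            """Represent the rows of X by k prototype points (Lloyd's algorithm)."""
            rng = np.random.default_rng(seed)
            prototypes = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(n_iter):
                # Assign each sample to its nearest prototype.
                dist = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
                labels = dist.argmin(axis=1)
                # Move each prototype to the centroid of its assigned samples.
                updated = np.array([X[labels == j].mean(axis=0)
                                    if np.any(labels == j) else prototypes[j]
                                    for j in range(k)])
                if np.allclose(updated, prototypes):  # converged
                    break
                prototypes = updated
            return prototypes, labels

    Each data point is then represented by the prototype of the cluster to which it is assigned, which is precisely the compact representation alluded to at the beginning of the chapter.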
 
    There are some additional problems in this chapter. Firstly, the definition of the product of a matrix A by a vector x on page 73 is incorrect. This mistake seems to be a consequence of an attempt to develop the immediately subsequent notion of this linear transformation as a linear combination of the columns of A. As a matter of fact, this interesting property can be better explained as Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n, where a_i denotes the i-th column of A. Also, while discussing on page 81 the choice of the orientation that maximizes the variance as the best selection for classification, Ballard forgot to mention that this criterion is not foolproof, as can be illustrated by the counter-example of two classes that are elongated along a common high-variance direction but separated along a direction of much smaller variance (sketched below). Moreover, the marginal distributions in Figure 4.4 seem to have been coarsely drawn by hand.
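    Both remarks can be checked numerically. In the following illustrative sketch (the data values are arbitrary), part (a) verifies the column interpretation of the matrix-vector product and part (b) constructs the counter-example just mentioned:

        import numpy as np

        # (a) A x as a linear combination of the columns of A.
        A = np.array([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])
        x = np.array([10.0, -1.0])
        assert np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])

        # (b) Two classes elongated along the first axis (large variance)
        # but separated along the second axis (small variance).
        rng = np.random.default_rng(0)
        class_a = rng.normal([0.0,  1.0], [10.0, 0.1], size=(500, 2))
        class_b = rng.normal([0.0, -1.0], [10.0, 0.1], size=(500, 2))
        X = np.vstack([class_a, class_b])
        C = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(C)
        top = eigvecs[:, np.argmax(eigvals)]    # direction of maximum variance
        print(top)                              # approximately [1, 0]

    In part (b) the direction of maximum variance is (approximately) the first axis, onto which the projections of the two classes overlap completely, while the low-variance second axis separates them perfectly.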