1 Natural Computation
    This first chapter works as a general introduction to Ballard's book, presenting the principal motivation and some basic concepts. It starts by briefly reviewing the mammalian brain in a hierarchical fashion, from subsystems through maps down to neurons. Next, basic principles of computational theory, including Turing machines and complexity, are outlined and exemplified. The next section motivates and introduces some of the crucial concepts in natural computation, including minimum description length, learning, and architectures. The second example used to illustrate the first of these concepts concerns the receptive field of a neuron, which Ballard defines as the neuron's set of synaptic weights. It should be observed that, in neuroscience, the receptive field is typically defined as the region of input space whose stimulation can affect the neuron's activity [Goldstein]. As a matter of fact, neither of these definitions is quite accurate. The more traditional, geometrical definition fails to specify the intensity with which each input modulates the neuron's activity, while Ballard's alternative definition does not specify the spatial position and arrangement of the inputs and synapses.
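    To make the contrast concrete, the following minimal sketch (in Python, with illustrative positions, weights, and variable names that are not drawn from Ballard's text) represents the two notions side by side: the weight-based definition keeps only the synaptic strengths, the geometric definition keeps only the region of input space, and the neuron's actual response requires both.

```python
import numpy as np

# Illustrative sketch (names and numbers are hypothetical, not from Ballard):
# a neuron receiving input from a 1-D stimulus array through weighted synapses.
rng = np.random.default_rng(0)

n_inputs = 16                                   # size of the input space
positions = np.array([4, 5, 6, 7, 8])           # input locations synapsing onto the neuron
weights = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # synaptic weights at those locations

# Weight-based receptive field (Ballard's usage): only the synaptic strengths.
rf_weights = weights

# Geometric receptive field (traditional usage): only the region of input space
# whose stimulation can change the neuron's response.
rf_region = positions

# Neither description alone determines the neuron's behavior; a simple linear
# response needs both the positions and the strengths.
stimulus = rng.random(n_inputs)
response = float(stimulus[positions] @ weights)
print(rf_region, rf_weights, round(response, 3))
```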
    The penultimate section in this chapter presents an overview of the main objectives and general organization of the book. Though Ballard observes that the "overall perspective of this book is that learning algorithms develop behavioral programs", it is not immediately clear what is meant by "behavioral", "algorithms", and "programs"; this becomes clearer as the reader proceeds further into the book and re-reads some sections. Such a feature is characteristic of many passages in this book. The last section in this chapter comments on how the great challenge of using computers as a means of better understanding the brain has met with very limited success thus far. Ballard concludes this section and chapter by observing that the goal of his book is to further this process.