There are a great many books on Neural Networks: Amazon's Neural Network listing has over 1,288 titles! As you might expect, a lot of these aren't especially good. However, a few stand out from the rest. They are presented here...
The crème de la crème, in no particular order:
Fundamentals of Neural Networks Laurene V. Fausett
This book is a clear and well-organised overview of Neural Networks. It contains all the sections expected from such a book: a biological motivation including activation functions, an analysis of training by back-propagation and error measurement, and self-adapting networks.
The book is solid and complete, and will help you make the first step into the field as well as remain a good reference for the future.
Neural Networks for Pattern Recognition Christopher M. Bishop
Taking an approach grounded in statistics, Bishop manages to do what few authors have done before him: reveal and explain the inner workings of Neural Networks, beyond fragile neural analogies. The benefits are tremendous once the problem is no longer treated as a big black box. The back-propagation learning process is analysed particularly well, as are over-fitting and error estimation using histograms. Very little is missing from this book; it is a very complete tome.
The style is similar to that of an undergraduate textbook, and it should be treated as such. Programmers will benefit from this book if they can spend enough time understanding the principles. Any researcher desiring a complete statistical background in connectionism should own this book.
Most books in the field take a very objective approach to connectionism, describing the mathematics needed to get Neural Networks to train. This work of art approaches the problem from a practical point of view, discussing the issues anyone would face when working with Neural Networks.
Setting learning rates, using stochastic approximation, adding momentum and deciding training times are all key factors that are discussed in depth. The visual aids are extremely helpful, and allow the reader to develop a feel and intuition for Neural Networks.
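To give a flavour of the practical knobs mentioned above, here is a minimal sketch of gradient descent with a learning rate and a momentum term. All names and parameter values are illustrative assumptions, not taken from the book:

```python
# Minimal sketch: gradient descent with a learning rate and momentum.
# The function, learning rate and momentum values are illustrative.

def gd_momentum(grad, w0, lr=0.1, momentum=0.9, steps=200):
    """Minimise a function given its gradient `grad`, starting from w0."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v                        # momentum smooths the descent path
    return w

# Toy example: minimise f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w_star = gd_momentum(lambda w: 2 * (w - 3), w0=0.0)
```

Tuning `lr` and `momentum` by hand, and watching how too large a learning rate makes the iterates oscillate or diverge, is exactly the kind of hands-on issue the review praises the book for covering.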
Pattern Recognition and Neural Networks Brian D. Ripley
This book is not an introductory text to the field of Pattern Recognition; you need a fairly strong mathematical background to understand it.
However, if this book is at your level, it is one that shouldn't be missed. Care is taken to prove everything correctly and to provide enough background for the techniques to be put to work. The analysis of over-fitting, error-rate estimation, and the sections on bootstrapping and cross-validation are especially good.
The text does not hesitate to delve into heavy mathematical formulae, and it also cites many other useful papers in the field of Pattern Recognition.
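As a small illustration of the error-rate estimation techniques mentioned above, here is a minimal sketch of k-fold cross-validation. The `fit` and `error_rate` callables are hypothetical stand-ins for any model, not anything from the book:

```python
# Minimal sketch of k-fold cross-validation for error estimation.
# `fit` and `error_rate` are hypothetical stand-ins for a real model.

def k_fold_cv(data, k, fit, error_rate):
    """Average held-out error over k folds of `data` (a list of examples)."""
    folds = [data[i::k] for i in range(k)]  # simple interleaved split
    errors = []
    for i in range(k):
        held_out = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = fit(train)                   # train on everything but fold i
        errors.append(error_rate(model, held_out))
    return sum(errors) / k

# Toy usage: the "model" is just the training-set mean, and the error
# is the mean squared deviation of the held-out points from that mean.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
mse = k_fold_cv(
    data, k=3,
    fit=lambda train: sum(train) / len(train),
    error_rate=lambda m, test: sum((x - m) ** 2 for x in test) / len(test),
)
```

Because every point is held out exactly once, the averaged error is a less optimistic estimate than the error on the training set itself, which is the point of the technique.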
As far as Neural Network books go, this is one of the most accessible. You'll only need a little mathematical background in linear algebra, and possibly calculus. The concepts of the neural model are then explained simply. Hagan goes on to discuss cases where Neural Networks work, and where they don't. From this very broad overview, the book narrows nicely into the usual content for such a book (discussions of network structure, supervised training, unsupervised learning). The simple, clear explanations are excellent throughout.
A very good book to accompany first steps with Neural Networks.
An Introduction to Neural Networks James A. Anderson
This book is a good read, funny at times, and most importantly it takes a refreshing approach to the field. Biological plausibility is discussed, including how visual feature extraction compares with that of real animals. Anderson moves effectively among evolutionary biology, cognitive science, artificial intelligence, and behavioural psychology.
Source code is gently introduced, which makes this book a valuable addition to any collection.
Neural Networks: A Comprehensive Foundation Simon S. Haykin
This book gives a good overview of most Neural Network issues to date. The detail can be overwhelming at first, but it ensures that the book remains a useful resource as the reader progresses through the field.
The first section places Neural Networks in the field of AI and discusses knowledge representation. Supervised learning is then covered extensively, including models such as the perceptron and the radial-basis function network. Unsupervised learning is also discussed in great depth, notably vector quantization. The book ends with a hefty analysis of nonlinear techniques and their dynamic properties.
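For readers new to the supervised models mentioned above, here is a minimal sketch of the classic perceptron learning rule, the simplest of them. The data and learning rate are illustrative assumptions, not an example from the book:

```python
# Minimal perceptron sketch: supervised learning of a linear separator.
# The training data and learning rate below are illustrative only.

def train_perceptron(samples, lr=1.0, epochs=20):
    """samples: list of (inputs, label) pairs with label in {-1, +1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# The AND function with -1/+1 labels is linearly separable, so the
# perceptron convergence theorem guarantees these updates terminate.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The radial-basis function networks the review mentions generalise this idea by replacing the fixed linear features with distance-based basis functions before the linear output layer.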