In the 1980s and early 1990s, a great deal of research effort, both industrial and academic, was expended on the design and implementation of hardware neurocomputers [5, 6, 7, 8]. On the whole, however, these efforts must be judged unsuccessful: at no time have hardware neurocomputers been in wide use, and the entire field was largely moribund by the end of the 1990s. This lack of success may be attributed largely to the fact that earlier work was based almost entirely on ASIC technology, which was never sufficiently developed or competitive enough to justify large-scale adoption; the gate arrays of the period were neither large enough nor fast enough for serious neural-network applications. Nevertheless, the current literature shows that ASIC neurocomputers appear to be making something of a comeback [1, 2, 3]; we shall argue below that these efforts are destined to fail for exactly the same reasons that earlier ones did. On the other hand, the capacity and performance of current FPGAs are such that they present a much more realistic alternative. In what follows we give more detailed arguments to support these claims.

The chapter is organized as follows. Section 2 is a review of the fundamentals of neural networks; most readers of the book are expected to be familiar with these already. Section 3 briefly contrasts ASIC neurocomputers with FPGA neurocomputers, with the aim of presenting a clear case for the latter; a fuller version of this argument may be found in [18]. One of the most frequently repeated arguments for implementing neural networks in hardware is the parallelism that the underlying models possess; Section 4 briefly reviews this. In Section 5 we briefly describe the realization of a state-of-the-art FPGA device, the objective being to provide a concrete context for some of the discussions that follow and to ground claims about what can and cannot be achieved with current FPGAs. Section 6 deals with aspects of computer arithmetic that are relevant to neural-network implementations; much of this is straightforward, and our main aim is to highlight certain subtle points. Section 7 nominally deals with activation functions but is in fact devoted mostly to the sigmoid function. There are two main reasons for this choice: first, the chapter makes a significant contribution to the implementation of elementary and near-elementary activation functions, a contribution that is not limited to the sigmoid; second, the sigmoid is the most important activation function for neural networks. In Section 8 we very briefly address an important issue, namely performance evaluation. Our goal here is simple and can be stated quite succinctly: as far as performance evaluation goes, neurocomputer architecture continues to languish in the “Dark Ages”, and this needs to change. A final section summarizes the main points made in the chapter and also serves as a brief introduction to the subsequent chapters of the book.
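Since two of the themes just outlined, the inherent parallelism of the neural model (Section 4) and the sigmoid activation function (Section 7), recur throughout the chapter, the following minimal sketch may help fix ideas. It is ours rather than the chapter's, and the helper names sigmoid and neuron are purely illustrative: a neuron forms a weighted sum of its inputs and passes the result through the sigmoid, and the per-input products in that sum are mutually independent, which is precisely the parallelism that hardware implementations exploit.

```python
# Illustrative sketch (not from the chapter): a single artificial neuron.
import math

def sigmoid(z):
    """Logistic sigmoid: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(weights, inputs, bias=0.0):
    """Weighted sum of inputs followed by a sigmoid activation."""
    # In software this sum is computed sequentially; on an FPGA or ASIC,
    # each product w * x is independent of the others and can be assigned
    # to a separate multiply-accumulate unit.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

print(neuron([0.5, -0.25, 0.1], [1.0, 2.0, 3.0]))  # sigmoid(0.3) ~= 0.574
```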