Preface
Acknowledgment
Chapter 1—Introduction
1.1 Pattern Recognition Systems
1.2 Motivation for Artificial Neural Network Approach
1.3 A Prelude to Pattern Recognition
1.4 Statistical Pattern Recognition
1.5 Syntactic Pattern Recognition
1.6 The Character Recognition Problem
1.7 Organization of Topics
References and Bibliography
Chapter 2—Neural Networks: An Overview
2.1 Motivation for Overviewing Biological Neural Networks
2.2 Background
2.3 Biological Neural Networks
2.4 Hierarchical Organization in the Brain
2.5 Historical Background
2.6 Artificial Neural Networks
References and Bibliography
Chapter 3—Preprocessing
3.1 General
3.2 Dealing with Input from a Scanned Image
3.3 Image Compression
3.3.1 Image Compression Example
3.4 Edge Detection
3.5 Skeletonizing
3.5.1 Thinning Example
3.6 Dealing with Input from a Tablet
3.7 Segmentation
References and Bibliography
Chapter 4—Feed-Forward Networks with Supervised Learning
4.1 Feed-Forward Multilayer Perceptron (FFMLP) Architecture
4.2 FFMLP in C++
4.3 Training with Back Propagation
4.3.1 Back Propagation in C++
4.4 A Primitive Example
4.5 Training Strategies and Avoiding Local Minima
4.6 Variations on Gradient Descent
4.6.1 Block Adaptive vs. Data Adaptive Gradient Descent
4.6.2 First-Order vs. Second-Order Gradient Descent
4.7 Topology
4.8 ACON vs. OCON
4.9 Overtraining and Generalization
4.10 Training Set Size and Network Size
4.11 Conjugate Gradient Method
4.12 ALOPEX
References and Bibliography
Chapter 5—Some Other Types of Neural Networks
5.1 General
5.2 Radial Basis Function Networks
5.2.1 Network Architecture
5.2.2 RBF Training
5.2.3 Applications of RBF Networks
5.3 Higher Order Neural Networks
5.3.1 Introduction
5.3.2 Architecture
5.3.3 Invariance to Geometric Transformations
5.3.4 An Example
5.3.5 Practical Applications
References and Bibliography
Chapter 6—Feature Extraction I: Geometric Features and Transformations
6.1 General
6.2 Geometric Features (Loops, Intersections, and Endpoints)
6.2.1 Intersections and Endpoints
6.2.2 Loops
6.3 Feature Maps
6.4 A Network Example Using Geometric Features
6.5 Feature Extraction Using Transformations
6.6 Fourier Descriptors
6.7 Gabor Transformations and Wavelets
References and Bibliography
Chapter 7—Feature Extraction II: Principal Component Analysis
7.1 Dimensionality Reduction
7.2 Principal Components
7.2.1 PCA Example
7.3 Karhunen-Loeve (K-L) Transformation
7.3.1 K-L Transformation Example
7.4 Principal Component Neural Networks
7.5 Applications
References and Bibliography
Chapter 8—Kohonen Networks and Learning Vector Quantization
8.1 General
8.2 The K-Means Algorithm
8.2.1 K-Means Example
8.3 An Introduction to the Kohonen Model
8.3.1 Kohonen Example
8.4 The Role of Lateral Feedback
8.5 Kohonen Self-Organizing Feature Map
8.5.1 SOFM Example
8.6 Learning Vector Quantization
8.6.1 LVQ Example
8.7 Variations on LVQ
8.7.1 LVQ2
8.7.2 LVQ2.1
8.7.3 LVQ3
8.7.4 A Final Variation of LVQ
References and Bibliography
Chapter 9—Neural Associative Memories and Hopfield Networks
9.1 General
9.2 Linear Associative Memory (LAM)
9.2.1 An Autoassociative LAM Example
9.3 Hopfield Networks
9.4 A Hopfield Example
9.5 Discussion
9.6 Bit Map Example
9.7 BAM Networks
9.8 A BAM Example
References and Bibliography
Chapter 10—Adaptive Resonance Theory (ART)
10.1 General
10.2 Discovering the Cluster Structure
10.3 Vector Quantization
10.3.1 VQ Example 1
10.3.2 VQ Example 2
10.3.3 VQ Example 3
10.4 ART Philosophy
10.5 The Stability-Plasticity Dilemma
10.6 ART1: Basic Operation
10.7 ART1: Algorithm
10.8 The Gain Control Mechanism
10.8.1 Gain Control Example 1
10.8.2 Gain Control Example 2
10.9 ART2 Model
10.10 Discussion
10.11 Applications
References and Bibliography
Chapter 11—Neocognitron
11.1 Introduction
11.2 Architecture
11.3 Example of a System with Sample Training Patterns
References and Bibliography
Chapter 12—Systems with Multiple Classifiers
12.1 General
12.2 A Framework for Combining Multiple Recognizers
12.3 Voting Schemes
12.4 The Confusion Matrix
12.5 Reliability
12.6 Some Empirical Approaches
References and Bibliography
Index