Simon Haykin - Neural Networks: A Comprehensive Foundation. Second Edition
- Annotation
- Contents
- Preface
- ABBREVIATIONS
- IMPORTANT SYMBOLS
- Introduction
- 1.1 WHAT IS A NEURAL NETWORK?
- 1.2 HUMAN BRAIN
- 1.3 MODELS OF A NEURON
- 1.4 NEURAL NETWORKS VIEWED AS DIRECTED GRAPHS
- 1.5 FEEDBACK
- 1.6 NETWORK ARCHITECTURES
- 1.7 KNOWLEDGE REPRESENTATION
- 1.8 ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
- 1.9 HISTORICAL NOTES
- Learning Processes
- 2.1 INTRODUCTION
- 2.2 ERROR-CORRECTION LEARNING
- 2.3 MEMORY-BASED LEARNING
- 2.4 HEBBIAN LEARNING
- 2.5 COMPETITIVE LEARNING
- 2.6 BOLTZMANN LEARNING
- 2.7 CREDIT-ASSIGNMENT PROBLEM
- 2.8 LEARNING WITH A TEACHER
- 2.9 LEARNING WITHOUT A TEACHER
- 2.10 LEARNING TASKS
- 2.11 MEMORY
- 2.12 ADAPTATION
- 2.13 STATISTICAL NATURE OF THE LEARNING PROCESS
- 2.14 STATISTICAL LEARNING THEORY
- 2.15 PROBABLY APPROXIMATELY CORRECT MODEL OF LEARNING
- 2.16 SUMMARY AND DISCUSSION
- Single-Layer Perceptrons
- 3.1 INTRODUCTION
- 3.2 UNCONSTRAINED OPTIMIZATION TECHNIQUES
- 3.3 Newton's Method
- 3.4 LINEAR LEAST-SQUARES FILTER
- 3.5 LEAST-MEAN-SQUARE ALGORITHM
- 3.6 LEARNING CURVES
- 3.7 LEARNING-RATE ANNEALING SCHEDULES
- 3.8 PERCEPTRON
- 3.9 PERCEPTRON CONVERGENCE THEOREM
- 3.10 RELATION BETWEEN THE PERCEPTRON AND BAYES CLASSIFIER FOR A GAUSSIAN ENVIRONMENT
- 3.11 SUMMARY AND DISCUSSION
- Multilayer Perceptrons
- 4.1 INTRODUCTION
- 4.2 SOME PRELIMINARIES
- 4.3 BACK-PROPAGATION ALGORITHM
- 4.4 SUMMARY OF THE BACK-PROPAGATION ALGORITHM
- 4.5 XOR PROBLEM
- 4.6 HEURISTICS FOR MAKING THE BACK-PROPAGATION ALGORITHM PERFORM BETTER
- 4.7 OUTPUT REPRESENTATION AND DECISION RULE
- 4.8 COMPUTER EXPERIMENT
- 4.9 FEATURE DETECTION
- 4.10 BACK-PROPAGATION AND DIFFERENTIATION
- 4.11 HESSIAN MATRIX
- 4.12 GENERALIZATION
- 4.13 APPROXIMATIONS OF FUNCTIONS
- 4.14 CROSS-VALIDATION
- 4.15 NETWORK PRUNING TECHNIQUES
- 4.16 VIRTUES AND LIMITATIONS OF BACK-PROPAGATION LEARNING
- 4.17 ACCELERATED CONVERGENCE OF BACK-PROPAGATION LEARNING
- 4.18 SUPERVISED LEARNING VIEWED AS AN OPTIMIZATION PROBLEM
- 4.19 CONVOLUTIONAL NETWORKS
- 4.20 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- Radial-Basis Function Networks
- 5.1 INTRODUCTION
- 5.2 COVER'S THEOREM ON THE SEPARABILITY OF PATTERNS
- 5.3 INTERPOLATION PROBLEM
- 5.4 SUPERVISED LEARNING AS AN ILL-POSED HYPERSURFACE RECONSTRUCTION PROBLEM
- 5.5 REGULARIZATION THEORY
- 5.6 REGULARIZATION NETWORKS
- 5.7 GENERALIZED RADIAL-BASIS FUNCTION NETWORKS
- 5.8 XOR PROBLEM (REVISITED)
- 5.9 ESTIMATION OF THE REGULARIZATION PARAMETER
- 5.10 APPROXIMATION PROPERTIES OF RBF NETWORKS
- 5.11 COMPARISON OF RBF NETWORKS AND MULTILAYER PERCEPTRONS
- 5.12 KERNEL REGRESSION AND ITS RELATION TO RBF NETWORKS
- 5.13 LEARNING STRATEGIES
- 1. Fixed Centers Selected at Random
- 2. Self-Organized Selection of Centers
- 3. Supervised Selection of Centers
- 4. Strict Interpolation with Regularization
- 5.14 COMPUTER EXPERIMENT: PATTERN CLASSIFICATION
- 5.15 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Support Vector Machines
- 6.1 INTRODUCTION
- 6.2 OPTIMAL HYPERPLANE FOR LINEARLY SEPARABLE PATTERNS
- 6.3 OPTIMAL HYPERPLANE FOR NONSEPARABLE PATTERNS
- 6.4 HOW TO BUILD A SUPPORT VECTOR MACHINE FOR PATTERN RECOGNITION
- 6.5 EXAMPLE: XOR PROBLEM (REVISITED)
- 6.6 COMPUTER EXPERIMENT
- 6.7 ε-INSENSITIVE LOSS FUNCTION
- 6.8 SUPPORT VECTOR MACHINES FOR NONLINEAR REGRESSION
- 6.9 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Committee Machines
- 7.1 INTRODUCTION
- 7.2 ENSEMBLE AVERAGING
- 7.3 COMPUTER EXPERIMENT I
- 7.4 BOOSTING
- 7.5 COMPUTER EXPERIMENT II
- 7.6 ASSOCIATIVE GAUSSIAN MIXTURE MODEL
- 7.7 HIERARCHICAL MIXTURE OF EXPERTS MODEL
- 7.8 MODEL SELECTION USING A STANDARD DECISION TREE
- 7.9 A PRIORI AND A POSTERIORI PROBABILITIES
- 7.10 MAXIMUM LIKELIHOOD ESTIMATION
- 7.11 LEARNING STRATEGIES FOR THE HME MODEL
- 7.12 EM ALGORITHM
- 7.13 APPLICATION OF THE EM ALGORITHM TO THE HME MODEL
- 7.14 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Principal Components Analysis
- 8.1 INTRODUCTION
- 8.2 SOME INTUITIVE PRINCIPLES OF SELF-ORGANIZATION
- 8.3 PRINCIPAL COMPONENTS ANALYSIS
- 8.4 HEBBIAN-BASED MAXIMUM EIGENFILTER
- 8.5 HEBBIAN-BASED PRINCIPAL COMPONENTS ANALYSIS
- 8.6 COMPUTER EXPERIMENT: IMAGE CODING
- 8.7 ADAPTIVE PRINCIPAL COMPONENTS ANALYSIS USING LATERAL INHIBITION
- 8.8 TWO CLASSES OF PCA ALGORITHMS
- 8.9 BATCH AND ADAPTIVE METHODS OF COMPUTATION
- 8.10 KERNEL PRINCIPAL COMPONENTS ANALYSIS
- 8.11 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Self-Organizing Maps
- 9.1 INTRODUCTION
- 9.2 TWO BASIC FEATURE-MAPPING MODELS
- 9.3 SELF-ORGANIZING MAP
- 9.4 SUMMARY OF THE SOM ALGORITHM
- 9.5 PROPERTIES OF THE FEATURE MAP
- 9.6 COMPUTER SIMULATIONS
- 9.7 LEARNING VECTOR QUANTIZATION
- 9.8 COMPUTER EXPERIMENT: ADAPTIVE PATTERN CLASSIFICATION
- 9.9 HIERARCHICAL VECTOR QUANTIZATION
- 9.10 CONTEXTUAL MAPS
- 9.11 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Information-Theoretic Models
- 10.1 INTRODUCTION
- 10.2 ENTROPY
- 10.3 MAXIMUM ENTROPY PRINCIPLE
- 10.4 MUTUAL INFORMATION
- 10.5 KULLBACK-LEIBLER DIVERGENCE
- 10.6 MUTUAL INFORMATION AS AN OBJECTIVE FUNCTION TO BE OPTIMIZED
- 10.7 MAXIMUM MUTUAL INFORMATION PRINCIPLE
- 10.8 INFOMAX AND REDUNDANCY REDUCTION
- 10.9 SPATIALLY COHERENT FEATURES
- 10.10 SPATIALLY INCOHERENT FEATURES
- 10.11 INDEPENDENT COMPONENTS ANALYSIS
- 10.12 COMPUTER EXPERIMENT
- 10.13 MAXIMUM LIKELIHOOD ESTIMATION
- 10.14 MAXIMUM ENTROPY METHOD
- 10.15 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- Stochastic Machines and Their Approximates Rooted in Statistical Mechanics
- 11.1 INTRODUCTION
- 11.2 STATISTICAL MECHANICS
- 11.3 MARKOV CHAINS
- 11.4 METROPOLIS ALGORITHM
- 11.5 SIMULATED ANNEALING
- 11.6 GIBBS SAMPLING
- 11.7 BOLTZMANN MACHINE
- 11.8 SIGMOID BELIEF NETWORKS
- 11.9 HELMHOLTZ MACHINE
- 11.10 MEAN-FIELD THEORY
- 11.11 DETERMINISTIC BOLTZMANN MACHINE
- 11.12 DETERMINISTIC SIGMOID BELIEF NETWORKS
- 11.13 DETERMINISTIC ANNEALING
- 11.14 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Neurodynamic Programming
- 12.1 INTRODUCTION
- 12.2 MARKOVIAN DECISION PROCESS
- 12.3 BELLMAN'S OPTIMALITY CRITERION
- 12.4 POLICY ITERATION
- 12.5 VALUE ITERATION
- 12.6 NEURODYNAMIC PROGRAMMING
- 12.7 APPROXIMATE POLICY ITERATION
- 12.8 Q-LEARNING
- 12.9 COMPUTER EXPERIMENT
- 12.10 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Temporal Processing Using Feedforward Networks
- 13.1 INTRODUCTION
- 13.2 SHORT-TERM MEMORY STRUCTURES
- 13.3 NETWORK ARCHITECTURES FOR TEMPORAL PROCESSING
- 13.4 FOCUSED TIME LAGGED FEEDFORWARD NETWORKS
- 13.5 COMPUTER EXPERIMENT
- 13.6 UNIVERSAL MYOPIC MAPPING THEOREM
- 13.7 SPATIO-TEMPORAL MODELS OF A NEURON
- 13.8 DISTRIBUTED TIME LAGGED FEEDFORWARD NETWORKS
- 13.9 TEMPORAL BACK-PROPAGATION ALGORITHM
- 13.10 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Neurodynamics
- 14.1 INTRODUCTION
- 14.2 DYNAMICAL SYSTEMS
- 14.3 STABILITY OF EQUILIBRIUM STATES
- 14.4 ATTRACTORS
- 14.5 NEURODYNAMICAL MODELS
- 14.6 MANIPULATION OF ATTRACTORS AS A RECURRENT NETWORK PARADIGM
- 14.7 HOPFIELD MODEL
- 14.8 COMPUTER EXPERIMENT I
- 14.9 COHEN-GROSSBERG THEOREM
- 14.10 BRAIN-STATE-IN-A-BOX MODEL
- 14.11 COMPUTER EXPERIMENT II
- 14.12 STRANGE ATTRACTORS AND CHAOS
- 14.13 DYNAMIC RECONSTRUCTION
- 14.14 COMPUTER EXPERIMENT III
- 14.15 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- Dynamically Driven Recurrent Networks
- 15.1 INTRODUCTION
- 15.2 RECURRENT NETWORK ARCHITECTURES
- 15.3 STATE-SPACE MODEL
- 15.4 NONLINEAR AUTOREGRESSIVE WITH EXOGENOUS INPUTS MODEL
- 15.5 COMPUTATIONAL POWER OF RECURRENT NETWORKS
- 15.6 LEARNING ALGORITHMS
- 15.7 BACK-PROPAGATION THROUGH TIME
- 15.8 REAL-TIME RECURRENT LEARNING
- 15.9 KALMAN FILTERS
- 15.10 DECOUPLED EXTENDED KALMAN FILTER
- 15.11 COMPUTER EXPERIMENT
- 15.12 VANISHING GRADIENTS IN RECURRENT NETWORKS
- 15.13 SYSTEM IDENTIFICATION
- 15.14 MODEL REFERENCE ADAPTIVE CONTROL
- 15.15 SUMMARY AND DISCUSSION
- NOTES AND REFERENCES
- PROBLEMS
- INTELLIGENT MACHINES
- NOTES AND REFERENCES
- Index