A First Course in Machine Learning (2nd edition) watermark-free PDF
A First Course in Machine Learning (2nd edition), English, watermark-free PDF. Every page of the PDF has been tested with Foxit Reader and PDF-XChange Viewer and opens correctly. This resource is reposted from the internet; if it infringes any copyright, please contact the uploader or CSDN to have it removed.

Chapman & Hall/CRC Machine Learning & Pattern Recognition Series

SERIES EDITORS
Ralf Herbrich, Amazon Development Center, Berlin, Germany
Thore Graepel, Microsoft Research Ltd, Cambridge, UK

AIMS AND SCOPE
This series reflects the latest advances and applications in machine learning and pattern recognition through the publication of a broad range of reference works, textbooks, and handbooks. The inclusion of concrete examples, applications, and methods is highly encouraged. The scope of the series includes, but is not limited to, titles in the areas of machine learning, pattern recognition, computational intelligence, robotics, computational/statistical learning theory, natural language processing, computer vision, game AI, game theory, neural networks, computational neuroscience, and other relevant topics, such as machine learning applied to bioinformatics or cognitive science, which might be proposed by potential contributors.

PUBLISHED TITLES
BAYESIAN PROGRAMMING
Pierre Bessiere, Emmanuel Mazer, Juan-Manuel Ahuactzin, and Kamel Mekhnacha
UTILITY-BASED LEARNING FROM DATA
Craig Friedman and Sven Sandow
HANDBOOK OF NATURAL LANGUAGE PROCESSING, SECOND EDITION
Nitin Indurkhya and Fred J. Damerau
COST-SENSITIVE MACHINE LEARNING
Balaji Krishnapuram, Shipeng Yu, and Bharat Rao
COMPUTATIONAL TRUST MODELS AND MACHINE LEARNING
Xin Liu, Anwitaman Datta, and Ee-Peng Lim
MULTILINEAR SUBSPACE LEARNING: DIMENSIONALITY REDUCTION OF MULTIDIMENSIONAL DATA
Haiping Lu, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos
MACHINE LEARNING: AN ALGORITHMIC PERSPECTIVE, SECOND EDITION
Stephen Marsland
SPARSE MODELING: THEORY, ALGORITHMS, AND APPLICATIONS
Irina Rish and Genady Ya. Grabarnik
A FIRST COURSE IN MACHINE LEARNING, SECOND EDITION
Simon Rogers and Mark Girolami
STATISTICAL REINFORCEMENT LEARNING: MODERN MACHINE LEARNING APPROACHES
Masashi Sugiyama
MULTI-LABEL DIMENSIONALITY REDUCTION
Liang Sun, Shuiwang Ji, and Jieping Ye
REGULARIZATION, OPTIMIZATION, KERNELS, AND SUPPORT VECTOR MACHINES
Johan A.K. Suykens, Marco Signoretto, and Andreas Argyriou
ENSEMBLE METHODS: FOUNDATIONS AND ALGORITHMS
Zhi-Hua Zhou

A FIRST COURSE IN MACHINE LEARNING, Second Edition
Simon Rogers, University of Glasgow, United Kingdom
Mark Girolami, University of Warwick, United Kingdom
CRC Press, Taylor & Francis Group, Boca Raton, London, New York
A Chapman & Hall Book

MATLAB is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB software.

© 2017 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business. Version date: 20160524. International Standard Book Number-13: 978-1-4987-3856-9 (eBook).
Visit the Taylor & Francis website at http://www.taylorandfrancis.com and the CRC Press website at http://www.crcpress.com.

Contents

List of Tables
List of Figures
Preface to the First Edition
Preface to the Second Edition

SECTION I: Basic Topics

CHAPTER 1: Linear Modelling: A Least Squares Approach
1.1 LINEAR MODELLING
1.1.1 Defining the model
1.1.2 Modelling assumptions
1.1.3 Defining a good model
1.1.4 The least squares solution - a worked example
1.1.5 Worked example
1.1.6 Least squares fit to the Olympic data
1.1.7 Summary
1.2 MAKING PREDICTIONS
1.2.1 A second Olympic dataset
1.2.2 Summary
1.3 VECTOR/MATRIX NOTATION
1.3.1 Example
1.3.2 Numerical example
1.3.3 Making predictions
1.3.4 Summary
1.4 NON-LINEAR RESPONSE FROM A LINEAR MODEL
1.5 GENERALISATION AND OVER-FITTING
1.5.1 Validation data
1.5.2 Cross-validation
1.5.3 Computational scaling of K-fold cross-validation
1.6 REGULARISED LEAST SQUARES
1.7 EXERCISES
1.8 FURTHER READING

CHAPTER 2: Linear Modelling: A Maximum Likelihood Approach
2.1 ERRORS AS NOISE
2.1.1 Thinking generatively
2.2 RANDOM VARIABLES AND PROBABILITY
2.2.1 Random variables
2.2.2 Probability and distributions
2.2.3 Adding probabilities
2.2.4 Conditional probabilities
2.2.5 Joint probabilities
2.2.6 Marginalisation
2.2.7 Aside - Bayes' rule
2.2.8 Expectations
2.3 POPULAR DISCRETE DISTRIBUTIONS
2.3.1 Bernoulli distribution
2.3.2 Binomial distribution
2.3.3 Multinomial distribution
2.4 CONTINUOUS RANDOM VARIABLES - DENSITY FUNCTIONS
2.5 POPULAR CONTINUOUS DENSITY FUNCTIONS
2.5.1 The uniform density function
2.5.2 The beta density function
2.5.3 The Gaussian density function
2.5.4 Multivariate Gaussian
2.6 SUMMARY
2.7 THINKING GENERATIVELY CONTINUED
2.8 LIKELIHOOD
2.8.1 Dataset likelihood
2.8.2 Maximum likelihood
2.8.3 Characteristics of the maximum likelihood solution
2.8.4 Maximum likelihood favours complex models
2.9 THE BIAS-VARIANCE TRADE-OFF
2.9.1 Summary
2.10 EFFECT OF NOISE ON PARAMETER ESTIMATES
2.10.1 Uncertainty in estimates
2.10.2 Comparison with empirical values
2.10.3 Variability in model parameters - Olympic data
2.11 VARIABILITY IN PREDICTIONS
2.11.1 Predictive variability - an example
2.11.2 Expected values of the estimators
2.12 CHAPTER SUMMARY
2.13 EXERCISES
2.14 FURTHER READING

CHAPTER 3: The Bayesian Approach to Machine Learning
3.1 A COIN GAME
3.1.1 Counting heads
3.1.2 The Bayesian way
3.2 THE EXACT POSTERIOR
3.3 THE THREE SCENARIOS
3.3.1 No prior knowledge
3.3.2 The fair coin scenario
3.3.3 A biased coin
3.3.4 The three scenarios - a summary
3.3.5 Adding more data
3.4 MARGINAL LIKELIHOODS
3.4.1 Model comparison with the marginal likelihood
3.5 HYPERPARAMETERS
3.6 GRAPHICAL MODELS
3.7 SUMMARY
3.8 A BAYESIAN TREATMENT OF THE OLYMPIC 100 m DATA
3.8.1 The model
3.8.2 The likelihood
3.8.3 The prior
3.8.4 The posterior
3.8.5 A first-order polynomial
3.8.6 Making predictions
3.9 MARGINAL LIKELIHOOD FOR POLYNOMIAL MODEL ORDER SELECTION
3.10 CHAPTER SUMMARY
3.11 EXERCISES
3.12 FURTHER READING

CHAPTER 4: Bayesian Inference
4.1 NON-CONJUGATE MODELS
4.2 BINARY RESPONSES
4.2.1 A model for binary responses
4.3 A POINT ESTIMATE - THE MAP SOLUTION
4.4 THE LAPLACE APPROXIMATION
4.4.1 Laplace approximation example: Approximating a gamma density
4.4.2 Laplace approximation for the binary response model
4.5 SAMPLING TECHNIQUES
4.5.1 Playing darts
4.5.2 The Metropolis-Hastings algorithm
4.5.3 The art of sampling
4.6 CHAPTER SUMMARY
4.7 EXERCISES
4.8 FURTHER READING

CHAPTER 5: Classification
5.1 THE GENERAL PROBLEM
5.2 PROBABILISTIC CLASSIFIERS
5.2.1 The Bayes classifier
5.2.1.1 Likelihood - class-conditional distributions
5.2.1.2 Prior class distribution
5.2.1.3 Example - Gaussian class-conditionals
5.2.1.4 Making predictions
5.2.1.5 The naive-Bayes assumption
5.2.1.6 Example - classifying text
5.2.1.7 Smoothing
5.2.2 Logistic regression
5.2.2.1 Motivation
5.2.2.2 Non-linear decision functions
5.2.2.3 Non-parametric models - the Gaussian process
5.3 NON-PROBABILISTIC CLASSIFIERS
5.3.1 K-nearest neighbours
5.3.1.1 Choosing K
5.3.2 Support vector machines and other kernel methods
5.3.2.1 The margin
5.3.2.2 Maximising the margin
5.3.2.3 Making predictions
5.3.2.4 Support vectors
5.3.2.5 Soft margins
5.3.2.6 Kernels
5.3.3 Summary
5.4 ASSESSING CLASSIFICATION PERFORMANCE
5.4.1 Accuracy - 0/1 loss
5.4.2 Sensitivity and specificity
5.4.3 The area under the ROC curve
5.4.4 Confusion matrices
5.5 DISCRIMINATIVE AND GENERATIVE CLASSIFIERS
5.6 CHAPTER SUMMARY
5.7 EXERCISES
5.8 FURTHER READING

CHAPTER 6: Clustering
6.1 THE GENERAL PROBLEM
6.2 K-MEANS CLUSTERING
6.2.1 Choosing the number of clusters
6.2.2 Where K-means fails
6.2.3 Kernelised K-means
6.2.4 Summary
6.3 MIXTURE MODELS
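To give a flavour of the book's opening chapter, here is a minimal sketch of the least squares and regularised least squares solutions that Chapter 1 develops. The book's own worked examples use MATLAB and the Olympic 100 m data; the Python/NumPy version below, including its synthetic data and the particular ridge form of the regulariser, is an illustration under those assumptions and is not code from the book:

```python
import numpy as np

# Illustrative synthetic data standing in for the book's Olympic 100 m dataset.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)                      # inputs (e.g. rescaled years)
t = 12.0 - 1.5 * x + rng.normal(0.0, 0.1, x.size)  # noisy targets (winning times)

# Design matrix for a first-order polynomial model t ~ w0 + w1*x:
# each row is [1, x_n].
X = np.column_stack([np.ones_like(x), x])

# Least squares solution: w = (X^T X)^{-1} X^T t.
w_ls = np.linalg.solve(X.T @ X, X.T @ t)

# A standard regularised (ridge) variant: w = (X^T X + lam*I)^{-1} X^T t.
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)

# Predict the response at a new input x = 1.2.
x_new = np.array([1.0, 1.2])
print("w_ls =", w_ls, " w_ridge =", w_ridge, " prediction:", x_new @ w_ls)
```

Note that solving the normal equations with np.linalg.solve avoids forming an explicit matrix inverse, which is both cheaper and numerically better behaved than computing (X^T X)^{-1} directly.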