Computer Age Statistical Inference
The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. 'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce.

To Donna and Lynda

Contents

Preface
Acknowledgments
Notation

Part I  Classic Statistical Inference

1  Algorithms and Inference
   1.1  A Regression Example
   1.2  Hypothesis Testing
   1.3  Notes and Details

2  Frequentist Inference
   2.1  Frequentism in Practice
   2.2  Frequentist Optimality
   2.3  Notes and Details

3  Bayesian Inference
   3.1  Two Examples
   3.2  Uninformative Prior Distributions
   3.3  Flaws in Frequentist Inference
   3.4  A Bayesian/Frequentist Comparison List
   3.5  Notes and Details

4  Fisherian Inference and Maximum Likelihood Estimation
   4.1  Likelihood and Maximum Likelihood
   4.2  Fisher Information and the MLE
   4.3  Conditional Inference
   4.4  Permutation and Randomization
   4.5  Notes and Details

5  Parametric Models and Exponential Families
   5.1  Univariate Families
   5.2  The Multivariate Normal Distribution
   5.3  Fisher's Information Bound for Multiparameter Families
   5.4  The Multinomial Distribution
   5.5  Exponential Families
   5.6  Notes and Details

Part II  Early Computer-Age Methods

6  Empirical Bayes
   6.1  Robbins' Formula
   6.2  The Missing-Species Problem
   6.3  A Medical Example
   6.4  Indirect Evidence 1
   6.5  Notes and Details

7  James-Stein Estimation and Ridge Regression
   7.1  The James-Stein Estimator
   7.2  The Baseball Players
   7.3  Ridge Regression
   7.4  Indirect Evidence 2
   7.5  Notes and Details

8  Generalized Linear Models and Regression Trees
   8.1  Logistic Regression
   8.2  Generalized Linear Models
   8.3  Poisson Regression
   8.4  Regression Trees
   8.5  Notes and Details

9  Survival Analysis and the EM Algorithm
   9.1  Life Tables and Hazard Rates
   9.2  Censored Data and the Kaplan-Meier Estimate
   9.3  The Log-Rank Test
   9.4  The Proportional Hazards Model
   9.5  Missing Data and the EM Algorithm
   9.6  Notes and Details

10  The Jackknife and the Bootstrap
    10.1  The Jackknife Estimate of Standard Error
    10.2  The Nonparametric Bootstrap
    10.3  Resampling Plans
    10.4  The Parametric Bootstrap
    10.5  Influence Functions and Robust Estimation
    10.6  Notes and Details

11  Bootstrap Confidence Intervals
    11.1  Neyman's Construction for One-Parameter Problems
    11.2  The Percentile Method
    11.3  Bias-Corrected Confidence Intervals
    11.4  Second-Order Accuracy
    11.5  Bootstrap-t Intervals
    11.6  Objective Bayes Intervals and the Confidence Distribution
    11.7  Notes and Details

12  Cross-Validation and Cp Estimates of Prediction Error
    12.1  Prediction Rules
    12.2  Cross-Validation
    12.3  Covariance Penalties
    12.4  Training, Validation, and Ephemeral Predictors
    12.5  Notes and Details

13  Objective Bayes Inference and MCMC
    13.1  Objective Prior Distributions
    13.2  Conjugate Prior Distributions
    13.3  Model Selection and the Bayesian Information Criterion
    13.4  Gibbs Sampling and MCMC
    13.5  Example: Modeling Population Admixture
    13.6  Notes and Details

14  Postwar Statistical Inference and Methodology

Part III  Twenty-First-Century Topics

15  Large-Scale Hypothesis Testing and FDRs
    15.1  Large-Scale Testing
    15.2  False-Discovery Rates
    15.3  Empirical Bayes Large-Scale Testing
    15.4  Local False-Discovery Rates
    15.5  Choice of the Null Distribution
    15.6  Relevance
    15.7  Notes and Details

16  Sparse Modeling and the Lasso
    16.1  Forward Stepwise Regression
    16.2  The Lasso
    16.3  Fitting Lasso Models
    16.4  Least-Angle Regression
    16.5  Fitting Generalized Lasso Models
    16.6  Post-Selection Inference for the Lasso
    16.7  Connections and Extensions
    16.8  Notes and Details

17  Random Forests and Boosting
    17.1  Random Forests
    17.2  Boosting with Squared-Error Loss
    17.3  Gradient Boosting
    17.4  Adaboost: the Original Boosting Algorithm
    17.5  Connections and Extensions
    17.6  Notes and Details

18  Neural Networks and Deep Learning
    18.1  Neural Networks and the Handwritten Digit Problem
    18.2  Fitting a Neural Network
    18.3  Autoencoders
    18.4  Deep Learning
    18.5  Learning a Deep Network
    18.6  Notes and Details

19  Support-Vector Machines and Kernel Methods
    19.1  Optimal Separating Hyperplane
    19.2  Soft-Margin Classifier
    19.3  SVM Criterion as Loss Plus Penalty
    19.4  Computations and the Kernel Trick
    19.5  Function Fitting Using Kernels
    19.6  Example: String Kernels for Protein Classification
    19.7  SVMs: Concluding Remarks
    19.8  Kernel Smoothing and Local Regression
    19.9  Notes and Details

20  Inference After Model Selection
    20.1  Simultaneous Confidence Intervals
    20.2  Accuracy After Model Selection
    20.3  Selection Bias
    20.4  Combined Bayes-Frequentist Estimation
    20.5  Notes and Details