Machine learning: a Bayesian and optimization perspective / Sergios Theodoridis
Publication details: London: Academic Press, 2020. Edition: 2nd ed. Description: xxvii, 1131 pages; 23.5 cm. ISBN: 9780128188033
Dewey classification (23rd ed.): 006.31 T385
| Item type | Current library | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|
| Books | ISI Library, Kolkata | 006.31 T385 | Available | | 138466 | |
Includes bibliographical references and index.
1. Introduction -- 2. Probability and Stochastic Processes -- 3. Learning in Parametric Modeling: Basic Concepts and Directions -- 4. Mean-Square Error Linear Estimation -- 5. Stochastic Gradient Descent: The LMS Algorithm and Its Family -- 6. The Least-Squares Family -- 7. Classification: A Tour of the Classics -- 8. Parameter Learning: A Convex Analytic Path -- 9. Sparsity-Aware Learning: Concepts and Theoretical Foundations -- 10. Sparsity-Aware Learning: Algorithms and Applications -- 11. Learning in Reproducing Kernel Hilbert Spaces -- 12. Bayesian Learning: Inference and the EM Algorithm -- 13. Bayesian Learning: Approximate Inference and Nonparametric Models -- 14. Monte Carlo Methods -- 15. Probabilistic Graphical Models: Part 1 -- 16. Probabilistic Graphical Models: Part 2 -- 17. Particle Filtering -- 18. Neural Networks and Deep Learning -- 19. Dimensionality Reduction and Latent Variables Modeling -- Index
This book gives a unified perspective on machine learning by covering both pillars of supervised learning, namely regression and classification. The book starts with the basics, including mean-square, least-squares and maximum likelihood methods, ridge regression, Bayesian decision theory classification, logistic regression, and decision trees. It then progresses to more recent techniques, covering sparse modelling methods; learning in reproducing kernel Hilbert spaces and support vector machines; Bayesian inference, with a focus on the EM algorithm and its approximate-inference variational versions; Monte Carlo methods; and probabilistic graphical models, focusing on Bayesian networks, hidden Markov models and particle filtering. Dimensionality reduction and latent variables modelling are also considered in depth.
It also covers the fundamentals of statistical parameter estimation, Wiener and Kalman filtering, and convexity and convex optimization, including a chapter on stochastic approximation and the gradient-descent family of algorithms that presents related online learning techniques as well as concepts and algorithmic versions for distributed optimization.