Online Public Access Catalogue (OPAC)
Library, Documentation and Information Science Division

“A research journal serves that narrow borderland which separates the known from the unknown”

- P.C. Mahalanobis



Machine learning: a Bayesian and optimization perspective / Sergios Theodoridis

By: Theodoridis, Sergios
Publication details: London: Academic Press, 2020
Edition: 2nd ed.
Description: xxvii, 1131 pages; 23.5 cm
ISBN:
  • 9780128188033
Subject(s): DDC classification:
  • 23 006.31  T385
Contents:
1. Introduction -- 2. Probability and Stochastic Processes -- 3. Learning in Parametric Modeling: Basic Concepts and Directions -- 4. Mean-Square Error Linear Estimation -- 5. Stochastic Gradient Descent: The LMS Algorithm and Its Family -- 6. The Least-Squares Family -- 7. Classification: A Tour of the Classics -- 8. Parameter Learning: A Convex Analytic Path -- 9. Sparsity-Aware Learning: Concepts and Theoretical Foundations -- 10. Sparsity-Aware Learning: Algorithms and Applications -- 11. Learning in Reproducing Kernel Hilbert Spaces -- 12. Bayesian Learning: Inference and the EM Algorithm -- 13. Bayesian Learning: Approximate Inference and Nonparametric Models -- 14. Monte Carlo Methods -- 15. Probabilistic Graphical Models: Part 1 -- 16. Probabilistic Graphical Models: Part 2 -- 17. Particle Filtering -- 18. Neural Networks and Deep Learning -- 19. Dimensionality Reduction and Latent Variables Modeling -- Index
Summary: This book gives a unified perspective on machine learning by covering both pillars of supervised learning: regression and classification. It starts with the basics, including mean-square, least-squares, and maximum-likelihood methods, ridge regression, Bayesian decision theory classification, logistic regression, and decision trees. It then progresses to more recent techniques: sparse modelling methods; learning in reproducing kernel Hilbert spaces and support vector machines; Bayesian inference, with a focus on the EM algorithm and its variational approximate-inference versions; Monte Carlo methods; and probabilistic graphical models, focusing on Bayesian networks, hidden Markov models, and particle filtering. Dimensionality reduction and latent variable modelling are also treated in depth. The book also covers the fundamentals of statistical parameter estimation, Wiener and Kalman filtering, and convexity and convex optimization, including a chapter on stochastic approximation and the gradient descent family of algorithms that presents related online learning techniques as well as concepts and algorithmic versions for distributed optimization.
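Among the "basics" the summary mentions is ridge regression, which admits a closed-form solution w = (XᵀX + λI)⁻¹Xᵀy. The sketch below is purely illustrative (synthetic data and variable names chosen here, not taken from the book) and shows that one-line computation in NumPy:

```python
import numpy as np

# Ridge regression via the regularized normal equations:
#   w = (X^T X + lambda * I)^{-1} X^T y
# Illustrative sketch with synthetic data (not from the book).

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # 50 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])     # ground-truth weights
y = X @ true_w + 0.1 * rng.normal(size=50)  # noisy linear targets

lam = 0.1                               # regularization strength
# Solve (X^T X + lam*I) w = X^T y rather than inverting the matrix
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w)                                # close to true_w for small lam
```

For small λ and well-conditioned data the estimate stays close to the true weights; larger λ shrinks the coefficients toward zero, trading bias for variance.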
Holdings
Item type   Current library        Call number   Status      Barcode
Books       ISI Library, Kolkata   006.31 T385   Available   138466
Total holds: 0

Includes bibliographical references and index


Library, Documentation and Information Science Division, Indian Statistical Institute, 203 B T Road, Kolkata 700108, INDIA
Phone no. 91-33-2575 2100, Fax no. 91-33-2578 1412, ksatpathy@isical.ac.in