Online Public Access Catalogue (OPAC)
Library, Documentation and Information Science Division

“A research journal serves that narrow borderland which separates the known from the unknown”

- P. C. Mahalanobis



Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis.

By: Theodoridis, Sergios
Publication details: Amsterdam : Elsevier, ©2015.
Description: xxi, 1050 p. : illustrations (some color) ; 24 cm
ISBN:
  • 9780128015223
Subject(s):
DDC classification:
  • 006.31 23 T388
Contents:
1. Introduction --
2. Probability and stochastic processes --
3. Learning in parametric modeling: basic concepts and directions --
4. Mean-square error linear estimation --
5. Stochastic gradient descent: the LMS algorithm and its family --
6. The least-squares family --
7. Classification: a tour of the classics --
8. Parameter learning: a convex analytic path --
9. Sparsity-aware learning: concepts and theoretical foundations --
10. Sparsity-aware learning: algorithms and applications --
11. Learning in reproducing Kernel Hilbert spaces --
12. Bayesian learning: inference and the EM algorithm --
13. Bayesian learning: approximate inference and nonparametric models --
14. Monte Carlo methods --
15. Probabilistic graphical models: Part I --
16. Probabilistic graphical models: Part II --
17. Particle filtering --
18. Neural networks and deep learning --
19. Dimensionality reduction and latent variables modeling --
Appendices.
Summary: The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. It covers all major classical techniques: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. It also covers the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variables modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
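To give a flavour of the material, chapter 5 treats the LMS algorithm; below is a minimal Python sketch of a least-mean-squares adaptive filter used for system identification. The step size mu, the filter order, and the synthetic data are illustrative assumptions, not taken from the book (whose accompanying code is in MATLAB).

```python
import numpy as np

def lms(x, d, order=4, mu=0.05):
    """Least-mean-squares adaptive filter: returns the learned weight vector."""
    w = np.zeros(order)
    for k in range(order - 1, len(x)):
        u = x[k - order + 1:k + 1][::-1]   # regression vector [x[k], ..., x[k-order+1]]
        e = d[k] - w @ u                   # instantaneous error
        w = w + mu * e * u                 # stochastic-gradient weight update
    return w

# Usage: identify an "unknown" 4-tap FIR system from noisy input/output data.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])       # illustrative system to identify
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = lms(x, d, order=4, mu=0.05)           # w should approach h
```

With white unit-variance input, the chosen step size is well inside the stability range, so the learned weights settle close to the true taps.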
Holdings
Item type: Books
Current library: ISI Library, Kolkata
Call number: 006.31 T388
Status: Available
Barcode: 137270
Total holds: 0

Includes bibliographical references and index.


Library, Documentation and Information Science Division, Indian Statistical Institute, 203 B T Road, Kolkata 700108, INDIA
Phone no. 91-33-2575 2100, Fax no. 91-33-2578 1412, ksatpathy@isical.ac.in