Machine learning : a Bayesian and optimization perspective / Sergios Theodoridis.
Publication details: Amsterdam : Elsevier, ©2015.
Description: xxi, 1050 p. : illustrations (some color) ; 24 cm
ISBN: 9780128015223
DDC classification: 006.31 23 T388
| Item type | Current library | Call number | Status | Date due | Barcode |
|---|---|---|---|---|---|
| Books | ISI Library, Kolkata | 006.31 T388 | Available | | 137270 |
Includes bibliographical references and index.
1. Introduction --
2. Probability and stochastic processes --
3. Learning in parametric modeling: basic concepts and directions --
4. Mean-square error linear estimation --
5. Stochastic gradient descent: the LMS algorithm and its family --
6. The least-squares family --
7. Classification: a tour of the classics --
8. Parameter learning: a convex analytic path --
9. Sparsity-aware learning: concepts and theoretical foundations --
10. Sparsity-aware learning: algorithms and applications --
11. Learning in reproducing kernel Hilbert spaces --
12. Bayesian learning: inference and the EM algorithm --
13. Bayesian learning: approximate inference and nonparametric models --
14. Monte Carlo methods --
15. Probabilistic graphical models: Part I --
16. Probabilistic graphical models: Part II --
17. Particle filtering --
18. Neural networks and deep learning --
19. Dimensionality reduction and latent variables modeling --
Appendices.
The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for a variety of courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. It covers all the major classical techniques: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. It also covers the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied. MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
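As a small taste of the online-learning material (Chapter 5 covers stochastic gradient descent and the LMS family), the following is a minimal sketch of the LMS adaptive filter in Python, not the book's accompanying MATLAB code. The system coefficients `true_w`, the step size `mu`, and the signal length are illustrative assumptions chosen for a clean, noiseless demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

true_w = np.array([0.5, -0.3, 0.2])   # unknown FIR system to identify (assumed)
n_taps = len(true_w)
mu = 0.05                              # step size (learning rate), assumed

x = rng.standard_normal(5000)          # white input signal
d = np.convolve(x, true_w)[:len(x)]    # desired output (noiseless for clarity)

w = np.zeros(n_taps)                   # running filter estimate
for n in range(n_taps, len(x)):
    u = x[n - n_taps + 1:n + 1][::-1]  # most recent inputs, newest first
    e = d[n] - w @ u                   # a-priori estimation error
    w = w + mu * e * u                 # LMS update: w <- w + mu * e * u

print(np.round(w, 3))
```

With a well-chosen step size and stationary input, the estimate `w` converges toward `true_w`; the trade-off between step size, convergence speed, and misadjustment is exactly the kind of analysis the chapter develops.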