CP5191 MACHINE LEARNING TECHNIQUES SYLLABUS
OBJECTIVES:
- To introduce students to the basic concepts and techniques of Machine Learning.
- To gain a thorough understanding of supervised and unsupervised learning techniques
- To study the various probability-based learning techniques
- To understand graphical models of machine learning algorithms
UNIT I INTRODUCTION
Learning – Types of Machine Learning – Supervised Learning – The Brain and the Neuron – Design a Learning System – Perspectives and Issues in Machine Learning – Concept Learning Task – Concept Learning as Search – Finding a Maximally Specific Hypothesis – Version Spaces and the Candidate Elimination Algorithm – Linear Discriminants – Perceptron – Linear Separability – Linear Regression.
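To give a feel for the Perceptron and Linear Separability topics listed above, here is a minimal sketch of the perceptron learning rule in Python/NumPy on a toy AND-gate dataset; the data, learning rate, and epoch count are illustrative choices, not part of the syllabus:

```python
# Minimal perceptron sketch (illustrative only): learns the logical AND function,
# which is linearly separable, using the classic mistake-driven update rule.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Train a single threshold unit with a bias term; y is expected in {0, 1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0            # hard threshold activation
            w += lr * (target - pred) * xi           # update only when wrong
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                           # AND truth table
w = train_perceptron(X, y)
print([1 if np.append(x, 1.0) @ w > 0 else 0 for x in X])   # expected: [0, 0, 0, 1]
```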
UNIT II LINEAR MODELS
Multi-layer Perceptron – Going Forwards – Going Backwards: Back-Propagation of Error – Multi-layer Perceptron in Practice – Examples of using the MLP – Overview – Deriving Back-Propagation – Radial Basis Functions and Splines – Concepts – RBF Network – Curse of Dimensionality – Interpolations and Basis Functions – Support Vector Machines.
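As a rough illustration of the "Going Forwards / Going Backwards" material, the following sketch trains a tiny multi-layer perceptron on XOR with plain NumPy; the layer sizes, sigmoid activation, learning rate, and iteration count are all illustrative assumptions:

```python
# Minimal MLP sketch (illustrative only): one hidden layer, sigmoid activations,
# squared-error loss, trained on XOR with hand-written back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR is not linearly separable

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)       # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)       # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # going forwards
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # going backwards: propagate the error back through the layers
    d_out = (out - y) * out * (1 - out)              # squared error * sigmoid'
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                          # typically close to [0, 1, 1, 0]
```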
UNIT III TREE AND PROBABILISTIC MODELS
Learning with Trees – Decision Trees – Constructing Decision Trees – Classification and Regression Trees – Ensemble Learning – Boosting – Bagging – Different ways to Combine Classifiers – Probability and Learning – Data into Probabilities – Basic Statistics – Gaussian Mixture Models – Nearest Neighbor Methods – Unsupervised Learning – K-Means Algorithm – Vector Quantization – Self-Organizing Feature Map
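For the unsupervised part of this unit, here is a minimal sketch of the K-Means algorithm on a two-blob toy dataset; the data, the choice of k = 2, and the convergence test are illustrative assumptions:

```python
# Minimal K-Means sketch (illustrative only): alternate the assignment and
# centre-update steps until the centres stop moving.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]            # init from data points
    for _ in range(iters):
        # assignment step: label each point with its nearest centre
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # update step: move each centre to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),                    # blob near (0, 0)
               rng.normal(5.0, 0.5, (20, 2))])                   # blob near (5, 5)
centers, labels = kmeans(X, k=2)
print(centers.round(1))                                          # one centre per blob
```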
UNIT IV DIMENSIONALITY REDUCTION AND EVOLUTIONARY MODELS
Dimensionality Reduction – Linear Discriminant Analysis – Principal Component Analysis – Factor Analysis – Independent Component Analysis – Locally Linear Embedding – Isomap – Least Squares Optimization – Evolutionary Learning – Genetic Algorithms – Genetic Offspring: Genetic Operators – Using Genetic Algorithms – Reinforcement Learning – Overview – Getting Lost Example – Markov Decision Process
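As one concrete example from the dimensionality-reduction topics, the sketch below performs Principal Component Analysis via the SVD of the centred data matrix; the synthetic data and the component count are illustrative assumptions:

```python
# Minimal PCA sketch (illustrative only): centre the data, take its SVD, and keep
# the leading right-singular vectors as the principal directions.
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)                          # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                   # directions of maximal variance
    explained_var = (S ** 2) / (len(X) - 1)          # variance along each direction
    return Xc @ components.T, components, explained_var[:n_components]

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                   # one hidden factor
X = latent @ np.array([[2.0, 1.0, 0.5]]) + rng.normal(0, 0.1, (100, 3))
scores, components, var = pca(X, n_components=1)
print(components.round(2), var.round(2))             # direction roughly along (2, 1, 0.5)
```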
UNIT V GRAPHICAL MODELS
Markov Chain Monte Carlo Methods – Sampling – Proposal Distribution – Markov Chain Monte Carlo – Graphical Models – Bayesian Networks – Markov Random Fields – Hidden Markov Models – Tracking Methods
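To illustrate the Markov Chain Monte Carlo topic, here is a minimal Metropolis-Hastings sampler drawing from a standard normal target with a Gaussian random-walk proposal; the target density, proposal width, and chain length are illustrative assumptions:

```python
# Minimal Metropolis-Hastings sketch (illustrative only): a random-walk proposal
# with the standard accept/reject rule, targeting an unnormalised N(0, 1) density.
import numpy as np

def metropolis_hastings(log_target, n_samples, proposal_std=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + rng.normal(0.0, proposal_std)    # symmetric proposal
        # accept with probability min(1, p(x_new) / p(x)), done in log space
        if np.log(rng.uniform()) < log_target(x_new) - log_target(x):
            x = x_new
        samples.append(x)
    return np.array(samples)

log_target = lambda x: -0.5 * x ** 2                 # log of unnormalised N(0, 1)
chain = metropolis_hastings(log_target, n_samples=10000)
print(chain.mean().round(2), chain.std().round(2))   # roughly 0.0 and 1.0
```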
TOTAL: 45 PERIODS
OUTCOMES:
Upon completion of this course, the students will be able to:
- Distinguish between supervised, unsupervised and semi-supervised learning
- Apply the appropriate machine learning strategy for any given problem
- Suggest supervised, unsupervised or semi-supervised learning algorithms for any given problem
- Design systems that use the appropriate graphical models of machine learning
- Modify existing machine learning algorithms to improve classification efficiency
REFERENCES:
- Ethem Alpaydin, "Introduction to Machine Learning" (Adaptive Computation and Machine Learning Series), Third Edition, MIT Press, 2014.
- Jason Bell, "Machine Learning: Hands-On for Developers and Technical Professionals", First Edition, Wiley, 2014.
- Peter Flach, "Machine Learning: The Art and Science of Algorithms that Make Sense of Data", First Edition, Cambridge University Press, 2012.
- Stephen Marsland, "Machine Learning: An Algorithmic Perspective", Second Edition, Chapman and Hall/CRC Machine Learning and Pattern Recognition Series, 2014.
- Tom M. Mitchell, "Machine Learning", First Edition, McGraw Hill Education, 2013.