We are not going to just teach you machine learning. We are going to provide you with a comprehensive platform that facilitates learning both the theory and the applications of machine learning through your active participation. We will equip you with the knowledge and skills necessary to tackle real-world problems in a principled manner. In essence, we aim to save you from the failures and frustrations of the "spray and pray" or "blackbox" methods of unskilled practitioners.

We have custom-designed learning modules, matching each concept with the technique best suited to it. While some concepts are most efficiently learnt through video lectures, others are understood better via active exploration, and real-world problem-solving contexts are apt for yet others. These strategies, along with pedagogical techniques like spaced repetition, reflection, and in-line quizzing, are incorporated naturally into the courses.

The activities in each course will follow the structure below:

  • In-Person Sessions The theoretical concepts will be discussed during the weekly in-person sessions. The sessions will also involve working with iPython notebooks, designed for active exploration of the concepts.
  • Assignments Pedagogically designed assignments will enable you to internalize the concepts while applying them to solve real-world or close to real-world projects. Some projects may be simplified to avoid distractions from the core objective of the course.
  • Hangout Sessions We will have mid-week hangout sessions to provide assistance on assignments. These will be up to 90 minutes per session.

Course Details

Hands-on Machine Learning - Foundations

Why learn classical ML methods in a world bustling with news and demos flashing the amazing capabilities of deep learning?

The pragmatic fact is that most real-world problems either don't need deep learning or are better handled by shallower, simpler models that incorporate already well-understood domain knowledge. Deep learning requires a very large amount of labeled training data. With good feature engineering, which combines expert domain knowledge with know-how of the internal machinery of machine learning models, you can build solutions that work very well. This approach has other benefits too: less labeled data, quicker turnaround time, and more interpretable models. The Googles and Facebooks of the world, even with strong deep learning technologies available to them, routinely use classical machine learning methods for most problems. Classical machine learning is the workhorse of real-world, data-driven problem-solving.

Consider self-driving cars or Alexa-like applications. They mainly use deep learning for processing visual scenes or speech input, which makes up only a small, albeit very important, part of the whole system. Much of the system uses rule-based and algorithmic components, simpler ML models, reasoning with knowledge models, and so on, to put together the whole act of self-driving, or of understanding and responding to users. It is these classical methods that make up the majority of the system. They are even more important when you are working with tabular data, which is common in business problems.

The Hands-on Machine Learning Foundations Course is designed to provide you with a strong foundation in the general concepts of machine learning while introducing you to some powerful classical machine learning methods that will cover the majority of your real-world demands. Moreover, the concepts you master in this course will be a perfect launchpad for the more advanced models discussed in the Advanced Course, which includes an introduction to deep learning.

The outline of the course is given below.

Session 1 - A Big Picture

A Generic ML Machinery
  1. Introduction to ML
  2. Problems Addressed by ML - Supervised, Unsupervised, Reinforcement Learning
  3. General Framework of ML - Understanding hypothesis space, model, objective function, and training mechanisms
  4. Evaluating ML Algorithms (train, test, Cross-validation)
Dive into Action: Getting a Flavour of End-to-End ML
  1. Formulating ML Problems
  2. Understanding Data & Feature Engineering
  3. Training & Testing Models (Blackbox Approach)
  4. Limitations of the Blackbox Approach
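As a taste of the workflow above, here is a minimal end-to-end sketch, assuming scikit-learn and a synthetic dataset purely for illustration (the course notebooks may use different tools and data):

```python
# A minimal end-to-end sketch, assuming scikit-learn and synthetic data
# purely for illustration (not necessarily the course's tools or dataset).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out a test set that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_acc = model.score(X_test, y_test)

# 5-fold cross-validation estimates generalisation using training data alone.
cv_scores = cross_val_score(LogisticRegression(max_iter=1000),
                            X_train, y_train, cv=5)
print(test_acc, cv_scores.mean())
```

Comparing the held-out test accuracy against the cross-validation mean is a quick sanity check that the result is not an artifact of one lucky split.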

Session 2 - Naive Bayes & Decision Trees

Naive Bayes Models
  1. Probability Basics & Bayes Rule
  2. Using the Bayes Rule to formulate ML Decisions
  3. Naive Bayes Models
  4. Defining the Objective Function
  5. Maximum Likelihood Estimation
  6. Practical Tips & Applications
Decision Trees & Random Forests
  1. Solving Problems with If-Else Rules & Establishing the Connection to Decision Trees
  2. Strategies for Learning (Growing) Decision Trees (Classification & Regression)
  3. Strengths & Limitations of the Decision Trees
  4. Regularisation, Bagging & Boosting (AdaBoost, XGBoost)
  5. Random Forests
  6. Practical Tips & Applications
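To make the contrast between the two families concrete, here is a small sketch that fits a Naive Bayes model and a Random Forest on the same data, with scikit-learn and the Iris dataset assumed purely for illustration:

```python
# Naive Bayes vs. Random Forest on the same data (scikit-learn and the
# Iris dataset assumed here for illustration only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# GaussianNB fits one Gaussian per feature per class via maximum likelihood.
nb = GaussianNB().fit(X_tr, y_tr)
# A Random Forest bags many randomised decision trees.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("NB :", nb.score(X_te, y_te))
print("RF :", rf.score(X_te, y_te))
```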

Session 3 - Linear Models

Linear Regression
  1. Formulating Linear Regression
  2. Objective Function and MLE Optimisation
  3. Introduction to SGD
  4. Hyperparameter tuning, Regularisation
  5. Fitting Non-linear Functions (Non-linear Feature Transformation)
  6. Practical Tips & Applications
Logistic Regression
  1. Formulating Logistic Regression
  2. Objective Function, MLE, and SGD
  3. Hyperparameter tuning & Regularisation
  4. Extending to multi-class classification (Softmax Regression)
  5. Practical Tips & Applications
Support Vector Machines
  1. SVM model & Key ideas
  2. Understanding the optimization machinery of the SVMs
  3. Dealing with non-linearity - Kernel Trick
  4. Training: Hyperparameter tuning & Regularisation
  5. Practical Tips & Applications
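The ideas above, an objective function derived from maximum likelihood and minimised by gradient steps, can be sketched from scratch. The toy data, the plain full-batch gradient descent (rather than SGD), and the hyperparameters below are illustrative choices, not the course's reference implementation:

```python
import numpy as np

# From-scratch logistic regression trained by full-batch gradient descent
# on the negative log-likelihood. Illustrative sketch only.
rng = np.random.default_rng(0)

# Toy linearly separable data: labels come from a known linear rule.
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -3.0])
y = (X @ true_w + 0.5 > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted P(y = 1 | x)
    grad_w = X.T @ (p - y) / len(y)   # gradient of mean NLL w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(acc)
```

Swapping the full-batch gradient for a gradient computed on a random mini-batch at each step turns this into SGD.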

Session 4 - Unsupervised Models

Density Estimation
  1. Gaussian Distributions - Univariate & Multivariate
  2. Fitting Gaussian Distributions to Data
  3. Multi-Modal Distributions via Gaussian Mixture Models
  4. Using Random Forests for Density Estimation
  5. One-class SVM for novelty detection
  6. Practical Tips & Applications
Clustering
  1. K-Means Clustering
  2. Agglomerative Clustering
  3. Soft-Clustering using GMMs
  4. Practical Tips & Applications

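As a small illustration of the unsupervised methods above, the sketch below fits K-Means (hard assignments) and a Gaussian Mixture Model (soft assignments) to the same toy data, with scikit-learn assumed for convenience:

```python
# Hard vs. soft clustering on toy data (scikit-learn assumed for illustration).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(6, 1, size=(100, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

hard = km.labels_             # each point gets exactly one cluster
soft = gmm.predict_proba(X)   # each point gets a probability per component
print(hard[:5], soft[:2].round(2))
```

The GMM also doubles as a density estimator: `gmm.score_samples(X)` returns log-densities usable for novelty detection.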
Instructor Profile

Ram Prakash H.

Course Designer & Instructor

These courses are designed and delivered by Mr. Ram Prakash H. Ram holds a B.Tech degree in Computer Science from IIT Madras. He has been an entrepreneur, machine learning researcher, and hands-on practitioner for more than 15 years, and is currently working with Flipkart as an ML/Data Science Consultant in Bangalore. He has built and shipped several ML-based technologies, including:

  • Inventing Quillpad, a first-of-its-kind ML-based multilingual predictive transliteration engine for Indian languages. Quillpad has been instrumental in triggering the rapid rise of user-generated content in Indian languages on the internet. For this pioneering work, he was recognised among the top 20 innovators in India by the MIT TR35 awards.
  • Co-founding and leading the R&D team of a company which developed deep learning models for understanding aesthetic elements of fashion products.
  • Designing and developing an end-to-end industry grade machine learning product for converting scanned multilingual books to editable e-pubs, with state-of-the-art OCR accuracy for Indian languages. This technology was acquired by a leading e-book publisher in India.
Apart from the MIT TR35 recognition, he has been
  • invited as a Speaker at the top-tier International Conference on Computational Linguistics and
  • a winner of Nokia Best India Innovation Award

As a self-taught ML practitioner, he understands the questions faced by uninitiated learners and the dangers of learning through the black-box approach. His workshops enable participants to learn efficiently by explaining the underlying principles, making the functioning of these methods transparent and easy to understand.

On the research front, he is working on creating an AI-assisted active learning environment to help learners master a wide range of subjects.

Ram is an avid runner and badminton player. Connecting the dots by applying science and math to any activity he pursues gives him an edge in how quickly he learns and masters new techniques. His unconventional teaching style, with anecdotes drawn from all fields, makes his courses an enriching experience for learners.




Prerequisites

  • No prior knowledge of Machine Learning required
  • Should be comfortable coding in Python
  • Elementary knowledge of matrices, probability and differential calculus is preferred

Hands-on Machine Learning - Advanced

Graphical Models, Neural Nets & Intro to Deep Learning

The Hands-on Machine Learning Advanced Course builds on ML Foundations and introduces you to some of the power tools of machine learning: Probabilistic Graphical Models, Neural Networks, and Deep Learning. With these techniques in your arsenal, you will be able to design solutions for problems involving making sense of visual data, understanding and auto-responding to natural language queries, and representing and reasoning with graphically modelled domain knowledge. We will cover deep learning models, including convnets and RNNs, with emphasis on the practical process of designing and training models using Keras.

The outline of the Advanced Course is given below.

Session 1 (Probabilistic Graphical Models)

  1. Introduction to Bayesian Networks
  2. Designing Bayesian Networks in Practice
  3. Inference with Bayesian Networks
  4. Learning in Bayesian Networks (MLE)
  5. Practical Tips & Applications
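A tiny example of inference in a Bayesian network, computed by brute-force enumeration over a hand-specified Rain/Sprinkler/WetGrass network (the probabilities are made up for illustration):

```python
# Inference by enumeration in a tiny Bayesian network:
# Rain -> WetGrass <- Sprinkler. All probabilities are illustrative.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
# P(wet | sprinkler, rain)
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def p_rain_given_wet():
    # For each value of rain, sum the sprinkler variable out of the joint,
    # then normalise to get P(rain | wet=True).
    num = {}
    for r in (True, False):
        num[r] = sum(P_rain[r] * P_sprinkler[s] * P_wet[(s, r)]
                     for s in (True, False))
    return num[True] / (num[True] + num[False])

print(round(p_rain_given_wet(), 3))
```

Enumeration is exponential in the number of variables, which is why real systems use smarter algorithms such as variable elimination; the answer it gives here, however, is exact.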

Session 2 (Probabilistic Models for Spatial & Temporal data)

  1. Markov Random Fields with sample Application in Image Processing
  2. Hidden Markov Models with sample Application in Gesture Recognition
  3. Conditional Random Fields with sample Application in NLP (Entity Extraction for Chatbot Building)
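As a flavour of inference in such sequence models, the following sketch implements Viterbi decoding for a made-up 2-state HMM; the states, probabilities, and observation alphabet are illustrative assumptions:

```python
import numpy as np

# Viterbi decoding for a tiny 2-state HMM. All numbers are illustrative;
# think of obs 0/1 as "low/high sensor reading" in a gesture-like setting.
states = ["Rest", "Move"]
start = np.array([0.8, 0.2])        # P(state_0)
trans = np.array([[0.7, 0.3],       # P(state_t | state_{t-1})
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],        # P(obs | state), obs in {0, 1}
                 [0.2, 0.8]])

def viterbi(obs):
    T, N = len(obs), len(states)
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)          # best predecessor of each j
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 0]))
```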

Session 3 (Neural Networks)

  • Neurons
  • Understanding the need for Bias, Multiple Layers, and Non-linearity
  • Single Layer Neural Network ( Gradient Descent )
  • Multi-layer Neural Network ( Backpropagation )
  • Training & Tuning Neural Networks (Hyper-parameter tuning)
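The points above can be seen end-to-end in a from-scratch sketch: a two-layer network trained by backpropagation to fit XOR, a task a single layer cannot solve. The architecture and hyperparameters are illustrative choices:

```python
import numpy as np

# A two-layer network trained by backpropagation on XOR.
# From-scratch illustrative sketch, not production code.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden layer: the non-linearity matters
    out = sigmoid(h @ W2 + b2)    # output layer
    # Backpropagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel())
```

Removing the hidden layer (or its non-linearity) makes the model linear, and no amount of training will then fit XOR.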

Session 4 (Introduction to Deep Learning)

  • Deep Learning Introduction
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Using Pre-trained Models (transfer learning)
  • Practical Tips for building & training large models using Keras

Hands-on Deep Learning

As far as deep learning is concerned, if mastering frameworks like TensorFlow or Caffe were all it took to build production-quality solutions, those frameworks would hardly have been open-sourced in the first place. Even with good training data in hand, the key to building successful deep learning models lies in applying theoretical knowledge to diagnose and tune them in a principled manner.

As of today, deep learning in practice is necessarily empirical, and each iteration is time-consuming if not approached systematically. A good deep learning engineer minimises costly iterations by following an appropriate design-and-train process, backed by a thorough understanding of the models' underlying mathematical machinery.

To help you become a successful deep learning practitioner, we have designed this course to provide you with the necessary theoretical knowledge and practical know-how.

The detailed course outline is given below.

Session 1 (Introduction)

  • Deep Learning Introduction
  • Neural Networks Revision
  • Gradient Descent & Variants
  • Computational Graphs
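To illustrate why the variants matter, the sketch below compares plain gradient descent with momentum on a toy ill-conditioned quadratic (all numbers are illustrative):

```python
import numpy as np

# Plain gradient descent vs. momentum on f(x) = 0.5 * x^T A x,
# where A has very different curvatures per axis. Illustrative toy only.
A = np.diag([1.0, 25.0])

def grad(x):
    return A @ x

def gd(lr=0.03, steps=200):
    x = np.array([10.0, 1.0])
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def momentum(lr=0.03, beta=0.9, steps=200):
    # Heavy-ball momentum: the velocity accumulates past gradients,
    # which speeds progress along the flat (low-curvature) direction.
    x = np.array([10.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

print(np.linalg.norm(gd()), np.linalg.norm(momentum()))
```

The minimum is at the origin, so the printed norms measure how far each method still is after the same number of steps.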

Session 2 (Convolutional Neural Networks)

  • Convolutions
  • The motivation behind convolutional neural networks
  • Designing CNNs for various Applications
  • Understanding successful CNN Architectures
  • Auto-Encoders (Sparse & De-noising)
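The core operation can be demystified with a direct NumPy implementation of a 2-D "valid" convolution (strictly speaking, the cross-correlation that deep learning libraries call convolution); real libraries compute the same thing far more efficiently:

```python
import numpy as np

# Naive 2-D "valid" convolution (cross-correlation, as used in CNNs).
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the kernel with each image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])                   # horizontal difference filter
print(conv2d(img, edge))
```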

Session 3 (Recurrent Neural Networks)

  • RNN Introduction
  • LSTM Working
  • Designing RNNs for various Applications ( sequence labelling, classification, generation etc.)
  • Understanding successful RNN Architectures
  • Bi-Directional RNNs
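The recurrence that LSTMs refine with gating can be sketched as a vanilla RNN forward pass in NumPy; the dimensions and random weights below are arbitrary illustrative choices:

```python
import numpy as np

# Forward pass of a vanilla RNN cell unrolled over a short sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

Wx = rng.normal(scale=0.5, size=(input_dim, hidden_dim))
Wh = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

xs = rng.normal(size=(seq_len, input_dim))  # one input vector per time step
h = np.zeros(hidden_dim)                    # initial hidden state
states = []
for x in xs:
    # The same weights are reused at every step: that is the "recurrence".
    h = np.tanh(x @ Wx + h @ Wh + b)
    states.append(h)

print(len(states), states[-1].shape)
```

An LSTM replaces the single tanh update with gated updates to a cell state, which is what lets it carry information across long sequences.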

Session 4 (DL for other Problems)

  • Embeddings (Representation Learning)
  • Siamese Networks (Learning to compare, rank etc.)
  • De-Convolutions
  • Fully-convolutional Networks


Math Background

  • Vectors, Matrices
  • Dot Product, Matrix Multiplication
  • Basic Differentiation
  • Elementary Probability Concepts, Bayes Rule
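A quick NumPy self-check for the background above; if each line feels comfortable, you have the needed vocabulary:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(v @ w)                    # dot product: 1*4 + 2*5 + 3*6 = 32

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(A @ B)                    # matrix product; this permutation swaps A's columns

# Bayes rule: P(A|B) = P(B|A) * P(A) / P(B), with made-up numbers.
p_b_given_a, p_a, p_b = 0.9, 0.01, 0.05
print(p_b_given_a * p_a / p_b)  # = 0.18
```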

ML Basics

  • Knowledge of machine learning, in general, will help: types of models, awareness of data challenges, typical ML pipeline, etc.


Python Skills

  • Well-versed with Python
  • Should be comfortable with Numpy
  • Should be comfortable plotting with Matplotlib
  • Should be familiar with working with images: matrix representation, cropping, scaling, rotation etc.
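A quick self-check for the image-handling prerequisite: cropping and naively downscaling a matrix-represented image with plain NumPy slicing (the 8x8 "image" is a stand-in):

```python
import numpy as np

# Treating an image as a matrix: crop and crudely downscale with slicing.
img = np.arange(64, dtype=float).reshape(8, 8)  # fake 8x8 grayscale image

crop = img[2:6, 2:6]    # rows 2-5, cols 2-5 -> a 4x4 crop
small = img[::2, ::2]   # keep every 2nd pixel -> naive 2x downscale

print(crop.shape, small.shape)
```

Real pipelines would use proper resampling (e.g. bilinear interpolation) instead of dropping pixels, but the matrix view is the same.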

Features of Active Learning
  • Increased engagement
  • Sparking Creativity
  • Deepening understanding
  • Widening participation
Benefits of In-Person Sessions
  • Better focus on learning and less distraction
  • Individualized and personalized support for students
  • Enhancement of learning by a classroom discussion
  • Enforcement of real-time discipline and structure
Advantages of Guided Practice
  • Decontextualisation of learning, from the classroom to "real life" scenarios
  • Scaffolding of the learner's attempt through support, encouragement, hints and feedback
  • Gradual transition of cognitive skills from modelling stage to independent practice
  • More confidence to apply the skills independently


About Us

AI is here to stay. It will change the kinds of jobs humans have to do. We believe that the current educational system, as well as professional skill development programs, are not designed to equip people for this imminent scenario.

Our team is working on scalable learning environments that help people learn various subjects in a way that empowers them to do the tasks AI will not be able to do in the next decade. Such work requires deeper conceptual understanding, the ability to formulate and solve problems, and the ability to analyse and troubleshoot unseen situations. We believe that AI is going to play a significant role in achieving this. Our research focuses on addressing education-related problems using the latest advances in AI, cognitive science, and gamification. In essence, we are embracing AI to help ourselves stay a step ahead.