
Machine Learning for Hands-on Engineers

We are not going to just teach you machine learning. We are going to provide you with a comprehensive platform that lets you learn both the theory and the applications of machine learning through your active participation. We will equip you with the knowledge and skills necessary to tackle real-world problems in a principled manner. In essence, we aim to save you from the failures and frustrations of the "spray and pray" or "black-box" methods of unskilled practitioners.


We have custom-designed learning modules, matching each concept with the technique best suited to it. While some concepts are most efficiently learnt through video lectures, others are understood better via active exploration, and real-world problem-solving contexts are apt for yet others. This mix of strategies, along with pedagogical techniques such as spaced repetition, reflection and in-line quizzing, is incorporated naturally into the course.


Course Details

This is a two-month course, with video lectures to enable self-paced learning, simulators and interactive modules to explore concepts, 48 hours of in-person sessions for tutorials and deeper discussions, and 24 hours of assistance on hands-on projects via scheduled mid-week Google Hangouts sessions with the instructor.


Foundations & Tools

  • General Approach to Machine Learning
  • Review of Mathematical Concepts
  • IPython, NumPy, Matplotlib Tutorial
General Approach to ML:
  • Structure of ML Pipeline
  • Preparing the Data
  • Selecting an Appropriate Machine Learning Model (or will plain statistics do? It depends on where the model will be deployed, the amount of training data, the nature/complexity of the problem, etc.)
  • Pre-Processing and Feature Engineering (or not)
  • The Anatomy of ML Algorithms - model, objective and learning
  • Types of Machine Learning Problems
  • Evaluating Model Performance (overfitting, underfitting etc.)
  • Visualising Decision Boundaries
  • Error Analysis and Diagnosing Models (Coming up with the fixes)
  • Tuning the Model Hyperparameters
  • Post Production Monitoring and Retraining models over time
Mathematical Concepts Review:
  • Linear Algebra Review
  • Probability and Statistics Review
  • Calculus Review
Tools and Techniques Tutorial:
  • JupyterLab Overview
  • NumPy, SciPy Introduction (linear algebra operations, broadcasting concepts, vectorisation examples; see the sketch below)
  • Matplotlib Introduction - Basics of plotting and user event handling.
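To give a flavour of the hands-on tutorials, here is a minimal sketch (the toy data and variable names are illustrative, not course material) of NumPy broadcasting and vectorisation, compared against an explicit Python loop:

```python
import numpy as np

# A toy dataset: 5 samples with 3 features each.
X = np.random.rand(5, 3)
mean = X.mean(axis=0)  # per-feature mean, shape (3,)
std = X.std(axis=0)    # per-feature standard deviation, shape (3,)

# Broadcasting: the shape-(3,) statistics are stretched across all 5 rows,
# so the whole dataset is standardised without writing a loop.
X_standardised = (X - mean) / std

# The equivalent explicit loop: same result, but slower and more verbose.
X_loop = np.empty_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        X_loop[i, j] = (X[i, j] - mean[j]) / std[j]

print(np.allclose(X_standardised, X_loop))  # True
```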

Understanding Data & Features

  • Understanding & Exploring Data
  • Feature Engineering
  • Visualisation Techniques
  • Matplotlib Visualisations
Understanding & Exploring Data
  • Types of Variables and Corresponding Numerical Representations
  • Evaluating the Quality of Data (Representativeness, Balance, Labelling Accuracy, Noise)
  • Sampling for Train/Test/Validation
Feature Engineering
  • Feature encoding for various real-world data: Text, Images, Business Data, Time Series Data
  • Basic Descriptive Statistics for Analysing Data
  • Dimensionality Reduction - PCA
  • Standardising, Normalising Variables
Data Visualisation Tools & Techniques
  • JupyterLab/Matplotlib for temporal plots, scatterplots, histograms, overlaid plots, dendrograms and matrix diagrams.
  • PCA, t-SNE for visualising high dimensional data
  • Using Pandas to filter and summarise data
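As an example of the visualisation techniques listed above, the following sketch (using scikit-learn's bundled Iris dataset purely for illustration) standardises the features and projects them onto two principal components for a 2-D scatterplot:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# A small, well-known dataset: 150 samples with 4 features each.
X, y = load_iris(return_X_y=True)

# Standardise the features, then project onto the top two principal components.
X_scaled = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_scaled)

# Scatterplot of the projected data, coloured by class label.
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
plt.xlabel("Principal component 1")
plt.ylabel("Principal component 2")
plt.title("Iris data projected onto two principal components")
plt.show()
```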

Non-Parametric Models

  • Nearest Neighbour Models
  • Decision Trees
  • Random Forests
  • Scikit Learn Tutorial
Nearest Neighbour Models
  • Model, Objective and Learning Perspective
  • Similarity Metrics
  • k-Nearest Neighbour Algorithm
  • Visualising the Behaviour of k-NN models (decision boundaries)
  • Strengths & Limitations
  • Practical Tips: When & How to best use them.
Decision Trees
  • Model, Objective and Learning Perspective
  • Metrics for Measuring the Quality of Splitting the Data (Gini Impurity, Information Gain, Variance Reduction for regression)
  • Training and Prediction
  • Visualising the Behaviour of Decision Trees
  • Strengths & Limitations
  • Boosting to improve their performance
  • Practical Tips: When & How to best use them.
Random Forests
  • The idea of Ensemble Models
  • The role of randomisation and why it makes Random Forests more powerful
  • How to randomise effectively across different trees
  • Training & Prediction
  • Visualising the Behaviour of Random Forests (non-linear boundaries)
  • Strengths & Limitations
  • Practical Tips: When and How to best use them.
Scikit Learn Tutorial
  • Introduction to the Scikit Learn ML interface
  • Tutorial demonstration of how to use the models discussed in this section
  • Describing the necessary parameters and what you can control while designing or troubleshooting models
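The Scikit Learn tutorial builds on its uniform fit/predict interface. As a rough sketch (the toy dataset and parameter values are illustrative only), the three model families above can be trained and compared in a few lines:

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# A toy two-class dataset with a non-linear decision boundary.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The same fit/score interface works across all three model families.
models = {
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```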

Probabilistic Graphical Models

  • Naive Bayes Models
  • Bayesian Networks
  • Conditional Random Fields
  • Tutorial: NLTK/spaCy for NLP
Naive Bayes Models
  • Bayes Rule Refresher
  • Naive Bayes Model, Maximum Likelihood Objective and MLE parameter estimation
  • Bag of words/bag of features view
  • Strengths & Limitations
  • Practical Tips
Bayesian Networks
  • Bayesian Networks model, understanding conditional dependencies, and reasoning from the graph
  • Incorporating domain knowledge into the network structure, modelling and training local distributions
  • High-level introduction to Hidden Markov Models for modelling sequence data
Conditional Random Fields
  • Generative vs Discriminative Models, advantages and disadvantages
  • CRFs as discriminative models for sequences
  • CRFs model description
  • Strengths & Limitations
  • Practical Tips: Designing CRFs for sequence labelling problems
NLTK/spaCy Tutorial
  • Introduction to basic NLP utilities for extracting natural language features
  • Word2vec intuition and using pre-trained word2vec models in NLP applications
  • Building with NLTK CRF module
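As a small illustration of the bag-of-words view behind Naive Bayes, here is a sketch of a text classifier built with scikit-learn (the four-sentence corpus and its labels are made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny labelled corpus; real exercises use much larger datasets.
texts = [
    "great movie, loved the acting",
    "wonderful plot and brilliant cast",
    "terrible film, a waste of time",
    "boring story and poor acting",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Bag-of-words counts feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["brilliant and wonderful acting"]))  # expected: [1]
```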

Feedforward Neural Networks

  • Single Layer Neural Network
  • Multilayer Neural Network
  • Introduction to Keras
Single Layer Neural Network
  • Neuron model, the role of weights and bias, activation function (linear and sigmoid)
  • Single layer single output, and single layer multiple output models (matrix formulations)
  • Training single layer network with SGD
  • Motivating the need for multiple layers, non-linearity
Multilayer Neural Network
  • Multilayer architecture
  • Backpropagation algorithm - Chain rule intuition, basic computational graph to understand backpropagation in complex architectures
  • Understanding vanishing and exploding gradients and how to control them
  • Understanding the importance of weight initialisation and input normalisation - Why and how?
  • Hyperparameter tuning - number of layers/units per layer, learning rate
  • Diagnostics: Interpreting the learning curves to understand underfitting and overfitting issues and their fixes; error analysis techniques
Introduction to Keras
  • Building multilayer networks in Keras
  • Using Convolutional and Recurrent layers
  • Using pre-trained models - Transfer learning
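As a preview of the Keras tutorial, here is a minimal sketch of a multilayer network on synthetic data (shown with the tensorflow.keras API; the toy data and layer sizes are illustrative only, not course material):

```python
import numpy as np
from tensorflow import keras

# Synthetic binary-classification data: 1000 samples, 20 features.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# A small multilayer (fully connected) network with a sigmoid output.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

# Training with a validation split makes learning curves available for diagnostics.
history = model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
print("Final validation accuracy:", history.history["val_accuracy"][-1])
```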

Deep Learning Introduction

  • Deep Learning Overview
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Implementing Deep Neural Networks in Keras
Deep Learning Overview
  • Deep Learning as a multilayer neural network with many layers
  • The idea of hierarchical, distributed representations, composed by stacking layers
  • The key factors that made deep learning possible: GPU computing, ReLU activation functions, Drop-out regularisation
Convolutional Neural Networks (Basics)
  • Convolution operation, and intuition into the functioning of the convolution layers
  • Feature maps (channels), and what they can potentially learn
  • Guidelines for designing convolutional layers, getting the tensor dimensions right
  • The types of problems that CNNs are suitable for
  • Sample Applications
Recurrent Neural Networks (Basics)
  • The intuition behind recurrent connections and their power
  • Time unrolling recurrent networks to view them as normal multilayer networks
  • LSTMs for modelling long-term dependencies in sequence data, and the types of problems solved using LSTMs
  • Sample Applications
Implementing Deep Neural Networks in Keras (Tutorial)
  • Implementing deep neural networks in Keras
  • Examples including CNN and LSTMs
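To make the layer-stacking idea concrete, here is a sketch of a small convolutional network for 28x28 grayscale images (again with the tensorflow.keras API; the layer sizes are illustrative, not a recommended architecture):

```python
from tensorflow import keras

# Convolution + pooling blocks followed by a dense classification head.
model = keras.Sequential([
    keras.layers.Conv2D(16, kernel_size=3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),  # dropout regularisation, as discussed above
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # a quick check that the tensor dimensions line up
```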

Models for Other Common Tasks

  • Clustering Models
  • Recommender Models
  • Learning to Rank Models
  • Anomaly Detection
Clustering Models
  • K-Means
  • Agglomerative Clustering
Recommender Models
  • Content-based Systems
  • Collaborative Filtering
Ranking Models
  • Link Analysis (PageRank)
  • Pair-wise Approach (LambdaMART)
Anomaly Detection
  • Density Based Models (GMM, k-NN)
  • One class SVM
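As a taste of the clustering module, here is a minimal k-means sketch on synthetic data (the blob data and the choice of k=3 are illustrative only):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data drawn from three well-separated clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Fit k-means with k=3 and inspect the learned cluster centres and assignments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster centres:\n", kmeans.cluster_centers_)
print("First ten assignments:", kmeans.labels_[:10])
```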

Instructor Profile

Ram Prakash H.

Course Designer & Instructor

This course is designed and delivered by Mr. Ram Prakash H. Ram holds a B.Tech degree in Computer Science from IIT Madras. As an entrepreneur, Machine Learning researcher, and hands-on practitioner for more than 15 years, he has built and shipped several ML-based technologies, including:

  • Inventing Quillpad, a first-of-its-kind ML-based multilingual predictive transliteration engine for Indian languages. Quillpad has been instrumental in triggering the rapid rise of user-generated content in Indian languages on the internet. For this pioneering work, he was recognised among the top 20 innovators in India by the MIT TR35 awards.
  • Co-founding and leading the R&D team of a company which developed deep learning models for understanding aesthetic elements of fashion products.
  • Designing and developing an end-to-end industry grade machine learning product for converting scanned multilingual books to editable e-pubs, with state-of-the-art OCR accuracy for Indian languages. This technology was acquired by a leading e-book publisher in India.
Apart from the MIT TR35 recognition, he has been:
  • an invited speaker at the top-tier International Conference on Computational Linguistics, and
  • a winner of the Nokia Best India Innovation Award

As a self-taught ML practitioner, he understands the questions faced by uninitiated learners and the dangers of learning through the black-box approach. His workshops enable efficient learning by explaining the underlying principles, making the functioning of these methods transparent and easy to understand.

On the research front, he is working on creating an AI-assisted active learning environment to help learners master a wide range of subjects.

Ram is an avid runner and badminton player. Connecting the dots by applying science and math to any activity he pursues gives him an edge in how quickly he learns and masters new techniques. His unconventional teaching style, with anecdotes drawn from all fields, makes his courses an enriching experience for learners.

Features of Active Learning
  • Increased engagement
  • Sparking Creativity
  • Deepening understanding
  • Widening participation
Benefits of In-person Sessions
  • Better focus on learning and less distraction
  • Individualized and personalized support for students
  • Enhancement of learning through classroom discussion
  • Enforcement of real-time discipline and structure
Advantages of Guided Practice
  • Decontextualising learning from the classroom to "real life" scenarios
  • Scaffolding of the learner's attempt through support, encouragement, hints and feedback
  • Gradual transition of cognitive skills from modelling stage to independent practice
  • More confidence to apply the skills independently

Course Fee: Rs. 55,000/-

Why think twice? We offer a "No Questions Asked" refund if you feel, within the first two weeks of the course, that it is not helping you.


Pre-requisites

 

  • No prior knowledge of Machine Learning required
  • Should be comfortable coding in Python
  • Elementary knowledge of matrices, probability and differential calculus is preferred


Venue and other details

Venue: IKP EDEN, Koramangala, Bengaluru, Karnataka 560029.

Date: June-July 2018 (Sundays)

Timings: Sundays 9:00 am to 5:00 pm


About Us

AI is here to stay. It will change the kind of jobs humans have to do. We believe that the current educational system, as well as professional skill development programs, is not designed to equip people for this imminent scenario.

Our team is working on scalable learning environments that help people learn various subjects in a way that empowers them to do tasks which AI will not be able to do in the next decade. Such skills require deeper conceptual understanding, the ability to formulate and solve problems, and the ability to analyse and troubleshoot unseen situations. We believe that AI itself is going to play a significant role in achieving this. Our research focuses on addressing education-related problems using the latest advances in AI, cognitive science, and gamification. In essence, we are embracing AI to help ourselves stay a step ahead.
