News



AI Mentoring Program by Prof. Ajit Jaokar

After successfully completing two programs, Garje Marathi Global announces an AI mentoring program by Prof. Ajit Jaokar, starting in the first week of February 2022.

Starting 6 Feb 2022, the program runs for 8 consecutive Sundays, 3:00 to 6:00 PM UK time.

Application for admission to the workshop: https://forms.gle/WGPeWeNasR7SiAjs6

Who can join?

Anyone who wants to learn coding for AI and has done some coding, in any form, at any time.

What will we do?

We will study the core machine learning and deep learning algorithms in AI, namely regression, classification, PCA, clustering, MLPs, and CNNs.

How will we do it?

We will study the problem first, understand the data, run the code, analyze the code, study the results, and modify the code (quiz).

How will we study?

Online, in multiple groups of 5 participants each.

What do the participants need?

Just a browser; all code runs on the web/cloud.

What is the schedule?

Eight weeks starting Feb 6, 2022, on Sundays at 3 PM UK time, for 3 hours.

What is our approach to learning?

AI is a complex subject, but by focusing on the core set of AI problems and working in small, supportive groups, you can learn it relatively quickly. You will also explore the similarities and common elements across the different algorithms, which further speeds up learning.

 

Ajit Jaokar 

Course Director: Artificial Intelligence: Cloud and Edge Implementations - University of Oxford

This course is presented by Ajit in a personal capacity and is not affiliated with any university or company he is associated with. His LinkedIn profile is Ajit Jaokar.

Schedule overview

Week 1: Feb 6 – Concepts

Week 2: Feb 13 – Hands-on exercises in classification and regression

Week 3: Feb 20 – Model selection and evaluation

Week 4: Feb 27 – Feature engineering

Week 5: March 6 – Deep learning models

Week 6: March 13 – Classification without dimensionality reduction

Week 7: March 20 – Principal component analysis

Week 8: March 27 – Classification with dimensionality reduction

Detailed schedule

Week 1: Feb 6 – Concepts

The aim of this week is to introduce the core concepts in developing and building a machine learning model, including:

  • an overview of machine learning and the types of machine learning
  • an introduction to the two main problem types in machine learning: regression and classification
  • an introduction to the key machine learning libraries in Python
  • defining the machine learning workflow
  • methods and techniques to build and train a machine learning model, such as data exploration, feature selection, feature engineering, preprocessing (outliers, normalization, missing values), and model selection and evaluation
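This workflow can be sketched end-to-end with scikit-learn. A minimal sketch only: the dataset (Iris) and the model (logistic regression) are illustrative choices, not necessarily the course's exact materials.

```python
# End-to-end workflow sketch: load data, split, preprocess, train, evaluate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)               # data exploration starts here
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Preprocessing (standardization) and the model bundled into one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

Bundling preprocessing and model in a pipeline keeps the same transformations applied consistently at training and prediction time.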


Week 2: Feb 13 – Hands-on exercises in classification and regression

Exercise: Boston house price prediction

In this exercise, we aim to predict Boston house prices based on several environmental, economic, demographic, and societal features, using the Boston Housing dataset. First, we explore the data. We then build a regression model (using support vector regression) and evaluate its performance.
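The regression part can be sketched as follows. Note: this is an illustrative sketch using a synthetic dataset as a stand-in for the Boston housing data; the workshop's actual notebooks may differ.

```python
# Support vector regression sketch on a synthetic dataset
# (a stand-in for the Boston house-price data).
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=13, noise=10.0,
                       random_state=0)
y = (y - y.mean()) / y.std()   # SVR is sensitive to the target's scale

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Standardize the features, then fit an RBF-kernel support vector regressor
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))
svr.fit(X_train, y_train)
print(f"R^2 on the test set: {svr.score(X_test, y_test):.3f}")
```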

Exercise: Breast Cancer classification

In this exercise, we aim to create a classification model (mainly support vector classification) that predicts whether a cancer diagnosis is benign or malignant based on several features.
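A sketch of this classification exercise, using the breast cancer dataset that ships with scikit-learn (the exact preprocessing choices here are illustrative):

```python
# Support vector classification sketch: benign vs. malignant diagnosis.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)      # 30 features per tumor sample
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scaling matters for SVMs: features with large ranges would otherwise dominate
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```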

Week 3: Feb 20 – Model selection and evaluation

The aim of this week is to explain model selection and evaluation, including:

  • An overview of model selection and model evaluation
  • Regression and classification evaluation metrics for measuring the performance of a trained model, and when to choose each metric
  • An overview of cross validation
  • An overview of ensemble methods
  • An overview of hyperparameter tuning (such as GridSearchCV and RandomizedSearchCV)
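Hyperparameter tuning with cross validation can be sketched like this (the parameter grid and dataset are illustrative):

```python
# Hyperparameter tuning sketch: grid search with 5-fold cross validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Every combination of C and gamma is evaluated with 5-fold cross validation
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print(f"best cross-validated accuracy: {search.best_score_:.3f}")
```

RandomizedSearchCV has the same interface but samples a fixed number of parameter combinations instead of trying them all, which scales better to large grids.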

 

Week 4: Feb 27 – Feature engineering

An exploration of feature engineering techniques

This week builds on the techniques covered in the previous weeks and sets up those used in the following weeks. Feature engineering is a very important step in building powerful machine learning or deep learning models, and the techniques used for it are very diverse. We will cover:

  • Feature selection, including correlation analysis and missing-value imputation.
  • Feature transformation, by means of normalization, standardization, and other tools.
  • Feature extraction, by finding a smaller set of new variables, each a combination of the input variables, containing essentially the same information as the input variables:

a. PCA (Principal Component Analysis)

b. t-SNE
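These three steps can be chained in one sketch: imputation (selection-side cleanup), standardization (transformation), and PCA (extraction). The data here is synthetic and purely illustrative.

```python
# Feature engineering sketch: imputation, standardization, then PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[rng.random(X.shape) < 0.1] = np.nan   # simulate ~10% missing values

pipe = make_pipeline(
    SimpleImputer(strategy="mean"),   # fill missing values with column means
    StandardScaler(),                 # normalization / standardization
    PCA(n_components=3),              # extraction: 3 new combined features
)
X_new = pipe.fit_transform(X)
print(X_new.shape)   # (100, 3)
```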

Week 5: March 6 – Deep learning models

The aim of this week is to build a deep learning model in Keras, including:

  • An overview of deep learning models: MLP, CNN, SLP, VGG16, ResNet
  • An overview of the Keras Model API: the sequential and the functional API
  • Steps to define and train a model in Keras
  • How to improve a baseline model: adding hidden layers, adding a dropout layer, changing the optimizer, epochs, and batch size, using SGD
  • Types of layers: convolutional layer, pooling layer (e.g. max pooling), fully connected layer
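A minimal sketch of defining and training an MLP with the Keras sequential API. The data is random toy data and the layer sizes are illustrative, not the course's exact model.

```python
# Minimal Keras sequential MLP sketch: define, compile, train, predict.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(100, 20).astype("float32")   # 100 samples, 20 features
y = np.random.randint(0, 3, size=(100,))        # 3 classes

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(32, activation="relu"),        # hidden (fully connected) layer
    layers.Dropout(0.2),                        # dropout to reduce overfitting
    layers.Dense(3, activation="softmax"),      # one output unit per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

preds = model.predict(X[:5], verbose=0)
print(preds.shape)   # (5, 3): per-class probabilities for 5 samples
```

Swapping "adam" for keras.optimizers.SGD(), adding layers, or changing epochs and batch_size are exactly the baseline-improvement knobs listed above.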

 

Week 6: March 13 – Classification without dimensionality reduction

Exercise: Fashion MNIST classification without dimensionality reduction

The main task of this project is to classify items in the Fashion MNIST dataset. We will divide the implementation into two main parts. The first part is written once and does not need to be rewritten every time we run the code. The second part contains the different models we will use in this project, mainly: Convolutional Neural Networks (CNN), Multi-Layer Perceptron (MLP), Single-Layer Perceptron (SLP), VGG16, ResNet, Gaussian Mixture Models, and clustering (specifically, K-means). To have more fun, we will apply these models to the data both before and after applying Principal Component Analysis (PCA).

Week 7: March 20 – Principal component analysis

The aim of this week is to decide where and when to use PCA, including:

  • An overview of Principal Component Analysis
  • How to apply PCA: data standardization, creating a covariance matrix, eigendecomposition, and feature transformation
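Those steps can be worked through directly in NumPy (a from-scratch sketch on synthetic data; in practice scikit-learn's PCA does the same thing):

```python
# PCA from first principles, following the steps listed above:
# standardize, covariance matrix, eigendecomposition, feature transformation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated features

# 1. Standardize the data (zero mean, unit variance per feature)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Covariance matrix of the standardized data
cov = np.cov(X_std, rowvar=False)

# 3. Eigendecomposition; sort components by decreasing explained variance
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 4. Feature transformation: project onto the top 2 principal components
X_pca = X_std @ eigvecs[:, :2]
print(X_pca.shape)                  # (200, 2)
print(eigvals / eigvals.sum())      # explained variance ratio per component
```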

 

Week 8: March 27 – Classification with dimensionality reduction

Exercise: Fashion MNIST classification with dimensionality reduction

This continues the Week 6 project: we apply the same set of models (CNN, MLP, SLP, VGG16, ResNet, Gaussian Mixture Models, and K-means clustering) to the Fashion MNIST data after applying Principal Component Analysis (PCA), and compare the results with the unreduced case.
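The with/without-PCA comparison can be sketched in a few lines. This is illustrative only: it uses scikit-learn's small digits dataset as a stand-in for Fashion MNIST, and logistic regression as a stand-in for the neural models.

```python
# Compare a classifier with and without PCA dimensionality reduction.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)   # 8x8 images flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

plain = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
reduced = make_pipeline(StandardScaler(), PCA(n_components=20),
                        LogisticRegression(max_iter=1000))

scores = {}
for name, model in [("without PCA", plain), ("with PCA (20 comps)", reduced)]:
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: accuracy = {scores[name]:.3f}")
```

Typically the reduced model loses only a little accuracy while working with far fewer features, which is the trade-off this week examines.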

Admission to the Workshop is entirely at the discretion of Prof. Ajit Jaokar and Garje Marathi Global Inc.