
AI & ML Engineering Bootcamp — Batch 3

A 22-week hands-on program taking you from mathematical foundations to deploying production ML systems. Build real models, ship real code.

Welcome, Future ML Engineers!

Over 22 weeks you will grow from someone who uses ML models into someone who builds, trains, and deploys them in production. The curriculum blends mathematical foundations, classical machine learning, deep learning, Transformers, and MLOps into a single coherent journey.


Program at a Glance

  • Duration: 22 weeks (66 hours of instruction)
  • Schedule: Thursday & Friday · 1.5 hours / session
  • Weekly Commitment: 3 hrs in-class + 4–6 hrs self-study
  • Start Date: March 26, 2026 (still accepting applications)
  • Approach: Math → Classical ML → Deep Learning → Transformers → MLOps
Photos: live teaching session · panel speaking engagement · online session (Batch 3)

Tech Stack

Python · NumPy · Pandas · Scikit-learn · PyTorch · Hugging Face · FastAPI · Docker · Jupyter · Git · GitHub Actions · MLflow

Curriculum Modules

Module 1 · Foundations (4 weeks)

Mathematical and conceptual building blocks for ML. Build the intuition behind how machines learn before writing a single fit() call.

  • Week 1 — AI/ML/Deep Learning landscape · supervised vs unsupervised learning · types of ML problems
  • Week 2 — Vectors & matrices · dot products & matrix multiplication · gradients · gradient descent from scratch
  • Week 3 — Probability distributions · Bayes' theorem · MSE & cross-entropy loss · bias-variance tradeoff
  • Week 4 — Exploratory data analysis · handling missing data · feature scaling · train your first model

🎯 End-of-Module Project: Implement gradient descent and linear regression from scratch using only NumPy.
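
To make that target concrete, here is a minimal sketch of the project: batch gradient descent on the mean-squared-error loss for a one-feature linear model, in plain NumPy. The toy data, learning rate, and epoch count are illustrative choices, not prescribed values.

```python
import numpy as np

# Toy data: y ≈ 3x + 2 with a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=200)

# Parameters of the model y_hat = w * x + b
w, b = 0.0, 0.0
lr = 0.1          # learning rate
n_epochs = 500

for epoch in range(n_epochs):
    y_hat = w * X[:, 0] + b
    error = y_hat - y
    # Gradients of the MSE loss L = mean((y_hat - y)^2)
    grad_w = 2.0 * np.mean(error * X[:, 0])
    grad_b = 2.0 * np.mean(error)
    # Gradient descent update
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # should land close to 3 and 2
```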


Module 2 · Classical Machine Learning (5 weeks)

The Scikit-learn ecosystem and tabular data mastery. Build, evaluate, and tune real-world classifiers and regressors.

  • Week 5 — Linear regression (OLS) · logistic regression · sigmoid function · decision boundaries
  • Week 6 — Decision trees (Gini/entropy splitting) · random forests · bagging · feature importance
  • Week 7 — XGBoost & LightGBM gradient boosting · metrics (precision, recall, F1, AUC-ROC) · confusion matrices
  • Week 8 — K-fold cross-validation · grid & Bayesian hyperparameter search · feature engineering · preventing data leakage
  • Week 9 — Kaggle competition workflow · end-to-end sklearn Pipeline · model serialization

🎯 End-of-Module Project: Compete in a Kaggle tabular-data challenge and ship a complete sklearn pipeline.
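
As a rough illustration of where Module 2 ends up, the sketch below wires imputation, scaling, and one-hot encoding into a single sklearn Pipeline, scores it with 5-fold cross-validation, and serializes it with joblib. The CSV file, target column, and choice of RandomForestClassifier are placeholders; a real submission might swap in XGBoost or LightGBM.

```python
import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Placeholder tabular dataset; in the bootcamp this would be a Kaggle CSV
df = pd.read_csv("train.csv")          # hypothetical file
y = df["target"]                       # hypothetical (binary) target column
X = df.drop(columns=["target"])

numeric_cols = X.select_dtypes(include="number").columns
categorical_cols = X.select_dtypes(exclude="number").columns

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

# Preprocessing and model live in one object, so cross-validation sees no leakage
model = Pipeline([("prep", preprocess),
                  ("clf", RandomForestClassifier(n_estimators=300, random_state=42))])

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} ± {scores.std():.3f}")

model.fit(X, y)
joblib.dump(model, "model.joblib")     # serialize the whole pipeline
```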


Module 3 · Deep Learning with PyTorch (4 weeks)

Neural networks from first principles to GPU-accelerated CNNs. Understand every layer, gradient update, and training trick.

  • Week 10 — Perceptrons · multi-layer networks · forward propagation · backpropagation & chain rule
  • Week 11 — Activation functions (ReLU, Softmax) · PyTorch tensors · custom Dataset & DataLoader · data augmentation
  • Week 12 — Training loops · Adam/SGD optimizers · early stopping · model checkpointing with torch.save (see the sketch after this list)
  • Week 13 — Convolutional layers & pooling · ResNet/VGG architectures · transfer learning · fine-tuning strategies
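
The Week 12 training-loop pattern, compressed into one hedged sketch: an Adam step per batch, a validation pass per epoch, checkpointing the best weights with torch.save, and a simple patience counter for early stopping. The model, data loaders, and file name are placeholders, and a classification loss is assumed.

```python
import torch
from torch import nn

def train(model, train_loader, val_loader, epochs=20, lr=1e-3, patience=3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # assumes a classification task
    best_val, bad_epochs = float("inf"), 0

    for epoch in range(epochs):
        model.train()
        for xb, yb in train_loader:
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()                  # backpropagate
            optimizer.step()                 # gradient update

        # Validation pass
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for xb, yb in val_loader:
                xb, yb = xb.to(device), yb.to(device)
                val_loss += loss_fn(model(xb), yb).item() * len(xb)
                n += len(xb)
        val_loss /= n

        # Checkpoint the best model; stop early once it stops improving
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
```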

🎯 End-of-Module Project: Build an image classifier using transfer learning with a pretrained CNN.
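
A sketch of the transfer-learning setup that project implies, assuming torchvision's pretrained ResNet-18 and its newer weights API: freeze the backbone, replace the final layer with a fresh head sized for your classes, and fine-tune only that head with a loop like the one above. num_classes is a placeholder.

```python
import torch
from torch import nn
from torchvision import models

num_classes = 5  # placeholder: number of classes in your image dataset

# Load ImageNet-pretrained weights
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the convolutional backbone
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new classification head
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```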


Module 4 · Transformers & Hugging Face (3 weeks)

The attention mechanism that powers modern AI. Fine-tune BERT and GPT-class models for real NLP tasks.

  • Week 14 — Self-attention · multi-head attention · transformer architecture · tokenization (BPE/WordPiece) · positional encoding (see the attention sketch after this list)
  • Week 15 — Hugging Face Hub & Pipeline API · fine-tuning with Trainer API · BERT for text classification & NER
  • Week 16 — NLP competition strategy · pushing models to Hugging Face Hub · building a live text classification service
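
To ground Week 14, here is a minimal single-head, scaled dot-product self-attention module in PyTorch; real transformers add multiple heads, masking, and positional encodings on top of this. Shapes and dimensions are illustrative.

```python
import math
import torch
from torch import nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention."""

    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Attention scores: how strongly each token attends to every other token
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        weights = scores.softmax(dim=-1)       # (batch, seq_len, seq_len)
        return weights @ v                     # weighted sum of the values

x = torch.randn(2, 10, 64)                     # 2 sequences of 10 tokens, d_model=64
print(SelfAttention(64)(x).shape)              # torch.Size([2, 10, 64])
```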

🎯 End-of-Module Project: Fine-tune a transformer on a Kaggle NLP challenge and publish it to Hugging Face Hub.
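
A hedged sketch of what that fine-tuning workflow can look like with the Trainer API, using bert-base-uncased and the public IMDB dataset as stand-ins for the actual competition data; the hyperparameters and subset sizes are placeholders chosen so the example stays small.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Public IMDB data as a stand-in; small subsets keep the sketch quick to run
dataset = load_dataset("imdb")
train_ds = dataset["train"].shuffle(seed=42).select(range(2000))
eval_ds = dataset["test"].shuffle(seed=42).select(range(500))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-text-clf",        # also used as the default Hub repo name
    per_device_train_batch_size=16,
    num_train_epochs=2,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
# trainer.push_to_hub()  # publishes the fine-tuned model once you are authenticated
```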


Module 5 · MLOps & Deployment (3 weeks)

From Jupyter notebook to a production-grade API. Learn the tools and practices every ML engineer needs in industry.

  • Week 17 — Model serialization (pickle / joblib / ONNX) · DVC versioning · REST prediction APIs with FastAPI
  • Week 18 — Docker images & containers · Dockerfile best practices · MLflow experiment tracking & model registry
  • Week 19 — GitHub Actions CI/CD · automated testing · data drift detection · model monitoring & alerting

🎯 End-of-Module Project: Deploy an ML model end-to-end with FastAPI, Docker, and a CI/CD pipeline.
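
A minimal sketch of the serving half of that project: a FastAPI app that loads a joblib-serialized pipeline at import time and exposes a /predict endpoint. The model path and feature schema are placeholders, and in the project this service would be packaged in a Docker image and exercised by the CI pipeline.

```python
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Prediction API")

# Pipeline serialized during training; the path is a placeholder
model = joblib.load("model.joblib")

class PredictRequest(BaseModel):
    # Hypothetical feature schema; mirror the columns the pipeline was trained on
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(req: PredictRequest):
    X = pd.DataFrame([req.model_dump()])   # pydantic v2; use req.dict() on v1
    return {"prediction": float(model.predict(X)[0])}

# Run locally with: uvicorn main:app --reload
```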


Module 6 · Capstone (1 week)

Build and ship a full production ML system — from raw data to a live API.

  • Source a real dataset (Kaggle or real-world problem)
  • Full EDA, preprocessing, and feature engineering
  • Train and compare multiple models with documented hyperparameter tuning
  • Deploy a REST API (FastAPI) inside a Docker container
  • Set up a CI/CD pipeline with GitHub Actions
  • 10–15 minute live demo presentation

Example projects: sentiment analysis · medical image classification · real-estate price prediction · fake news detection · customer churn · text summarization API.


Who Is This For?

  • Developers who know Python and want to break into ML engineering
  • University students wanting practical, resume-worthy ML projects
  • Anyone who has taken online courses but wants structured, project-based depth

What You Will Build

By the end of the bootcamp you will have trained and deployed real models, competed in Kaggle challenges, and delivered a capstone project that demonstrates full-stack ML skills.