FOR LEARNERS

Become a machine learning engineer

Why us

All of our programmes are full-time and in-person. We keep class sizes small. You will be surrounded by motivated peers and supported by expert practitioners.

You will receive one-on-one sessions with seasoned machine learning engineers who have worked across Big Tech and research. You will read and understand cutting-edge AI papers, implement them in code, and present your solutions to a cohort of engaged and supportive peers.

Overview

Week 1. Predict HN Upvotes

Week 2. Learn To Search

Week 3. Object Detection

Week 4. Tiny Stories

Week 5. Multimodality

Week 6. Fine-Tuning At Scale

Week 7. RAG (Retrieval-Augmented Generation)

Week 8. Build Your Startup

Curriculum

Our programme is structured as a series of weekly projects, each focused on a practical application of advanced machine learning techniques, ranging from predicting upvotes on Hacker News to building object detection models for sports analytics. Participants will tackle a variety of tasks, including text generation with Transformers, search and retrieval with Two-Tower Neural Networks, and image captioning with multi-modal models. The capstone project in the final week allows students to apply the skills they have learned to a problem of their own, showcasing their understanding of machine learning concepts and their ability to build impactful solutions.

The course covers data engineering, DevOps, and deep learning, and dives into key neural network architectures and methodologies such as Word2Vec, Two-Tower Neural Networks for search, and Vision Transformers (ViT) for image captioning. Participants will gain hands-on experience with models such as YOLO for object detection and Transformers for text generation, with an emphasis on components like multi-head attention and custom loss functions, such as those adapted for circular bounding boxes.
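
To give a flavour of the week 2 material, below is a minimal sketch of a two-tower retrieval model in PyTorch. The mean-pooling `EmbeddingBag` encoder, the tower depth, and the `embed_dim` value are illustrative assumptions, not the exact course architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Encodes a variable-length sequence of token ids into one unit vector."""
    def __init__(self, vocab_size: int, embed_dim: int = 128):
        super().__init__()
        # EmbeddingBag mean-pools the token embeddings of each sequence.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        x = self.mlp(self.embed(ids, offsets))
        return F.normalize(x, dim=-1)  # unit length, so dot product = cosine similarity

class TwoTower(nn.Module):
    """Separate query and document encoders mapping into a shared embedding space."""
    def __init__(self, vocab_size: int, embed_dim: int = 128):
        super().__init__()
        self.query_tower = Tower(vocab_size, embed_dim)
        self.doc_tower = Tower(vocab_size, embed_dim)

    def forward(self, q_ids, q_offsets, d_ids, d_offsets):
        q = self.query_tower(q_ids, q_offsets)
        d = self.doc_tower(d_ids, d_offsets)
        return q @ d.T  # (batch, batch) similarity matrix; diagonal = true pairs

# In-batch contrastive training: treat each query's matching document as its class.
# scores = model(q_ids, q_offsets, d_ids, d_offsets)
# loss = F.cross_entropy(scores, torch.arange(scores.size(0)))
```

Once trained, document vectors can be precomputed and indexed, so serving a query reduces to a nearest-neighbour lookup.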

Throughout the course, the use of GPUs for training and inference is emphasised, alongside efficient deployment practices using Docker, Kubernetes, and Streamlit. Participants will explore Parameter-Efficient Fine-Tuning (PEFT) techniques such as Low-Rank Adaptation (LoRA) and soft prompting, which reduce computational cost while maintaining performance, especially for large language models (LLMs). Attention to scaling considerations, including mixed-precision training and distributed data parallelism, will equip participants with the knowledge to train and deploy models effectively in real-world environments.
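
To make the PEFT idea concrete, here is a minimal LoRA sketch in PyTorch: a frozen linear layer augmented with a trainable low-rank update. The rank `r` and scaling `alpha` defaults are illustrative assumptions, not prescribed course settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update:
    y = Wx + (alpha / r) * B(A(x)), where only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.02)
        nn.init.zeros_(self.B.weight)      # B starts at zero, so the wrapped layer
        self.scale = alpha / r             # initially behaves exactly like the base

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

# Illustrative usage: adapt one projection standing in for a pretrained weight matrix.
adapted = LoRALinear(nn.Linear(768, 768), r=8)
out = adapted(torch.randn(4, 768))         # only A and B are trainable
```

Because only the low-rank matrices are trained, gradient and optimiser-state memory shrink dramatically, which is what makes fine-tuning large models feasible on modest hardware.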

What you will build in practice:

  • Predictive model for Hacker News upvotes using word embeddings (see the sketch after this list)

  • Document retrieval system with Two-Tower Neural Networks

  • Object detection model with custom circular bounding regions

  • Transformer model for generating tiny stories

  • Multi-modal model for image captioning with Vision Transformers

  • Fine-tuned large language model using LoRA and soft prompting techniques
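
As a taste of the week 1 project, here is a minimal sketch of an upvote regressor built on averaged word embeddings. The vocabulary size, dimensions, and loss are illustrative assumptions, not the course solution; in practice the embedding table could be initialised from pretrained Word2Vec vectors.

```python
import torch
import torch.nn as nn

class UpvoteRegressor(nn.Module):
    """Mean-pools the word embeddings of a title and regresses a score."""
    def __init__(self, vocab_size: int = 50_000, embed_dim: int = 100):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools by default
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # predicted (log-)upvote count
        )

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(token_ids, offsets)).squeeze(-1)

model = UpvoteRegressor()
# Two titles packed into one flat id tensor; offsets mark where each title starts.
token_ids = torch.tensor([3, 17, 42, 7, 99])
offsets = torch.tensor([0, 3])          # title 1 = ids[0:3], title 2 = ids[3:]
targets = torch.tensor([4.1, 1.2])      # e.g. log(1 + upvotes)
loss = nn.functional.mse_loss(model(token_ids, offsets), targets)
```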

Tools and libraries you will use:

PyTorch, Postgres, Docker, Python, FastAPI, Plot, Docker Compose, Airflow, Jupyter, Kubernetes, Kafka, Spark, systemd, Ubuntu

Cost & Eligibility

For eligible applicants, our programme is free. Apply now to find out more.

Application process

FAQs