COMP4901Y_Course_HKUST
Course Material for the UG Course COMP4901Y
Large-Scale Machine Learning for Foundation Models
Lecturer: Binhang Yuan.
Teaching Assistants: Ran Yan, Xinyang Huang.
Overview
In recent years, foundation models have fundamentally reshaped the state of the art in artificial intelligence. As a result, training and inference of foundation models have become some of the most important workloads running on modern computer systems. This course unravels the efficient deployment of such workloads from the systems perspective. Specifically, we will i) explain how a modern machine learning system (e.g., PyTorch) works; ii) understand the performance bottlenecks of machine learning computation on modern hardware (e.g., Nvidia GPUs); iii) discuss the four main parallel strategies in foundation model training (data, pipeline, tensor model, and optimizer parallelism); and iv) cover real-world deployment of foundation models, including efficient inference and fine-tuning.
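As a taste of the data-parallel strategy covered later in the course, below is a minimal sketch using PyTorch's `DistributedDataParallel`. It assumes a `torchrun` launch and (optionally) NCCL-capable GPUs; the toy `Linear` model, batch size, and learning rate are illustrative placeholders, not course-provided code.

```python
# Minimal data-parallel training sketch (assumption: launched via
# `torchrun --nproc_per_node=<N> ddp_sketch.py`, which sets RANK/WORLD_SIZE/LOCAL_RANK).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per device; NCCL on GPUs, Gloo as a CPU fallback.
    dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

    # Toy model stands in for a Transformer; DDP replicates it on every rank.
    model = DDP(torch.nn.Linear(1024, 1024).to(device))
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for step in range(10):
        # Each rank would normally see a different shard of the mini-batch.
        x = torch.randn(32, 1024, device=device)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()   # DDP all-reduces gradients, overlapping with the backward pass
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```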
Syllabus
| Date | Topic |
|---|---|
| W1 - 01/31 | Introduction and Logistics [Slides] |
| W2 - 02/05, 02/07 | Machine Learning Preliminary & PyTorch Tensors [Slides] |
| W3 - 02/14 | Stochastic Gradient Descent [Slides] |
| W4 - 02/19, 02/21 | Automatic Differentiation & PyTorch Autograd [Slides] |
| W5 - 02/26, 02/28 | Nvidia GPU Performance [Slides] & Collective Communication Library [Slides] |
| W6 - 03/04, 03/06 | Transformer Architecture [Slides] & Large Scale Pretrain Overview [Slides] |
| W7 - 03/11, 03/13 | Data Parallel Training [Slides] & Pipeline Parallel Training [Slides] |
| W8 - 03/18, 03/20 | Tensor Model Parallel Training [Slides] & Optimizer Parallel Training [Slides] |
| W9 - 03/25, 03/27 | Mid-Term Review [Slides] & Mid-Term Exam :heavy_check_mark: |
| W10 - 04/08, 04/10 | Generative Inference [Slides] & Hugging Face Library [Slides] |
| W11 - 04/15, 04/17 | Generative Inference Optimization [Slides] & Speculative Decoding [Slides] |
| W12 - 04/22, 04/24 | Prompt Engineering Overview & Practices |
| W13 - 04/29 | Parameter Efficient Fine-tuning (LoRA) |
| W14 - 05/06, 05/08 | Guest Speech (TBD) & Final Exam Review |
Grading Policy
- 4 homework assignments (4 $\times$ 5% $=$ 20%);
- Mid-term exam (30%);
- Final exam (50%).
Homework
| Topic | Release | Due |
|---|---|---|
| [Homework1] | 2024-02-26 :heavy_check_mark: | 2024-03-18 :heavy_check_mark: |
| [Homework2] | 2024-03-20 :heavy_check_mark: | 2024-04-08 :heavy_check_mark: |
| [Homework3] | 2024-04-11 :heavy_check_mark: | 2024-04-25 |
| Homework4 | 2024-04-26 | 2024-05-10 |