Deep Learning Fundamentals: Code Materials and Exercises

This repository contains the code materials and exercises for the Deep Learning Fundamentals course by Sebastian Raschka and Lightning AI.

  • Link to the course website: https://lightning.ai/pages/courses/deep-learning-fundamentals/
  • Link to the discussion forum: https://github.com/Lightning-AI/dl-fundamentals/discussions
  • Reach out to Lightning & Sebastian on social media: @LightningAI @rasbt

For other announcements, updates, and additional materials, you can follow Lightning AI and Sebastian on Twitter!


Links to the materials

Unit 1. Welcome to Machine Learning and Deep Learning [ Link to videos ]

Unit 2. First Steps with PyTorch: Using Tensors [ Link to videos ]
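
As a quick taste of the tensor material in Unit 2, here is a minimal, self-contained sketch (not part of the course notebooks) showing tensor creation, elementwise operations, broadcasting, and device placement:

    import torch

    # Create a tensor and inspect its shape and data type
    x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    print(x.shape, x.dtype)            # torch.Size([2, 2]) torch.float32

    # Elementwise operations and matrix multiplication
    y = x * 2.0
    z = x.matmul(y.T)

    # Broadcasting: the row vector is added to every row of x
    b = torch.tensor([10.0, 20.0])
    print(x + b)

    # Reshaping, and moving the tensor to a GPU if one is available
    v = x.view(-1)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = x.to(device)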

Unit 3. Model Training in PyTorch [ Link to videos ]

Unit 4. Training Multilayer Neural Networks [ Link to videos ]

  • 4.1 Dealing with More than Two Classes: Softmax Regression
  • 4.2 Multilayer Neural Networks and Why We Need Them
  • 4.3 Training a Multilayer Perceptron in PyTorch
    • XOR data
    • MNIST data
  • 4.4 Defining Efficient Data Loaders
  • 4.5 Multilayer Neural Networks for Regression
  • 4.6 Speeding Up Model Training Using GPUs
  • Unit 4 exercises
  • Exercise 1: Changing the Number of Layers
    • Exercise 2: Implementing a Custom Dataset Class for Fashion MNIST
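
To connect the Unit 4 topics, here is a minimal sketch (not the course's exact code; the hyperparameters and layer sizes are illustrative) of training a multilayer perceptron on MNIST with a DataLoader, cross-entropy loss, and optional GPU acceleration:

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"

    train_ds = datasets.MNIST(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
    train_loader = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=2)

    class MLP(torch.nn.Module):
        def __init__(self, num_features=784, num_classes=10):
            super().__init__()
            self.layers = torch.nn.Sequential(
                torch.nn.Flatten(),
                torch.nn.Linear(num_features, 100),
                torch.nn.ReLU(),
                torch.nn.Linear(100, num_classes),  # logits; softmax is applied inside the loss
            )

        def forward(self, x):
            return self.layers(x)

    model = MLP().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(3):
        for features, targets in train_loader:
            features, targets = features.to(device), targets.to(device)
            logits = model(features)
            loss = F.cross_entropy(logits, targets)  # combines log-softmax and NLL loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()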

Unit 5. Organizing Your PyTorch Code with Lightning [ Link to videos ]

  • 5.1 Organizing Your Code with PyTorch Lightning
  • 5.2 Training a Multilayer Perceptron in PyTorch Lightning
  • 5.3 Computing Metrics Efficiently with TorchMetrics
  • 5.4 Making Code Reproducible
  • 5.5 Organizing Your Data Loaders with Data Modules
  • 5.6 The Benefits of Logging Your Model Training
  • 5.7 Evaluating and Using Models on New Data
  • 5.8 Adding Functionality with Callbacks
  • Unit 5 exercises
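
A minimal sketch of how the Unit 5 pieces fit together; the class name LitClassifier and the hyperparameters are hypothetical, not the course's own code. It wraps a plain PyTorch model in a LightningModule, tracks accuracy with TorchMetrics, and hands training off to the Trainer:

    import torch
    import torch.nn.functional as F
    import lightning as L          # the 2.x package; pytorch_lightning also works
    import torchmetrics

    class LitClassifier(L.LightningModule):
        def __init__(self, model, learning_rate=0.05):
            super().__init__()
            self.model = model
            self.learning_rate = learning_rate
            self.train_acc = torchmetrics.Accuracy(task="multiclass", num_classes=10)

        def training_step(self, batch, batch_idx):
            features, targets = batch
            logits = self.model(features)
            loss = F.cross_entropy(logits, targets)
            self.train_acc(logits, targets)
            self.log("train_loss", loss)
            self.log("train_acc", self.train_acc, on_epoch=True)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=self.learning_rate)

    # Usage (assuming a `model` and `train_loader` like the Unit 4 sketch above):
    # lightning_model = LitClassifier(model)
    # trainer = L.Trainer(max_epochs=3, accelerator="auto", devices="auto")
    # trainer.fit(lightning_model, train_dataloaders=train_loader)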

Unit 6. Essential Deep Learning Tips & Tricks [ Link to videos ]

  • 6.1 Model Checkpointing and Early Stopping
  • 6.2 Learning Rates and Learning Rate Schedulers
  • 6.3 Using More Advanced Optimization Algorithms
  • 6.4 Choosing Activation Functions
  • 6.5 Automating the Hyperparameter Tuning Process
  • 6.6 Improving Convergence with Batch Normalization
  • 6.7 Reducing Overfitting With Dropout
  • 6.8 Debugging Deep Neural Networks
  • Unit 6 exercises
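
As an illustration of several Unit 6 techniques combined (batch normalization, dropout, an adaptive optimizer, and a learning-rate scheduler), here is a small sketch; the layer sizes and scheduler settings are arbitrary examples, not the course's recommended values:

    import torch

    # A small network combining batch normalization and dropout
    model = torch.nn.Sequential(
        torch.nn.Linear(784, 100),
        torch.nn.BatchNorm1d(100),
        torch.nn.ReLU(),
        torch.nn.Dropout(0.5),
        torch.nn.Linear(100, 10),
    )

    # An adaptive optimizer plus a cosine learning-rate schedule
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)

    # Inside the training loop, step the scheduler once per epoch:
    # for epoch in range(num_epochs):
    #     train_one_epoch(...)   # hypothetical helper
    #     scheduler.step()

With PyTorch Lightning, model checkpointing and early stopping (6.1) are available as Trainer callbacks such as ModelCheckpoint and EarlyStopping.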

Unit 7. Getting Started with Computer Vision [ Link to videos ]

  • 7.1 Working With Images
  • 7.2 How Convolutional Neural Networks Work
  • 7.3 Convolutional Neural Network Architectures
  • 7.4 Training Convolutional Neural Networks
  • 7.5 Improving Predictions with Data Augmentation
  • 7.6 Leveraging Pre-trained Models with Transfer Learning
  • 7.7 Using Unlabeled Data with Self-Supervised Learning
  • Unit 7 exercises
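
A hedged sketch of the transfer-learning and data-augmentation ideas from Unit 7, assuming torchvision 0.13+ for the weights argument; the frozen ResNet-18 backbone and the 10-class head are illustrative choices, not the course's exact setup:

    import torch
    import torchvision

    # Load an ImageNet-pretrained backbone and replace its final layer
    # so it predicts a new, smaller set of classes (here: 10, as an example)
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    for param in model.parameters():
        param.requires_grad = False                           # freeze the pretrained features
    model.fc = torch.nn.Linear(model.fc.in_features, 10)      # new trainable classification head

    # A typical augmentation pipeline applied to the training images
    train_transform = torchvision.transforms.Compose([
        torchvision.transforms.RandomResizedCrop(224),
        torchvision.transforms.RandomHorizontalFlip(),
        torchvision.transforms.ToTensor(),
    ])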

Unit 8. Introduction to Natural Language Processing and Large Language Models [ Link to videos ]

  • 8.1 Working with Text Data
  • 8.2 Training a Text Classifier Baseline
  • 8.3 Introduction to Recurrent Neural Networks
  • 8.4 From RNNs to the Transformer Architecture
  • 8.5 Understanding Self-Attention
  • 8.6 Large Language Models
  • 8.7 Using Large Language Models for Classification
  • Unit 8 exercises
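
To make the self-attention discussion concrete, here is a minimal scaled dot-product self-attention sketch (single head, no masking, with randomly initialized projection matrices purely for illustration):

    import torch
    import torch.nn.functional as F

    def self_attention(x, W_q, W_k, W_v):
        # x: (seq_len, d_in) token embeddings; W_q, W_k, W_v: (d_in, d_out) projections
        queries = x @ W_q
        keys    = x @ W_k
        values  = x @ W_v
        d_k = keys.shape[-1]
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
        scores = queries @ keys.T / d_k**0.5
        weights = F.softmax(scores, dim=-1)
        return weights @ values

    torch.manual_seed(123)
    x = torch.randn(6, 16)                        # 6 tokens, 16-dimensional embeddings
    W_q, W_k, W_v = (torch.randn(16, 24) for _ in range(3))
    context = self_attention(x, W_q, W_k, W_v)
    print(context.shape)                          # torch.Size([6, 24])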

Unit 9. Techniques for Speeding Up Model Training [ Link to videos ]

  • 9.1 Accelerated Model Training via Mixed-Precision Training
  • 9.2 Multi-GPU Training Strategies
  • 9.3 Deep Dive Into Data Parallelism
  • 9.4 Compiling PyTorch Models
  • 9.5 Increasing Batch Sizes to Increase Throughput
  • Unit 9 exercises
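
A short sketch of how the Unit 9 speed-ups are typically enabled; it assumes PyTorch 2.0+ for torch.compile, Lightning 2.x Trainer arguments, and a machine with four GPUs, so the exact values are illustrative rather than the course's settings:

    import torch
    import lightning as L

    # Compiling the model can speed up training on recent PyTorch versions (2.0+):
    # compiled_model = torch.compile(model)

    # In PyTorch Lightning, mixed precision and multi-GPU strategies are Trainer arguments
    trainer = L.Trainer(
        max_epochs=3,
        accelerator="gpu",
        devices=4,                 # number of GPUs on the machine
        strategy="ddp",            # distributed data-parallel training
        precision="16-mixed",      # mixed-precision training
    )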

Unit 10. The Finale: Our Next Steps After AI Model Training [ Link to videos ]

  • 10.1 Trustworthy and Reliable Machine Learning
  • 10.2 Fabric - Scaling PyTorch Model Training without Boilerplate Code
  • 10.3 Designing Machine Learning Systems
  • 10.4 Conclusion
  • Unit 10 exercises
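
As a taste of Fabric from 10.2, here is a hedged sketch (Lightning 2.x API; the model and the commented-out loop are placeholders) showing the few lines that change relative to a plain PyTorch training loop:

    import torch
    import lightning as L

    # Fabric adds device placement, precision, and multi-GPU support to an
    # otherwise plain PyTorch training loop with only a few changed lines
    fabric = L.Fabric(accelerator="auto", devices=1, precision="16-mixed")
    fabric.launch()

    model = torch.nn.Linear(784, 10)                          # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    model, optimizer = fabric.setup(model, optimizer)
    # train_loader = fabric.setup_dataloaders(train_loader)   # hypothetical loader

    # Inside the loop, fabric.backward() replaces loss.backward():
    # for features, targets in train_loader:
    #     logits = model(features)
    #     loss = torch.nn.functional.cross_entropy(logits, targets)
    #     optimizer.zero_grad()
    #     fabric.backward(loss)
    #     optimizer.step()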