Large Language Model (LLM) Training Curriculum – Weekly Topics and Resources
This article presents a structured five‑week curriculum for training large language models, covering core concepts (the transformer architecture and self‑attention), advanced techniques (LoRA fine‑tuning and quantization), and practical resources: each week pairs video lectures with a PDF slide deck for developers.
Week 1 (2024‑01‑21) introduces the course, provides an overview of large models and the transformer architecture, and includes videos on the explosive growth of large models, how they are built, transformer applications, and self‑attention.
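Since self‑attention is the week's central concept, a minimal sketch of scaled dot‑product attention may help ground it. This is a generic illustration, not code from the course videos; all tensor and function names are placeholders.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # project tokens to queries, keys, values
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # pairwise similarities, scaled by sqrt(d_k)
    weights = F.softmax(scores, dim=-1)            # each token attends over all tokens
    return weights @ v                             # attention-weighted sum of values

# Toy usage: 4 tokens, model width 8
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)  # shape (4, 8)
```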
Week 2 (2024‑01‑28) delves deeper into transformers, with multi‑part videos covering transformer fundamentals, encoder‑based and decoder‑based LLMs, and other advanced topics.
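In practice, the key structural difference between encoder‑based LLMs (BERT‑style, bidirectional) and decoder‑based LLMs (GPT‑style, autoregressive) comes down largely to the attention mask. The sketch below is a generic illustration of that contrast, not material from the lectures.

```python
import torch

seq_len = 5

# Encoder-style (BERT-like): every token may attend to every other token.
encoder_mask = torch.ones(seq_len, seq_len).bool()

# Decoder-style (GPT-like): a lower-triangular (causal) mask lets token i attend
# only to positions <= i, enabling autoregressive next-token prediction.
decoder_mask = torch.tril(torch.ones(seq_len, seq_len)).bool()

print(decoder_mask.int())
```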
Week 3 (2024‑02‑25) focuses on large‑model fine‑tuning, presenting an overview of LoRA, a detailed explanation of the LoRA algorithm, and a step‑by‑step walkthrough that builds LoRA from scratch and applies it to fine‑tune a RoBERTa model.
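For readers who want a preview of the from‑scratch implementation, here is one common way to express the LoRA idea in PyTorch: freeze the pretrained weight W and learn a low‑rank update BA scaled by alpha/r. This is a sketch of the general technique, not the course's code; class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze the pretrained weights
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection
        self.scaling = alpha / r                   # zero-init B keeps the initial output unchanged

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```

In a RoBERTa fine‑tune, such a wrapper is typically applied to the query and value projections of each attention layer, leaving all other weights frozen.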
Week 4 (2024‑03‑03) explores parameter‑efficient tuning methods, including Alpaca, AdaLoRA, and QLoRA, with corresponding video demonstrations for each technique.
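As a hedged illustration of how QLoRA is commonly set up with the Hugging Face transformers, peft, and bitsandbytes libraries (an assumption; the course may use different tooling), the base model is loaded in 4‑bit NF4 and trainable LoRA adapters are attached on top. The model name below is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the frozen base model in 4-bit NF4 (the QLoRA recipe).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 data type from the QLoRA paper
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",                    # placeholder model name
    quantization_config=bnb_config,
)

# Attach trainable LoRA adapters on top of the quantized weights.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapters train; the 4-bit base stays frozen
```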
Week 5 (2024‑03‑17) covers prefix‑tuning and model quantization, providing videos on prefix‑tuning basics and multiple quantization methods for LLMs.
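The course covers multiple quantization methods; the baseline they all build on is round‑to‑nearest symmetric quantization, sketched below in plain PyTorch as a generic illustration (function names are assumptions, not course code).

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q, with q in [-127, 127]."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

w = torch.randn(4, 4)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print((w - w_hat).abs().max())  # per-element error is at most about scale / 2
```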
Course Materials are provided as PDF slide decks: 20240121.pdf, 20240128.pdf, 20240225.pdf, and 20240303.pdf.
Recommended Reading lists practical articles on topics such as Kubernetes monitoring, MySQL, Redis, Nginx WAF, Docker‑Jenkins‑Kubernetes pipelines, and enterprise‑grade microservice API gateways.
Practical DevOps Architecture
Hands‑on DevOps operations with Docker, K8s, Jenkins, and Ansible, empowering ops professionals to grow together through sharing, discussion, knowledge consolidation, and continuous improvement.