11-766 LLM Applications

Graduate course on large language models with a focus on applications, prompting, fine-tuning, and alignment.

Instructors: Prof. Daphne Ippolito and Prof. Fernando Diaz

Term: Spring 2026

Location: Carnegie Mellon University

Time: Various sections

Course Overview

This course provides comprehensive coverage of large language models (LLMs), from foundational concepts to cutting-edge applications. Topics include:

  • LLM Architectures: Transformers, attention mechanisms, and scaling laws
  • Training Methods: Pre-training, instruction tuning, and RLHF
  • Prompting Techniques: Zero-shot, few-shot, chain-of-thought, and advanced prompting strategies
  • Fine-tuning & Alignment: Parameter-efficient methods (LoRA, QLoRA), preference learning, and safety alignment
  • Applications: Code generation, reasoning tasks, multimodal systems, and agentic workflows
  • Evaluation & Analysis: Benchmarking, interpretability, and failure mode analysis
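As a small taste of the prompting techniques listed above, a few-shot chain-of-thought prompt can be assembled as plain text before being sent to a model. The sketch below is illustrative only (the helper name and example questions are made up, not taken from course materials):

```python
def few_shot_cot_prompt(examples, question):
    """Build a few-shot chain-of-thought prompt.

    Each example is a (question, reasoning, answer) triple; the model
    sees worked examples with explicit reasoning before the new question.
    """
    parts = []
    for q, reasoning, answer in examples:
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")  # leave the final answer for the model
    return "\n\n".join(parts)

# Hypothetical worked example (for illustration only)
examples = [
    ("A train travels 60 miles in 1.5 hours. What is its speed?",
     "Speed is distance divided by time: 60 / 1.5 = 40.", "40 mph"),
]
prompt = few_shot_cot_prompt(examples, "What is 12 * 7?")
print(prompt)
```

The course covers when such in-context demonstrations help, and how they compare to zero-shot prompting and fine-tuned alternatives.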

Teaching Responsibilities

As a Teaching Assistant, I support students through:

  • Office hours and technical mentorship
  • Assignment design and grading
  • Research guidance for course projects

Course Website

Visit cmu-llms.org for schedules, lecture materials, and assignments.