In-Context Learning and the Limits of Prompting

Module 3, Lecture 3.4 | Working with LLMs in Practice

This lecture covers in-context learning — how LLMs "learn" new patterns from examples in the prompt without any weight updates. It compares zero-shot and few-shot prompting, demonstrating how examples improve both accuracy and output format consistency. The lecture then addresses the limits of prompting: context rot as conversations and tool results accumulate, conflicting instructions, and the single-context constraint. These limits motivate the shift from "prompt engineering" to "context engineering" — curating the entire context state so the model has exactly what it needs and nothing it doesn't. This reframe defines the rest of the course.
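The zero-shot/few-shot contrast described above comes down to what the prompt contains. Here is a minimal sketch of both styles as plain prompt construction; the sentiment task, labels, and example reviews are hypothetical illustrations, not taken from the lecture:

```python
# Illustrative sketch: zero-shot vs. few-shot prompt construction.
# The sentiment task and all example data here are assumptions for demonstration.

def zero_shot_prompt(text: str) -> str:
    """Instructions only -- the model gets no examples to pattern-match on."""
    return (
        "Classify the sentiment of the review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(examples: list[tuple[str, str]], text: str) -> str:
    """Prepend labeled examples so the model can infer both the task and the
    output format from context alone -- no weight updates involved."""
    shots = "\n".join(
        f"Review: {review}\nSentiment: {label}" for review, label in examples
    )
    return (
        "Classify the sentiment of the review as Positive or Negative.\n"
        f"{shots}\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

examples = [
    ("The battery lasts all day.", "Positive"),
    ("It broke after one week.", "Negative"),
]
print(few_shot_prompt(examples, "Setup was painless."))
```

Note that the few-shot examples fix the output format ("Sentiment: <label>") as a side effect, which is often as valuable as the accuracy gain.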

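The "context engineering" reframe in the summary can be sketched as deliberate curation of what stays in the context window. This toy version keeps the system message and the most recent turns that fit a token budget, dropping the oldest accumulated history first; the budget, the 4-characters-per-token heuristic, and the message format are all illustrative assumptions, not a real tokenizer or API:

```python
# Illustrative sketch of context curation under a token budget.
# The token estimate and message schema are assumptions, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption for the sketch).
    return max(1, len(text) // 4)

def curate(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message plus the newest turns that fit the budget,
    dropping the oldest history (e.g. stale tool results) first."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept: list[dict] = []
    used = sum(estimate_tokens(m["content"]) for m in system)
    for m in reversed(rest):  # walk newest-to-oldest
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

Real systems use actual token counts and smarter retention policies (summarizing old turns rather than dropping them), but the principle is the same: the context is a curated state, not an append-only log.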

Additional Resources