Module 2, Lecture 2.3 | Introduction to Agentic Systems
An LLM is a powerful text generator, but it cannot read files, send emails, query databases, or access current information on its own. This lecture explains what bridges the gap from language model to agent: tool calling — the protocol that lets a model request structured actions and incorporate the results into its reasoning. We walk through the five-step tool-calling flow, connect it back to the perception-reasoning-action loop from Lecture 1.1, and then examine how system prompts shape agent identity, override training biases, and establish behavioral constraints. The lecture concludes by assembling the complete mental model: LLM + context window + tool calling + system prompt + agent loop.
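The flow described above can be sketched in runnable form. The snippet below is a minimal illustration, not a real SDK integration: `fake_model` stands in for an actual LLM API call (so it runs without an API key), and the `get_weather` tool, its JSON schema, and the step numbering are assumptions chosen for the example. The message shapes (`tool_use` blocks, `tool_result` replies) mirror the pattern documented by providers like Anthropic and OpenAI.

```python
import json

# Step 1: define the tool with a JSON schema the model can see.
TOOLS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def fake_model(messages, tools):
    """Stand-in for an LLM API call (illustrative, not a real SDK).
    A real model decides whether to answer directly or request a tool;
    this stub requests one on the first turn, then answers once it
    sees a tool result."""
    last = messages[-1]
    if isinstance(last["content"], list) and last["content"][0]["type"] == "tool_result":
        result = json.loads(last["content"][0]["content"])
        return {"stop_reason": "end_turn",
                "content": [{"type": "text",
                             "text": f"It is {result['temp_c']} C in {result['city']}."}]}
    return {"stop_reason": "tool_use",
            "content": [{"type": "tool_use", "id": "call_1",
                         "name": "get_weather",
                         "input": {"city": "Paris"}}]}

def get_weather(city):
    # The application, not the model, performs the real-world action.
    return {"city": city, "temp_c": 18}

# Step 2: the model responds with a structured tool request
# instead of plain text.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = fake_model(messages, TOOLS)

while response["stop_reason"] == "tool_use":
    call = next(b for b in response["content"] if b["type"] == "tool_use")
    result = get_weather(**call["input"])            # Step 3: execute the tool
    messages.append({"role": "assistant", "content": response["content"]})
    messages.append({"role": "user", "content": [{   # Step 4: return the result
        "type": "tool_result", "tool_use_id": call["id"],
        "content": json.dumps(result),
    }]})
    response = fake_model(messages, TOOLS)           # Step 5: model incorporates it

final = response["content"][0]["text"]
print(final)  # It is 18 C in Paris.
```

Note that the loop is what makes this agentic: the model can chain multiple tool calls, with each result appended to the conversation, until it stops requesting tools and emits a final answer.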
Tool use with Claude — Anthropic Docs — Anthropic's official documentation on how tool calling works in the Claude API: defining tools with JSON schemas, handling tool_use responses, and building the tool execution loop.
Function calling — OpenAI Docs — OpenAI's equivalent guide. Comparing both providers' approaches illustrates how the same underlying pattern is implemented across the industry.
Function calling using LLMs — martinfowler.com — An analysis of the function calling pattern from a software architecture perspective, covering a practical agent example, security considerations, and the Model Context Protocol.