Reasoning / Research term
Reasoning and Acting (ReAct)
An agent design pattern where the model repeats a simple cycle: think about what to do next, take an action (like calling a tool or searching the web), observe the result, then think again. This loop is the backbone of most AI agent systems.
Say you ask an AI assistant to find the cheapest flight from New York to London next Tuesday. In a Reasoning and Acting (ReAct) loop, the model first reasons: "I need to search flight prices." It calls a flight search tool, observes the results, reasons again: "These are all from JFK; let me also check Newark," calls the tool again, and continues until it has enough information. Each cycle of think, act, observe lets the model adjust its plan based on real-world feedback rather than guessing everything in advance.
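The flight-search walkthrough above can be sketched as a small loop. Everything here is a stand-in for illustration: `policy` is a scripted stub where a real agent would call a language model, and `flight_search` fakes a flight-price API with hard-coded fares.

```python
def flight_search(origin: str) -> list[dict]:
    """Stub tool: pretend flight-price API with hard-coded fares."""
    fares = {
        "JFK": [{"airport": "JFK", "price": 540}, {"airport": "JFK", "price": 610}],
        "EWR": [{"airport": "EWR", "price": 495}],
    }
    return fares.get(origin, [])

def policy(transcript: list) -> tuple:
    """Stub model: decides the next thought and action from what has been
    observed so far. A real ReAct agent would prompt an LLM here."""
    if not transcript:
        return ("I need to search flight prices.", ("search", "JFK"))
    all_jfk = all(f["airport"] == "JFK" for _, obs in transcript for f in obs)
    if all_jfk:
        return ("These are all from JFK; let me also check Newark.", ("search", "EWR"))
    return ("I have enough information.", ("finish", None))

def run_agent() -> dict:
    transcript = []
    while True:
        thought, (action, arg) = policy(transcript)   # think
        if action == "finish":
            break
        observation = flight_search(arg)              # act
        transcript.append((thought, observation))     # observe
    fares = [f for _, obs in transcript for f in obs]
    return min(fares, key=lambda f: f["price"])

print(run_agent())  # the Newark fare, price 495
```

The key structural point is that `policy` sees the full transcript each time, so every new observation can change the next action, which is exactly what "adjusting the plan based on real-world feedback" means in practice.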
Builder example
Understanding this loop is essential for debugging agents. When an agent goes off the rails, the problem almost always lives in one specific part of the cycle: the model reasoned poorly, called the wrong tool, misinterpreted the tool's output, or failed to update its plan. Logging each step separately lets you pinpoint exactly where things broke down.
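One way to get that per-step visibility is to log each phase of the cycle under a distinct label, so a failure can be attributed to reasoning, the tool call, or observation handling. This is a minimal sketch; the function and label names are invented for illustration, not taken from any particular framework.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent")

def run_step(step_num: int, thought: str, tool, tool_input):
    """Run one ReAct cycle, logging THOUGHT, ACTION, and OBSERVATION
    separately so you can see exactly which phase went wrong."""
    log.info("step %d THOUGHT: %s", step_num, thought)
    log.info("step %d ACTION: %s(%r)", step_num, tool.__name__, tool_input)
    try:
        observation = tool(tool_input)
    except Exception as exc:
        # A failure here is a tool problem, not a reasoning problem.
        log.error("step %d TOOL FAILED: %s", step_num, exc)
        raise
    log.info("step %d OBSERVATION: %r", step_num, observation)
    return observation

# Usage with a trivial stand-in tool:
def word_count(text: str) -> int:
    return len(text.split())

run_step(1, "Count the words in the user's query.", word_count, "cheapest flight to London")
```

With logs shaped like this, "the model reasoned poorly" shows up as a bad THOUGHT line, "called the wrong tool" as a bad ACTION line, and "misinterpreted the output" as a mismatch between the OBSERVATION line and the next step's THOUGHT.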
Common confusion: ReAct describes the underlying loop structure, not a specific product or framework. Most modern agent tools (Claude's tool use, LangChain, custom pipelines) follow this pattern even when they use different terminology.