
Running agents

Agents support both synchronous and asynchronous execution using either .invoke() / await .ainvoke() for full responses, or .stream() / .astream() for incremental streaming output. This section explains how to provide input, interpret output, enable streaming, and control execution limits.

Basic usage

Agents can be executed in two primary modes:

  • Synchronous using .invoke() or .stream()
  • Asynchronous using await .ainvoke() or async for with .astream()
Sync:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(...)

response = agent.invoke({"messages": [{"role": "user", "content": "what is the weather in sf"}]})

Async:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(...)
response = await agent.ainvoke({"messages": [{"role": "user", "content": "what is the weather in sf"}]})

Inputs and outputs

Agents use a language model that expects a list of messages as an input. Therefore, agent inputs and outputs are stored as a list of messages under the messages key in the agent state.

Input format

Agent input must be a dictionary with a messages key. Supported formats are:

  • String: {"messages": "Hello"} (interpreted as a HumanMessage)
  • Message dictionary: {"messages": {"role": "user", "content": "Hello"}}
  • List of messages: {"messages": [{"role": "user", "content": "Hello"}]}
  • With custom state: {"messages": [{"role": "user", "content": "Hello"}], "user_name": "Alice"} (if using a custom state_schema)

Messages are automatically converted into LangChain's internal message format. You can read more about LangChain messages in the LangChain documentation.

Using custom agent state

You can provide additional fields defined in your agent’s state schema directly in the input dictionary. This allows dynamic behavior based on runtime data or prior tool outputs.
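
As a rough sketch (assuming a state schema that extends the prebuilt AgentState and a get_weather tool defined elsewhere), extra fields are declared in the schema and supplied alongside messages:

from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

# Extend the prebuilt agent state with an extra field (illustrative only).
class CustomState(AgentState):
    user_name: str

agent = create_react_agent(
    model="anthropic:claude-3-5-haiku-latest",
    tools=[get_weather],  # get_weather is assumed to be defined elsewhere
    state_schema=CustomState,
)

# Extra fields declared in the schema can be passed alongside "messages".
agent.invoke(
    {
        "messages": [{"role": "user", "content": "what is the weather in sf"}],
        "user_name": "Alice",
    }
)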

See the context guide for full details.

Note

A string input for messages is converted to a HumanMessage. This behavior differs from the prompt parameter in create_react_agent, which is interpreted as a SystemMessage when passed as a string.
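
To illustrate the distinction, here is a sketch that reuses the model and get_weather tool from the other examples on this page:

# A string prompt passed to create_react_agent is used as a SystemMessage ...
agent = create_react_agent(
    model="anthropic:claude-3-5-haiku-latest",
    tools=[get_weather],
    prompt="You are a helpful weather assistant.",
)

# ... while a string under "messages" is converted to a HumanMessage.
agent.invoke({"messages": "what is the weather in sf"})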

Output format

Agent output is a dictionary containing:

  • messages: A list of all messages exchanged during execution (user input, assistant replies, tool invocations).
  • Optionally, structured_response if structured output is configured.
  • If using a custom state_schema, additional keys corresponding to your defined fields may also be present in the output. These can hold updated state values from tool execution or prompt logic.
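
For example, a minimal sketch of reading those keys from a response (structured_response is only present when structured output has been configured for the agent):

response = agent.invoke({"messages": [{"role": "user", "content": "what is the weather in sf"}]})

# The final assistant reply is the last message in the list.
print(response["messages"][-1].content)

# Present only if structured output was configured for the agent.
print(response.get("structured_response"))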

See the context guide for more details on working with custom state schemas and accessing context.

Streaming output

Agents support streaming responses for more responsive applications. This includes:

  • Progress updates after each step
  • LLM tokens as they're generated
  • Custom tool messages during execution

Streaming is available in both sync and async modes:

Sync:

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    stream_mode="updates"
):
    print(chunk)

Async:

async for chunk in agent.astream(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    stream_mode="updates"
):
    print(chunk)
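
Other stream modes surface different information; as a rough sketch, stream_mode="messages" yields LLM tokens together with metadata as they are generated (see the streaming guide for the full set of modes and exact chunk shapes):

# Stream LLM tokens as they are produced; each item is a message chunk
# plus metadata about the node that emitted it.
for token, metadata in agent.stream(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    stream_mode="messages"
):
    print(token.content, end="")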

Tip

For full details, see the streaming guide.

Max iterations

To control agent execution and avoid infinite loops, set a recursion limit. This defines the maximum number of steps the agent can take before raising a GraphRecursionError. You can configure recursion_limit at runtime, or set it when defining the agent via .with_config():

from langgraph.errors import GraphRecursionError
from langgraph.prebuilt import create_react_agent

max_iterations = 3
recursion_limit = 2 * max_iterations + 1
agent = create_react_agent(
    model="anthropic:claude-3-5-haiku-latest",
    tools=[get_weather]
)

try:
    response = agent.invoke(
        {"messages": [{"role": "user", "content": "what's the weather in sf"}]},
        {"recursion_limit": recursion_limit},
    )
except GraphRecursionError:
    print("Agent stopped due to max iterations.")

Alternatively, set the limit when defining the agent with .with_config():

from langgraph.errors import GraphRecursionError
from langgraph.prebuilt import create_react_agent

max_iterations = 3
recursion_limit = 2 * max_iterations + 1
agent = create_react_agent(
    model="anthropic:claude-3-5-haiku-latest",
    tools=[get_weather]
)
agent_with_recursion_limit = agent.with_config(recursion_limit=recursion_limit)

try:
    response = agent_with_recursion_limit.invoke(
        {"messages": [{"role": "user", "content": "what's the weather in sf"}]},
    )
except GraphRecursionError:
    print("Agent stopped due to max iterations.")

Additional Resources