Snapshot of https://langchain-ai.github.io/langgraph/how-tos/tool-calling/ saved 2025-07-28 10:55.

Call tools

Tools encapsulate a callable function and its input schema. These can be passed to compatible chat models, allowing the model to decide whether to invoke a tool and determine the appropriate arguments.

You can define your own tools or use prebuilt tools.

Define a tool

Define a basic tool with the @tool decorator:

API Reference: tool

from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

Run a tool

Tools conform to the Runnable interface, which means you can run a tool using the invoke method:

multiply.invoke({"a": 6, "b": 7})  # returns 42

If the tool is invoked with type="tool_call", it will return a ToolMessage:

tool_call = {
    "type": "tool_call",
    "id": "1",
    "args": {"a": 42, "b": 7}
}
multiply.invoke(tool_call) # returns a ToolMessage object

Output:

ToolMessage(content='294', name='multiply', tool_call_id='1')

Use in an agent

To create a tool-calling agent, you can use the prebuilt create_react_agent:

API Reference: tool | create_react_agent

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet",
    tools=[multiply]
)
agent.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

Use in a workflow

If you are writing a custom workflow, you will need to:

  1. register the tools with the chat model
  2. call the tool if the model decides to use it

Use model.bind_tools() to register the tools with the model.

API Reference: init_chat_model

from langchain.chat_models import init_chat_model

model = init_chat_model(model="claude-3-5-haiku-latest")

model_with_tools = model.bind_tools([multiply])

LLMs automatically determine if a tool invocation is necessary and handle calling the tool with the appropriate arguments.

Extended example: attach tools to a chat model
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([multiply])

response_message = model_with_tools.invoke("what's 42 x 7?")
tool_call = response_message.tool_calls[0]

multiply.invoke(tool_call)
ToolMessage(
    content='294',
    name='multiply',
    tool_call_id='toolu_0176DV4YKSD8FndkeuuLj36c'
)

ToolNode

To execute tools in custom workflows, use the prebuilt ToolNode or implement your own custom node.

ToolNode is a specialized node for executing tools in a workflow. It provides the following features:

  • Supports both synchronous and asynchronous tools.
  • Executes multiple tools concurrently.
  • Handles errors during tool execution (handle_tool_errors=True, enabled by default). See handling tool errors for more details.

ToolNode operates on MessagesState:

  • Input: MessagesState, where the last message is an AIMessage containing the tool_calls parameter.
  • Output: MessagesState updated with the resulting ToolMessage from executed tools.

API Reference: ToolNode

from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

tool_node = ToolNode([get_weather, get_coolest_cities])
tool_node.invoke({"messages": [...]})
Single tool call
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

# Define tools
@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

message_with_single_tool_call = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)

tool_node.invoke({"messages": [message_with_single_tool_call]})
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id')]}
Multiple tool calls
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

# Define tools

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

tool_node = ToolNode([get_weather, get_coolest_cities])

message_with_multiple_tool_calls = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_coolest_cities",
            "args": {},
            "id": "tool_call_id_1",
            "type": "tool_call",
        },
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id_2",
            "type": "tool_call",
        },
    ],
)

tool_node.invoke({"messages": [message_with_multiple_tool_calls]})  
{
    'messages': [
        ToolMessage(content='nyc, sf', name='get_coolest_cities', tool_call_id='tool_call_id_1'),
        ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id_2')
    ]
}
Use with a chat model
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([get_weather])  


response_message = model_with_tools.invoke("what's the weather in sf?")
tool_node.invoke({"messages": [response_message]})
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='toolu_01Pnkgw5JeTRxXAU7tyHT4UW')]}
Use in a tool-calling agent

This is an example of creating a tool-calling agent from scratch using ToolNode. You can also use LangGraph's prebuilt agent.

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([get_weather])

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "what's the weather in sf?"}]})
{
    'messages': [
        HumanMessage(content="what's the weather in sf?"),
        AIMessage(
            content=[{'text': "I'll help you check the weather in San Francisco right now.", 'type': 'text'}, {'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'input': {'location': 'San Francisco'}, 'name': 'get_weather', 'type': 'tool_use'}],
            tool_calls=[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'type': 'tool_call'}]
        ),
        ToolMessage(content="It's 60 degrees and foggy."),
        AIMessage(content="The current weather in San Francisco is 60 degrees and foggy. Typical San Francisco weather with its famous marine layer!")
    ]
}

Tool customization

For more control over tool behavior, use the @tool decorator.

Parameter descriptions

Auto-generate descriptions from docstrings:

API Reference: tool

from langchain_core.tools import tool

@tool("multiply_tool", parse_docstring=True)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers.

    Args:
        a: First operand
        b: Second operand
    """
    return a * b

Explicit input schema

Define schemas using args_schema:

API Reference: tool

from pydantic import BaseModel, Field
from langchain_core.tools import tool

class MultiplyInputSchema(BaseModel):
    """Multiply two numbers"""
    a: int = Field(description="First operand")
    b: int = Field(description="Second operand")

@tool("multiply_tool", args_schema=MultiplyInputSchema)
def multiply(a: int, b: int) -> int:
    return a * b

Tool name

Override the default tool name (function name) using the first argument:

API Reference: tool

from langchain_core.tools import tool

@tool("multiply_tool")
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

Context management

Tools within LangGraph sometimes require context data, such as runtime-only arguments (e.g., user IDs or session details), that should not be controlled by the model. LangGraph provides three methods for managing such context:

Type               Usage Scenario                            Mutable  Lifetime
Configuration      Static, immutable runtime data            No       Single invocation
Short-term memory  Dynamic, changing data during invocation  Yes      Single invocation
Long-term memory   Persistent, cross-session data            Yes      Across multiple sessions

Configuration

Use configuration when you have immutable runtime data that tools require, such as user identifiers. You pass these arguments via RunnableConfig at invocation and access them in the tool:

API Reference: tool | RunnableConfig

from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Retrieve user information based on user ID."""
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

# Invocation example with an agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user info"}]},
    config={"configurable": {"user_id": "user_123"}}
)
Extended example: Access config in tools
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

def get_user_info(
    config: RunnableConfig,
) -> str:
    """Look up user info."""
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
)

agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    config={"configurable": {"user_id": "user_123"}}
)

Short-term memory

Short-term memory maintains dynamic state that changes during a single execution.

To access (read) the graph state inside a tool, use the special parameter annotation InjectedState:

API Reference: tool | InjectedState | create_react_agent | AgentState

from typing import Annotated, NotRequired
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

class CustomState(AgentState):
    # The user_name field in short-term state
    user_name: NotRequired[str]

@tool
def get_user_name(
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Retrieve the current user-name from state."""
    # Return stored name or a default if not set
    return state.get("user_name", "Unknown user")

# Example agent setup
agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_name],
    state_schema=CustomState,
)

# Invocation: reads the name from state (initially empty)
agent.invoke({"messages": "what's my name?"})

Use a tool that returns a Command to update user_name and append a confirmation message:

API Reference: Command | ToolMessage | tool | InjectedToolCallId

from typing import Annotated
from langgraph.types import Command
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool, InjectedToolCallId

@tool
def update_user_name(
    new_name: str,
    tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Update user-name in short-term memory."""
    return Command(update={
        "user_name": new_name,
        "messages": [
            ToolMessage(f"Updated user name to {new_name}", tool_call_id=tool_call_id)
        ]
    })

Important

If you want to use tools that return Command and update graph state, you can either use prebuilt create_react_agent / ToolNode components, or implement your own tool-executing node that collects Command objects returned by the tools and returns a list of them, e.g.:

def call_tools(state):
    ...
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands
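
As a stdlib-only sketch of that dispatch pattern (plain dicts stand in for Command objects and tool invocations; the tool name and args are illustrative):

```python
# Dispatch each tool call by name and collect the results.
# In a real graph each tool would return a Command and the node
# would return the collected list; plain dicts stand in here.

def update_user_name(new_name: str) -> dict:
    """Stand-in tool: returns a state update (in place of a Command)."""
    return {"update": {"user_name": new_name}}

tools_by_name = {"update_user_name": update_user_name}

def call_tools(tool_calls: list[dict]) -> list[dict]:
    # Look up each requested tool and invoke it with its args
    return [tools_by_name[call["name"]](**call["args"]) for call in tool_calls]

commands = call_tools([
    {"name": "update_user_name", "args": {"new_name": "Alice"}}
])
# commands == [{"update": {"user_name": "Alice"}}]
```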

Long-term memory

Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information.

To use long-term memory, you need to:

  1. Configure a store to persist data across invocations.
  2. Use the get_store function to access the store from within tools or prompts.

To access information in the store:

API Reference: RunnableConfig | tool | StateGraph | get_store

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.config import get_store

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `builder.compile(store=store)` 
    # or `create_react_agent`
    store = get_store()
    user_id = config["configurable"].get("user_id")
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

builder = StateGraph(...)
...
graph = builder.compile(store=store)
Access long-term memory
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore() 

store.put(  
    ("users",),  
    "user_123",  
    {
        "name": "John Smith",
        "language": "English",
    } 
)

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `create_react_agent`
    store = get_store() 
    user_id = config["configurable"].get("user_id")
    user_info = store.get(("users",), user_id) 
    return str(user_info.value) if user_info else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    store=store 
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    config={"configurable": {"user_id": "user_123"}}
)

To update information in the store:

API Reference: RunnableConfig | tool | StateGraph | get_store

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.config import get_store

@tool
def save_user_info(user_info: str, config: RunnableConfig) -> str:
    """Save user info."""
    # Same as that provided to `builder.compile(store=store)` 
    # or `create_react_agent`
    store = get_store()
    user_id = config["configurable"].get("user_id")
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

builder = StateGraph(...)
...
graph = builder.compile(store=store)
Update long-term memory
from typing_extensions import TypedDict

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore() 

class UserInfo(TypedDict): 
    name: str

@tool
def save_user_info(user_info: UserInfo, config: RunnableConfig) -> str: 
    """Save user info."""
    # Same as that provided to `create_react_agent`
    store = get_store() 
    user_id = config["configurable"].get("user_id")
    store.put(("users",), user_id, user_info) 
    return "Successfully saved user info."

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[save_user_info],
    store=store
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "My name is John Smith"}]},
    config={"configurable": {"user_id": "user_123"}} 
)

# You can access the store directly to get the value
store.get(("users",), "user_123").value

Advanced tool features

Immediate return

Use return_direct=True to immediately return a tool's result without executing additional logic.

This is useful for tools that should not trigger further processing or tool calls, allowing you to return results directly to the user.

@tool(return_direct=True)
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b
Extended example: Using return_direct in a prebuilt agent
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool(return_direct=True)
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[add]
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what's 3 + 5?"}]}
)

Using without prebuilt components

If you are building a custom workflow and are not relying on create_react_agent or ToolNode, you will also need to implement the control flow to handle return_direct=True.
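
One possible shape for that control flow, sketched with stdlib only (the set of return_direct tool names and the node names are illustrative): after the tool node runs, route to the end of the graph instead of back to the model whenever an executed tool was registered with return_direct=True.

```python
# Names of tools registered with return_direct=True (illustrative).
RETURN_DIRECT_TOOLS = {"add"}

def route_after_tools(executed_tool_names: list[str]) -> str:
    """Conditional edge: end the run after a return_direct tool fires,
    otherwise loop back to the model node."""
    if any(name in RETURN_DIRECT_TOOLS for name in executed_tool_names):
        return "end"
    return "call_model"

route_after_tools(["add"])       # routes to "end": result goes straight to the user
route_after_tools(["multiply"])  # routes to "call_model": continue the loop
```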

Force tool use

If you need to force a specific tool to be used, you will need to configure this at the model level using the tool_choice parameter in the bind_tools method.

Force specific tool usage via tool_choice:

@tool(return_direct=True)
def greet(user_name: str) -> str:
    """Greet user."""
    return f"Hello {user_name}!"

tools = [greet]

configured_model = model.bind_tools(
    tools,
    # Force the use of the 'greet' tool
    tool_choice={"type": "tool", "name": "greet"}
)
Extended example: Force tool usage in an agent

To force the agent to use specific tools, you can set the tool_choice option in model.bind_tools():

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool(return_direct=True)
def greet(user_name: str) -> str:
    """Greet user."""
    return f"Hello {user_name}!"

tools = [greet]

agent = create_react_agent(
    model=model.bind_tools(tools, tool_choice={"type": "tool", "name": "greet"}),
    tools=tools
)

agent.invoke(
    {"messages": [{"role": "user", "content": "Hi, I am Bob"}]}
)

Avoid infinite loops

Forcing tool usage without stopping conditions can create infinite loops. Use one of the following safeguards:

  • Mark the tool with return_direct=True (see Immediate return) to end the loop after execution.
  • Set recursion_limit to restrict the number of execution steps.
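
For the second safeguard, recursion_limit is supplied in the run config at invocation time (fragment; assumes an agent built as in the example above):

```python
# Caps the number of super-steps; the run raises GraphRecursionError
# instead of looping indefinitely once the limit is reached.
agent.invoke(
    {"messages": [{"role": "user", "content": "Hi, I am Bob"}]},
    config={"recursion_limit": 4},
)
```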

Tool choice configuration

The tool_choice parameter is used to configure which tool should be used by the model when it decides to call a tool.

This is useful when you want to ensure that a specific tool is always called for a particular task or when you want to override the model's default behavior of choosing a tool based on its internal logic.

Note that not all models support this feature, and the exact configuration may vary depending on the model you are using.

Disable parallel calls

For supported providers, you can disable parallel tool calling by setting parallel_tool_calls=False via the model.bind_tools() method:

model.bind_tools(
    tools, 
    parallel_tool_calls=False
)
Extended example: disable parallel tool calls in a prebuilt agent
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)
tools = [add, multiply]
agent = create_react_agent(
    # disable parallel tool calls
    model=model.bind_tools(tools, parallel_tool_calls=False),
    tools=tools
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what's 3 + 5 and 4 * 7?"}]}
)

Handle errors

LangGraph provides built-in error handling for tool execution through the prebuilt ToolNode component, used both independently and in prebuilt agents.

By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects with a status indicating an error.

API Reference: AIMessage | ToolNode

from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

# Default error handling (enabled by default)
tool_node = ToolNode([multiply])

message = AIMessage(
    content="",
    tool_calls=[{
        "name": "multiply",
        "args": {"a": 42, "b": 7},
        "id": "tool_call_id",
        "type": "tool_call"
    }]
)

result = tool_node.invoke({"messages": [message]})

Output:

{'messages': [
    ToolMessage(
        content="Error: ValueError('The ultimate error')\n Please fix your mistakes.",
        name='multiply',
        tool_call_id='tool_call_id',
        status='error'
    )
]}

Disable error handling

To propagate exceptions directly, disable error handling:

tool_node = ToolNode([multiply], handle_tool_errors=False)

With error handling disabled, exceptions raised by tools will propagate up, requiring explicit management.
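
A minimal sketch of such explicit management, with a plain function standing in for the tool (the wrapper name is illustrative):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

def run_tool_safely(fn, args: dict) -> str:
    """Invoke a tool and convert failures into an error string,
    mirroring what ToolNode does for you by default."""
    try:
        return str(fn(**args))
    except Exception as exc:
        return f"Error: {exc!r}. Please fix your mistakes."

run_tool_safely(multiply, {"a": 6, "b": 7})   # "42"
run_tool_safely(multiply, {"a": 42, "b": 7})  # error string
```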

Custom error messages

Provide a custom error message by setting handle_tool_errors to a string:

tool_node = ToolNode(
    [multiply],
    handle_tool_errors="Can't use 42 as the first operand, please switch operands!"
)

Example output:

{'messages': [
    ToolMessage(
        content="Can't use 42 as the first operand, please switch operands!",
        name='multiply',
        tool_call_id='tool_call_id',
        status='error'
    )
]}

Error handling in agents

Error handling in prebuilt agents (create_react_agent) leverages ToolNode:

API Reference: create_react_agent

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[multiply]
)

# Default error handling
agent.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

To disable or customize error handling in prebuilt agents, explicitly pass a configured ToolNode:

custom_tool_node = ToolNode(
    [multiply],
    handle_tool_errors="Cannot use 42 as a first operand!"
)

agent_custom = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=custom_tool_node
)

agent_custom.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

Handle large numbers of tools

As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning.

To address this, you can dynamically adjust the tools available to a model by retrieving relevant tools at runtime using semantic search.
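
A naive stand-in for that retrieval step, scoring word overlap against tool docstrings instead of embeddings (a real setup would search a vector store of tool descriptions; everything here is illustrative):

```python
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

def retrieve_tools(query: str, tools: list, k: int = 1) -> list:
    """Rank tools by word overlap between the query and each docstring."""
    query_words = set(query.lower().split())
    def score(fn) -> int:
        doc_words = set((fn.__doc__ or "").lower().replace(".", "").split())
        return len(query_words & doc_words)
    return sorted(tools, key=score, reverse=True)[:k]

selected = retrieve_tools("multiply these values", [add, multiply])
# The retrieved subset is then bound before the model call,
# e.g. model.bind_tools(selected)
```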

See langgraph-bigtool prebuilt library for a ready-to-use implementation.

Prebuilt tools

LLM provider tools

You can use prebuilt tools from model providers by passing a dictionary with tool specs to the tools parameter of create_react_agent. For example, to use the web_search_preview tool from OpenAI:

API Reference: create_react_agent

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="openai:gpt-4o-mini", 
    tools=[{"type": "web_search_preview"}]
)
response = agent.invoke(
    {"messages": ["What was a positive news story from today?"]}
)

Please consult the documentation for the specific model you are using to see which tools are available and how to use them.

LangChain tools

Additionally, LangChain supports a wide range of prebuilt tool integrations for interacting with APIs, databases, file systems, web data, and more. These tools extend the functionality of agents and enable rapid development.

You can browse the full list of available integrations in the LangChain integrations directory.

Some commonly used tool categories include:

  • Search: Bing, SerpAPI, Tavily
  • Code interpreters: Python REPL, Node.js REPL
  • Databases: SQL, MongoDB, Redis
  • Web data: Web scraping and browsing
  • APIs: OpenWeatherMap, NewsAPI, and others

These integrations can be configured and added to your agents using the same tools parameter shown in the examples above.