Function Tools
Function tools provide a mechanism for models to retrieve extra information to help them generate a response.
They're useful when you want to enable the model to take some action and use the result, when it is impractical or impossible to put all the context an agent might need into the system prompt, or when you want to make agents' behavior more deterministic or reliable by deferring some of the logic required to generate a response to another (not necessarily AI-powered) tool.
If you want a model to be able to call a function as its final action, without the result being sent back to the model, you can use an output function instead.
Function tools vs. RAG
Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
函式工具基本上是 RAG(檢索增強生成,Retrieval-Augmented Generation)中的「R」——它們透過讓模型請求額外資訊來增強模型的能力。
The main semantic difference between PydanticAI Tools and RAG is that RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See #58)
There are a number of ways to register tools with an agent:
- via the @agent.tool decorator — for tools that need access to the agent context
- via the @agent.tool_plain decorator — for tools that do not need access to the agent context
- via the tools keyword argument to Agent, which can take either plain functions or instances of Tool
Registering Function Tools via Decorator
@agent.tool is considered the default decorator since in the majority of cases tools will need access to the agent context.
Here's an example using both:
import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    system_prompt=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)


@agent.tool_plain
def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


dice_result = agent.run_sync('My guess is 4', deps='Anne')
print(dice_result.output)
#> Congratulations Anne, you guessed correctly! You're a winner!
(This example is complete, it can be run "as is")
Let's print the messages from that game to see what happened:
from dice_game import dice_result

print(dice_result.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content="You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
                timestamp=datetime.datetime(...),
            ),
            UserPromptPart(
                content='My guess is 4',
                timestamp=datetime.datetime(...),
            ),
        ]
    ),
    ModelResponse(
        parts=[
            ToolCallPart(
                tool_name='roll_dice', args={}, tool_call_id='pyd_ai_tool_call_id'
            )
        ],
        usage=Usage(requests=1, request_tokens=90, response_tokens=2, total_tokens=92),
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
    ),
    ModelRequest(
        parts=[
            ToolReturnPart(
                tool_name='roll_dice',
                content='4',
                tool_call_id='pyd_ai_tool_call_id',
                timestamp=datetime.datetime(...),
            )
        ]
    ),
    ModelResponse(
        parts=[
            ToolCallPart(
                tool_name='get_player_name', args={}, tool_call_id='pyd_ai_tool_call_id'
            )
        ],
        usage=Usage(requests=1, request_tokens=91, response_tokens=4, total_tokens=95),
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
    ),
    ModelRequest(
        parts=[
            ToolReturnPart(
                tool_name='get_player_name',
                content='Anne',
                tool_call_id='pyd_ai_tool_call_id',
                timestamp=datetime.datetime(...),
            )
        ]
    ),
    ModelResponse(
        parts=[
            TextPart(
                content="Congratulations Anne, you guessed correctly! You're a winner!"
            )
        ],
        usage=Usage(
            requests=1, request_tokens=92, response_tokens=12, total_tokens=104
        ),
        model_name='gemini-1.5-flash',
        timestamp=datetime.datetime(...),
    ),
]
"""
Registering Function Tools via Agent Argument
As well as using the decorators, we can register tools via the tools argument to the Agent constructor. This is useful when you want to reuse tools, and can also give more fine-grained control over the tools.
import random

from pydantic_ai import Agent, RunContext, Tool

system_prompt = """\
You're a dice game, you should roll the die and see if the number
you get back matches the user's guess. If so, tell them they're a winner.
Use the player's name in the response.
"""


def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


agent_a = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    tools=[roll_dice, get_player_name],
    system_prompt=system_prompt,
)
agent_b = Agent(
    'google-gla:gemini-1.5-flash',
    deps_type=str,
    tools=[
        Tool(roll_dice, takes_ctx=False),
        Tool(get_player_name, takes_ctx=True),
    ],
    system_prompt=system_prompt,
)

dice_result = {}
dice_result['a'] = agent_a.run_sync('My guess is 6', deps='Yashar')
dice_result['b'] = agent_b.run_sync('My guess is 4', deps='Anne')
print(dice_result['a'].output)
#> Tough luck, Yashar, you rolled a 4. Better luck next time.
print(dice_result['b'].output)
#> Congratulations Anne, you guessed correctly! You're a winner!
(This example is complete, it can be run "as is")
Function Tool Output
Tools can return anything that Pydantic can serialize to JSON, as well as audio, video, image or document content depending on the types of multi-modal input the model supports:
from datetime import datetime

from pydantic import BaseModel

from pydantic_ai import Agent, DocumentUrl, ImageUrl
from pydantic_ai.models.openai import OpenAIResponsesModel


class User(BaseModel):
    name: str
    age: int


agent = Agent(model=OpenAIResponsesModel('gpt-4o'))


@agent.tool_plain
def get_current_time() -> datetime:
    return datetime.now()


@agent.tool_plain
def get_user() -> User:
    return User(name='John', age=30)


@agent.tool_plain
def get_company_logo() -> ImageUrl:
    return ImageUrl(url='https://iili.io/3Hs4FMg.png')


@agent.tool_plain
def get_document() -> DocumentUrl:
    return DocumentUrl(url='https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf')


result = agent.run_sync('What time is it?')
print(result.output)
#> The current time is 10:45 PM on April 17, 2025.

result = agent.run_sync('What is the user name?')
print(result.output)
#> The user's name is John.

result = agent.run_sync('What is the company name in the logo?')
print(result.output)
#> The company name in the logo is "Pydantic."

result = agent.run_sync('What is the main content of the document?')
print(result.output)
#> The document contains just the text "Dummy PDF file."
(This example is complete, it can be run "as is")
Some models (e.g. Gemini) natively support semi-structured return values, while some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
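To make that last serialization step concrete, here is a stdlib-only sketch of what "serialize to JSON if the model expects a string" can look like. The to_tool_return_content helper is hypothetical, for illustration only, and is not part of PydanticAI (which uses Pydantic's serialization rather than dataclasses.asdict):

```python
import json
from dataclasses import asdict, dataclass, is_dataclass


@dataclass
class User:
    name: str
    age: int


def to_tool_return_content(value: object) -> str:
    """Illustrative only: coerce a tool's return value into the string
    content a text-only model would receive."""
    if isinstance(value, str):
        return value  # strings pass through unchanged
    if is_dataclass(value) and not isinstance(value, type):
        value = asdict(value)  # object -> dict, like a Pydantic model dump
    return json.dumps(value)  # everything else is serialized to JSON


print(to_tool_return_content('4'))
#> 4
print(to_tool_return_content(User(name='John', age=30)))
#> {"name": "John", "age": 30}
```

A model that natively accepts structured values would skip the json.dumps step and receive the object form directly.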
Advanced Tool Returns
For scenarios where you need more control over both the tool's return value and the content sent to the model, you can use ToolReturn. This is particularly useful when you want to:
- Provide rich multi-modal content (images, documents, etc.) to the model as context
- Separate the programmatic return value from the model's context
- Include additional metadata that shouldn't be sent to the LLM
Here's an example of a computer automation tool that captures screenshots and provides visual feedback:
import time

from pydantic_ai import Agent
from pydantic_ai.messages import ToolReturn, BinaryContent

agent = Agent('openai:gpt-4o')


@agent.tool_plain
def click_and_capture(x: int, y: int) -> ToolReturn:
    """Click at coordinates and show before/after screenshots."""
    # Take screenshot before action
    before_screenshot = capture_screen()

    # Perform click operation
    perform_click(x, y)
    time.sleep(0.5)  # Wait for UI to update

    # Take screenshot after action
    after_screenshot = capture_screen()

    return ToolReturn(
        return_value=f"Successfully clicked at ({x}, {y})",
        content=[
            f"Clicked at coordinates ({x}, {y}). Here's the comparison:",
            "Before:",
            BinaryContent(data=before_screenshot, media_type="image/png"),
            "After:",
            BinaryContent(data=after_screenshot, media_type="image/png"),
            "Please analyze the changes and suggest next steps."
        ],
        metadata={
            "coordinates": {"x": x, "y": y},
            "action_type": "click_and_capture",
            "timestamp": time.time()
        }
    )


# The model receives the rich visual content for analysis
# while your application can access the structured return_value and metadata
result = agent.run_sync("Click on the submit button and tell me what happened")
print(result.output)
# The model can analyze the screenshots and provide detailed feedback
- return_value: The actual return value used in the tool response. This is what gets serialized and sent back to the model as the tool's result.
- content: A sequence of content (text, images, documents, etc.) that provides additional context to the model. This appears as a separate user message.
- metadata: Optional metadata that your application can access but is not sent to the LLM. Useful for logging, debugging, or additional processing. Some other AI frameworks call this feature "artifacts".
This separation allows you to provide rich context to the model while maintaining clean, structured return values for your application logic.
Function Tools vs. Structured Outputs
As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call function tools while others end the run and produce a final output.
Function tools and schema
Function parameters are extracted from the function signature, and all parameters except RunContext are used to build the schema for that tool call.
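As a rough illustration of what signature-based schema extraction involves, here is a deliberately simplified stdlib sketch. The type map, the local RunContext stand-in, and build_tool_schema are all assumptions for demonstration, not PydanticAI's actual implementation (which relies on Pydantic and handles far more types):

```python
import inspect

# Simplified type map; the real implementation delegates to Pydantic.
TYPE_MAP = {int: 'integer', str: 'string', float: 'number', bool: 'boolean'}


class RunContext:  # local stand-in for pydantic_ai.RunContext
    pass


def build_tool_schema(func) -> dict:
    """Illustrative only: derive a JSON schema from a function signature."""
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        if param.annotation is RunContext:
            continue  # context parameters are not part of the tool schema
        properties[name] = {'type': TYPE_MAP[param.annotation]}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> the model must supply it
    return {'type': 'object', 'properties': properties, 'required': required}


def roll_die(ctx: RunContext, sides: int, label: str = 'd6') -> str:
    return label


print(build_tool_schema(roll_die))
#> {'type': 'object', 'properties': {'sides': {'type': 'integer'}, 'label': {'type': 'string'}}, 'required': ['sides']}
```

Note how the context parameter is excluded and a parameter with a default becomes optional, mirroring the behavior described above.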
Even better, PydanticAI extracts the docstring from functions and (thanks to griffe) extracts parameter descriptions from the docstring and adds them to the schema.
Griffe supports extracting parameter descriptions from google, numpy, and sphinx style docstrings. PydanticAI will infer the format to use based on the docstring, but you can explicitly set it using docstring_format. You can also enforce parameter requirements by setting require_parameter_descriptions=True. This will raise a UserError if a parameter description is missing.
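To give a feel for what that extraction does, here is a toy parser for simple Google-style Args sections. It is purely illustrative (the google_param_descriptions name is made up here); griffe handles all three docstring styles and many edge cases this sketch ignores:

```python
import re


def google_param_descriptions(docstring: str) -> dict:
    """Toy Google-style docstring parser, for illustration only."""
    descriptions = {}
    in_args = False
    for line in docstring.splitlines():
        if line.strip() == 'Args:':
            in_args = True  # start collecting parameter lines
            continue
        match = re.match(r'\s*(\w+):\s*(.+)', line) if in_args else None
        if match:
            descriptions[match.group(1)] = match.group(2).strip()
    return descriptions


doc = """Get me foobar.

Args:
    a: apple pie
    b: banana cake
"""
print(google_param_descriptions(doc))
#> {'a': 'apple pie', 'b': 'banana cake'}
```

The extracted descriptions are what end up as the 'description' fields in the tool's JSON schema, as the FunctionModel example below shows.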
To demonstrate a tool's schema, here we use FunctionModel to print the schema a model would receive:
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelResponse, TextPart
from pydantic_ai.models.function import AgentInfo, FunctionModel

agent = Agent()


@agent.tool_plain(docstring_format='google', require_parameter_descriptions=True)
def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
    """Get me foobar.

    Args:
        a: apple pie
        b: banana cake
        c: carrot smoothie
    """
    return f'{a} {b} {c}'


def print_schema(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
    tool = info.function_tools[0]
    print(tool.description)
    #> Get me foobar.
    print(tool.parameters_json_schema)
    """
    {
        'additionalProperties': False,
        'properties': {
            'a': {'description': 'apple pie', 'type': 'integer'},
            'b': {'description': 'banana cake', 'type': 'string'},
            'c': {
                'additionalProperties': {'items': {'type': 'number'}, 'type': 'array'},
                'description': 'carrot smoothie',
                'type': 'object',
            },
        },
        'required': ['a', 'b', 'c'],
        'type': 'object',
    }
    """
    return ModelResponse(parts=[TextPart('foobar')])


agent.run_sync('hello', model=FunctionModel(print_schema))
(This example is complete, it can be run "as is")
If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object.
Here's an example where we use TestModel.last_model_request_parameters to inspect the tool schema that would be passed to the model.
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent()


class Foobar(BaseModel):
    """This is a Foobar"""

    x: int
    y: str
    z: float = 3.14


@agent.tool_plain
def foobar(f: Foobar) -> str:
    return str(f)


test_model = TestModel()
result = agent.run_sync('hello', model=test_model)
print(result.output)
#> {"foobar":"x=0 y='a' z=3.14"}
print(test_model.last_model_request_parameters.function_tools)
"""
[
    ToolDefinition(
        name='foobar',
        parameters_json_schema={
            'properties': {
                'x': {'type': 'integer'},
                'y': {'type': 'string'},
                'z': {'default': 3.14, 'type': 'number'},
            },
            'required': ['x', 'y'],
            'title': 'Foobar',
            'type': 'object',
        },
        description='This is a Foobar',
    )
]
"""
(This example is complete, it can be run "as is")
If you have a function that lacks appropriate documentation (i.e. poorly named, no type information, poor docstring, use of *args or **kwargs and suchlike) then you can still turn it into a tool that can be effectively used by the agent with the Tool.from_schema function. With this you provide the name, description and JSON schema for the function directly:
from pydantic_ai import Agent, Tool
from pydantic_ai.models.test import TestModel


def foobar(**kwargs) -> str:
    return kwargs['a'] + kwargs['b']


tool = Tool.from_schema(
    function=foobar,
    name='sum',
    description='Sum two numbers.',
    json_schema={
        'additionalProperties': False,
        'properties': {
            'a': {'description': 'the first number', 'type': 'integer'},
            'b': {'description': 'the second number', 'type': 'integer'},
        },
        'required': ['a', 'b'],
        'type': 'object',
    },
)

test_model = TestModel()
agent = Agent(test_model, tools=[tool])

result = agent.run_sync('testing...')
print(result.output)
#> {"sum":0}
Please note that validation of the tool arguments will not be performed, and this will pass all arguments as keyword arguments.
Dynamic Function tools
Tools can optionally be defined with another function: prepare, which is called at each step of a run to customize the definition of the tool passed to the model, or omit the tool completely from that step.
A prepare method can be registered via the prepare kwarg to any of the tool registration mechanisms:
- @agent.tool decorator
- @agent.tool_plain decorator
- Tool dataclass
The prepare method should be of type ToolPrepareFunc, a function which takes RunContext and a pre-built ToolDefinition, and should either return that ToolDefinition with or without modifying it, return a new ToolDefinition, or return None to indicate this tool should not be registered for that step.
Here's a simple prepare method that only includes the tool if the value of the dependency is 42.
As with the previous example, we use TestModel to demonstrate the behavior without calling a real model.
from typing import Union

from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition

agent = Agent('test')


async def only_if_42(
    ctx: RunContext[int], tool_def: ToolDefinition
) -> Union[ToolDefinition, None]:
    if ctx.deps == 42:
        return tool_def


@agent.tool(prepare=only_if_42)
def hitchhiker(ctx: RunContext[int], answer: str) -> str:
    return f'{ctx.deps} {answer}'


result = agent.run_sync('testing...', deps=41)
print(result.output)
#> success (no tool calls)
result = agent.run_sync('testing...', deps=42)
print(result.output)
#> {"hitchhiker":"42 a"}
(This example is complete, it can be run "as is")
Here's a more complex example where we change the description of the name parameter based on the value of deps.
For the sake of variation, we create this tool using the Tool dataclass.
from __future__ import annotations

from typing import Literal

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.test import TestModel
from pydantic_ai.tools import Tool, ToolDefinition


def greet(name: str) -> str:
    return f'hello {name}'


async def prepare_greet(
    ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition
) -> ToolDefinition | None:
    d = f'Name of the {ctx.deps} to greet.'
    tool_def.parameters_json_schema['properties']['name']['description'] = d
    return tool_def


greet_tool = Tool(greet, prepare=prepare_greet)
test_model = TestModel()
agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine'])

result = agent.run_sync('testing...', deps='human')
print(result.output)
#> {"greet":"hello a"}
print(test_model.last_model_request_parameters.function_tools)
"""
[
    ToolDefinition(
        name='greet',
        parameters_json_schema={
            'additionalProperties': False,
            'properties': {
                'name': {'type': 'string', 'description': 'Name of the human to greet.'}
            },
            'required': ['name'],
            'type': 'object',
        },
    )
]
"""
(This example is complete, it can be run "as is")
Agent-wide Dynamic Tool Preparation
In addition to per-tool prepare methods, you can also define an agent-wide prepare_tools function. This function is called at each step of a run and allows you to filter or modify the list of all tool definitions available to the agent for that step. This is especially useful if you want to enable or disable multiple tools at once, or apply global logic based on the current context.
The prepare_tools function should be of type ToolsPrepareFunc, which takes the RunContext and a list of ToolDefinition, and returns a new list of tool definitions (or None to disable all tools for that step).
Note
The list of tool definitions passed to prepare_tools includes both regular tools and tools from any MCP servers attached to the agent.
Here's an example that makes all tools strict if the model is an OpenAI model:
from dataclasses import replace
from typing import Union

from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition
from pydantic_ai.models.test import TestModel


async def turn_on_strict_if_openai(
    ctx: RunContext[None], tool_defs: list[ToolDefinition]
) -> Union[list[ToolDefinition], None]:
    if ctx.model.system == 'openai':
        return [replace(tool_def, strict=True) for tool_def in tool_defs]
    return tool_defs


test_model = TestModel()
agent = Agent(test_model, prepare_tools=turn_on_strict_if_openai)


@agent.tool_plain
def echo(message: str) -> str:
    return message


agent.run_sync('testing...')
assert test_model.last_model_request_parameters.function_tools[0].strict is None

# Set the system attribute of the test_model to 'openai'
test_model._system = 'openai'

agent.run_sync('testing with openai...')
assert test_model.last_model_request_parameters.function_tools[0].strict
(This example is complete, it can be run "as is")
Here's another example that conditionally filters out the tools by name if the dependency (ctx.deps) is True:
from typing import Union

from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import Tool, ToolDefinition


def launch_potato(target: str) -> str:
    return f'Potato launched at {target}!'


async def filter_out_tools_by_name(
    ctx: RunContext[bool], tool_defs: list[ToolDefinition]
) -> Union[list[ToolDefinition], None]:
    if ctx.deps:
        return [tool_def for tool_def in tool_defs if tool_def.name != 'launch_potato']
    return tool_defs


agent = Agent(
    'test',
    tools=[Tool(launch_potato)],
    prepare_tools=filter_out_tools_by_name,
    deps_type=bool,
)

result = agent.run_sync('testing...', deps=False)
print(result.output)
#> {"launch_potato":"Potato launched at a!"}
result = agent.run_sync('testing...', deps=True)
print(result.output)
#> success (no tool calls)
(This example is complete, it can be run "as is")
You can use prepare_tools to:
- Dynamically enable or disable tools based on the current model, dependencies, or other context
- Modify tool definitions globally (e.g., set all tools to strict mode, change descriptions, etc.)
If both per-tool prepare and agent-wide prepare_tools are used, the per-tool prepare is applied first to each tool, and then prepare_tools is called with the resulting list of tool definitions.
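This two-stage ordering can be sketched with plain Python stand-ins for the two hooks. Everything below is a simplified model of the behavior just described, not PydanticAI's internals: the real hooks are async and receive a RunContext, and this local ToolDefinition keeps only two fields.

```python
from dataclasses import dataclass, replace
from typing import Callable, Optional


@dataclass
class ToolDefinition:  # simplified stand-in for pydantic_ai.tools.ToolDefinition
    name: str
    strict: Optional[bool] = None


def prepare_step(
    tool_defs: list,  # pairs of (ToolDefinition, per-tool prepare or None)
    prepare_tools: Optional[Callable],
) -> list:
    """Per-tool `prepare` runs first; `prepare_tools` then sees the result."""
    prepared = []
    for tool_def, prepare in tool_defs:
        if prepare is not None:
            tool_def = prepare(tool_def)  # may modify, or return None to drop
        if tool_def is not None:
            prepared.append(tool_def)
    if prepare_tools is not None:
        prepared = prepare_tools(prepared) or []  # agent-wide pass runs last
    return prepared


def make_strict(tool_defs):
    return [replace(td, strict=True) for td in tool_defs]


tools = [
    (ToolDefinition('roll_dice'), None),
    (ToolDefinition('launch_potato'), lambda td: None),  # dropped per-tool
]
print(prepare_step(tools, make_strict))
#> [ToolDefinition(name='roll_dice', strict=True)]
```

The dropped tool never reaches the agent-wide hook, which is the key consequence of the ordering.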
Tool Execution and Retries
When a tool is executed, its arguments (provided by the LLM) are first validated against the function's signature using Pydantic. If validation fails (e.g., due to incorrect types or missing required arguments), a ValidationError is raised, and the framework automatically generates a RetryPromptPart containing the validation details. This prompt is sent back to the LLM, informing it of the error and allowing it to correct the parameters and retry the tool call.
Beyond automatic validation errors, the tool's own internal logic can also explicitly request a retry by raising the ModelRetry exception. This is useful for situations where the parameters were technically valid, but an issue occurred during execution (like a transient network error, or the tool determining the initial attempt needs modification).
from pydantic_ai import ModelRetry


def my_flaky_tool(query: str) -> str:
    if query == 'bad':
        # Tell the LLM the query was bad and it should try again
        raise ModelRetry("The query 'bad' is not allowed. Please provide a different query.")
    # ... process query ...
    return 'Success!'
ModelRetry also generates a RetryPromptPart containing the exception message, which is sent back to the LLM to guide its next attempt. Both ValidationError and ModelRetry respect the retries setting configured on the Tool or Agent.
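Put together, the retry flow amounts to a bounded loop: call the tool, and on failure feed an error message back so the model can try again. Here is a stdlib-only sketch in which ModelRetry is a local stand-in for the real exception and the model's successive argument choices are scripted as a list; run_tool_with_retries is purely illustrative, not the framework's actual loop:

```python
class ModelRetry(Exception):
    """Local stand-in for pydantic_ai.ModelRetry."""


def run_tool_with_retries(tool, scripted_args: list, retries: int = 1) -> str:
    """Illustrative only: retry a tool call up to `retries` extra times.

    `scripted_args` fakes the model, yielding the arguments it would
    produce on each successive attempt after seeing the retry prompt.
    """
    for attempt, args in enumerate(scripted_args):
        try:
            return tool(**args)
        except ModelRetry as exc:
            if attempt >= retries:
                raise  # retries exhausted; surface the error
            retry_prompt = str(exc)  # this text goes back to the model
    raise RuntimeError('model produced no further attempts')


def my_flaky_tool(query: str) -> str:
    if query == 'bad':
        raise ModelRetry("The query 'bad' is not allowed. Please provide a different query.")
    return 'Success!'


print(run_tool_with_retries(my_flaky_tool, [{'query': 'bad'}, {'query': 'good'}]))
#> Success!
```

The same loop shape applies when the failure is a Pydantic ValidationError instead of an explicit ModelRetry: the validation details become the retry prompt.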
Third-Party Tools
MCP Tools
See the MCP Client documentation for how to use MCP servers with Pydantic AI.
LangChain Tools
If you'd like to use a tool from LangChain's community tool library with Pydantic AI, you can use the pydantic_ai.ext.langchain.tool_from_langchain convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the LangChain tool, and up to the LangChain tool to raise an error if the arguments are invalid.
You will need to install the langchain-community package and any others required by the tool in question.
Here is how you can use the LangChain DuckDuckGoSearchRun tool, which requires the duckduckgo-search package:
from langchain_community.tools import DuckDuckGoSearchRun

from pydantic_ai import Agent
from pydantic_ai.ext.langchain import tool_from_langchain

search = DuckDuckGoSearchRun()
search_tool = tool_from_langchain(search)

agent = Agent(
    'google-gla:gemini-2.0-flash',
    tools=[search_tool],
)

result = agent.run_sync('What is the release date of Elden Ring Nightreign?')
print(result.output)
#> Elden Ring Nightreign is planned to be released on May 30, 2025.
ACI.dev Tools
If you'd like to use a tool from the ACI.dev tool library with Pydantic AI, you can use the pydantic_ai.ext.aci.tool_from_aci convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the ACI tool, and up to the ACI tool to raise an error if the arguments are invalid.
You will need to install the aci-sdk package, set your ACI API key in the ACI_API_KEY environment variable, and pass your ACI "linked account owner ID" to the function.
Here is how you can use the ACI.dev TAVILY__SEARCH tool:
import os

from pydantic_ai import Agent
from pydantic_ai.ext.aci import tool_from_aci

tavily_search = tool_from_aci(
    'TAVILY__SEARCH',
    linked_account_owner_id=os.getenv('LINKED_ACCOUNT_OWNER_ID'),
)

agent = Agent(
    'google-gla:gemini-2.0-flash',
    tools=[tavily_search],
)

result = agent.run_sync('What is the release date of Elden Ring Nightreign?')
print(result.output)
#> Elden Ring Nightreign is planned to be released on May 30, 2025.