Messages
Overview
Messages are the unit of communication in chat models. They are used to represent the input and output of a chat model, as well as any additional context or metadata that may be associated with a conversation.
Each message has a role (e.g., "user", "assistant") and content (e.g., text, multimodal data) with additional metadata that varies depending on the chat model provider.
LangChain provides a unified message format that can be used across chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
What is inside a message?
A message typically consists of the following pieces of information:
- Role: The role of the message (e.g., "user", "assistant").
- Content: The content of the message (e.g., text, multimodal data).
- Additional metadata: id, name, token usage, and other model-specific metadata.
Role
Roles are used to distinguish between different types of messages in a conversation and help the chat model understand how to respond to a given sequence of messages.
| Role | Description |
|---|---|
| system | Used to tell the chat model how to behave and provide additional context. Not supported by all chat model providers. |
| user | Represents input from a user interacting with the model, usually in the form of text or other interactive input. |
| assistant | Represents a response from the model, which can include text or a request to invoke tools. |
| tool | A message used to pass the results of a tool invocation back to the model after external data or processing has been retrieved. Used with chat models that support tool calling. |
| function (legacy) | A legacy role, corresponding to OpenAI's legacy function-calling API. The tool role should be used instead. |
Content
The content of a message is text or a list of dictionaries representing multimodal data (e.g., images, audio, video). The exact format of the content can vary between different chat model providers.
Currently, most chat models support text as the primary content type, with some models also supporting multimodal data. However, support for multimodal data is still limited across most chat model providers.
For more information see:
- SystemMessage -- for content which should be passed to direct the conversation.
- HumanMessage -- for content in the input from the user.
- AIMessage -- for content in the response from the model.
- Multimodality -- for more information on multimodal content.
Other Message Data
Depending on the chat model provider, messages can include other data such as:
- ID: An optional unique identifier for the message.
- Name: An optional name property which allows differentiating between different entities/speakers with the same role. Not all models support this!
- Metadata: Additional information about the message, such as timestamps, token usage, etc.
- Tool Calls: A request made by the model to call one or more tools. See tool calling for more information.
Conversation Structure
The sequence of messages passed to a chat model should follow a specific structure to ensure that the chat model can generate a valid response.
For example, a typical conversation structure might look like this:
- User Message: "Hello, how are you?"
- Assistant Message: "I'm doing well, thank you for asking."
- User Message: "Can you tell me a joke?"
- Assistant Message: "Sure! Why did the scarecrow win an award? Because he was outstanding in his field!"
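For example, this structure could be expressed with LangChain message objects as in the sketch below. This is a minimal sketch: `model` is assumed to be an already-configured LangChain chat model, and the message classes are introduced later in this guide.

```python
from langchain_core.messages import AIMessage, HumanMessage

# Build the conversation above as a list of LangChain messages.
conversation = [
    HumanMessage("Hello, how are you?"),
    AIMessage("I'm doing well, thank you for asking."),
    HumanMessage("Can you tell me a joke?"),
]

# `model` is assumed to be an existing LangChain chat model instance.
response = model.invoke(conversation)
print(response.content)
```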
Please read the chat history guide for more information on managing chat history and ensuring that the conversation structure is correct.
LangChain Messages
LangChain provides a unified message format that can be used across all chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
LangChain messages are Python objects that subclass from a BaseMessage.
The five main message types are:
- SystemMessage: corresponds to the system role
- HumanMessage: corresponds to the user role
- AIMessage: corresponds to the assistant role
- AIMessageChunk: corresponds to the assistant role, used for streaming responses
- ToolMessage: corresponds to the tool role
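A minimal sketch of how these types map onto roles in a message list. The `ToolMessage` values shown here are hypothetical and only illustrate the required fields:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)

messages = [
    SystemMessage("You are a helpful assistant."),    # role: system
    HumanMessage("What is the capital of France?"),   # role: user
    AIMessage("The capital of France is Paris."),     # role: assistant
]

# A ToolMessage carries a tool result back to the model; tool_call_id
# (hypothetical here) links it to the tool call the model requested.
tool_result = ToolMessage(content="22°C and sunny", tool_call_id="call_123")
```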
Other important messages include:
- RemoveMessage -- does not correspond to any role. This is an abstraction, mostly used in LangGraph to manage chat history.
- Legacy FunctionMessage: corresponds to the function role in OpenAI's legacy function-calling API.
You can find more information about messages in the API Reference.
SystemMessage
A SystemMessage is used to prime the behavior of the AI model and provide additional context, such as instructing the model to adopt a specific persona or setting the tone of the conversation (e.g., "This is a conversation about cooking").
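For example, a sketch of priming the model with a system message (assuming `model` is any LangChain chat model):

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("You are a friendly cooking assistant."),
    HumanMessage("How do I make a simple tomato sauce?"),
]
response = model.invoke(messages)
```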
Different chat providers may support system message in one of the following ways:
- Through a "system" message role: In this case, a system message is included as part of the message sequence with the role explicitly set as "system".
- Through a separate API parameter for system instructions: Instead of being included as a message, system instructions are passed via a dedicated API parameter.
- No support for system messages: Some models do not support system messages at all.
Most major chat model providers support system instructions via either a chat message or a separate API parameter. LangChain will automatically adapt based on the provider’s capabilities.
If the provider supports a separate API parameter for system instructions, LangChain will extract the content of a system message and pass it through that parameter.
If no system message is supported by the provider, in most cases LangChain will attempt to incorporate the system message's content into a HumanMessage or raise an exception if that is not possible.
However, this behavior is not yet consistently enforced across all implementations, and if using a less popular implementation of a chat model (e.g., an implementation from the langchain-community package) it is recommended to check the specific documentation for that model.
HumanMessage
The HumanMessage corresponds to the "user" role. A human message represents input from a user interacting with the model.
Text Content
Most chat models expect the user input to be in the form of text.
```python
from langchain_core.messages import HumanMessage

model.invoke([HumanMessage(content="Hello, how are you?")])
```
When invoking a chat model with a string as input, LangChain will automatically convert the string into a HumanMessage object. This is mostly useful for quick testing.
```python
model.invoke("Hello, how are you?")
```
Multi-modal Content
Some chat models accept multimodal inputs, such as images, audio, video, or files like PDFs.
Please see the multimodality guide for more information.
AIMessage
AIMessage is used to represent a message with the role "assistant". This is the response from the model, which can include text or a request to invoke tools. It could also include other media types like images, audio, or video -- though this is still uncommon at the moment.
```python
from langchain_core.messages import HumanMessage

ai_message = model.invoke([HumanMessage("Tell me a joke")])
ai_message  # <-- AIMessage
```
An AIMessage has the following attributes. Standardized attributes are the ones that LangChain attempts to standardize across different chat model providers; raw fields are specific to the model provider and may vary.
| Attribute | Standardized/Raw | Description |
|---|---|---|
| content | Raw | Usually a string, but can be a list of content blocks. See content for details. |
| tool_calls | Standardized | Tool calls associated with the message. See tool calling for details. |
| invalid_tool_calls | Standardized | Tool calls with parsing errors associated with the message. See tool calling for details. |
| usage_metadata | Standardized | Usage metadata for a message, such as token counts. See the Usage Metadata API Reference. |
| id | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
| response_metadata | Raw | Response metadata, e.g., response headers, logprobs, token counts. |
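A short sketch of inspecting these attributes on a response (assuming `model` is any LangChain chat model; the exact values depend on the provider):

```python
ai_message = model.invoke("Tell me a joke")

print(ai_message.content)            # raw content, usually a string
print(ai_message.tool_calls)         # standardized tool calls ([] if none)
print(ai_message.usage_metadata)     # token counts; may be None for some providers
print(ai_message.id)                 # optional identifier, ideally set by the provider
print(ai_message.response_metadata)  # provider-specific metadata such as headers or logprobs
```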
content
The content property of an AIMessage represents the response generated by the chat model.
The content is either:
- text -- the norm for virtually all chat models.
- A list of dictionaries -- each dictionary represents a content block and is associated with a type.
  - Used by Anthropic for surfacing agent thought process when doing tool calling.
  - Used by OpenAI for audio outputs. Please see multi-modal content for more information.
The content property is not standardized across different chat model providers, mostly because there are still few examples to generalize from.
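A sketch of handling both shapes of content; the block keys follow the provider's format and are shown here only as an illustration:

```python
ai_message = model.invoke("Describe the sky")

if isinstance(ai_message.content, str):
    # The common case: plain text.
    print(ai_message.content)
else:
    # A list of content blocks; each block is typically a dict with a "type".
    for block in ai_message.content:
        if isinstance(block, dict) and block.get("type") == "text":
            print(block.get("text"))
```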
AIMessageChunk
It is common to stream responses for the chat model as they are being generated, so the user can see the response in real-time instead of waiting for the entire response to be generated before displaying it.
An AIMessageChunk is returned from the stream, astream, and astream_events methods of the chat model.
For example:
```python
for chunk in model.stream([HumanMessage("what color is the sky?")]):
    print(chunk)
```
AIMessageChunk follows nearly the same structure as AIMessage, but uses a different ToolCallChunk to be able to stream tool calls in a standardized manner.
Aggregating
AIMessageChunks support the + operator to merge them into a single AIMessage. This is useful when you want to display the final response to the user.
```python
ai_message = chunk1 + chunk2 + chunk3 + ...
```
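For example, a sketch of accumulating chunks while streaming (assuming `model` is any LangChain chat model):

```python
full = None
for chunk in model.stream("Write a haiku about the sea"):
    full = chunk if full is None else full + chunk
    print(chunk.content, end="", flush=True)  # render each piece as it arrives

print()
print(full.content)  # the aggregated final response
```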
ToolMessage
This represents a message with role "tool", which contains the result of calling a tool. In addition to role and content, this message has:
- a tool_call_id field which conveys the id of the tool call that was made to produce this result.
- an artifact field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.
Please see tool calling for more information.
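For example, a tool result might be passed back to the model like this. This is a sketch; the id and values are hypothetical:

```python
from langchain_core.messages import ToolMessage

tool_message = ToolMessage(
    content="The weather in Paris is 22°C and sunny.",  # sent to the model
    tool_call_id="call_abc123",  # id of the tool call that requested this result
    artifact={"status_code": 200, "raw": "..."},  # tracked, but not sent to the model
)
```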
RemoveMessage
This is a special message type that does not correspond to any role. It is used for managing chat history in LangGraph.
Please see the following for more information on how to use the RemoveMessage:
(Legacy) FunctionMessage
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.
OpenAI Format
Inputs
Chat models also accept OpenAI's message format as input:
```python
chat_model.invoke([
    {
        "role": "user",
        "content": "Hello, how are you?",
    },
    {
        "role": "assistant",
        "content": "I'm doing well, thank you for asking.",
    },
    {
        "role": "user",
        "content": "Can you tell me a joke?",
    },
])
```
Outputs
At the moment, the model's output is returned as LangChain messages, so if you also need the output in OpenAI format you will need to convert it.
The convert_to_openai_messages utility function can be used to convert from LangChain messages to OpenAI format.
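A minimal sketch of the conversion, assuming `ai_message` is an AIMessage returned by a model and a recent langchain-core version that exports convert_to_openai_messages from langchain_core.messages:

```python
from langchain_core.messages import convert_to_openai_messages

# Convert LangChain message objects into OpenAI-style role/content dicts.
openai_messages = convert_to_openai_messages([ai_message])
print(openai_messages)
# e.g. [{"role": "assistant", "content": "..."}]
```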