Develop, deploy, scale, and manage agents with LangGraph Platform, the platform for hosting long-running, agentic workflows.
Get started with LangGraph Platform
Check out the quickstart guides for instructions on how to use LangGraph Platform to run a LangGraph application locally or deploy it to the cloud.
LangGraph Platform makes it easy to get your agent running in production, whether it is built with LangGraph or another framework, so you can focus on your app logic, not infrastructure. Deploy with one click to get a live endpoint, and use our robust APIs and built-in task queues to handle production scale.
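For illustration, here is a minimal sketch of calling a deployed endpoint with the Python langgraph_sdk client; the deployment URL, API key, and assistant name "agent" are placeholders for your own deployment:

```python
# Minimal sketch: call a deployed LangGraph Server endpoint with the Python SDK.
# The URL, API key, and assistant name "agent" are placeholders.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="https://your-deployment-url", api_key="lsv2_...")
    thread = await client.threads.create()          # thread holds conversation state
    result = await client.runs.wait(                 # block until the run finishes
        thread["thread_id"],
        "agent",                                     # graph/assistant name from your config
        input={"messages": [{"role": "user", "content": "hello"}]},
    )
    print(result)

asyncio.run(main())
```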
Streaming Support: As agents grow more sophisticated, they often benefit from streaming both token outputs and intermediate states back to the user. Without this, users are left waiting for potentially long operations with no feedback. LangGraph Server provides multiple streaming modes optimized for various application needs.
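As a sketch, these streaming modes can be consumed directly from the Python SDK; this assumes a local dev server and a graph registered as "agent":

```python
# Minimal sketch: stream incremental output from a run.
# Assumes a local LangGraph Server and an assistant named "agent" (placeholders).
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",
        input={"messages": [{"role": "user", "content": "Summarize LangGraph Platform"}]},
        stream_mode="updates",   # other modes include "values", "messages", "events"
    ):
        print(chunk.event, chunk.data)   # each chunk carries an event name and payload

asyncio.run(main())
```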
Background Runs: For agents that take longer to process (e.g., hours), maintaining an open connection can be impractical. The LangGraph Server supports launching agent runs in the background and provides both polling endpoints and webhooks to monitor run status effectively.
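A sketch of both monitoring options, assuming the Python SDK; the webhook URL and assistant name are placeholders:

```python
# Minimal sketch: launch a background run, register a webhook, then poll its status.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()
    run = await client.runs.create(
        thread["thread_id"],
        "agent",
        input={"messages": [{"role": "user", "content": "Run a long analysis"}]},
        webhook="https://example.com/hooks/run-finished",  # called when the run completes
    )
    # Poll until the run reaches a terminal state (exact status values may vary by version).
    while True:
        status = await client.runs.get(thread["thread_id"], run["run_id"])
        if status["status"] in ("success", "error", "timeout", "interrupted"):
            break
        await asyncio.sleep(5)
    print(status["status"])

asyncio.run(main())
```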
Support for long runs: Regular server setups often encounter timeouts or disruptions when handling requests that take a long time to complete. LangGraph Server's API provides robust support for these tasks by sending regular heartbeat signals, preventing unexpected connection closures during prolonged processes.
Handling Burstiness: Certain applications, especially those with real-time user interaction, may experience "bursty" request loads where numerous requests hit the server simultaneously. LangGraph Server includes a task queue, ensuring requests are handled consistently without loss, even under heavy loads.
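For example, a burst of simultaneous requests can simply be submitted as separate runs and the queue absorbs them; a sketch assuming the Python SDK, with "agent" as a placeholder assistant name:

```python
# Minimal sketch: fire many runs at once; each is queued and picked up by a worker.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    threads = await asyncio.gather(*[client.threads.create() for _ in range(20)])
    runs = await asyncio.gather(*[
        client.runs.create(
            t["thread_id"],
            "agent",
            input={"messages": [{"role": "user", "content": f"request {i}"}]},
        )
        for i, t in enumerate(threads)
    ])
    # Runs typically start as "pending" until a worker dequeues them.
    print([r["status"] for r in runs])

asyncio.run(main())
```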
Double-texting: In user-driven applications, it's common for users to send multiple messages rapidly. This "double texting" can disrupt agent flows if not handled properly. LangGraph Server offers built-in strategies to address and manage such interactions.
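A sketch of one such strategy using the Python SDK; the multitask_strategy values ("reject", "enqueue", "interrupt", "rollback") are the documented options, and the assistant name is a placeholder:

```python
# Minimal sketch: handle "double texting" by setting a multitask strategy on the second run.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()
    first = await client.runs.create(
        thread["thread_id"], "agent",
        input={"messages": [{"role": "user", "content": "Book me a flight"}]},
    )
    # The user sends a follow-up before the first run finishes:
    second = await client.runs.create(
        thread["thread_id"], "agent",
        input={"messages": [{"role": "user", "content": "Actually, make it a train"}]},
        multitask_strategy="interrupt",   # stop the in-flight run and handle the new input
    )
    print(first["run_id"], second["run_id"])

asyncio.run(main())
```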
Checkpointers and memory management: For agents needing persistence (e.g., conversation memory), deploying a robust storage solution can be complex. LangGraph Platform includes optimized checkpointers and a memory store, managing state across sessions without the need for custom solutions.
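A sketch of thread-scoped state plus the cross-thread store, assuming the Python SDK's store client; the namespace and key names are illustrative, and exact store method names may differ by SDK version:

```python
# Minimal sketch: state persists on a thread across runs (checkpointer), and the
# built-in store keeps longer-term memories across threads.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()

    # Two runs on the same thread: the checkpointer carries conversation state over.
    await client.runs.wait(thread["thread_id"], "agent",
                           input={"messages": [{"role": "user", "content": "My name is Ada"}]})
    await client.runs.wait(thread["thread_id"], "agent",
                           input={"messages": [{"role": "user", "content": "What's my name?"}]})

    # Cross-thread memory via the built-in store (illustrative namespace and key).
    await client.store.put_item(("users", "ada"), key="preferences", value={"tone": "concise"})
    item = await client.store.get_item(("users", "ada"), key="preferences")
    print(item["value"])

asyncio.run(main())
```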
Human-in-the-loop support: In many applications, users require a way to intervene in agent processes. LangGraph Server provides specialized endpoints for human-in-the-loop scenarios, simplifying the integration of manual oversight into agent workflows.
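A sketch of resuming an interrupted run from the client side, assuming a graph that pauses with LangGraph's interrupt() and a recent SDK version that accepts a resume command:

```python
# Minimal sketch: the run pauses when the graph calls interrupt(); a human then
# resumes it through the server with a resume command. Names are illustrative.
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()

    # This run stops at the graph's interrupt() and waits for human input.
    await client.runs.wait(thread["thread_id"], "agent",
                           input={"messages": [{"role": "user", "content": "Send the email"}]})

    # A reviewer approves; resume the interrupted run with their decision.
    result = await client.runs.wait(thread["thread_id"], "agent",
                                    command={"resume": "approved"})
    print(result)

asyncio.run(main())
```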
LangGraph Studio: Enables visualization, interaction, and debugging of agentic systems that implement the LangGraph Server API protocol. Studio also integrates with LangSmith to enable tracing, evaluation, and prompt engineering.
Deployment: There are three ways to deploy on LangGraph Platform: Cloud, Hybrid, and Self-Hosted.