The New Skill in AI is Not Prompting, It's Context Engineering
Context Engineering is a new term gaining traction in the AI world. The conversation is shifting from "prompt engineering" to a broader, more powerful concept: Context Engineering. Tobi Lutke describes it as "the art of providing all the context for the task to be plausibly solvable by the LLM," and he is right.
With the rise of agents, what information we load into the "limited working memory" becomes more important. We are seeing that the main thing that determines whether an agent succeeds or fails is the quality of the context you give it. Most agent failures are not model failures anymore; they are context failures.
What is the Context?
To understand context engineering, we must first expand our definition of "context." It isn't just the single prompt you send to an LLM. Think of it as everything the model sees before it generates a response.

- Instructions / System Prompt: An initial set of instructions that define the behavior of the model during a conversation; it can and should include examples, rules, and so on.
- User Prompt: Immediate task or question from the user.
- State / History (short-term memory): The current conversation, including user and model responses that have led to this moment.
- Long-Term Memory: Persistent knowledge base, gathered across many prior conversations, containing learned user preferences, summaries of past projects, or facts it has been told to remember for future use.
- Retrieved Information (RAG): External, up-to-date knowledge; relevant information from documents, databases, or APIs to answer specific questions.
- Available Tools: Definitions of all the functions or built-in tools it can call (e.g., check_inventory, send_email).
- Structured Output: Definitions of the format of the model's response, e.g. a JSON object.
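Taken together, these components form the payload assembled before every model call. A minimal sketch in plain Python follows; the message structure loosely mirrors common chat-completion request shapes, and the function name `build_context` and all field values are illustrative, not any specific API.

```python
# Sketch: combining every context component into one request payload.
# Structure loosely mirrors chat-completion APIs; names are illustrative.

def build_context(system_prompt, history, user_prompt,
                  long_term_memory=None, retrieved=None,
                  tools=None, output_schema=None):
    """Assemble all context components into a single request payload."""
    messages = [{"role": "system", "content": system_prompt}]
    if long_term_memory:  # persistent facts learned across past conversations
        messages.append({"role": "system",
                         "content": "Known about the user: " + "; ".join(long_term_memory)})
    messages.extend(history)  # short-term memory: the conversation so far
    if retrieved:  # RAG: external knowledge relevant to this request
        messages.append({"role": "system",
                         "content": "Relevant documents:\n" + "\n".join(retrieved)})
    messages.append({"role": "user", "content": user_prompt})

    payload = {"messages": messages}
    if tools:  # definitions of the functions the model may call
        payload["tools"] = tools
    if output_schema:  # structured-output constraint, e.g. a JSON schema
        payload["response_format"] = output_schema
    return payload


payload = build_context(
    system_prompt="You are a helpful scheduling assistant.",
    history=[{"role": "user", "content": "Hi!"},
             {"role": "assistant", "content": "Hello, how can I help?"}],
    user_prompt="Are you around for a quick sync tomorrow?",
    long_term_memory=["Prefers morning meetings"],
    retrieved=["Calendar: fully booked tomorrow"],
    tools=[{"name": "send_invite"}],
)
```

The point of the sketch is that "context" is the whole payload, not just the final user message.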
Why It Matters: From Cheap Demo to Magical Product
The secret to building truly effective AI agents has less to do with the complexity of the code you write, and everything to do with the quality of the context you provide.
Building agents is less about the code you write or the framework you use. The difference between a cheap demo and a "magical" agent comes down to the quality of the context. Imagine an AI assistant asked to schedule a meeting based on a simple email:
Hey, just checking if you’re around for a quick sync tomorrow.
The "Cheap Demo" Agent has poor context. It sees only the user's request and nothing else. Its code might be perfectly functional—it calls an LLM and gets a response—but the output is unhelpful and robotic:
Thank you for your message. Tomorrow works for me. May I ask what time you had in mind?
The "Magical" Agent is powered by rich context. The code's primary job isn't to figure out how to respond, but to gather the information the LLM needs to fulfill its goal. Before calling the LLM, you would extend the context to include:
- Your calendar information (which shows you're fully booked).
您的日历信息(显示您已订满)。 - Your past emails with this person (to determine the appropriate informal tone).
您过去与此人的电子邮件(以确定适当的非正式语气)。 - Your contact list (to identify them as a key partner).
您的联系人列表(用于将他们识别为关键合作伙伴)。 - Tools for send_invite or send_email.
用于 send_invite 或 send_email 的工具。
Then you can generate a response.
Hey Jim! Tomorrow’s packed on my end, back-to-back all day. Thursday AM free if that works for you? Sent an invite, lmk if it works.
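The gathering step behind a reply like this can be sketched as follows. Every `fetch_*` stub and the `gather_context` function are hypothetical stand-ins for real calendar, email, and contact integrations; the point is only that the gathering happens before the model is called.

```python
# Hypothetical sketch: gather rich context before calling the model.
# Each fetch_* stub stands in for a real calendar / email / CRM integration.

def fetch_calendar(date):
    """Stand-in for a calendar API lookup."""
    return "Fully booked " + date + "; Thursday 9-11am free."

def fetch_email_history(sender):
    """Stand-in for scanning past threads to infer tone."""
    return "Tone with " + sender + ": informal, first-name basis."

def fetch_contact(sender):
    """Stand-in for a contact/CRM lookup."""
    return sender + " is a key partner."

def gather_context(email_from, email_body):
    """Collect everything the LLM needs before it drafts a reply."""
    return {
        "system": "You schedule meetings on the user's behalf.",
        "calendar": fetch_calendar("tomorrow"),
        "email_history": fetch_email_history(email_from),
        "contact_info": fetch_contact(email_from),
        "tools": ["send_invite", "send_email"],
        "request": email_body,
    }

ctx = gather_context("Jim", "Hey, just checking if you're around for a quick sync tomorrow.")
```

With this context in hand, the same model that produced the robotic reply can produce the "magical" one.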
The magic isn't in a smarter model or a more clever algorithm. It's about providing the right context for the right task. This is why context engineering matters. Agent failures aren't only model failures; they are context failures.
From Prompt to Context Engineering
What is context engineering? While "prompt engineering" focuses on crafting the perfect set of instructions in a single text string, context engineering is a far broader discipline. Let's put it simply:
Context Engineering is the discipline of designing and building dynamic systems that provide the right information and tools, in the right format, at the right time, to give an LLM everything it needs to accomplish a task.
Context Engineering is:
- A System, Not a String: Context isn't just a static prompt template. It’s the output of a system that runs before the main LLM call.
- Dynamic: Created on the fly, tailored to the immediate task. For one request this could be calendar data; for another, emails or a web search.
- About the right information and tools at the right time: The core job is to ensure the model isn't missing crucial details ("Garbage In, Garbage Out"). This means providing both knowledge (information) and capabilities (tools) only when required and helpful.
- Where the format matters: How you present information matters. A concise summary is better than a raw data dump. A clear tool schema is better than a vague instruction.
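A "system, not a string" can start as simply as a routing function that decides, per request, which information sources and tools to pull into the context. A minimal sketch follows; the keyword-based routing rules and every source name are made up for illustration, and a real system would use far richer signals than substring matching.

```python
# Sketch of a dynamic context pipeline: different requests pull in
# different information and tools. Routing rules are illustrative only.

def assemble_context(request: str) -> dict:
    """Runs before the main LLM call; returns tailored context, not a static string."""
    context = {"instructions": "Be concise and helpful.",
               "tools": [], "information": []}

    text = request.lower()
    if "meeting" in text or "schedule" in text:
        context["information"].append("calendar: next free slot Thursday 9am")
        context["tools"].append("send_invite")
    if "email" in text or "reply" in text:
        context["information"].append("email thread summary: informal tone")
        context["tools"].append("send_email")
    if not context["information"]:
        context["tools"].append("web_search")  # fall back to retrieval

    return context


meeting_ctx = assemble_context("Schedule a meeting with Jim")
other_ctx = assemble_context("What's the weather in Berlin?")
```

The same entry point yields different context for different requests, which is exactly what a static prompt template cannot do.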
Conclusion
Building powerful and reliable AI agents is becoming less about finding a magic prompt or waiting for model updates. It is about engineering the context: providing the right information and tools, in the right format, at the right time. It's a cross-functional challenge that involves understanding your business use case, defining your outputs, and structuring all the necessary information so that an LLM can "accomplish the task."
Acknowledgements
This overview was created with the help of deep and manual research, drawing inspiration and information from several excellent resources, including:
- Tobi Lutke tweet
- Karpathy tweet
- The rise of "context engineering"
- Own your context window
- Context Engineering by Simon Willison
- Context Engineering for Agents
Thanks for reading! If you have any questions or feedback, please let me know on Twitter or LinkedIn.