[{"data":1,"prerenderedAt":780},["ShallowReactive",2],{"featured-posts":3},[4,233,288,654],{"id":5,"title":6,"author":7,"body":8,"category":217,"date":218,"description":219,"extension":220,"featured":221,"home_position":222,"image":223,"meta":224,"navigation":221,"order":225,"path":226,"seo":227,"status":228,"stem":229,"tags":230,"__hash__":232},"content/articles/Agent的前世今生.md","Agent的前世今生","sibuchen",{"type":9,"value":10,"toc":213},"minimark",[11,16,25,31,40,47,55,59,64,69,79,90,112,140,151,155,158,163,168,173,178,181,186,191,197,203,207],[12,13,15],"h1",{"id":14},"agent的由来","Agent的由来",[17,18,19,24],"p",{},[20,21,23],"span",{"style":22},"color:red","符号主义 AI（Symbolic AI）","，常被称为传统人工智能，其核心信念是：智能源于对符号的逻辑操作。这里的符号是人类可读的实体（如词语、概念），操作则遵循严格的逻辑规则。\n其主要优势在于透明和可解释。由于推理步骤明确，其决策过程可以被完整追溯，这在金融、医疗等高风险领域至关重要。然而，其“阿喀琉斯之踵”在于脆弱性：它依赖于一个完备的规则体系，但在充满模糊和例外的现实世界中，任何未被覆盖的新情况都可能导致系统失灵，这就是所谓的“知识获取瓶颈”。",[17,26,27,30],{},[20,28,29],{"style":22},"亚符号主义 AI（Sub-symbolic AI）","，或称连接主义，则认为知识并非显式的规则，而是内隐地分布在一个由大量神经元组成的复杂网络中，是从海量数据中学习到的统计模式。神经网络和深度学习是其代表。\n他不是通过学习“猫有四条腿、毛茸茸、会喵喵叫”这样的规则来认识猫的，而是在看过成千上万张猫的图片后，大脑中的神经网络能辨识出“猫”这个概念的视觉模式 。这种方法的强大之处在于其模式识别能力和对噪声数据的鲁棒性 。它能够轻松处理图像、声音等非结构化数据，这在符号主义 AI 看来是极其困难的任务。\n然而，这种强大的直觉能力也伴随着不透明性。亚符号主义系统通常被视为一个黑箱（Black Box）。它能以惊人的准确率识别出图片中的猫，但你若问它“为什么你认为这是猫？”，它很可能无法给出一个合乎逻辑的解释。此外，它在纯粹的逻辑推理任务上表现不佳，有时会产生看似合理却事实错误的幻觉 。",[32,33,34],"blockquote",{},[17,35,36],{},[37,38,39],"em",{},"符号主义试图将人类的知识显式地编码给机器，而联结主义则试图创造出能够像人类一样学习知识的机器。",[17,41,42,43,46],{},"长久以来，符号主义和亚符号主义这两大阵营如同两条平行线，各自发展。为克服上述两种范式的局限，一种“大和解”的思想开始兴起，这就是",[20,44,45],{"style":22},"神经符号主义 AI（Neuro-Symbolic AI）","，也称神经符号混合主义。它的目标，是融合两大范式的优点，创造出一个既能像神经网络一样从数据中学习，又能像符号系统一样进行逻辑推理的混合智能体。它试图弥合感知与认知、直觉与理性之间的鸿沟。\n大语言模型驱动的智能体是神经符号主义的一个极佳实践范例。其内核是一个巨大的神经网络，使其具备模式识别和语言生成能力。然而，当它工作时，它会生成一系列结构化的中间步骤，如思想、计划或 API 
调用，这些都是明确的、可操作的符号。通过这种方式，它实现了感知与认知、直觉与理性的初步融合。",[17,48,49,50],{},"符号主义、亚符号主义与神经符号混合主义的知识表示范式：\n",[51,52],"img",{"alt":53,"src":54},"","/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/%E7%AC%A6%E5%8F%B7%E4%B8%BB%E4%B9%89%E3%80%81%E4%BA%9A%E7%AC%A6%E5%8F%B7%E4%B8%BB%E4%B9%89%E4%B8%8E%E7%A5%9E%E7%BB%8F%E7%AC%A6%E5%8F%B7%E6%B7%B7%E5%90%88%E4%B8%BB%E4%B9%89%E7%9A%84%E7%9F%A5%E8%AF%86%E8%A1%A8%E7%A4%BA%E8%8C%83%E5%BC%8F.png",[12,56,58],{"id":57},"agent的分类","Agent的分类",[17,60,61],{},[51,62],{"alt":53,"src":63},"/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/Agent%E7%9A%84%E5%88%86%E7%B1%BB.png",[65,66,68],"h2",{"id":67},"agent-vs-workflow","Agent VS Workflow",[17,70,71,72,75,76],{},"尽管 Workflow 和 Agent 都旨在实现任务自动化，但其底层逻辑、核心特征和适用场景却截然不同。\n简单来说，",[20,73,74],{"style":22},"Workflow 是让 AI 按部就班地执行指令，而 Agent 则是赋予 AI 自由度去自主达成目标。","在 AI Workflow 中 LLM 只是一个节点，它只能盘活一个节点，主导的还是机械化代码，LLM 只是辅助。而在 AI Agent 中 LLM 作为主导，自主判断是否调用工具（机械化代码），主导的是 LLM，机械化代码只是辅助。\n",[51,77],{"alt":53,"src":78},"/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/Workflow%20%E5%92%8C%20Agent%20%E7%9A%84%E5%B7%AE%E5%BC%82.png",[17,80,81,82,85,86,89],{},"如图所示，工作流是一种传统的自动化范式，其核心是",[20,83,84],{"style":22},"对一系列任务或步骤进行预先定义的、结构化的编排。","它本质上是一个精确的、静态的流程图，规定了在何种条件下、以何种顺序执行哪些操作。",[20,87,88],{"style":22},"整个过程的每一步、每一个判断条件都被精确地预先设定。","(符号主义，即我们传统的编程思维)",[17,91,92,93,96,97,100,101,106,107,111],{},"与工作流不同，基于大型语言模型的智能体是一个",[20,94,95],{"style":22},"具备自主性的、以目标为导向的系统","。它不仅仅是执行预设指令，而是能够",[20,98,99],{"style":22},"在一定程度上理解环境、进行推理、制定计划，并动态地采取行动以达成最终目标","。LLM 在其中扮演着“大脑”的角色。一个典型的例子，便是我们创建的智能旅行助手 ",[102,103,105],"a",{"href":104},"/labs/nova","Nova"," 。当我们向它下达一个新指令，例如：",[108,109,110],"strong",{},"“明天我要去邵阳，有什么推荐景点吗？”"," 它的处理过程充分展现了其自主性：",[113,114,115,125,131],"ol",{},[116,117,118,121,122],"li",{},[108,119,120],{},"规划与工具调用："," Agent 首先会把任务拆解为两个步骤：① 查询天气；② 基于天气推荐景点。随即，它会自主选择并调用“天气查询 API”，并将“邵阳”作为参数传入。",[108,123,124],{},"(注意在 Nova 的测试案例中，我们也并没有规定要“先查天气”，甚至也没有“要查天气”的指示，这都是 Nova 
自主决策的结果)",[116,126,127,130],{},[108,128,129],{},"推理与决策："," 假设 API 返回结果为“阴天”。Agent 的 LLM 大脑会基于这个信息进行推理：“需要搜索适合阴天游玩的邵阳景点”。接着，它会根据这个判断，在它的知识库中或再调用其它的搜索引擎工具，筛选出符合要求的景点，如瑶寨、白水洞等。",[116,132,133,136,137],{},[108,134,135],{},"生成结果："," 最后，Agent 会综合信息，给出一个完整的、人性化的回答：",[37,138,139],{},"“根据查询结果，邵阳明天是阴天，气温11摄氏度。在这样的天气条件下，我为您推荐游览瑶寨。瑶寨在阴天时景色宁静，非常适合摄影和放松，特别是其独特的丹霞地貌在阴天会显得格外迷人。祝您旅途愉快！”",[17,141,142,143,147,148],{},"在这个过程中，没有任何写死的",[144,145,146],"code",{},"if 天气=阴天 then 推荐瑶寨","的规则。如果天气是“晴天”，Agent 会自主推理并调用合适的工具，最终可能会推荐崀山、南山牧场等景点。",[108,149,150],{},"这种基于实时信息进行动态推理和决策的能力，正是 Agent 的核心价值所在。",[12,152,154],{"id":153},"agent的发展","Agent的发展",[17,156,157],{},"前置知识了解：",[32,159,160],{},[17,161,162],{},"符号主义：专家系统、SHRDLU、ELIZA、“深蓝”计算机",[32,164,165],{},[17,166,167],{},"过渡时期：“心智社会”理论（去中心化）",[32,169,170],{},[17,171,172],{},"联结主义：机器学习/深度学习、反向传播算法、卷积神经网络、Transformer模型",[32,174,175],{},[17,176,177],{},"行为主义：强化学习、TD-Gammon、AlphaGo",[17,179,180],{},"概念区分：",[32,182,183],{},[17,184,185],{},"机器学习/深度学习：静态数据集-预训练(背景知识)-微调(专业知识)-感知问题-学习能力",[32,187,188],{},[17,189,190],{},"强化学习：环境交互数据-专业知识-决策问题-决策能力",[17,192,193,194],{},"智能体发展演进时间线（未完全版）:\n",[51,195],{"alt":53,"src":196},"/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/%E6%99%BA%E8%83%BD%E4%BD%93%E5%8F%91%E5%B1%95%E6%BC%94%E8%BF%9B%E6%97%B6%E9%97%B4%E7%BA%BF%EF%BC%88%E6%9C%AA%E5%AE%8C%E5%85%A8%E7%89%88%EF%BC%89.png",[17,198,199,200],{},"AI Agent 
技术栈概览：\n",[51,201],{"alt":53,"src":202},"/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/AI%20Agent%20%E6%8A%80%E6%9C%AF%E6%A0%88%E6%A6%82%E8%A7%88.png",[12,204,206],{"id":205},"现代agent的基础架构","现代Agent的基础架构",[17,208,209,210],{},"LLM驱动的智能体核心组件架构：\n",[51,211],{"alt":53,"src":212},"/assets/Agent%E7%9A%84%E5%89%8D%E4%B8%96%E4%BB%8A%E7%94%9F/LLM%E9%A9%B1%E5%8A%A8%E7%9A%84%E6%99%BA%E8%83%BD%E4%BD%93%E6%A0%B8%E5%BF%83%E7%BB%84%E4%BB%B6%E6%9E%B6%E6%9E%84.png",{"title":53,"searchDepth":214,"depth":214,"links":215},2,[216],{"id":67,"depth":214,"text":68},"agent","2026-02-19","如果你还不知道什么是Agent，亦或者是想探究Agent的奥秘，那不妨先停下你匆匆的脚步，用一盏茶的时间，随我一同闯入Agent的世界，探索Agent的前世与今生......","md",true,3,"/images/Agent的前世今生.png",{},1,"/articles/agent",{"title":6,"description":219},null,"articles/Agent的前世今生",[231,7],"AIAgent","RS2eoPfPyvbhkp-0RdfjaRcVIx7YfT-e94cU1B-LdVM",{"id":234,"title":105,"author":7,"body":235,"category":217,"date":279,"description":280,"extension":220,"featured":221,"home_position":214,"image":228,"meta":281,"navigation":221,"order":222,"path":104,"seo":282,"status":283,"stem":284,"tags":285,"__hash__":287},"content/labs/Nova.md",{"type":9,"value":236,"toc":277},[237,242,246,256,267],[32,238,239],{},[17,240,241],{},"Nova （诺瓦）在拉丁语意为“新的”，天文学中指“新星”。每一次旅行都是新的开始，发现新的风景~",[12,243,245],{"id":244},"前置知识智能体的构成与运行原理","前置知识：智能体的构成与运行原理",[17,247,248,249,252,253],{},"智能体并非一次性完成任务，而是通过一个持续的循环与环境进行交互，这个核心机制被称为",[20,250,251],{"style":22},"智能体循环 (Agent Loop)","：\n",[51,254],{"alt":53,"src":255},"/assets/Nova/%E6%99%BA%E8%83%BD%E4%BD%93%E4%B8%8E%E7%8E%AF%E5%A2%83%E4%BA%A4%E4%BA%92%E7%9A%84%E5%9F%BA%E6%9C%AC%E5%BE%AA%E7%8E%AF.png",[17,257,258,259,262,263,266],{},"在工程实践中，为了让 LLM 能够有效驱动这个循环，我们需要一套明确的",[20,260,261],{"style":22},"交互协议 (Interaction Protocol)","来规范其与环境之间的信息交换。\n在许多现代智能体框架中，这一协议体现在对智能体每一次输出的结构化定义上。智能体的输出不再是单一的自然语言回复，而是一段遵循特定格式的文本，其中明确地展示了其内部的推理过程与最终决策。\n由于该循环结构通常由 Thought、Action、Observation 
三个部分构成，因此被称为",[20,264,265],{"style":22},"Thought-Action-Observation 交互范式","。通过它，LLM 智能体得以将内部的语言推理能力，与外部环境的真实信息和工具操作能力有效地结合起来。",[12,268,270,271],{"id":269},"源码位置github","源码位置：",[102,272,276],{"href":273,"rel":274},"https://github.com/sibuchen/Nova--TravelAgent",[275],"nofollow","Github",{"title":53,"searchDepth":214,"depth":214,"links":278},[],"2026-03-20","Nova 是一个由作者设计的简单 Agent Demo，集中演示了基于 Thought-Action-Observation 循环的智能体所具备的四项基本能力：任务分解、工具调用、上下文理解和结果合成。通过这种循环的不断迭代，Nova 得以将一个模糊的用户意图，转化为一系列具体、可执行的步骤，并最终达成目标。",{},{"title":105,"description":280},"ARCHIVED","labs/Nova",[231,286,7],"ReAct","rMZtyutqD2YFPZZkfSXZuFxCwNcPIDVEMCC6ipKjQS8",{"id":289,"title":290,"author":7,"body":291,"category":217,"date":645,"description":646,"extension":220,"featured":221,"home_position":647,"image":228,"meta":648,"navigation":221,"order":214,"path":649,"seo":650,"status":283,"stem":651,"tags":652,"__hash__":653},"content/labs/WenJian.md","WenJian",{"type":9,"value":292,"toc":634},[293,298,302,307,311,328,332,335,338,349,353,358,501,505,533,537,570,574,626],[32,294,295],{},[17,296,297],{},"PS：“问”代表推理与探询，“剑”代表行动与决断。WenJian (问剑) 如侠客行走江湖，每遇迷障，先凝神“问”道于心（Reason），随即挥“剑”破局（Action）。剑出必有回响（Observation），回响再引剑招，往复之间，迷雾散尽 🗡~",[12,299,301],{"id":300},"前置知识reactreasoning-and-acting","前置知识：ReAct（Reasoning and Acting）",[32,303,304],{},[17,305,306],{},"推理使得行动更具有目的性，而行动则为推理提供了事实依据。",[65,308,310],{"id":309},"react过程","ReAct过程",[113,312,313,316,319,322],{},[116,314,315],{},"Thought（思考）：内心独白。分析当前情况、分解任务、制定下一步计划 / 反思上一步结果。例如：决定执行 Search 函数并传入参数 \"sibuchen\"",[116,317,318],{},"Action（行动）：具体动作。调用外部工具 / 输出。Search(\"sibuchen\")",[116,320,321],{},"Observation（观察）：环境变化。从外部工具返回的结果。\"计科学生\"",[116,323,324,325],{},"如此循环，上下文不断增加，直到 Action 为最终输出。\nReAct 
范式中的“思考-行动-观察”协同循环图解：\n",[51,326],{"alt":53,"src":327},"/assets/WenJian/ReAct%20%E8%8C%83%E5%BC%8F%E4%B8%AD%E7%9A%84%E2%80%9C%E6%80%9D%E8%80%83-%E8%A1%8C%E5%8A%A8-%E8%A7%82%E5%AF%9F%E2%80%9D%E5%8D%8F%E5%90%8C%E5%BE%AA%E7%8E%AF.png",[65,329,331],{"id":330},"react场景","ReAct场景",[17,333,334],{},"需要调用外部工具API的场景：查询实时信息、搜索专业知识、使用专业工具（计算器、代码解释器）、操作数据库、调用第三方服务API",[65,336,337],{"id":337},"tools的三要素",[113,339,340,343,346],{},[116,341,342],{},"名称：一个简洁、唯一的标识符，供智能体在Action中调用",[116,344,345],{},"描述：一段清晰的自然语言描述，说明这个工具的用途。最为关键。LLM依赖此描述来判断何时调用此工具",[116,347,348],{},"执行逻辑：真正执行的函数/方法",[65,350,352],{"id":351},"wenjian-vs-nova","WenJian VS Nova",[354,355,357],"h3",{"id":356},"️-核心差异-differences","🛠️ 核心差异 (Differences)",[359,360,361,375],"table",{},[362,363,364],"thead",{},[365,366,367,372],"tr",{},[368,369,371],"th",{"align":370},"left","模块",[368,373,374],{"align":370},"WenJian 进阶实现",[376,377,378,395,433],"tbody",{},[365,379,380,388],{},[381,382,383],"td",{"align":370},[108,384,385],{},[144,386,387],{},"config/settings.py",[381,389,390,391,394],{"align":370},"1. 完善的 ",[108,392,393],{},"配置检查"," 验证机制",[365,396,397,404],{},[381,398,399],{"align":370},[108,400,401],{},[144,402,403],{},"llm/client.py",[381,405,406,407,410,411,414,415,418,419,421,422,425,426,428,429,432],{"align":370},"1. ",[108,408,409],{},"自动读取"," setting 配置文件 (无需手动传参) ",[412,413],"br",{}," 2. 严密的 ",[108,416,417],{},"超时控制"," ",[412,420],{}," 3. ",[108,423,424],{},"强制熔断"," 机制 (严防模型幻觉) ",[412,427],{}," 4. ",[108,430,431],{},"流式响应"," 及其异常保护处理",[365,434,435,442],{},[381,436,437],{"align":370},[108,438,439,441],{},[144,440,217],{}," 核心逻辑",[381,443,406,444,447,448,450,451,454,455,421,457,460,461,428,463,466,467,469,470,473,474,476,477,418,480,482,483,486,487,489,490,493,494,496,497,500],{"align":370},[108,445,446],{},"动态工具箱"," (Prompt 实时注入) ",[412,449],{}," 2. 
",[108,452,453],{},"单样本 (One-shot)"," 引导逻辑 ",[412,456],{},[108,458,459],{},"工具执行器 (Executor)"," 统一管理 ",[412,462],{},[108,464,465],{},"工厂模式 (Factory)"," 实现配置与逻辑深度解耦 ",[412,468],{}," 5. 支持运行时 ",[108,471,472],{},"动态注册/修改"," 工具 ",[412,475],{}," 6. 规范的 ",[108,478,479],{},"LLM 输出解析器",[412,481],{}," 7. ",[108,484,485],{},"提示词模板拆分"," (System + User 分离响应) ",[412,488],{}," 8. ",[108,491,492],{},"三重熔断"," 安全架构 (Prompt + Client + Core) ",[412,495],{}," 9. 极其清晰的 ",[108,498,499],{},"ReAct 状态机与记忆"," 链路",[354,502,504],{"id":503},"共同基因-similarities","🤝 共同基因 (Similarities)",[506,507,508,515,521,527],"ul",{},[116,509,510,511,514],{},"✅ ",[108,512,513],{},"核心范式",": 均遵循标准 ReAct 推理闭环逻辑",[116,516,510,517,520],{},[108,518,519],{},"容错能力",": 内置标准循环计数 (容错机制 i=5)",[116,522,510,523,526],{},[108,524,525],{},"输出净化",": 自动截断多余的冗余 Thought-Action 对",[116,528,510,529,532],{},[108,530,531],{},"记忆溯源",": 完整支持多轮历史对话上下文记忆",[65,534,536],{"id":535},"react的优势","ReAct的优势",[113,538,539,546,553,563],{},[116,540,541,542],{},"思考（易产生幻觉）+ 行动（易缺乏规划）=  ",[20,543,545],{"style":544},"color:#d83931","相互影响",[116,547,548,549,552],{},"形成”Thought + Action + Observation“链条  -- ",[20,550,551],{"style":544},"Agent所有行为公开透明"," -- 高可解释性 -- 有助于 理解、信任、调试 Agent",[116,554,555,558,559,562],{},[20,556,557],{"style":544},"动态规划与纠错"," -- 没用一次性生成完整计划 而是 ",[20,560,561],{"style":544},"” 走一步，看一步 “"," 每一步的 Observation 都会影响下一步的 Thought 和 Action ，可以实现Agent自我调优。例如，在 WenJian 查询 ” 张雪峰怎么了？“ 时，它认为 Observation 的结果可能是谣言 于是再执行了相同的 Action 并细化了搜索参数。",[116,564,565,566,569],{},"Agent = LLM + Tools + History + Core  --  实现了",[20,567,568],{"style":544},"LLM（亚符号）与外部Tools（符号）的深度结合"," 有效避免了LLM幻觉（例如 计算 解析 搜索任务）",[65,571,573],{"id":572},"react的劣势","ReAct的劣势",[113,575,576,590,600,617],{},[116,577,578,581,582,585,586],{},[20,579,580],{"style":544},"过度依赖LLM ，LLM的逻辑推理能力直接影响 Thought 的有效/正确规划，LLM的指令遵循能力与格式化输出能力直接影响 Action 的有效性。","这就是为什么 WenJian 要在 core 中实现对各自可能错误的”驳回“。甚至",[20,583,584],{"style":544}," Agent 的效果会受到 LLM 训练时的数据集影响","，例如 WenJian 在搜索 ”张雪峰怎么了？“ 时，LLM 出现了 
“2026是未来时间，该消息是谣言”的误判。",[20,587,589],{"style":588},"color:#4f81bd","--  尝试不同的模型/参数",[116,591,592,595,596,599],{},[20,593,594],{"style":544},"执行效率低下","，每次 Thought 只规划一步（Action），每次得到 Observation 才进行下一次 Thought。一个任务需要",[20,597,598],{"style":544},"多次执行串行的 ReAct 循环，需要多次调用 LLM","，需要消耗大量的时间。",[116,601,602,605,606,609,610,613,614],{},[20,603,604],{"style":544},"提示词的脆弱性，整个机制建立在一个精心设计的提示词模板上，模板中任何一个微小的变化，甚至是用词的差异，都会影响 LLM 的行为。","例如，在 prompt 中去掉对 Observation 的描述（\"你当前正处于一个 ",[108,607,608],{},"Thought -> Action -> Observation"," 的闭环决策链中\"）可以大幅度降低幻觉。另外，",[20,611,612],{"style":544},"由于 Thought 中心化堆积在一个 LLM 中，导致在处理复杂的任务时提示词容易失效，LLM 出现越权行为。","例如 WenJian 在处理“邵阳有哪些好玩的地方？”时，本意是想先搜索邵阳的热门景点，再搜索各个景点的具体介绍与推荐，但是由于上下文的堆砌，导致 LLM 直接幻想出了 Observation 的内容，并自我迭代“Thought + Action + Observation”链条。",[20,615,616],{"style":588},"--  为 prompt 添加少样本",[116,618,619,622,623],{},[20,620,621],{"style":544},"不适合需要规划的长期任务，步进式决策模式使得 Agent 缺乏一个全局的、长远的规划，容易陷入绕远路/原地打转的循环当中。","例如 WenJian 在处理“张雪峰怎么了？”时，出现时间的误判后就执着于搜索“张雪峰社交媒体账号头像的颜色”，在无关的搜索中陷入了死循环。",[20,624,625],{"style":588},"--  打印完整 ReAct 流程进行分析",[12,627,629,630],{"id":628},"源码地址github","源码地址：",[102,631,276],{"href":632,"rel":633},"https://github.com/sibuchen/WenJian--ReActAgent",[275],{"title":53,"searchDepth":214,"depth":214,"links":635},[636,637,638,639,643,644],{"id":309,"depth":214,"text":310},{"id":330,"depth":214,"text":331},{"id":337,"depth":214,"text":337},{"id":351,"depth":214,"text":352,"children":640},[641,642],{"id":356,"depth":222,"text":357},{"id":503,"depth":222,"text":504},{"id":535,"depth":214,"text":536},{"id":572,"depth":214,"text":573},"2026-03-25","WenJian (问剑) 是一个高度解耦、易于扩展的轻量级 AIAgent 框架。它基于经典的 ReAct (Reason + Action) 范式实现，允许大语言模型（LLM）在解决复杂问题时，通过“思考（Thought）- 行动（Action）- 
观察（Observation）”的循环，自主调用外部工具获取信息并得出最终结论。",4,{},"/labs/wenjian",{"title":290,"description":646},"labs/WenJian",[231,286,7],"ALLdwd0GY86jhyyPXbtjOtxgNokfK0gNnS3dIKjgmdo",{"id":655,"title":656,"author":7,"body":657,"category":217,"date":771,"description":772,"extension":220,"featured":221,"home_position":225,"image":228,"meta":773,"navigation":221,"order":225,"path":774,"seo":775,"status":283,"stem":776,"tags":777,"__hash__":779},"content/labs/ZhiQi.md","ZhiQi",{"type":9,"value":658,"toc":766},[659,664,668,673,677,688,692,712,716,760],[32,660,661],{},[17,662,663],{},"PS：ZhiQi (执棋) 如棋盘博弈，先观全局、定谋略 (Plan)，再步步为营、随机应变 (ReAct)。未落子时，全盘局势已在心中推演完毕。它将宏大的困局拆解为一个个精妙的定式，步步为营，运筹帷幄之中，决胜千里之外。♟~",[12,665,667],{"id":666},"前置知识psplan-and-solve","前置知识：P&S（Plan-and-Solve）",[32,669,670],{},[17,671,672],{},"父Agent + 子Agent（Planner、Executor）",[65,674,676],{"id":675},"ps过程","P&S过程",[113,678,679,682],{},[116,680,681],{},"规划阶段 (Planning Phase)： 首先，智能体会接收用户的完整问题。它的第一个任务不是直接去解决问题或调用工具，而是将问题分解，并制定出一个清晰、分步骤的行动计划。这个计划本身就是一次大语言模型的调用产物。",[116,683,684,685],{},"执行阶段 (Solving Phase)： 在获得完整的计划后，智能体进入执行阶段。它会严格按照计划中的步骤，逐一执行。每一步的执行都可能是一次独立的 LLM 调用，或者是对上一步结果的加工处理，直到计划中的所有步骤都完成，最终得出答案。\n",[51,686],{"alt":53,"src":687},"/assets/ZhiQi/Plan-and-Solve%20%E8%8C%83%E5%BC%8F%E7%9A%84%E4%B8%A4%E9%98%B6%E6%AE%B5%E5%B7%A5%E4%BD%9C%E6%B5%81.png",[65,689,691],{"id":690},"ps的优势","P&S的优势",[113,693,694,700,706],{},[116,695,696,699],{},[108,697,698],{},"解决盲目性","：通过前置 Planning，避免了 LLM 在处理长任务时因为注意力分散导致的死循环。",[116,701,702,705],{},[108,703,704],{},"高容错率","：每步执行内部依然使用 ReAct，通过 Observation 动态修正局部错误，而不是机械执行计划。",[116,707,708,711],{},[108,709,710],{},"可解释性极强","：终端日志清晰展示了“大计划 -> 小思考 -> 真实行动”的完整链条。",[65,713,715],{"id":714},"ps的劣势","P&S的劣势",[113,717,718,724,730,736,742,748,754],{},[116,719,720,723],{},[108,721,722],{},"对 Planner 
要求高","：如果初始计划拆解错误，后续执行可能偏离目标（虽有结果反馈修正，但核心逻辑受限）。",[116,725,726,729],{},[108,727,728],{},"中途无法与用户交互","：Plan制定好后无法更改，无法根据用户的需求生成更合适的计划。",[116,731,732,735],{},[108,733,734],{},"运行效率低","：细分成了许多小步骤，每一步又需要执行完整的ReAct。",[116,737,738,741],{},[108,739,740],{},"上下文记忆爆炸","：大历史+小历史，需要更好的存储方式。例如，ZhiQi 在解决“明天我和父母要从广州出发去邵阳游玩，一共是2天1夜，有什么推荐的景点”问题时，Planner（子Agent）制定的计划一共有10步，Executor（子Agent）每次执行完ReAct 后又有一个 Finish 需要记录。",[116,743,744,747],{},[108,745,746],{},"Token 消耗较高","：因为每个步骤都可能涉及多次 LLM 调用，相比于单次生成，成本与延迟更高。",[116,749,750,753],{},[108,751,752],{},"缺乏审查与反馈机制","：中间某个环节出错/达到最大ReAct限制，Agent 会直接放弃该步骤，导致该步骤在“大历史”中被错误记录 / 被记录为“该步骤已达最大重试次数，未得出结论”，从而影响下一步骤的执行 / 逼迫 LLM 在执行下一步时不得不幻想此步骤的可能结果。例如，ZhiQi 在执行\"步骤 3/10: 搜索邵阳市区及周边核心景点（如崀山、南山牧场、魏源故居等）并筛选适合2天1夜行程的景点组合\"时，由于达到最大循环次数（i=5），迫使 ReAct 终止，导致在步骤 4/10 时出现了“【Thought】: 步骤3（搜索邵阳核心景点）未成功完成，但我需要基于已有信息和常识来推进步骤4。根据步骤1和2的结果，我已知......”",[116,755,756,759],{},[108,757,758],{},"受 LLM 的影响大","：大模型训练数据集的陈旧性。例如，ZhiQi 在执行\"步骤 2/10: 根据交通到达时间确定第一天可游玩的有效时长\"时，制定了错误的搜索参数：“Search【广州南站到邵阳高铁时刻表 2024 早上发车时间】”",[12,761,629,762],{"id":628},[102,763,276],{"href":764,"rel":765},"https://github.com/sibuchen/ZhiQi--PlanAndSolveAgent",[275],{"title":53,"searchDepth":214,"depth":214,"links":767},[768,769,770],{"id":675,"depth":214,"text":676},{"id":690,"depth":214,"text":691},{"id":714,"depth":214,"text":715},"2026-03-28","ZhiQi (执棋) 是一个将 Plan-and-Solve (规划与解决) 范式与 ReAct (推理与行动) 范式深度融合的 AIAgent 架构。它旨在解决传统 ReAct 代理在面对复杂、长期任务时容易“绕路”或“陷入死循环”的问题。",{},"/labs/zhiqi",{"title":656,"description":772},"labs/ZhiQi",[231,778,286,7],"P&S","-Rj8H7wiHTdlKLiOqJQGkKnZCTxXB1UFJncPH-aKyzc",1774960320751]