In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. Memory and human-in-the-loop enable checkpoints in the graph state and let you control future responses.
What if you want a user to be able to start from a previous response and explore a different outcome? Or what if you want users to be able to rewind your chatbot's work to fix mistakes or try a different strategy, something that is common in applications like autonomous software engineers?
You can create both of these experiences using LangGraph's built-in time travel functionality.
This tutorial builds on the Add custom state tutorial.
1. Rewind your graph
Rewind your graph by fetching a checkpoint using the graph's `get_state_history` method. You can then resume execution at this previous point in time.
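Before walking through the full example, here is the shape of the pattern in outline. This is only a sketch; `config` and the selection criterion stand in for the concrete values used in the steps below:

```python
# Sketch: rewind a thread to an earlier checkpoint, then resume from it.
to_replay = None
for state in graph.get_state_history(config):
    # Snapshots are yielded newest-first; pick one by whatever criterion
    # fits your application (message count, pending node, ...).
    if len(state.values["messages"]) == 6:  # example criterion used below
        to_replay = state

# Passing None as the input resumes execution from the saved checkpoint
# instead of starting a fresh run.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```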
API Reference: TavilySearch | BaseMessage | InMemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition

```python
import os
import json
from typing import Annotated

from dotenv import load_dotenv
from typing_extensions import TypedDict

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langchain_openai import ChatOpenAI
from langchain_tavily import TavilySearch
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt

load_dotenv()

memory = InMemorySaver()


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS"),
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

graph = graph_builder.compile(checkpointer=memory)
```
2. Add steps
Add steps to your graph. Every step will be checkpointed in its state history:
```python
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
Transcript:

```
================================ Human Message =================================

I'm learning LangGraph. Could you do some research on it for me?
```
Continue the conversation:
```python
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
Transcript:

```
================================ Human Message =================================

Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================

Tool Integration:
Easily connect external tools:

    from langgraph.prebuilt import ToolNode
    tool_node = ToolNode([search_tool, math_tool])

Cyclic Workflows:
Build self-correcting loops:

    graph.add_conditional_edges(
        "agent",
        decide_next_action,
        {
            "tool": "tool_node",
            "end": END,
            "retry": "agent"
        }
    )

Recommended Learning Path

Start with Documentation: LangGraph Agent Guide

Experiment with Templates:
Try pre-built agent architectures:

    pip install langgraph
    from langgraph.prebuilt import AgentExecutor

Key Concepts to Master:
- StateGraph for persistent memory
- Conditional edges for decision-making
- Tool invocation nodes
- Error handling with try ... except nodes

Sample Project Flow:

    graph LR
        A[User Input] --> B{Agent Node}
        B -->|Requires Research| C[Search Tool]
        B -->|Requires Calculation| D[Math Tool]
        C --> B
        D --> B
        B -->|Complete| E[Response]

Pro Tips
- Use LangSmith for tracing and debugging agent decisions
- Start with single-agent systems before multi-agent setups
- Implement fallback mechanisms for tool errors
- Add reflection nodes for self-correction:

    def reflect_on_quality(state):
        if state["response_quality"] < 0.8:
            return "improve_response"

Would you like me to:
a) Provide a simple starter code template
b) Explain the agent architecture in more detail
c) Suggest specific agent project ideas?
```
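Both turns, and every tool call in between, are now checkpointed. As a quick sanity check before replaying, you could count the snapshots recorded on the thread (a convenience check, not part of the original tutorial):

```python
# Every super-step of the graph adds one checkpoint to the thread,
# so the history grows as the conversation proceeds.
history = list(graph.get_state_history(config))
print(f"{len(history)} checkpoints recorded on thread 1")
```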
3. Replay the full state history

Replay the full state history to see everything that happened on the thread:

```python
to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on
        # the number of chat messages in the state.
        to_replay = state
```

Output (abridged):

```
Num Messages: 6 Next: ()
```
Checkpoints are saved for every step of the graph. This spans invocations, so you can rewind across a full thread's history.
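Each entry returned by `get_state_history` is a `StateSnapshot`. If you want to see what identifies a given checkpoint, you can inspect the snapshot's fields directly (a small sketch using the same fields this tutorial relies on):

```python
for state in graph.get_state_history(config):
    # The snapshot's config embeds the checkpoint_id that uniquely
    # identifies it on this thread.
    print("checkpoint_id:", state.config["configurable"]["checkpoint_id"])
    # `values` holds the saved channel state; `next` the pending node(s).
    print("messages:", len(state.values["messages"]), "next:", state.next)
```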
Resume from a checkpoint
Resume from the `to_replay` state, which is after the `chatbot` node in the second graph invocation. Resuming from this point will call the action node next.
```python
print(to_replay.next)
```

Output:

```
()
```
4. Load a state from a moment-in-time
The checkpoint's `to_replay.config` contains a `checkpoint_id` timestamp. Providing this `checkpoint_id` value tells LangGraph's checkpointer to load the state from that moment in time.
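Equivalently, any config that carries both a `thread_id` and a `checkpoint_id` addresses that exact snapshot, so you don't have to go through `to_replay` at all. A sketch, where the `checkpoint_id` is a placeholder rather than a real value from this run:

```python
# Placeholder checkpoint_id; copy a real one from to_replay.config.
checkpoint_config = {
    "configurable": {
        "thread_id": "1",
        "checkpoint_id": "<checkpoint-id-from-your-run>",
    }
}

# get_state loads the snapshot stored at that checkpoint without resuming it.
snapshot = graph.get_state(checkpoint_config)
print(snapshot.next)
```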
```python
# The `checkpoint_id` in the `to_replay.config` corresponds to a state
# we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
Output:

```
================================== Ai Message ==================================

[... the chatbot regenerates the same research response shown in step 2 ...]
```
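Replaying re-executes the graph from the checkpoint as-is. To explore a different outcome instead, the other use case from the introduction, you can first edit the state at that checkpoint with `update_state`, which records the edit as a new checkpoint and returns its config. A sketch; the follow-up message text here is made up for illustration:

```python
from langchain_core.messages import HumanMessage

# add_messages appends this message to the state saved at to_replay,
# and update_state records the edit as a new checkpoint on the thread.
forked_config = graph.update_state(
    to_replay.config,
    {"messages": [HumanMessage("Could you compare it to other agent frameworks?")]},
)

# Resume from the forked checkpoint; the earlier history stays intact.
for event in graph.stream(None, forked_config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```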
Putting it all together, here is the complete script for this tutorial:

```python
import os
import json
from typing import Annotated

from dotenv import load_dotenv
from typing_extensions import TypedDict

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langchain_openai import ChatOpenAI
from langchain_tavily import TavilySearch
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt

load_dotenv()

memory = InMemorySaver()


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS"),
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

graph = graph_builder.compile(checkpointer=memory)

# Draw the graph and open the rendered image
png_bytes = graph.get_graph().draw_mermaid_png()
with open("graph.png", "wb") as f:
    f.write(png_bytes)
os.system("open graph.png")

config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on
        # the number of chat messages in the state.
        to_replay = state

print(to_replay.next)
print(to_replay.config)

# The `checkpoint_id` in the `to_replay.config` corresponds to a state
# we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```