In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. Memory and human-in-the-loop mechanisms enable checkpoints in the graph state and control future responses.

What if you want a user to be able to start from a previous response and explore a different outcome? Or what if you want users to be able to rewind your chatbot's work to fix mistakes or try a different strategy, something that is common in applications like autonomous software engineers?

You can create these types of experiences using LangGraph's built-in time travel functionality.

This tutorial builds on the custom state tutorial.

1. Rewind your graph

Rewind your graph by fetching a checkpoint using the graph's `get_state_history` method. You can then resume execution at this earlier point in time.

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS"),
)
```

API reference: TavilySearch | BaseMessage | InMemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition

```python
import os
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_tavily import TavilySearch
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command, interrupt
from langchain_core.tools import InjectedToolCallId, tool

import json

from langchain_core.messages import ToolMessage

load_dotenv()

memory = InMemorySaver()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)


llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS"),
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)


tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)


graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile(checkpointer=memory)
```
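The routing above relies on `tools_condition`, which sends the conversation to the `tools` node when the last AI message contains tool calls and to `END` otherwise. A plain-Python analogue can make the routing rule concrete (this is a simplified toy, not the real `langgraph.prebuilt` implementation, and the dict-shaped messages are an assumption for illustration):

```python
END = "__end__"

def tools_condition(state):
    """Toy router: look at the last message and decide the next node."""
    last = state["messages"][-1]
    # Route to the "tools" node if the last message requested any tool calls,
    # otherwise finish the graph run.
    return "tools" if last.get("tool_calls") else END

# A plain answer ends the run; a tool request routes to "tools".
print(tools_condition({"messages": [{"content": "hi"}]}))                     # __end__
print(tools_condition({"messages": [{"tool_calls": [{"name": "search"}]}]}))  # tools
```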

2. Add steps

Add steps to your graph. Every step will be checkpointed in its state history:

```python
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

Conversation log:

================================ Human Message =================================

I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================

I'll research LangGraph for you. Let me gather some information.
Tool Calls:
tavily_search (call_6e5538b4ae9f4d828143bb)
Call ID: call_6e5538b4ae9f4d828143bb
Args:
query: LangGraph overview, features, use cases, and learning resources
search_depth: advanced
================================= Tool Message =================================
Name: tavily_search

{"query": "LangGraph overview, features, use cases, and learning resources", "follow_up_questions": null, "answer": null, "images": [], "results": [{"url": "https://www.langchain.com/resources", "title": "Resources - LangChain", "content": "Resources Products Platforms LangSmithLangGraph Platform Resources Resources HubBlogCustomer StoriesLangChain AcademyCommunityExpertsChangelog Python Sign up # Resources Use cases & inspiration Built with LangGraph Use cases & inspiration ## Built with LangGraph Use cases & inspiration ## Built with LangGraph The Definitive Guide to Testing LLM Applications ## The Definitive Guide to Testing LLM Applications Use cases & inspiration Use cases & inspiration Get inspired LangChain State of AI 2024 Report ## LangChain State of AI 2024 Report ## LangChain State of AI 2024 Report See product data LangChain, LangSmith, and LangGraph are critical parts of the reference 
architecture to get you from prototype to production. Products Resources Python DocsJS/TS DocsGitHubIntegrationsChangelogCommunityLangSmith Trust Portal Sign up for our newsletter to stay up to date", "score": 0.98575, "raw_content": null}, {"url": "https://www.langchain.com/langgraph", "title": "LangGraph - LangChain", "content": "Design agent-driven user experiences with LangGraph Platform's APIs. Quickly deploy and scale your application with infrastructure built for agents. LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution to iterate quickly, debug immediately, and scale effortlessly.” LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. LangGraph Platform is a service for deploying and scaling LangGraph applications, with an opinionated API for building agent UXs, plus an integrated developer studio.", "score": 0.98332, "raw_content": null}], "response_time": 1.64, "request_id": "c26d7a05-dc55-4209-965a-0479239767e9"}
================================== Ai Message ==================================

Based on my research on LangGraph, here's a comprehensive overview:

### Overview
LangGraph is a framework developed by **LangChain** for building stateful, multi-actor applications with LLMs. It's designed to orchestrate complex workflows involving multiple steps, agents, or tools while maintaining state across interactions.

### Key Features
1. **Agent-Driven Architecture**: Enables building conversational agents that can handle complex, multi-step tasks
2. **Stateful Workflows**: Maintains context and state across interactions for coherent task execution
3. **Task Automation**: Supports automation of complex workflows requiring sequential decision-making
4. **Scalable Infrastructure**: Built for production deployment with scaling capabilities
5. **Integrated Developer Studio**: Provides tools for debugging, iteration, and monitoring workflows

### Primary Use Cases
- **Conversational Agents**: Build chatbots that handle complex, multi-turn conversations
- **Task Automation**: Create workflows for document processing, data extraction, and decision-making
- **Custom LLM Experiences**: Develop specialized applications leveraging multiple LLMs/tools
- **AI Orchestration**: Coordinate multiple AI systems in complex pipelines

### Learning Resources
1. **Official Documentation**:
[LangGraph Documentation](https://www.langchain.com/langgraph) (Best starting point)
2. **LangChain Resources**:
[LangChain Resources Hub](https://www.langchain.com/resources) (Includes use cases and examples)
3. **Developer Studio**:
Integrated tools for building and testing LangGraph applications
4. **LangChain Academy**:
Offers courses on building with LangGraph (accessible through LangChain resources)

### Getting Started
LangGraph is part of the LangChain ecosystem. You can:
1. Install via pip: `pip install langgraph`
2. Explore tutorials in the documentation
3. Experiment with pre-built agent templates

Would you like me to explain any specific aspect in more detail or help with a learning roadmap?

Continue the conversation:

```python
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
================================ Human Message =================================

Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================

That's an excellent idea! Building an autonomous agent is one of LangGraph's core strengths. Here's what you should know about creating agents with LangGraph:

### Key Capabilities for Autonomous Agents
1. **State Management**:
   LangGraph maintains agent state across steps using:
   - `StateGraph` for tracking conversation/memory
   - Custom state objects for persistent data

2. **Decision Nodes**:
   Create nodes that decide next actions based on:

   ```python
   def should_continue(state):
       if state["query_complete"]:
           return "end"
       else:
           return "research"
   ```

3. **Tool Integration**:
   Easily connect external tools:

   ```python
   from langgraph.prebuilt import ToolNode
   tool_node = ToolNode([search_tool, math_tool])
   ```

4. **Cyclic Workflows**:
   Build self-correcting loops:

   ```python
   graph.add_conditional_edges(
       "agent",
       decide_next_action,
       {
           "tool": "tool_node",
           "end": END,
           "retry": "agent"
       }
   )
   ```

1. **Start with Documentation**:
   LangGraph Agent Guide

2. **Experiment with Templates**:
   Try pre-built agent architectures:

   ```
   pip install langgraph
   from langgraph.prebuilt import AgentExecutor
   ```

3. **Key Concepts to Master**:
   - StateGraph for persistent memory
   - Conditional edges for decision-making
   - Tool invocation nodes
   - Error handling with try ... except nodes

4. **Sample Project Flow**:

   ```mermaid
   graph LR
   A[User Input] --> B{Agent Node}
   B -->|Requires Research| C[Search Tool]
   B -->|Requires Calculation| D[Math Tool]
   C --> B
   D --> B
   B -->|Complete| E[Response]
   ```

### Pro Tips

1. Use LangSmith for tracing and debugging agent decisions
2. Start with single-agent systems before multi-agent setups
3. Implement fallback mechanisms for tool errors
4. Add reflection nodes for self-correction:

   ```python
   def reflect_on_quality(state):
       if state["response_quality"] < 0.8:
           return "improve_response"
   ```

Would you like me to:
a) Provide a simple starter code template
b) Explain the agent architecture in more detail
c) Suggest specific agent project ideas?




# 3. Replay the full state history

Now that you've added steps to the chatbot, you can `replay` the full state history to see everything that occurred.

```python
to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
        to_replay = state
```
Num Messages:  6 Next:  ()
--------------------------------------------------------------------------------
Num Messages: 5 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ('__start__',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ()
--------------------------------------------------------------------------------
Num Messages: 3 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 2 Next: ('tools',)
--------------------------------------------------------------------------------
Num Messages: 1 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 0 Next: ('__start__',)
--------------------------------------------------------------------------------

Checkpoints are saved for every step of the graph's execution. This spans all invocations, so you can rewind across the entire thread's history.
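The mechanism can be pictured with a plain-Python toy, with no LangGraph involved: every step appends an immutable snapshot, the history is read newest-first, and rewinding is just picking an earlier snapshot to continue from. The `Snapshot` class and the message strings here are illustrative assumptions, not LangGraph's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    step: int
    messages: list = field(default_factory=list)

history = []
messages = []
for step, text in enumerate(["hi", "search", "answer"], start=1):
    messages = messages + [text]          # state is copied per step, not mutated
    history.append(Snapshot(step, messages))

# "get_state_history" analogue: iterate from newest to oldest
for snap in reversed(history):
    print(snap.step, snap.messages)

# Rewind: pick the snapshot with 2 messages and resume from there
to_replay = next(s for s in reversed(history) if len(s.messages) == 2)
print(to_replay.messages)  # ['hi', 'search']
```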

Resume from a checkpoint

Resume from the `to_replay` state, which is after the `chatbot` node in the second graph invocation. Resuming from this point will call the next node(s) scheduled to run.

```python
print(to_replay.next)
print(to_replay.config)
```
()
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f081bdd-2a3e-6f18-8006-c056c1cc201e'}}

4. Load a state from a moment in time

The checkpoint in `to_replay.config` contains a `checkpoint_id` timestamp. Providing this `checkpoint_id` value tells LangGraph's checkpointer to load the state from that moment in time.
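The lookup can be pictured with a plain-Python toy checkpointer, an illustration of the idea only, not LangGraph's actual storage format; the `save` helper and the `checkpoints` dict are assumptions made for this sketch:

```python
import uuid

# Toy checkpointer: maps checkpoint_id -> a saved copy of the state.
checkpoints = {}

def save(state):
    checkpoint_id = str(uuid.uuid4())
    checkpoints[checkpoint_id] = dict(state)  # persist a copy, keyed by id
    return checkpoint_id

cp1 = save({"messages": ["hi"]})
cp2 = save({"messages": ["hi", "answer"]})

# Providing a checkpoint_id in the config loads the state from that moment
config = {"configurable": {"thread_id": "1", "checkpoint_id": cp1}}
restored = checkpoints[config["configurable"]["checkpoint_id"]]
print(restored)  # {'messages': ['hi']}
```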

```python
# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
================================== Ai Message ==================================

That's an excellent idea! Building an autonomous agent is one of LangGraph's core strengths. Here's what you should know about creating agents with LangGraph:

### Key Capabilities for Autonomous Agents
1. **State Management**:
   LangGraph maintains agent state across steps using:
   - `StateGraph` for tracking conversation/memory
   - Custom state objects for persistent data

2. **Decision Nodes**:
   Create nodes that decide next actions based on:

   ```python
   def should_continue(state):
       if state["query_complete"]:
           return "end"
       else:
           return "research"
   ```

3. **Tool Integration**:
   Easily connect external tools:

   ```python
   from langgraph.prebuilt import ToolNode
   tool_node = ToolNode([search_tool, math_tool])
   ```

4. **Cyclic Workflows**:
   Build self-correcting loops:

   ```python
   graph.add_conditional_edges(
       "agent",
       decide_next_action,
       {
           "tool": "tool_node",
           "end": END,
           "retry": "agent"
       }
   )
   ```

1. **Start with Documentation**:
   LangGraph Agent Guide

2. **Experiment with Templates**:
   Try pre-built agent architectures:

   ```
   pip install langgraph
   from langgraph.prebuilt import AgentExecutor
   ```

3. **Key Concepts to Master**:
   - StateGraph for persistent memory
   - Conditional edges for decision-making
   - Tool invocation nodes
   - Error handling with try ... except nodes

4. **Sample Project Flow**:

   ```mermaid
   graph LR
   A[User Input] --> B{Agent Node}
   B -->|Requires Research| C[Search Tool]
   B -->|Requires Calculation| D[Math Tool]
   C --> B
   D --> B
   B -->|Complete| E[Response]
   ```

### Pro Tips

1. Use LangSmith for tracing and debugging agent decisions
2. Start with single-agent systems before multi-agent setups
3. Implement fallback mechanisms for tool errors
4. Add reflection nodes for self-correction:

   ```python
   def reflect_on_quality(state):
       if state["response_quality"] < 0.8:
           return "improve_response"
   ```

Would you like me to:
a) Provide a simple starter code template
b) Explain the agent architecture in more detail
c) Suggest specific agent project ideas?


The graph resumed execution from the loaded checkpoint. You can tell this is the case because the output printed above picks up from that point in the conversation: the checkpoint we selected (`next` is empty) is the end of the second invocation, so the stream re-emits the assistant's reply from that moment rather than starting the thread over.

**Congratulations!** You've now used time-travel checkpoint traversal in LangGraph. Being able to rewind and explore alternative paths opens up possibilities for debugging, experimentation, and interactive applications.

# Full code

```python
import os
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_tavily import TavilySearch
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command, interrupt
from langchain_core.tools import InjectedToolCallId, tool

import json

from langchain_core.messages import ToolMessage

load_dotenv()

memory = InMemorySaver()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS"),
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile(checkpointer=memory)

# Draw the graph
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

os.system("open graph.png")

config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
        to_replay = state

print(to_replay.next)
print(to_replay.config)

# The checkpoint_id in the to_replay.config corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```