The chatbot can now use tools to answer user questions, but it does not remember the context of previous interactions. This limits its ability to hold coherent, multi-turn conversations.

LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each step. When you invoke the graph again with the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.

We will see later that checkpointing is far more powerful than simple chat memory: it lets you save and resume complex state at any time, enabling error recovery, human-in-the-loop workflows, time-travel interactions, and more. But first, let's add checkpointing to enable multi-turn conversations.

This tutorial builds on the previous one, in which we added tools.

Create an InMemorySaver checkpointer

Create an InMemorySaver checkpointer:

API Reference: InMemorySaver

from langgraph.checkpoint.memory import InMemorySaver

memory = InMemorySaver()

This is an in-memory checkpointer, which is convenient for this tutorial. In a production application, however, you would likely switch to SqliteSaver or PostgresSaver and connect to a database.
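
For example, if checkpoints should survive a process restart, you could back them with SQLite. The sketch below is only illustrative: it assumes the separate langgraph-checkpoint-sqlite package is installed and that SqliteSaver can wrap a standard sqlite3 connection; the file name checkpoints.sqlite is arbitrary.

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

# Store checkpoints in a local SQLite file instead of process memory.
conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(conn)

# The rest of the tutorial is unchanged:
# graph = graph_builder.compile(checkpointer=memory)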

Compile the graph

Compile the graph with the provided checkpointer, which will checkpoint the State as the graph works through each node:

graph = graph_builder.compile(checkpointer=memory)

Interact with your chatbot

Now you can interact with your bot!

  1. Pick a thread to use as the key for this conversation.
config = {"configurable": {"thread_id": "1"}}
  2. Call your chatbot:
user_input = "Hi there! My name is Will."

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()

Output:

================================ Human Message =================================

Hi there! My name is Will.
================================== Ai Message ==================================

Hello, Will! Nice to meet you. How can I assist you today?

Ask a follow-up question

Ask a follow-up question:

user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()

Output:

================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

Of course! You mentioned your name is Will. Is there anything specific you'd like to know or discuss?

Now change the thread_id and run the same test again:

user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()

Output:

================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

Of course! How can I assist you today? If you have any questions or need information on a particular topic, feel free to ask.

Notice that the only change we made was to modify the thread_id in the config.
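
Because each thread_id maps to its own saved state, you can also switch back to the first thread at any time and the graph will reload that conversation's history. The follow-up question below is just an illustration:

# Reuse thread "1": the checkpointer restores the messages saved for that thread.
events = graph.stream(
    {"messages": [{"role": "user", "content": "What was my name again?"}]},
    {"configurable": {"thread_id": "1"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()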

Inspect the state

By now we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config):

snapshot = graph.get_state(config)
print("snapshot1: ", snapshot)
print("snapshot.next: ", snapshot.next)

snapshot = graph.get_state({"configurable": {"thread_id": "2"}})
print("snapshot2: ", snapshot)
print("snapshot.next: ", snapshot.next)

Output:

snapshot1:  StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='ecd9b10a-c211-401a-9445-adbc597c1044'), AIMessage(content='Hello, Will! Nice to meet you. How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 1563, 'total_tokens': 1580, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen2.5:7b', 'system_fingerprint': 'fp_ollama', 'id': 'chatcmpl-52', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--1c9c145c-6fa4-415f-92b6-47f84997466b-0', usage_metadata={'input_tokens': 1563, 'output_tokens': 17, 'total_tokens': 1580, 'input_token_details': {}, 'output_token_details': {}}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='5b1c1d1f-7178-4326-81a0-0beb501521fc'), AIMessage(content="Of course! You mentioned your name is Will. Is there anything specific you'd like to know or discuss?", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 1593, 'total_tokens': 1616, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen2.5:7b', 'system_fingerprint': 'fp_ollama', 'id': 'chatcmpl-599', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--faa5caf2-81bb-4150-b5c9-c049c8ef30a4-0', usage_metadata={'input_tokens': 1593, 'output_tokens': 23, 'total_tokens': 1616, 'input_token_details': {}, 'output_token_details': {}})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f07cbc5-e584-67b8-8004-70fd49b674a1'}}, metadata={'source': 'loop', 'step': 4, 'parents': {}}, created_at='2025-08-19T05:21:32.702483+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f07cbc5-da23-6766-8003-fece86761aa4'}}, tasks=(), interrupts=())
snapshot.next: ()



snapshot2: StateSnapshot(values={'messages': [HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='f820ff9b-cf06-4c7a-a2e1-12196a6c996b'), AIMessage(content='Of course! How can I assist you today? If you have any questions or need information on a particular topic, feel free to ask.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 1559, 'total_tokens': 1588, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen2.5:7b', 'system_fingerprint': 'fp_ollama', 'id': 'chatcmpl-353', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--2407f4a6-93da-46f9-a1ef-a0db38abcf20-0', usage_metadata={'input_tokens': 1559, 'output_tokens': 29, 'total_tokens': 1588, 'input_token_details': {}, 'output_token_details': {}})]}, next=(), config={'configurable': {'thread_id': '2', 'checkpoint_ns': '', 'checkpoint_id': '1f07cbc5-f544-6d92-8001-e1e76d0b63d7'}}, metadata={'source': 'loop', 'step': 1, 'parents': {}}, created_at='2025-08-19T05:21:34.353969+00:00', parent_config={'configurable': {'thread_id': '2', 'checkpoint_ns': '', 'checkpoint_id': '1f07cbc5-e589-6aec-8000-7e8fbf548b00'}}, tasks=(), interrupts=())
snapshot.next: ()

The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case, the graph has reached an END state, so next is empty.
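
If you only need a quick summary rather than the full dump, the StateSnapshot fields shown above can also be read individually. A small sketch, assuming the same graph and config used in this tutorial:

snapshot = graph.get_state(config)
print("messages in thread 1:", len(snapshot.values["messages"]))  # current state values
print("next nodes:", snapshot.next)  # () because the graph reached END
print("checkpoint id:", snapshot.config["configurable"]["checkpoint_id"])
print("step:", snapshot.metadata["step"])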

Congratulations! Thanks to LangGraph's checkpointing system, your chatbot can now maintain conversation state across sessions, which opens the door to more natural, contextual interactions. LangGraph checkpointing can even handle arbitrarily complex graph state, making it far more expressive and powerful than simple chat memory.

Full code

import os
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_tavily import TavilySearch
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import InMemorySaver

import json

from langchain_core.messages import ToolMessage

load_dotenv()

memory = InMemorySaver()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)


llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    temperature=os.getenv("LLM_TEMPERATURE"),
    max_tokens=os.getenv("LLM_MAX_TOKENS")
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)


tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)


graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile(checkpointer=memory)


# Draw the graph and save it as a PNG
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

# import os
# os.system("open graph.png")

config = {"configurable": {"thread_id": "1"}}

user_input = "Hi there! My name is Will."

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()


user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()



user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()

snapshot = graph.get_state(config)
print("snapshot1: ", snapshot)
print("snapshot.next: ", snapshot.next)

snapshot = graph.get_state({"configurable": {"thread_id": "2"}})
print("snapshot2: ", snapshot)
print("snapshot.next: ", snapshot.next)