Environment Setup

Installing UV

Installation command for macOS:

curl -LsSf https://astral.sh/uv/install.sh | sh

Install Python 3.13

# List installed Python versions
uv python list

# Install Python 3.13
uv python install 3.13

Enter the workspace

mkdir -p llm && cd llm

Create the project directory: chatbot

# Initialize the project directory with the specified Python version
uv init chatbot -p 3.13

# Enter the project directory
cd chatbot

Install dependencies

# Add dependencies (the load_dotenv import below is provided by python-dotenv)
uv add langchain langgraph langsmith langchain_openai langchain_core python-dotenv IPython


# For projects that use pip for dependency management
pip install -U langchain langgraph

Create a StateGraph

Now you can create a basic chatbot with LangGraph. This chatbot will respond directly to user messages.

First, create a StateGraph. The StateGraph object defines the chatbot's structure as a "state machine". We will add nodes to represent the LLM and the functions the chatbot can call, and edges to specify how the bot transitions between these functions.

API reference: StateGraph | START | END | add_messages

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

Our graph can now handle two key tasks:

  1. Each node can receive the current State as input and output an update to the state.
  2. Updates to messages will be appended to the existing list rather than overwriting it, thanks to the prebuilt add_messages reducer function (illustrated in the sketch below).
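
To see the appending behavior in isolation, you can call add_messages directly on two message lists. A minimal sketch; the message contents are just placeholders:

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage(content="hi")]
update = [AIMessage(content="hello!")]

# add_messages merges the update into the existing list by appending
# (matching on message IDs rather than overwriting the whole list)
merged = add_messages(existing, update)
print(len(merged))  # 2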

Concepts

When defining a graph, the first step is to define its State. The State consists of the graph's schema together with the reducer functions that handle state updates. In our example, State is a schema with a single key: messages. The add_messages reducer function is used to append new messages to the list instead of overwriting it. Keys without a reducer annotation will overwrite previous values.

To learn more about state, reducers, and related concepts, see the LangGraph reference documentation.
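
To make the overwrite-versus-append distinction concrete, here is a small sketch of a hypothetical state with one plain key and one reduced key (operator.add concatenates lists, much like add_messages appends messages):

from operator import add
from typing import Annotated

from typing_extensions import TypedDict

class ExampleState(TypedDict):
    # No reducer: a node update replaces the previous value
    user_name: str
    # With a reducer (list concatenation): node updates are appended
    history: Annotated[list, add]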

Add a node

Next, add a "chatbot" node. Nodes represent units of work and are typically regular Python functions.

We will again use the qwen2.5:7b model deployed locally with Ollama.

.env

I am running qwen2.5:7b locally through Ollama; adjust these values to match your own setup.

LLM_API_KEY=sk-ollama
LLM_MODEL_NAME=qwen2.5:7b
LLM_BASE_URL=http://localhost:11434/v1
LLM_TEMPERATURE=0
LLM_MAX_TOKENS=512

Load the configuration like this:

from dotenv import load_dotenv
load_dotenv()
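
Note that os.getenv always returns a string (or None), so numeric settings need explicit casts. A small sketch with fallback defaults:

import os

# os.getenv returns strings; the second argument is a fallback
# used when the variable is not set
temperature = float(os.getenv("LLM_TEMPERATURE", "0"))
max_tokens = int(os.getenv("LLM_MAX_TOKENS", "512"))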

Then initialize the llm instance with ChatOpenAI:

import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    # cast: os.getenv returns strings, but these parameters expect numbers
    temperature=float(os.getenv("LLM_TEMPERATURE", "0")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512")),
)
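
As a quick sanity check (assuming Ollama is serving the model locally on port 11434), you can invoke the model directly before wiring it into the graph:

# ChatOpenAI accepts a plain string prompt and returns an AIMessage
reply = llm.invoke("Say hello in one short sentence.")
print(reply.content)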

We can now incorporate the chat model into a simple node:

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)

Notice how the chatbot node function takes the current State as input and returns a dictionary containing an updated messages list under the key "messages". This is the basic pattern for all LangGraph node functions.

The add_messages reducer in our State will append the LLM's response message to whatever messages are already in the state.
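
Because a node is just a function, you can call it by hand to check this pattern before building the full graph. A minimal sketch, assuming llm is configured as above (the variable name update is hypothetical):

from langchain_core.messages import HumanMessage

# Call the node directly: full state in, partial state update out
update = chatbot({"messages": [HumanMessage(content="hi")]})
print(update["messages"][-1].content)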

Add an entry point

Add an entry point to tell the graph where to start its work each time it runs:

graph_builder.add_edge(START, "chatbot")

Add an exit point

Add an exit point to indicate where the graph should finish executing. This is most helpful for more complex flows, but even in a simple graph like this one, adding an end node improves clarity.

graph_builder.add_edge("chatbot", END)

This tells the graph to terminate after running the chatbot node.

Compile the graph

Before running the graph, we need to compile it. We do this by calling compile() on the graph builder. This creates a CompiledGraph that we can invoke on our state.

graph = graph_builder.compile()
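
Once compiled, the graph can be run end to end with invoke (streaming output comes later in this tutorial). A quick sketch:

# invoke runs the whole graph once and returns the final state
final_state = graph.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
print(final_state["messages"][-1].content)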

Visualize the graph (optional)

You can use the get_graph method together with one of the "draw" methods to render a visualization of the graph.

# Render the graph as a PNG
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

import os
os.system("open graph.png")  # "open" is macOS-specific

(Figure: the rendered graph.png, showing the linear flow START → chatbot → END)
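
If your environment cannot render or open a PNG, you can also print the Mermaid source of the graph and paste it into any Mermaid renderer:

# Emit the Mermaid definition of the graph as plain text
print(graph.get_graph().draw_mermaid())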

Run the chatbot

Now run the chatbot!

You can exit the chat loop at any time by typing quit, exit, or q.

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)


while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
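
By default, graph.stream yields one {node_name: update} event per step. If you would rather see the full accumulated state after each step, LangGraph also accepts stream_mode="values"; a sketch using the same graph (the function name stream_full_state is hypothetical):

def stream_full_state(user_input: str):
    # stream_mode="values" yields the complete State after each step,
    # rather than a mapping of node name to partial update
    for state in graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        stream_mode="values",
    ):
        state["messages"][-1].pretty_print()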

Full code

API reference: StateGraph | START | END | add_messages

import os
from typing import Annotated

from typing_extensions import TypedDict
from dotenv import load_dotenv
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI


load_dotenv()


llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    # cast: os.getenv returns strings, but these parameters expect numbers
    temperature=float(os.getenv("LLM_TEMPERATURE", "0")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512")),
)

class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)

graph_builder.add_edge(START, "chatbot")

graph_builder.add_edge("chatbot", END)

graph = graph_builder.compile()

# Render the graph as a PNG
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

os.system("open graph.png")  # "open" is macOS-specific

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)


while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break

Conversation transcript:

User: What do you know about LangGraph?
Assistant: LangGraph is an open-source knowledge graph project initiated and developed by Alibaba Cloud. It aims to provide researchers and developers with tools and resources for building and utilizing knowledge graphs in various applications, including but not limited to natural language processing (NLP), semantic search, recommendation systems, and more.

Key features of LangGraph include:

1. **Data Integration**: Supports the integration of diverse data sources into a unified knowledge graph.
2. **Knowledge Extraction**: Provides tools for extracting structured knowledge from unstructured or semi-structured data.
3. **Query Processing**: Offers efficient query processing capabilities to support complex queries over large-scale graphs.
4. **Visualization and Analysis**: Includes visualization tools to help users understand the structure and content of their knowledge graphs.

LangGraph is designed to be flexible, scalable, and easy to integrate into existing systems or workflows. It leverages Alibaba Cloud's expertise in big data and AI technologies to provide robust support for knowledge graph applications.

If you need more specific information about LangGraph, such as its technical details, use cases, or documentation, I can try to provide that based on publicly available information.
User: q
Goodbye!