Environment setup

Install uv

Installation command on macOS:

curl -LsSf https://astral.sh/uv/install.sh | sh

Install Python 3.13

# List installed Python versions
uv python list

# Install Python 3.13
uv python install 3.13

Enter the workspace

mkdir -p llm && cd llm

Create the project directory: chatbot-tavily

# Initialize the project directory with the specified Python version
uv init chatbot-tavily -p 3.13

# Enter the project directory
cd chatbot-tavily

Install dependencies

# Add dependencies
uv add langchain-tavily langchain langgraph langsmith langchain-openai langchain-core python-dotenv

# For projects that manage dependencies with pip
pip install -U langchain langgraph

Prerequisites

Before starting this article, make sure you have the following:

- A Tavily API key (passed in via the .env file below)
- An OpenAI-compatible LLM endpoint (this article uses a local Ollama deployment of qwen2.5:7b)

Configure the environment

As before, we use a .env file loaded with dotenv to pass in the LLM configuration and the Tavily API key.

.env

I deploy qwen2.5:7b locally via Ollama; change these values to match your own setup.

LLM_API_KEY=sk-ollama
LLM_MODEL_NAME=qwen2.5:7b
LLM_BASE_URL=http://localhost:11434/v1
LLM_TEMPERATURE=0
LLM_MAX_TOKENS=512

TAVILY_API_KEY=xxx

Load the configuration like this:

from dotenv import load_dotenv
load_dotenv()
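
A quick sanity check that the values are actually picked up (a minimal sketch, assuming the .env above sits in the working directory):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print(os.getenv("LLM_MODEL_NAME"))              # expected: qwen2.5:7b
print(os.getenv("TAVILY_API_KEY") is not None)  # expected: True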

Then initialize the llm instance via ChatOpenAI:

import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    # os.getenv returns strings, so cast the numeric settings
    temperature=float(os.getenv("LLM_TEMPERATURE", "0")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512")),
)
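
To confirm the endpoint is reachable, a one-line smoke test (assuming Ollama is serving qwen2.5:7b at the base URL above):

print(llm.invoke("Say hi").content)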

Define the tool

Define the web search tool:

API reference: TavilySearch

from langchain_tavily import TavilySearch
from dotenv import load_dotenv

load_dotenv()

tool = TavilySearch(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")

Printing the result gives something like:

{
  "query": "What's a 'node' in LangGraph?",
  "follow_up_questions": null,
  "answer": null,
  "images": [],
  "results": [
    {
      "url": "https://www.ibm.com/think/topics/langgraph",
      "title": "What is LangGraph? - IBM",
      "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. LangGraph illuminates the processes within an AI workflow, allowing full transparency of the agent’s state. By combining these technologies with a set of APIs and tools, LangGraph provides users with a versatile platform for developing AI solutions and workflows including chatbots, state graphs and other agent-based systems. Nodes: In LangGraph, nodes represent individual components or agents within an AI workflow. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback.",
      "score": 0.90907925,
      "raw_content": null
    },
    {
      "url": "https://www.ionio.ai/blog/a-comprehensive-guide-about-langgraph-code-included",
      "title": "A Comprehensive Guide About Langgraph: Code Included - Ionio",
      "content": "A node can be any function or tool your agent uses in langgraph and these nodes are connected with other nodes using edges. Every workflow ends with a “END” node in langgraph which shows the end of workflow. You also need to define a starting node which will be the starting point of your workflow.",
      "score": 0.9049283,
      "raw_content": null
    }
  ],
  "response_time": 1.55,
  "request_id": "6daf25f3-6e37-4e41-8770-08c0939dffdc"
}
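
Since TavilySearch returns a plain dict, you can pull out just the fields you need (a minimal sketch based on the response shape above):

result = tool.invoke("What's a 'node' in LangGraph?")
for hit in result["results"]:
    # each hit carries url, title, content, and a relevance score
    print(f'{hit["title"]} ({hit["score"]:.2f}): {hit["url"]}')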

Define the graph

In the previous article (LangGraph example: a basic chatbot) we created a StateGraph; now we add the search tool on top of it.

API reference: StateGraph | START | END | add_messages

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_tavily import TavilySearch

tool = TavilySearch(max_results=2)
tools = [tool]

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)

class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

# The chatbot node from the previous article, now calling the tool-bound LLM
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

Create a function to run the tools

Now create a function that runs the tools when they are called. Do this by adding the tools to a new node called BasicToolNode, which checks the most recent message in the state and calls the tools whenever that message contains tool_calls. It relies on the LLM's tool-calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.

API reference: ToolMessage

import json

from langchain_core.messages import ToolMessage


class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

If you don't want to build this yourself in the future, you can use LangGraph's prebuilt ToolNode instead.
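
A minimal sketch of the prebuilt replacement (shown in full in the final section of this article):

from langgraph.prebuilt import ToolNode

# drop-in replacement for BasicToolNode above
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)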

Define conditional_edges

With the tool node added, you can now define conditional_edges.

Edges route control flow from one node to the next. A conditional edge starts from a single node and usually contains an "if" statement that routes to different nodes depending on the current graph state. Such a function receives the current graph state and returns a string or a list of strings indicating which node(s) to call next.

Next, define a router function called route_tools that checks the chatbot's output for tool_calls. Provide this function to the graph by calling add_conditional_edges, which tells the graph that whenever the chatbot node completes, it should consult this function to decide where to go next.

If tool calls are present, the condition routes to tools; otherwise it routes to END. Because the condition can return END, you don't need to explicitly set a finish_point.

def route_tools(
    state: State,
):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END


# The `tools_condition` function returns "tools" if the chatbot asks to use a tool, and "END" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # You can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    {"tools": "tools", END: END},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

You can replace this with the prebuilt tools_condition to make the code more concise.
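
A minimal sketch of that swap (the full version appears in the final section):

from langgraph.prebuilt import tools_condition

# tools_condition routes to "tools" or END for you, so no routing dict is needed
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)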

Visualize the graph (optional)

# Draw the graph
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

import os
os.system("open graph.png")  # "open" is macOS-specific; use your platform's viewer elsewhere
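
If you're working in a Jupyter notebook, a common alternative (assuming IPython is available) is to render the PNG inline instead of writing it to disk:

from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))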

(Figure: graph.png, the compiled graph: START to chatbot, a conditional edge from chatbot to either tools or END, and an edge from tools back to chatbot.)

Ask the bot questions

Now you can ask the chatbot questions outside its training data:

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break

Full code

The full code (admittedly ugly):

import os
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_tavily import TavilySearch
import json

from langchain_core.messages import ToolMessage

load_dotenv()

tool = TavilySearch(max_results=2)
tools = [tool]
# tool.invoke("What's a 'node' in LangGraph?")


llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    # os.getenv returns strings, so cast the numeric settings
    temperature=float(os.getenv("LLM_TEMPERATURE", "0")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512")),
)

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

def route_tools(
    state: State,
):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END


# The `tools_condition` function returns "tools" if the chatbot asks to use a tool, and "END" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # You can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    {"tools": "tools", END: END},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()


# Draw the graph
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

os.system("open graph.png")  # "open" is macOS-specific


def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break

Conversation log:

User: What do you know about LangGraph?
Assistant:
Assistant: {"query": "LangGraph", "follow_up_questions": null, "answer": null, "images": [], "results": [{"url": "https://www.ibm.com/think/topics/langgraph", "title": "What is LangGraph?", "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. LangGraph illuminates the processes within an AI workflow, allowing full transparency of the agent\u2019s state. By combining these technologies with a set of APIs and tools, LangGraph provides users with a versatile platform for developing AI solutions and workflows including chatbots, state graphs and other agent-based systems. Nodes: In LangGraph, nodes represent individual components or agents within an AI workflow. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback.", "score": 0.9536608, "raw_content": null}, {"url": "https://www.reddit.com/r/AI_Agents/comments/1l4uq7v/why_use_langgraph/", "title": "Why use LangGraph? : r/AI_Agents", "content": "LangGraph emphasizes graph-based workflows and state management, making it ideal for complex applications with sophisticated logic and memory persistence.", "score": 0.8302782, "raw_content": null}], "response_time": 1.19, "request_id": "40f8c334-fa22-4862-9024-40166e597c42"}
Assistant: LangGraph is an open-source AI agent framework created by LangChain. It's designed to help build, deploy, and manage complex generative AI agent workflows using graph-based architectures. Here are some key points about LangGraph:

- **Nodes**: In LangGraph, nodes represent individual components or agents within an AI workflow. The system uses enhanced decision-making by modeling complex relationships between these nodes.

- **Transparency**: LangGraph provides full transparency of the state of an AI agent's workflow.

- **Versatility**: It offers a versatile platform for developing various AI solutions and workflows including chatbots, state graphs, and other agent-based systems.

LangGraph is particularly useful for applications that require sophisticated logic and memory persistence due to its emphasis on graph-based workflows and state management. You can find more detailed information about LangGraph [here](https://www.ibm.com/think/topics/langgraph).
User: q
Goodbye!

Use the prebuilt components

For convenience, adjust your code to replace the hand-rolled pieces with LangGraph's prebuilt components, which come with built-in features such as parallel API execution.

The final code:

import os
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain_tavily import TavilySearch
from langgraph.prebuilt import ToolNode, tools_condition

load_dotenv()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)


llm = ChatOpenAI(
    model=os.getenv("LLM_MODEL_NAME"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    # os.getenv returns strings, so cast the numeric settings
    temperature=float(os.getenv("LLM_TEMPERATURE", "0")),
    max_tokens=int(os.getenv("LLM_MAX_TOKENS", "512")),
)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)


# Prebuilt ToolNode replaces the hand-written BasicToolNode
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)


# Prebuilt tools_condition replaces the hand-written route_tools
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()


# Draw the graph
png_bytes = graph.get_graph().draw_mermaid_png()

with open("graph.png", "wb") as f:
    f.write(png_bytes)

os.system("open graph.png")  # "open" is macOS-specific


def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break