diff --git a/README.md b/README.md
index 71b4c1c..f14bb3a 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@
- ✅ **Long-term memory:** Automatically persists conversation memory to local files and a database, including global and per-day memory, with keyword and vector retrieval
- ✅ **Skills system:** Provides an engine for creating and running Skills, ships with multiple built-in skills, and supports developing custom Skills through natural-language conversation
- ✅ **Multimodal messages:** Supports parsing, processing, generating, and sending messages of multiple types, including text, images, voice, and files
-- ✅ **Multi-model access:** Supports mainstream model providers at home and abroad, including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, and Kimi
+- ✅ **Multi-model access:** Supports mainstream model providers at home and abroad, including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, and Doubao
- ✅ **Multi-platform deployment:** Runs on a local machine or server, and can be integrated into web pages, Feishu, DingTalk, WeChat Official Accounts, and WeCom apps
- ✅ **Knowledge base:** Integrates enterprise knowledge-base capabilities to turn the Agent into a dedicated digital employee, built on the [LinkAI](https://link-ai.tech) platform
@@ -90,7 +90,7 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
The project supports model APIs from mainstream providers at home and abroad; see the [Model Guide](#模型说明) for available models and configuration details.
-> Note: In Agent mode the following models are recommended; choose based on a balance of quality and cost: GLM (glm-4.7), MiniMax (MiniMax-M2.1), Qwen (qwen3-max), Claude (claude-opus-4-6, claude-sonnet-4-5, claude-sonnet-4-0), Gemini (gemini-3-flash-preview, gemini-3-pro-preview)
+> Note: In Agent mode the following models are recommended; choose based on a balance of quality and cost: MiniMax (MiniMax-M2.5), GLM (glm-5), Kimi (kimi-k2.5), Qwen (qwen3-max), Claude (claude-sonnet-4-5), Gemini (gemini-3-flash-preview)
The **LinkAI platform** API is also supported, which lets you switch flexibly among common models such as OpenAI, Claude, Gemini, DeepSeek, Qwen, and Kimi, and adds Agent capabilities such as knowledge bases, workflows, and plugins; see the [API docs](https://docs.link-ai.tech/platform/api).
@@ -136,9 +136,11 @@ pip3 install -r requirements-optional.txt
# Example config.json contents
{
"channel_type": "web", # channel type; defaults to web, can be changed to: feishu, dingtalk, wechatcom_app, terminal, wechatmp, wechatmp_service
- "model": "MiniMax-M2.1", # model name
+ "model": "MiniMax-M2.5", # model name
"minimax_api_key": "", # MiniMax API Key
"zhipu_ai_api_key": "", # Zhipu GLM API Key
+ "moonshot_api_key": "", # Kimi/Moonshot API Key
+ "ark_api_key": "", # Doubao (Volcengine Ark) API Key
"dashscope_api_key": "", # Bailian (Tongyi Qianwen) API Key
"claude_api_key": "", # Claude API Key
"claude_api_base": "https://api.anthropic.com/v1", # Claude API base URL; change it to use a third-party proxy
@@ -173,7 +175,7 @@ pip3 install -r requirements-optional.txt
2. Other settings
-+ `model`: model name; in Agent mode `glm-4.7`, `MiniMax-M2.1`, `qwen3-max`, `claude-opus-4-6`, `claude-sonnet-4-5`, `claude-sonnet-4-0`, `gemini-3-flash-preview`, and `gemini-3-pro-preview` are recommended; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for the full list of model names
++ `model`: model name; in Agent mode `MiniMax-M2.5`, `glm-5`, `kimi-k2.5`, `qwen3-max`, `claude-sonnet-4-5`, and `gemini-3-flash-preview` are recommended; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for the full list of model names
+ `character_desc`: the bot's system prompt in plain chat mode. It has no effect in Agent mode, where the system prompt is built from the files in the workspace.
+ `subscribe_msg`: subscription message for the Official Account and WeCom channels; it is sent automatically when a user subscribes and may contain special placeholders. Currently {trigger_prefix} is supported, which is replaced with the bot's trigger word at runtime; see the example below.
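+
+For example, a hypothetical `subscribe_msg` using the placeholder:
+
+```json
+{
+ "subscribe_msg": "Thanks for subscribing! Send {trigger_prefix} followed by your question to chat with me"
+}
+```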
@@ -309,24 +311,24 @@ volumes:
```json
{
- "model": "MiniMax-M2.1",
+ "model": "MiniMax-M2.5",
"minimax_api_key": ""
}
```
- - `model`: can be set to `MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat`, etc.
+ - `model`: can be set to `MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat`, etc.
- `minimax_api_key`: the MiniMax platform API-KEY, created in the [console](https://platform.minimaxi.com/user-center/basic-information/interface-key)
Method 2: access via the OpenAI-compatible API, configured as follows:
```json
{
"bot_type": "chatGPT",
- "model": "MiniMax-M2.1",
+ "model": "MiniMax-M2.5",
"open_ai_api_base": "https://api.minimaxi.com/v1",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
-- `model`: can be set to `MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
+- `model`: can be set to `MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
- `open_ai_api_base`: the BASE URL of the MiniMax API
- `open_ai_api_key`: the MiniMax platform API-KEY
@@ -338,24 +340,24 @@ volumes:
```json
{
- "model": "glm-4.7",
+ "model": "glm-5",
"zhipu_ai_api_key": ""
}
```
- - `model`: can be set to `glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`, etc.; see [glm-4 series model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
+ - `model`: can be set to `glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`, etc.; see [GLM series model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
- `zhipu_ai_api_key`: the Zhipu AI platform API KEY, created in the [console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys)
Method 2: access via the OpenAI-compatible API, configured as follows:
```json
{
"bot_type": "chatGPT",
- "model": "glm-4.7",
+ "model": "glm-5",
"open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
-- `model`: can be set to `glm-4.7、glm-4.6、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`, etc.
+- `model`: can be set to `glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`, etc.
- `open_ai_api_base`: the BASE URL of the Zhipu AI API
- `open_ai_api_key`: the Zhipu AI platform API KEY
@@ -448,28 +450,46 @@ API Key creation: create one in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
```json
{
- "model": "moonshot-v1-128k",
+ "model": "kimi-k2.5",
"moonshot_api_key": ""
}
```
- - `model`: can be set to `moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
+ - `model`: can be set to `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `moonshot_api_key`: the Moonshot API-KEY, created in the [console](https://platform.moonshot.cn/console/api-keys)
Method 2: access via the OpenAI-compatible API, configured as follows:
```json
{
"bot_type": "chatGPT",
- "model": "moonshot-v1-128k",
+ "model": "kimi-k2.5",
"open_ai_api_base": "https://api.moonshot.cn/v1",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
-- `model`: can be set to `moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
+- `model`: can be set to `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `open_ai_api_base`: the Moonshot BASE URL
- `open_ai_api_key`: the Moonshot API-KEY
+
+Doubao (豆包)
+
+1. API Key creation: create an API Key in the [Volcengine Ark console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey)
+
+2. Fill in the configuration
+
+```json
+{
+ "model": "doubao-seed-2-0-code-preview-260215",
+ "ark_api_key": ""
+}
+```
+ - `model`: can be set to `doubao-seed-2-0-code-preview-260215、doubao-seed-2-0-pro-260215、doubao-seed-2-0-lite-260215、doubao-seed-2-0-mini-260215`, etc.
+ - `ark_api_key`: the Volcengine Ark platform API Key, created in the [console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey)
+ - `ark_base_url`: optional; defaults to `https://ark.cn-beijing.volces.com/api/v3`
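+
+Method 2: access via the OpenAI-compatible API. This is a sketch following the pattern of the other providers above; Ark exposes an OpenAI-compatible endpoint, but this path is not exercised by the DoubaoBot added in this PR:
+
+```json
+{
+ "bot_type": "chatGPT",
+ "model": "doubao-seed-2-0-pro-260215",
+ "open_ai_api_base": "https://ark.cn-beijing.volces.com/api/v3",
+ "open_ai_api_key": ""
+}
+```
+- `bot_type`: OpenAI-compatible mode
+- `open_ai_api_base`: the Ark BASE URL
+- `open_ai_api_key`: the Volcengine Ark platform API Key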
+
+
Azure
diff --git a/bridge/bridge.py b/bridge/bridge.py
index 63fbb47..a4a5011 100644
--- a/bridge/bridge.py
+++ b/bridge/bridge.py
@@ -58,6 +58,9 @@ class Bridge(object):
if model_type and model_type.startswith("kimi"):
self.btype["chat"] = const.MOONSHOT
+ if model_type and model_type.startswith("doubao"):
+ self.btype["chat"] = const.DOUBAO
+
if model_type in [const.MODELSCOPE]:
self.btype["chat"] = const.MODELSCOPE
diff --git a/common/const.py b/common/const.py
index bf93703..b44a081 100644
--- a/common/const.py
+++ b/common/const.py
@@ -83,12 +83,14 @@ QWEN3_MAX = "qwen3-max" # Qwen3 Max - recommended Agent model
QWQ_PLUS = "qwq-plus"
# MiniMax
+MINIMAX_M2_5 = "MiniMax-M2.5" # MiniMax M2.5 - Latest
MINIMAX_M2_1 = "MiniMax-M2.1" # MiniMax M2.1 - recommended Agent model
MINIMAX_M2_1_LIGHTNING = "MiniMax-M2.1-lightning" # MiniMax M2.1 lightning (fast) edition
MINIMAX_M2 = "MiniMax-M2" # MiniMax M2
MINIMAX_ABAB6_5 = "abab6.5-chat" # MiniMax abab6.5
# GLM (Zhipu AI)
+GLM_5 = "glm-5" # Zhipu GLM-5 - Latest
GLM_4 = "glm-4"
GLM_4_PLUS = "glm-4-plus"
GLM_4_flash = "glm-4-flash"
@@ -104,6 +106,13 @@ MOONSHOT = "moonshot"
KIMI_K2 = "kimi-k2"
KIMI_K2_5 = "kimi-k2.5"
+# Doubao (Volcengine Ark)
+DOUBAO = "doubao"
+DOUBAO_SEED_2_CODE = "doubao-seed-2-0-code-preview-260215"
+DOUBAO_SEED_2_PRO = "doubao-seed-2-0-pro-260215"
+DOUBAO_SEED_2_LITE = "doubao-seed-2-0-lite-260215"
+DOUBAO_SEED_2_MINI = "doubao-seed-2-0-mini-260215"
+
# Other models
WEN_XIN = "wenxin"
WEN_XIN_4 = "wenxin-4"
@@ -147,16 +156,19 @@ MODEL_LIST = [
QWEN, QWEN_TURBO, QWEN_PLUS, QWEN_MAX, QWEN_LONG, QWEN3_MAX,
# MiniMax
- MiniMax, MINIMAX_M2_1, MINIMAX_M2_1_LIGHTNING, MINIMAX_M2, MINIMAX_ABAB6_5,
+ MiniMax, MINIMAX_M2_5, MINIMAX_M2_1, MINIMAX_M2_1_LIGHTNING, MINIMAX_M2, MINIMAX_ABAB6_5,

# GLM
- ZHIPU_AI, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS,
+ ZHIPU_AI, GLM_5, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS,
GLM_4_0520, GLM_4_AIR, GLM_4_AIRX, GLM_4_7,

# Kimi
MOONSHOT, "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k",
KIMI_K2, KIMI_K2_5,

+ # Doubao
+ DOUBAO, DOUBAO_SEED_2_CODE, DOUBAO_SEED_2_PRO, DOUBAO_SEED_2_LITE, DOUBAO_SEED_2_MINI,
+
# Other models
WEN_XIN, WEN_XIN_4, XUNFEI,
LINKAI_35, LINKAI_4_TURBO, LINKAI_4o,
diff --git a/config-template.json b/config-template.json
index d09d32a..d7cf86e 100644
--- a/config-template.json
+++ b/config-template.json
@@ -1,15 +1,17 @@
{
"channel_type": "web",
- "model": "glm-4.7",
+ "model": "MiniMax-M2.5",
+ "minimax_api_key": "",
+ "zhipu_ai_api_key": "",
+ "ark_api_key": "",
+ "moonshot_api_key": "",
+ "dashscope_api_key": "",
"claude_api_key": "",
"claude_api_base": "https://api.anthropic.com/v1",
"open_ai_api_key": "",
"open_ai_api_base": "https://api.openai.com/v1",
"gemini_api_key": "",
"gemini_api_base": "https://generativelanguage.googleapis.com",
- "zhipu_ai_api_key": "",
- "minimax_api_key": "",
- "dashscope_api_key": "",
"voice_to_text": "openai",
"text_to_voice": "openai",
"voice_reply_voice": false,
diff --git a/docs/agent.md b/docs/agent.md
index 1bf8a82..2e4e68f 100644
--- a/docs/agent.md
+++ b/docs/agent.md
@@ -8,7 +8,7 @@ The Cow project has been fully upgraded from a simple chatbot into the super intelligent assistant **CowAgent**
- **Tool system**: 10+ built-in tools, including file read/write, a bash terminal, a browser, scheduled tasks, and memory management, so the Agent can manage your computer or server
- **Long-term memory**: Automatically persists conversation memory to local files and a database, including global and per-day memory, with keyword and vector retrieval
- **Skills system**: Adds a Skill runtime engine with multiple built-in skills, and supports developing custom Skills through natural-language conversation
-- **Multi-channel and multi-model support**: Interact with the Agent on channels such as Web, Feishu, DingTalk, and WeCom; supports mainstream models at home and abroad including Claude, Gemini, OpenAI, GLM, MiniMax, and Qwen
+- **Multi-channel and multi-model support**: Interact with the Agent on channels such as Web, Feishu, DingTalk, and WeCom; supports mainstream models at home and abroad including Claude, Gemini, OpenAI, GLM, MiniMax, Qwen, Kimi, and Doubao
- **Security and cost**: Controls Agent access security via the key management tool, prompt controls, and system permissions; caps token cost via max memory rounds, max context tokens, and tool execution step limits
@@ -137,11 +137,13 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
The following models are recommended in Agent mode; choose based on a balance of quality and cost (a minimal config sketch follows the list):
-- **MiniMax**: `MiniMax-M2.1`
-- **GLM**: `glm-4.7`
+- **MiniMax**: `MiniMax-M2.5`
+- **GLM**: `glm-5`
+- **Kimi**: `kimi-k2.5`
+- **Doubao**: `doubao-seed-2-0-code-preview-260215`
- **Qwen**: `qwen3-max`
-- **Claude**: `claude-sonnet-4-5`、`claude-sonnet-4-0`
-- **Gemini**: `gemini-3-flash-preview`、`gemini-3-pro-preview`
+- **Claude**: `claude-sonnet-4-5`
+- **Gemini**: `gemini-3-flash-preview`
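+
+A minimal config sketch for switching to one of the recommended models (key names as in config-template.json; the API-key field varies by provider, e.g. `minimax_api_key` for MiniMax, `zhipu_ai_api_key` for GLM):
+
+```json
+{
+ "model": "MiniMax-M2.5",
+ "minimax_api_key": ""
+}
+```
+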
See the [README.md Model Guide](../README.md#模型说明) for detailed model configuration.
diff --git a/models/bot_factory.py b/models/bot_factory.py
index 3027d47..2f83da9 100644
--- a/models/bot_factory.py
+++ b/models/bot_factory.py
@@ -69,5 +69,8 @@ def create_bot(bot_type):
from models.modelscope.modelscope_bot import ModelScopeBot
return ModelScopeBot()
+ elif bot_type == const.DOUBAO:
+ from models.doubao.doubao_bot import DoubaoBot
+ return DoubaoBot()
raise RuntimeError
diff --git a/models/doubao/__init__.py b/models/doubao/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/models/doubao/doubao_bot.py b/models/doubao/doubao_bot.py
new file mode 100644
index 0000000..987d718
--- /dev/null
+++ b/models/doubao/doubao_bot.py
@@ -0,0 +1,520 @@
+# encoding:utf-8
+
+import json
+import time
+
+import requests
+from models.bot import Bot
+from models.session_manager import SessionManager
+from bridge.context import ContextType
+from bridge.reply import Reply, ReplyType
+from common.log import logger
+from config import conf, load_config
+from .doubao_session import DoubaoSession
+
+
+# Doubao (火山方舟 / Volcengine Ark) API Bot
+class DoubaoBot(Bot):
+ def __init__(self):
+ super().__init__()
+ model = conf().get("model") or "doubao-seed-2-0-pro-260215"
+ self.sessions = SessionManager(DoubaoSession, model=model)
+ self.args = {
+ "model": model,
+ "temperature": conf().get("temperature", 0.8),
+ "top_p": conf().get("top_p", 1.0),
+ }
+ self.api_key = conf().get("ark_api_key")
+ self.base_url = conf().get("ark_base_url", "https://ark.cn-beijing.volces.com/api/v3")
+ # Normalize base_url: drop any trailing slash, then a trailing /chat/completions suffix
+ self.base_url = self.base_url.rstrip("/")
+ if self.base_url.endswith("/chat/completions"):
+ self.base_url = self.base_url.rsplit("/chat/completions", 1)[0]
+
+ def reply(self, query, context=None):
+ # acquire reply content
+ if context.type == ContextType.TEXT:
+ logger.info("[DOUBAO] query={}".format(query))
+
+ session_id = context["session_id"]
+ reply = None
+ clear_memory_commands = conf().get("clear_memory_commands", ["#清除记忆"])
+ if query in clear_memory_commands:
+ self.sessions.clear_session(session_id)
+ reply = Reply(ReplyType.INFO, "记忆已清除")
+ elif query == "#清除所有":
+ self.sessions.clear_all_session()
+ reply = Reply(ReplyType.INFO, "所有人记忆已清除")
+ elif query == "#更新配置":
+ load_config()
+ reply = Reply(ReplyType.INFO, "配置已更新")
+ if reply:
+ return reply
+ session = self.sessions.session_query(query, session_id)
+ logger.debug("[DOUBAO] session query={}".format(session.messages))
+
+ model = context.get("doubao_model")
+ new_args = self.args.copy()
+ if model:
+ new_args["model"] = model
+
+ reply_content = self.reply_text(session, args=new_args)
+ logger.debug(
+ "[DOUBAO] new_query={}, session_id={}, reply_cont={}, completion_tokens={}".format(
+ session.messages,
+ session_id,
+ reply_content["content"],
+ reply_content["completion_tokens"],
+ )
+ )
+ if reply_content["completion_tokens"] == 0 and len(reply_content["content"]) > 0:
+ reply = Reply(ReplyType.ERROR, reply_content["content"])
+ elif reply_content["completion_tokens"] > 0:
+ self.sessions.session_reply(reply_content["content"], session_id, reply_content["total_tokens"])
+ reply = Reply(ReplyType.TEXT, reply_content["content"])
+ else:
+ reply = Reply(ReplyType.ERROR, reply_content["content"])
+ logger.debug("[DOUBAO] reply {} used 0 tokens.".format(reply_content))
+ return reply
+ else:
+ reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
+ return reply
+
+ def reply_text(self, session: DoubaoSession, args=None, retry_count: int = 0) -> dict:
+ """
+ Call Doubao chat completion API to get the answer
+ :param session: a conversation session
+ :param args: model args
+ :param retry_count: retry count
+ :return: a dict with total_tokens, completion_tokens and content
+ """
+ try:
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": "Bearer " + self.api_key
+ }
+ body = args.copy()
+ body["messages"] = session.messages
+ # Disable thinking by default for better efficiency
+ body["thinking"] = {"type": "disabled"}
+ res = requests.post(
+ f"{self.base_url}/chat/completions",
+ headers=headers,
+ json=body
+ )
+ if res.status_code == 200:
+ response = res.json()
+ return {
+ "total_tokens": response["usage"]["total_tokens"],
+ "completion_tokens": response["usage"]["completion_tokens"],
+ "content": response["choices"][0]["message"]["content"]
+ }
+ else:
+ response = res.json()
+ error = response.get("error", {})
+ logger.error(f"[DOUBAO] chat failed, status_code={res.status_code}, "
+ f"msg={error.get('message')}, type={error.get('type')}")
+
+ result = {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
+ need_retry = False
+ if res.status_code >= 500:
+ logger.warn(f"[DOUBAO] do retry, times={retry_count}")
+ need_retry = retry_count < 2
+ elif res.status_code == 401:
+ result["content"] = "授权失败,请检查API Key是否正确"
+ elif res.status_code == 429:
+ result["content"] = "请求过于频繁,请稍后再试"
+ need_retry = retry_count < 2
+ else:
+ need_retry = False
+
+ if need_retry:
+ time.sleep(3)
+ return self.reply_text(session, args, retry_count + 1)
+ else:
+ return result
+ except Exception as e:
+ logger.exception(e)
+ need_retry = retry_count < 2
+ result = {"completion_tokens": 0, "content": "我现在有点累了,等会再来吧"}
+ if need_retry:
+ return self.reply_text(session, args, retry_count + 1)
+ else:
+ return result
+
+ # ==================== Agent mode support ====================
+
+ def call_with_tools(self, messages, tools=None, stream: bool = False, **kwargs):
+ """
+ Call Doubao API with tool support for agent integration.
+
+ This method handles:
+ 1. Format conversion (Claude format -> OpenAI format)
+ 2. System prompt injection
+ 3. Streaming SSE response with tool_calls
+ 4. Thinking (reasoning) is disabled by default for efficiency
+
+ Args:
+ messages: List of messages (may be in Claude format from agent)
+ tools: List of tool definitions (may be in Claude format from agent)
+ stream: Whether to use streaming
+ **kwargs: Additional parameters (max_tokens, temperature, system, model, etc.)
+
+ Returns:
+ Generator yielding OpenAI-format chunks (for streaming)
+ """
+ try:
+ # Convert messages from Claude format to OpenAI format
+ converted_messages = self._convert_messages_to_openai_format(messages)
+
+ # Inject system prompt if provided
+ system_prompt = kwargs.pop("system", None)
+ if system_prompt:
+ if not converted_messages or converted_messages[0].get("role") != "system":
+ converted_messages.insert(0, {"role": "system", "content": system_prompt})
+ else:
+ converted_messages[0] = {"role": "system", "content": system_prompt}
+
+ # Convert tools from Claude format to OpenAI format
+ converted_tools = None
+ if tools:
+ converted_tools = self._convert_tools_to_openai_format(tools)
+
+ # Resolve model / temperature
+ model = kwargs.pop("model", None) or self.args["model"]
+ max_tokens = kwargs.pop("max_tokens", None)
+ # Drop temperature and let the API use its own default
+ kwargs.pop("temperature", None)
+
+ # Build request body (omit temperature, let the API use its own default)
+ request_body = {
+ "model": model,
+ "messages": converted_messages,
+ "stream": stream,
+ }
+ if max_tokens is not None:
+ request_body["max_tokens"] = max_tokens
+
+ # Add tools
+ if converted_tools:
+ request_body["tools"] = converted_tools
+ request_body["tool_choice"] = "auto"
+
+ # Explicitly disable thinking to avoid reasoning_content issues
+ # in multi-turn tool calls
+ request_body["thinking"] = {"type": "disabled"}
+
+ logger.debug(f"[DOUBAO] API call: model={model}, "
+ f"tools={len(converted_tools) if converted_tools else 0}, stream={stream}")
+
+ if stream:
+ return self._handle_stream_response(request_body)
+ else:
+ return self._handle_sync_response(request_body)
+
+ except Exception as e:
+ logger.error(f"[DOUBAO] call_with_tools error: {e}")
+ import traceback
+ logger.error(traceback.format_exc())
+
+ def error_generator():
+ yield {"error": True, "message": str(e), "status_code": 500}
+ return error_generator()
+
+ # -------------------- streaming --------------------
+
+ def _handle_stream_response(self, request_body: dict):
+ """Handle streaming SSE response from Doubao API and yield OpenAI-format chunks."""
+ try:
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": f"Bearer {self.api_key}"
+ }
+
+ url = f"{self.base_url}/chat/completions"
+ response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=120)
+
+ if response.status_code != 200:
+ error_msg = response.text
+ logger.error(f"[DOUBAO] API error: status={response.status_code}, msg={error_msg}")
+ yield {"error": True, "message": error_msg, "status_code": response.status_code}
+ return
+
+ current_tool_calls = {}
+ finish_reason = None
+
+ for line in response.iter_lines():
+ if not line:
+ continue
+
+ line = line.decode("utf-8")
+ if not line.startswith("data: "):
+ continue
+
+ data_str = line[6:] # Remove "data: " prefix
+ if data_str.strip() == "[DONE]":
+ break
+
+ try:
+ chunk = json.loads(data_str)
+ except json.JSONDecodeError as e:
+ logger.warning(f"[DOUBAO] JSON decode error: {e}, data: {data_str[:200]}")
+ continue
+
+ # Check for error in chunk
+ if chunk.get("error"):
+ error_data = chunk["error"]
+ error_msg = error_data.get("message", "Unknown error") if isinstance(error_data, dict) else str(error_data)
+ logger.error(f"[DOUBAO] stream error: {error_msg}")
+ yield {"error": True, "message": error_msg, "status_code": 500}
+ return
+
+ if not chunk.get("choices"):
+ continue
+
+ choice = chunk["choices"][0]
+ delta = choice.get("delta", {})
+
+ # Skip reasoning_content (thinking) - don't log or forward
+ if delta.get("reasoning_content"):
+ continue
+
+ # Handle text content
+ if "content" in delta and delta["content"]:
+ yield {
+ "choices": [{
+ "index": 0,
+ "delta": {
+ "role": "assistant",
+ "content": delta["content"]
+ }
+ }]
+ }
+
+ # Handle tool_calls (streamed incrementally)
+ if "tool_calls" in delta:
+ for tool_call_chunk in delta["tool_calls"]:
+ index = tool_call_chunk.get("index", 0)
+ if index not in current_tool_calls:
+ current_tool_calls[index] = {
+ "id": tool_call_chunk.get("id", ""),
+ "type": "tool_use",
+ "name": tool_call_chunk.get("function", {}).get("name", ""),
+ "input": ""
+ }
+
+ # Accumulate arguments
+ if "function" in tool_call_chunk and "arguments" in tool_call_chunk["function"]:
+ current_tool_calls[index]["input"] += tool_call_chunk["function"]["arguments"]
+
+ # Yield OpenAI-format tool call delta
+ yield {
+ "choices": [{
+ "index": 0,
+ "delta": {
+ "tool_calls": [tool_call_chunk]
+ }
+ }]
+ }
+
+ # Capture finish_reason
+ if choice.get("finish_reason"):
+ finish_reason = choice["finish_reason"]
+
+ # Final chunk with finish_reason
+ yield {
+ "choices": [{
+ "index": 0,
+ "delta": {},
+ "finish_reason": finish_reason
+ }]
+ }
+
+ except requests.exceptions.Timeout:
+ logger.error("[DOUBAO] Request timeout")
+ yield {"error": True, "message": "Request timeout", "status_code": 500}
+ except Exception as e:
+ logger.error(f"[DOUBAO] stream response error: {e}")
+ import traceback
+ logger.error(traceback.format_exc())
+ yield {"error": True, "message": str(e), "status_code": 500}
+
+ # -------------------- sync --------------------
+
+ def _handle_sync_response(self, request_body: dict):
+ """Handle synchronous API response and yield a single result dict."""
+ try:
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": f"Bearer {self.api_key}"
+ }
+
+ request_body.pop("stream", None)
+ url = f"{self.base_url}/chat/completions"
+ response = requests.post(url, headers=headers, json=request_body, timeout=120)
+
+ if response.status_code != 200:
+ error_msg = response.text
+ logger.error(f"[DOUBAO] API error: status={response.status_code}, msg={error_msg}")
+ yield {"error": True, "message": error_msg, "status_code": response.status_code}
+ return
+
+ result = response.json()
+ message = result["choices"][0]["message"]
+ finish_reason = result["choices"][0]["finish_reason"]
+
+ response_data = {"role": "assistant", "content": []}
+
+ # Add text content
+ if message.get("content"):
+ response_data["content"].append({
+ "type": "text",
+ "text": message["content"]
+ })
+
+ # Add tool calls
+ if message.get("tool_calls"):
+ for tool_call in message["tool_calls"]:
+ response_data["content"].append({
+ "type": "tool_use",
+ "id": tool_call["id"],
+ "name": tool_call["function"]["name"],
+ "input": json.loads(tool_call["function"]["arguments"])
+ })
+
+ # Map finish_reason
+ if finish_reason == "tool_calls":
+ response_data["stop_reason"] = "tool_use"
+ elif finish_reason == "stop":
+ response_data["stop_reason"] = "end_turn"
+ else:
+ response_data["stop_reason"] = finish_reason
+
+ yield response_data
+
+ except requests.exceptions.Timeout:
+ logger.error("[DOUBAO] Request timeout")
+ yield {"error": True, "message": "Request timeout", "status_code": 500}
+ except Exception as e:
+ logger.error(f"[DOUBAO] sync response error: {e}")
+ import traceback
+ logger.error(traceback.format_exc())
+ yield {"error": True, "message": str(e), "status_code": 500}
+
+ # -------------------- format conversion --------------------
+
+ def _convert_messages_to_openai_format(self, messages):
+ """
+ Convert messages from Claude format to OpenAI format.
+
+ Claude format uses content blocks: tool_use / tool_result / text
+ OpenAI format uses tool_calls in assistant, role=tool for results
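+
+ For example, a Claude tool_use block
+ {"type": "tool_use", "id": "call_1", "name": "bash", "input": {"cmd": "ls"}}
+ becomes an OpenAI assistant tool_calls entry
+ {"id": "call_1", "type": "function", "function": {"name": "bash", "arguments": '{"cmd": "ls"}'}}
+ ("bash" here is an illustrative tool name, not one defined in this repo)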
+ """
+ if not messages:
+ return []
+
+ converted = []
+
+ for msg in messages:
+ role = msg.get("role")
+ content = msg.get("content")
+
+ # Already a simple string - pass through
+ if isinstance(content, str):
+ converted.append(msg)
+ continue
+
+ if not isinstance(content, list):
+ converted.append(msg)
+ continue
+
+ if role == "user":
+ text_parts = []
+ tool_results = []
+
+ for block in content:
+ if not isinstance(block, dict):
+ continue
+ if block.get("type") == "text":
+ text_parts.append(block.get("text", ""))
+ elif block.get("type") == "tool_result":
+ tool_call_id = block.get("tool_use_id") or ""
+ result_content = block.get("content", "")
+ if not isinstance(result_content, str):
+ result_content = json.dumps(result_content, ensure_ascii=False)
+ tool_results.append({
+ "role": "tool",
+ "tool_call_id": tool_call_id,
+ "content": result_content
+ })
+
+ # Tool results first (must come right after assistant with tool_calls)
+ for tr in tool_results:
+ converted.append(tr)
+
+ if text_parts:
+ converted.append({"role": "user", "content": "\n".join(text_parts)})
+
+ elif role == "assistant":
+ openai_msg = {"role": "assistant"}
+ text_parts = []
+ tool_calls = []
+
+ for block in content:
+ if not isinstance(block, dict):
+ continue
+ if block.get("type") == "text":
+ text_parts.append(block.get("text", ""))
+ elif block.get("type") == "tool_use":
+ tool_calls.append({
+ "id": block.get("id"),
+ "type": "function",
+ "function": {
+ "name": block.get("name"),
+ "arguments": json.dumps(block.get("input", {}))
+ }
+ })
+
+ if text_parts:
+ openai_msg["content"] = "\n".join(text_parts)
+ elif not tool_calls:
+ openai_msg["content"] = ""
+
+ if tool_calls:
+ openai_msg["tool_calls"] = tool_calls
+ if not text_parts:
+ openai_msg["content"] = None
+
+ converted.append(openai_msg)
+ else:
+ converted.append(msg)
+
+ return converted
+
+ def _convert_tools_to_openai_format(self, tools):
+ """
+ Convert tools from Claude format to OpenAI format.
+
+ Claude: {name, description, input_schema}
+ OpenAI: {type: "function", function: {name, description, parameters}}
+ """
+ if not tools:
+ return None
+
+ converted = []
+ for tool in tools:
+ # Already in OpenAI format
+ if "type" in tool and tool["type"] == "function":
+ converted.append(tool)
+ else:
+ converted.append({
+ "type": "function",
+ "function": {
+ "name": tool.get("name"),
+ "description": tool.get("description"),
+ "parameters": tool.get("input_schema", {})
+ }
+ })
+
+ return converted
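+
+
+# Usage sketch (illustrative only; assumes ark_api_key and a doubao model are set in config.json,
+# and the "bash" tool below is a hypothetical tool definition, not one shipped in this repo):
+#
+#   bot = DoubaoBot()
+#   chunks = bot.call_with_tools(
+#       messages=[{"role": "user", "content": "List the files in /tmp"}],
+#       tools=[{"name": "bash", "description": "Run a shell command",
+#               "input_schema": {"type": "object", "properties": {"cmd": {"type": "string"}}}}],
+#       stream=True,
+#       system="You are a helpful agent",
+#   )
+#   for chunk in chunks:
+#       ...  # OpenAI-format deltas, or {"error": True, ...} on failure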
diff --git a/models/doubao/doubao_session.py b/models/doubao/doubao_session.py
new file mode 100644
index 0000000..561347e
--- /dev/null
+++ b/models/doubao/doubao_session.py
@@ -0,0 +1,51 @@
+from models.session_manager import Session
+from common.log import logger
+
+
+class DoubaoSession(Session):
+ def __init__(self, session_id, system_prompt=None, model="doubao-seed-2-0-pro-260215"):
+ super().__init__(session_id, system_prompt)
+ self.model = model
+ self.reset()
+
+ def discard_exceeding(self, max_tokens, cur_tokens=None):
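+ # Drop the oldest non-system messages (index 1) until the conversation fits within max_tokens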
+ precise = True
+ try:
+ cur_tokens = self.calc_tokens()
+ except Exception as e:
+ precise = False
+ if cur_tokens is None:
+ raise e
+ logger.debug("Exception when counting tokens precisely for query: {}".format(e))
+ while cur_tokens > max_tokens:
+ if len(self.messages) > 2:
+ self.messages.pop(1)
+ elif len(self.messages) == 2 and self.messages[1]["role"] == "assistant":
+ self.messages.pop(1)
+ if precise:
+ cur_tokens = self.calc_tokens()
+ else:
+ cur_tokens = cur_tokens - max_tokens
+ break
+ elif len(self.messages) == 2 and self.messages[1]["role"] == "user":
+ logger.warn("user message exceed max_tokens. total_tokens={}".format(cur_tokens))
+ break
+ else:
+ logger.debug("max_tokens={}, total_tokens={}, len(messages)={}".format(
+ max_tokens, cur_tokens, len(self.messages)))
+ break
+ if precise:
+ cur_tokens = self.calc_tokens()
+ else:
+ cur_tokens = cur_tokens - max_tokens
+ return cur_tokens
+
+ def calc_tokens(self):
+ return num_tokens_from_messages(self.messages, self.model)
+
+
+def num_tokens_from_messages(messages, model):
+ # Rough approximation: no local tokenizer is available for Doubao models,
+ # so count characters as a stand-in for tokens (model is accepted for API parity)
+ tokens = 0
+ for msg in messages:
+ tokens += len(msg.get("content") or "")
+ return tokens