Merge branch 'master' into feat-config-update

zhayujie committed 2026-02-20 18:58:21 +08:00
19 changed files with 1562 additions and 279 deletions

README.md

@@ -18,7 +18,7 @@
- **Long-term memory:** Automatically persists conversation memory to local files and a database, including global and daily memory, with keyword and vector retrieval
- **Skills system:** Provides an engine for creating and running Skills, with multiple built-in skills and support for developing custom Skills through natural-language conversation
- **Multimodal messages:** Supports parsing, processing, generating, and sending text, image, voice, file, and other message types
- **Multi-model access:** Supports mainstream model providers at home and abroad, including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, and Kimi
- **Multi-model access:** Supports mainstream model providers at home and abroad, including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, and Doubao
- **Multi-platform deployment:** Runs on a local machine or server, and can be integrated into web pages, Feishu, DingTalk, WeChat Official Accounts, and WeCom apps
- **Knowledge base:** Integrates enterprise knowledge-base capabilities to turn the Agent into a dedicated digital employee, built on the [LinkAI](https://link-ai.tech) platform
@@ -90,7 +90,7 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
The project supports model APIs from mainstream providers at home and abroad; see [Model notes](#模型说明) for available models and configuration.
> In Agent mode the following models are recommended (choose by weighing quality against cost): GLM (glm-4.7), MiniMax (MiniMax-M2.1), Qwen (qwen3-max), Claude (claude-opus-4-6, claude-sonnet-4-5, claude-sonnet-4-0), Gemini (gemini-3-flash-preview, gemini-3-pro-preview)
> In Agent mode the following models are recommended (choose by weighing quality against cost): MiniMax-M2.5, glm-5, kimi-k2.5, qwen3.5-plus, claude-sonnet-4-6, gemini-3.1-pro-preview
The **LinkAI platform** API is also supported, which allows flexible switching among OpenAI, Claude, Gemini, DeepSeek, Qwen, Kimi, and other common models, and adds Agent capabilities such as knowledge bases, workflows, and plugins; see the [API docs](https://docs.link-ai.tech/platform/api).
@@ -136,9 +136,11 @@ pip3 install -r requirements-optional.txt
# Example config.json contents
{
"channel_type": "web", # channel type, defaults to web; can also be set to: feishu,dingtalk,wechatcom_app,terminal,wechatmp,wechatmp_service
"model": "MiniMax-M2.1", # model name
"model": "MiniMax-M2.5", # model name
"minimax_api_key": "", # MiniMax API Key
"zhipu_ai_api_key": "", # Zhipu GLM API Key
"moonshot_api_key": "", # Kimi/Moonshot API Key
"ark_api_key": "", # Doubao (Volcengine Ark) API Key
"dashscope_api_key": "", # Bailian (Tongyi Qianwen) API Key
"claude_api_key": "", # Claude API Key
"claude_api_base": "https://api.anthropic.com/v1", # Claude API base URL; change it to connect via a third-party proxy
@@ -173,13 +175,13 @@ pip3 install -r requirements-optional.txt
<details>
<summary>2. Other settings</summary>
+ `model`: model name. In Agent mode the recommended models are `glm-4.7`, `MiniMax-M2.1`, `qwen3-max`, `claude-opus-4-6`, `claude-sonnet-4-5`, `claude-sonnet-4-0`, `gemini-3-flash-preview`, `gemini-3-pro-preview`; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for the full list of model names
+ `model`: model name. In Agent mode the recommended models are `MiniMax-M2.5`, `glm-5`, `kimi-k2.5`, `qwen3.5-plus`, `claude-sonnet-4-6`, `gemini-3.1-pro-preview`; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for the full list of model names
+ `character_desc`: the bot's system prompt in plain chat mode. In Agent mode this setting does not take effect; the prompt is built from the files in the workspace.
+ `subscribe_msg`: subscription greeting for the Official Account and WeCom channels, sent automatically when a user subscribes; placeholders are supported. Currently `{trigger_prefix}` is available, and it is replaced with the bot's trigger word at runtime.
</details>
<details>
<summary>5. LinkAI settings</summary>
<summary>3. LinkAI settings</summary>
+ `use_linkai`: whether to use the LinkAI API, off by default; set to true to connect to the LinkAI platform and use knowledge bases, workflows, plugins, and other capabilities, see the [API docs](https://docs.link-ai.tech/platform/api/chat)
+ `linkai_api_key`: LinkAI API Key, created in the [console](https://link-ai.tech/console/interface)
@@ -309,24 +311,24 @@ volumes:
```json
{
"model": "MiniMax-M2.1",
"model": "MiniMax-M2.5",
"minimax_api_key": ""
}
```
- `model`: one of `MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat`
- `model`: one of `MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat`
- `minimax_api_key`: API key for the MiniMax platform, created in the [console](https://platform.minimaxi.com/user-center/basic-information/interface-key)
Option 2: OpenAI-compatible access, configured as follows:
```json
{
"bot_type": "chatGPT",
"model": "MiniMax-M2.1",
"model": "MiniMax-M2.5",
"open_ai_api_base": "https://api.minimaxi.com/v1",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
- `model`: one of `MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
- `model`: one of `MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
- `open_ai_api_base`: base URL of the MiniMax API
- `open_ai_api_key`: API key for the MiniMax platform
</details>
@@ -338,24 +340,24 @@ volumes:
```json
{
"model": "glm-4.7",
"model": "glm-5",
"zhipu_ai_api_key": ""
}
```
- `model`: e.g. `glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`; see [GLM-4 series model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
- `model`: e.g. `glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`; see [GLM series model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
- `zhipu_ai_api_key`: API key for the Zhipu AI platform, created in the [console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys)
Option 2: OpenAI-compatible access, configured as follows:
```json
{
"bot_type": "chatGPT",
"model": "glm-4.7",
"model": "glm-5",
"open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
- `model`: one of `glm-4.7、glm-4.6、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`
- `model`: one of `glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long`
- `open_ai_api_base`: base URL of the Zhipu AI API
- `open_ai_api_key`: API key for the Zhipu AI platform
</details>
@@ -367,18 +369,18 @@ volumes:
```json
{
"model": "qwen3-max",
"model": "qwen3.5-plus",
"dashscope_api_key": "sk-qVxxxxG"
}
```
- `model`: one of `qwen3-max、qwen-max、qwen-plus、qwen-turbo、qwen-long、qwq-plus`
- `model`: one of `qwen3.5-plus、qwen3-max、qwen-max、qwen-plus、qwen-turbo、qwen-long、qwq-plus`
- `dashscope_api_key`: Tongyi Qianwen API key; see the [official docs](https://bailian.console.aliyun.com/?tab=api#/api), created in the [console](https://bailian.console.aliyun.com/?tab=model#/api-key)
Option 2: OpenAI-compatible access, configured as follows:
```json
{
"bot_type": "chatGPT",
"model": "qwen3-max",
"model": "qwen3.5-plus",
"open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"open_ai_api_key": "sk-qVxxxxG"
}
@@ -389,6 +391,53 @@ volumes:
- `open_ai_api_key`: Tongyi Qianwen API key
</details>
<details>
<summary>Kimi (Moonshot)</summary>
Option 1: official API, configured as follows:
```json
{
"model": "kimi-k2.5",
"moonshot_api_key": ""
}
```
- `model`: one of `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `moonshot_api_key`: Moonshot API key, created in the [console](https://platform.moonshot.cn/console/api-keys)
Option 2: OpenAI-compatible access, configured as follows:
```json
{
"bot_type": "chatGPT",
"model": "kimi-k2.5",
"open_ai_api_base": "https://api.moonshot.cn/v1",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
- `model`: one of `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `open_ai_api_base`: Moonshot API base URL
- `open_ai_api_key`: Moonshot API key
</details>
<details>
<summary>豆包 (Doubao)</summary>
1. Create an API Key in the [Volcengine Ark console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey)
2. Fill in the configuration:
```json
{
"model": "doubao-seed-2-0-code-preview-260215",
"ark_api_key": "YOUR_API_KEY"
}
```
- `model`: one of `doubao-seed-2-0-code-preview-260215、doubao-seed-2-0-pro-260215、doubao-seed-2-0-lite-260215、doubao-seed-2-0-mini-260215`
- `ark_api_key`: API key for the Volcengine Ark platform, created in the [console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey)
- `ark_base_url`: optional, defaults to `https://ark.cn-beijing.volces.com/api/v3`
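The Ark API is OpenAI-style: a `/chat/completions` endpoint authenticated with a Bearer key, which is also how the new `doubao_bot.py` in this commit calls it. A minimal direct-call sketch, assuming the `requests` package and a placeholder key:
```python
import requests

ARK_BASE_URL = "https://ark.cn-beijing.volces.com/api/v3"  # default ark_base_url
ARK_API_KEY = "YOUR_API_KEY"  # placeholder, create one in the Ark console

resp = requests.post(
    f"{ARK_BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {ARK_API_KEY}", "Content-Type": "application/json"},
    json={
        "model": "doubao-seed-2-0-pro-260215",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```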
</details>
<details>
<summary>Claude</summary>
@@ -398,11 +447,11 @@ volumes:
```json
{
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"claude_api_key": "YOUR_API_KEY"
}
```
- `model`: see [official model IDs](https://docs.anthropic.com/en/docs/about-claude/models/overview#model-aliases); supports `claude-opus-4-6、claude-sonnet-4-5、claude-sonnet-4-0、claude-opus-4-0、claude-3-5-sonnet-latest`
- `model`: see [official model IDs](https://docs.anthropic.com/en/docs/about-claude/models/overview#model-aliases); supports `claude-sonnet-4-6、claude-opus-4-6、claude-sonnet-4-5、claude-sonnet-4-0、claude-opus-4-0、claude-3-5-sonnet-latest`
</details>
<details>
@@ -411,11 +460,11 @@ volumes:
API Key创建在 [控制台](https://aistudio.google.com/app/apikey?hl=zh-cn) 创建API Key ,配置如下
```json
{
"model": "gemini-3-flash-preview",
"model": "gemini-3.1-pro-preview",
"gemini_api_key": ""
}
```
- `model`: see the [official model list](https://ai.google.dev/gemini-api/docs/models?hl=zh-cn); supports `gemini-3-flash-preview、gemini-3-pro-preview、gemini-2.5-pro、gemini-2.0-flash`
- `model`: see the [official model list](https://ai.google.dev/gemini-api/docs/models?hl=zh-cn); supports `gemini-3.1-pro-preview、gemini-3-flash-preview、gemini-3-pro-preview、gemini-2.5-pro、gemini-2.0-flash`
</details>
<details>
@@ -441,35 +490,6 @@ API Key创建在 [控制台](https://aistudio.google.com/app/apikey?hl=zh-cn)
- `open_ai_api_base`: DeepSeek API base URL
</details>
<details>
<summary>Kimi (Moonshot)</summary>
Option 1: official API, configured as follows:
```json
{
"model": "moonshot-v1-128k",
"moonshot_api_key": ""
}
```
- `model`: one of `moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `moonshot_api_key`: Moonshot API key, created in the [console](https://platform.moonshot.cn/console/api-keys)
Option 2: OpenAI-compatible access, configured as follows:
```json
{
"bot_type": "chatGPT",
"model": "moonshot-v1-128k",
"open_ai_api_base": "https://api.moonshot.cn/v1",
"open_ai_api_key": ""
}
```
- `bot_type`: OpenAI-compatible mode
- `model`: one of `moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `open_ai_api_base`: Moonshot API base URL
- `open_ai_api_key`: Moonshot API key
</details>
<details>
<summary>Azure</summary>


@@ -583,6 +583,11 @@ class AgentStreamExecutor:
if finish_reason:
stop_reason = finish_reason
# Skip reasoning_content (internal thinking from models like GLM-5)
reasoning_delta = delta.get("reasoning_content") or ""
# if reasoning_delta:
# logger.debug(f"🧠 [thinking] {reasoning_delta[:100]}...")
# Handle text content
content_delta = delta.get("content") or ""
if content_delta:


@@ -55,6 +55,11 @@ class Bridge(object):
if model_type in [const.MOONSHOT, "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k"]:
self.btype["chat"] = const.MOONSHOT
if model_type and model_type.startswith("kimi"):
self.btype["chat"] = const.MOONSHOT
if model_type and model_type.startswith("doubao"):
self.btype["chat"] = const.DOUBAO
if model_type in [const.MODELSCOPE]:
self.btype["chat"] = const.MODELSCOPE


@@ -20,7 +20,6 @@ from common.utils import compress_imgfile, fsize
from config import conf
from channel.wework.run import wework
from channel.wework import run
from PIL import Image
def get_wxid_by_name(room_members, group_wxid, name):
@@ -55,6 +54,7 @@ def download_and_compress_image(url, filename, quality=30):
image_storage.seek(0)
# Read and save the image
from PIL import Image
image = Image.open(image_storage)
image_path = os.path.join(directory, f"{filename}.png")
image.save(image_path, "png")
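This change turns `PIL` from a hard module-level dependency into a lazy one, so the channel module still imports when Pillow is absent. A minimal sketch of the deferred-import pattern, assuming Pillow is an optional dependency:

```python
import io

def save_as_png(image_bytes: bytes, image_path: str):
    # Pillow is imported lazily, so this module loads even without the optional dependency
    from PIL import Image
    image = Image.open(io.BytesIO(image_bytes))
    image.save(image_path, "png")
```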


@@ -26,8 +26,9 @@ CLAUDE_35_SONNET_1022 = "claude-3-5-sonnet-20241022" # 带具体日期的模型
CLAUDE_35_SONNET_0620 = "claude-3-5-sonnet-20240620"
CLAUDE_4_OPUS = "claude-opus-4-0"
CLAUDE_4_6_OPUS = "claude-opus-4-6" # Claude Opus 4.6 - recommended Agent model
CLAUDE_4_SONNET = "claude-sonnet-4-0" # Claude Sonnet 4.0 - recommended Agent model
CLAUDE_4_SONNET = "claude-sonnet-4-0" # Claude Sonnet 4.0
CLAUDE_4_5_SONNET = "claude-sonnet-4-5" # Claude Sonnet 4.5 - recommended Agent model
CLAUDE_4_6_SONNET = "claude-sonnet-4-6" # Claude Sonnet 4.6 - recommended Agent model
# Gemini (Google)
GEMINI_PRO = "gemini-1.0-pro"
@@ -35,10 +36,11 @@ GEMINI_15_flash = "gemini-1.5-flash"
GEMINI_15_PRO = "gemini-1.5-pro"
GEMINI_20_flash_exp = "gemini-2.0-flash-exp" # "exp" suffix marks experimental models that will gradually lose support
GEMINI_20_FLASH = "gemini-2.0-flash" # stable release model
GEMINI_25_FLASH_PRE = "gemini-2.5-flash-preview-05-20" # "preview" marks preview models, mainly for trying new capabilities
GEMINI_25_FLASH_PRE = "gemini-2.5-flash-preview-05-20"
GEMINI_25_PRO_PRE = "gemini-2.5-pro-preview-05-06"
GEMINI_3_FLASH_PRE = "gemini-3-flash-preview" # Gemini 3 Flash Preview - recommended Agent model
GEMINI_3_PRO_PRE = "gemini-3-pro-preview" # Gemini 3 Pro Preview - recommended Agent model
GEMINI_3_PRO_PRE = "gemini-3-pro-preview" # Gemini 3 Pro Preview
GEMINI_31_PRO_PRE = "gemini-3.1-pro-preview" # Gemini 3.1 Pro Preview - recommended Agent model
# OpenAI
GPT35 = "gpt-3.5-turbo"
@@ -80,15 +82,18 @@ QWEN_PLUS = "qwen-plus"
QWEN_MAX = "qwen-max"
QWEN_LONG = "qwen-long"
QWEN3_MAX = "qwen3-max" # Qwen3 Max - recommended Agent model
QWEN35_PLUS = "qwen3.5-plus" # Qwen3.5 Plus - Omni model (MultiModalConversation)
QWQ_PLUS = "qwq-plus"
# MiniMax
MINIMAX_M2_5 = "MiniMax-M2.5" # MiniMax M2.5 - Latest
MINIMAX_M2_1 = "MiniMax-M2.1" # MiniMax M2.1 - Agent推荐模型
MINIMAX_M2_1_LIGHTNING = "MiniMax-M2.1-lightning" # MiniMax M2.1 极速版
MINIMAX_M2 = "MiniMax-M2" # MiniMax M2
MINIMAX_ABAB6_5 = "abab6.5-chat" # MiniMax abab6.5
# GLM (Zhipu AI)
GLM_5 = "glm-5" # Zhipu GLM-5 - latest
GLM_4 = "glm-4"
GLM_4_PLUS = "glm-4-plus"
GLM_4_flash = "glm-4-flash"
@@ -101,6 +106,15 @@ GLM_4_7 = "glm-4.7" # 智谱 GLM-4.7 - Agent推荐模型
# Kimi (Moonshot)
MOONSHOT = "moonshot"
KIMI_K2 = "kimi-k2"
KIMI_K2_5 = "kimi-k2.5"
# Doubao (Volcengine Ark)
DOUBAO = "doubao"
DOUBAO_SEED_2_CODE = "doubao-seed-2-0-code-preview-260215"
DOUBAO_SEED_2_PRO = "doubao-seed-2-0-pro-260215"
DOUBAO_SEED_2_LITE = "doubao-seed-2-0-lite-260215"
DOUBAO_SEED_2_MINI = "doubao-seed-2-0-mini-260215"
# Other models
WEN_XIN = "wenxin"
@@ -121,12 +135,12 @@ MODELSCOPE_MODEL_LIST = ["LLM-Research/c4ai-command-r-plus-08-2024","mistralai/M
MODEL_LIST = [
# Claude
CLAUDE3, CLAUDE_4_6_OPUS, CLAUDE_4_OPUS, CLAUDE_4_5_SONNET, CLAUDE_4_SONNET, CLAUDE_3_OPUS, CLAUDE_3_OPUS_0229,
CLAUDE3, CLAUDE_4_6_SONNET, CLAUDE_4_6_OPUS, CLAUDE_4_OPUS, CLAUDE_4_5_SONNET, CLAUDE_4_SONNET, CLAUDE_3_OPUS, CLAUDE_3_OPUS_0229,
CLAUDE_35_SONNET, CLAUDE_35_SONNET_1022, CLAUDE_35_SONNET_0620, CLAUDE_3_SONNET, CLAUDE_3_HAIKU,
"claude", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3.5-sonnet",
# Gemini
GEMINI_3_PRO_PRE, GEMINI_3_FLASH_PRE, GEMINI_25_PRO_PRE, GEMINI_25_FLASH_PRE,
GEMINI_31_PRO_PRE, GEMINI_3_PRO_PRE, GEMINI_3_FLASH_PRE, GEMINI_25_PRO_PRE, GEMINI_25_FLASH_PRE,
GEMINI_20_FLASH, GEMINI_20_flash_exp, GEMINI_15_PRO, GEMINI_15_flash, GEMINI_PRO, GEMINI,
# OpenAI
@@ -142,18 +156,22 @@ MODEL_LIST = [
DEEPSEEK_CHAT, DEEPSEEK_REASONER,
# Qwen
QWEN, QWEN_TURBO, QWEN_PLUS, QWEN_MAX, QWEN_LONG, QWEN3_MAX,
QWEN, QWEN_TURBO, QWEN_PLUS, QWEN_MAX, QWEN_LONG, QWEN3_MAX, QWEN35_PLUS,
# MiniMax
MiniMax, MINIMAX_M2_1, MINIMAX_M2_1_LIGHTNING, MINIMAX_M2, MINIMAX_ABAB6_5,
MiniMax, MINIMAX_M2_5, MINIMAX_M2_1, MINIMAX_M2_1_LIGHTNING, MINIMAX_M2, MINIMAX_ABAB6_5,
# GLM
ZHIPU_AI, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS,
ZHIPU_AI, GLM_5, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS,
GLM_4_0520, GLM_4_AIR, GLM_4_AIRX, GLM_4_7,
# Kimi
MOONSHOT, "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k",
KIMI_K2, KIMI_K2_5,
# Doubao
DOUBAO, DOUBAO_SEED_2_CODE, DOUBAO_SEED_2_PRO, DOUBAO_SEED_2_LITE, DOUBAO_SEED_2_MINI,
# Other models
WEN_XIN, WEN_XIN_4, XUNFEI,
LINKAI_35, LINKAI_4_TURBO, LINKAI_4o,


@@ -2,7 +2,6 @@ import io
import os
import re
from urllib.parse import urlparse
from PIL import Image
from common.log import logger
def fsize(file):
@@ -23,6 +22,7 @@ def fsize(file):
def compress_imgfile(file, max_size):
if fsize(file) <= max_size:
return file
from PIL import Image
file.seek(0)
img = Image.open(file)
rgb_image = img.convert("RGB")


@@ -1,15 +1,17 @@
{
"channel_type": "web",
"model": "glm-4.7",
"model": "MiniMax-M2.5",
"minimax_api_key": "",
"zhipu_ai_api_key": "",
"ark_api_key": "",
"moonshot_api_key": "",
"dashscope_api_key": "",
"claude_api_key": "",
"claude_api_base": "https://api.anthropic.com/v1",
"open_ai_api_key": "",
"open_ai_api_base": "https://api.openai.com/v1",
"gemini_api_key": "",
"gemini_api_base": "https://generativelanguage.googleapis.com",
"zhipu_ai_api_key": "",
"minimax_api_key": "",
"dashscope_api_key": "",
"voice_to_text": "openai",
"text_to_voice": "openai",
"voice_reply_voice": false,


@@ -174,7 +174,10 @@ available_setting = {
"zhipu_ai_api_key": "",
"zhipu_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
"moonshot_api_key": "",
"moonshot_base_url": "https://api.moonshot.cn/v1/chat/completions",
"moonshot_base_url": "https://api.moonshot.cn/v1",
# Doubao (Volcengine Ark) platform settings
"ark_api_key": "",
"ark_base_url": "https://ark.cn-beijing.volces.com/api/v3",
# ModelScope community platform settings
"modelscope_api_key": "",
"modelscope_base_url": "https://api-inference.modelscope.cn/v1/chat/completions",


@@ -8,7 +8,7 @@ Cow项目从简单的聊天机器人全面升级为超级智能助理 **CowAgent
- **Tool system**: 10+ built-in tools, including file read/write, bash terminal, browser, scheduled tasks, and memory management, letting the Agent manage your computer or server
- **Long-term memory**: automatically persists conversation memory to local files and a database, including global and daily memory, with keyword and vector retrieval
- **Skills system**: adds a Skill runtime engine with multiple built-in skills, and supports developing custom Skills through natural-language conversation
- **Multi-channel and multi-model support**: interact with the Agent via Web, Feishu, DingTalk, WeCom, and other channels; supports Claude, Gemini, OpenAI, GLM, MiniMax, Qwen, and other mainstream models
- **Multi-channel and multi-model support**: interact with the Agent via Web, Feishu, DingTalk, WeCom, and other channels; supports Claude, Gemini, OpenAI, GLM, MiniMax, Qwen, Kimi, Doubao, and other mainstream models
- **Security and cost**: controls Agent access through key-management tooling, prompt constraints, and system permissions; limits token cost through max memory turns, max context tokens, and tool execution steps
@@ -137,11 +137,13 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
Recommended models for Agent mode (choose by weighing quality against cost):
- **MiniMax**: `MiniMax-M2.1`
- **GLM**: `glm-4.7`
- **Qwen**: `qwen3-max`
- **Claude**: `claude-sonnet-4-5``claude-sonnet-4-0`
- **Gemini**: `gemini-3-flash-preview``gemini-3-pro-preview`
- **MiniMax**: `MiniMax-M2.5`
- **GLM**: `glm-5`
- **Kimi**: `kimi-k2.5`
- **Doubao**: `doubao-seed-2-0-code-preview-260215`
- **Qwen**: `qwen3.5-plus`
- **Claude**: `claude-sonnet-4-6`
- **Gemini**: `gemini-3.1-pro-preview`
For detailed model configuration, see [README.md Model notes](../README.md#模型说明)


@@ -69,5 +69,8 @@ def create_bot(bot_type):
from models.modelscope.modelscope_bot import ModelScopeBot
return ModelScopeBot()
elif bot_type == const.DOUBAO:
from models.doubao.doubao_bot import DoubaoBot
return DoubaoBot()
raise RuntimeError


@@ -10,25 +10,26 @@ from config import conf, load_config
from .dashscope_session import DashscopeSession
import os
import dashscope
from dashscope import MultiModalConversation
from http import HTTPStatus
# Legacy model name mapping for older dashscope SDK constants.
# New models don't need to be added here — they use their name string directly.
dashscope_models = {
"qwen-turbo": dashscope.Generation.Models.qwen_turbo,
"qwen-plus": dashscope.Generation.Models.qwen_plus,
"qwen-max": dashscope.Generation.Models.qwen_max,
"qwen-bailian-v1": dashscope.Generation.Models.bailian_v1,
# Qwen3 series models - use string directly as model name
"qwen3-max": "qwen3-max",
"qwen3-plus": "qwen3-plus",
"qwen3-turbo": "qwen3-turbo",
# Other new models
"qwen-long": "qwen-long",
"qwq-32b-preview": "qwq-32b-preview",
"qvq-72b-preview": "qvq-72b-preview"
}
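# Illustrative resolution (see dashscope_models.get(model_name, model_name) used below):
#   "qwen-max"     -> dashscope.Generation.Models.qwen_max  (legacy SDK constant)
#   "qwen3.5-plus" -> "qwen3.5-plus"                        (not in the table, passed through as-is)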
# ZhipuAI chat model API
# Model name prefixes that require MultiModalConversation API instead of Generation API.
# Qwen3.5+ series are omni models that only support MultiModalConversation.
MULTIMODAL_MODEL_PREFIXES = ("qwen3.5-",)
# Qwen chat model API
class DashscopeBot(Bot):
def __init__(self):
super().__init__()
@@ -39,6 +40,11 @@ class DashscopeBot(Bot):
os.environ["DASHSCOPE_API_KEY"] = self.api_key
self.client = dashscope.Generation
@staticmethod
def _is_multimodal_model(model_name: str) -> bool:
"""Check if the model requires MultiModalConversation API"""
return model_name.startswith(MULTIMODAL_MODEL_PREFIXES)
def reply(self, query, context=None):
# acquire reply content
if context.type == ContextType.TEXT:
@@ -93,16 +99,33 @@ class DashscopeBot(Bot):
"""
try:
dashscope.api_key = self.api_key
response = self.client.call(
dashscope_models[self.model_name],
messages=session.messages,
result_format="message"
)
model = dashscope_models.get(self.model_name, self.model_name)
if self._is_multimodal_model(self.model_name):
mm_messages = self._prepare_messages_for_multimodal(session.messages)
response = MultiModalConversation.call(
model=model,
messages=mm_messages,
result_format="message"
)
else:
response = self.client.call(
model,
messages=session.messages,
result_format="message"
)
if response.status_code == HTTPStatus.OK:
content = response.output.choices[0]["message"]["content"]
resp_dict = self._response_to_dict(response)
choice = resp_dict["output"]["choices"][0]
content = choice.get("message", {}).get("content", "")
# Multimodal models may return content as a list of blocks
if isinstance(content, list):
content = "".join(
item.get("text", "") for item in content if isinstance(item, dict)
)
usage = resp_dict.get("usage", {})
return {
"total_tokens": response.usage["total_tokens"],
"completion_tokens": response.usage["output_tokens"],
"total_tokens": usage.get("total_tokens", 0),
"completion_tokens": usage.get("output_tokens", 0),
"content": content,
}
else:
@@ -232,36 +255,54 @@ class DashscopeBot(Bot):
try:
# Set API key before calling
dashscope.api_key = self.api_key
response = dashscope.Generation.call(
model=dashscope_models.get(model_name, model_name),
messages=messages,
**parameters
)
model = dashscope_models.get(model_name, model_name)
if self._is_multimodal_model(model_name):
messages = self._prepare_messages_for_multimodal(messages)
response = MultiModalConversation.call(
model=model,
messages=messages,
**parameters
)
else:
response = dashscope.Generation.call(
model=model,
messages=messages,
**parameters
)
if response.status_code == HTTPStatus.OK:
# Convert DashScope response to OpenAI-compatible format
choice = response.output.choices[0]
# Convert response to dict to avoid DashScope object KeyError issues
resp_dict = self._response_to_dict(response)
choice = resp_dict["output"]["choices"][0]
message = choice.get("message", {})
content = message.get("content", "")
# Multimodal models may return content as a list of blocks
if isinstance(content, list):
content = "".join(
item.get("text", "") for item in content if isinstance(item, dict)
)
usage = resp_dict.get("usage", {})
return {
"id": response.request_id,
"id": resp_dict.get("request_id"),
"object": "chat.completion",
"created": 0,
"model": model_name,
"choices": [{
"index": 0,
"message": {
"role": choice.message.role,
"content": choice.message.content,
"role": message.get("role", "assistant"),
"content": content,
"tool_calls": self._convert_tool_calls_to_openai_format(
choice.message.get("tool_calls")
message.get("tool_calls")
)
},
"finish_reason": choice.finish_reason
"finish_reason": choice.get("finish_reason")
}],
"usage": {
"prompt_tokens": response.usage.input_tokens,
"completion_tokens": response.usage.output_tokens,
"total_tokens": response.usage.total_tokens
"prompt_tokens": usage.get("input_tokens", 0),
"completion_tokens": usage.get("output_tokens", 0),
"total_tokens": usage.get("total_tokens", 0)
}
}
else:
@@ -271,7 +312,7 @@ class DashscopeBot(Bot):
"message": response.message,
"status_code": response.status_code
}
except Exception as e:
logger.error(f"[DASHSCOPE] sync response error: {e}")
return {
@@ -285,48 +326,52 @@ class DashscopeBot(Bot):
try:
# Set API key before calling
dashscope.api_key = self.api_key
responses = dashscope.Generation.call(
model=dashscope_models.get(model_name, model_name),
messages=messages,
stream=True,
**parameters
)
model = dashscope_models.get(model_name, model_name)
if self._is_multimodal_model(model_name):
messages = self._prepare_messages_for_multimodal(messages)
responses = MultiModalConversation.call(
model=model,
messages=messages,
stream=True,
**parameters
)
else:
responses = dashscope.Generation.call(
model=model,
messages=messages,
stream=True,
**parameters
)
# Stream chunks to caller, converting to OpenAI format
for response in responses:
if response.status_code != HTTPStatus.OK:
logger.error(f"[DASHSCOPE] Stream error: {response.code} - {response.message}")
# Convert to dict first to avoid DashScope proxy object KeyError
resp_dict = self._response_to_dict(response)
status_code = resp_dict.get("status_code", 200)
if status_code != HTTPStatus.OK:
err_code = resp_dict.get("code", "")
err_msg = resp_dict.get("message", "Unknown error")
logger.error(f"[DASHSCOPE] Stream error: {err_code} - {err_msg}")
yield {
"error": True,
"message": response.message,
"status_code": response.status_code
"message": err_msg,
"status_code": status_code
}
continue
# Get choice - use try-except because DashScope raises KeyError on hasattr()
try:
if isinstance(response.output, dict):
choice = response.output['choices'][0]
else:
choice = response.output.choices[0]
except (KeyError, AttributeError, IndexError) as e:
logger.warning(f"[DASHSCOPE] Cannot get choice: {e}")
choices = resp_dict.get("output", {}).get("choices", [])
if not choices:
continue
# Get finish_reason safely
finish_reason = None
try:
if isinstance(choice, dict):
finish_reason = choice.get('finish_reason')
else:
finish_reason = choice.finish_reason
except (KeyError, AttributeError):
pass
choice = choices[0]
finish_reason = choice.get("finish_reason")
message = choice.get("message", {})
# Convert to OpenAI-compatible format
openai_chunk = {
"id": response.request_id,
"id": resp_dict.get("request_id"),
"object": "chat.completion.chunk",
"created": 0,
"model": model_name,
@@ -336,66 +381,90 @@ class DashscopeBot(Bot):
"finish_reason": finish_reason
}]
}
# Get message safely - use try-except
message = {}
try:
if isinstance(choice, dict):
message = choice.get('message', {})
else:
message = choice.message
except (KeyError, AttributeError):
pass
# Add role if present
role = None
try:
if isinstance(message, dict):
role = message.get('role')
else:
role = message.role
except (KeyError, AttributeError):
pass
# Add role
role = message.get("role")
if role:
openai_chunk["choices"][0]["delta"]["role"] = role
# Add content if present
content = None
try:
if isinstance(message, dict):
content = message.get('content')
else:
content = message.content
except (KeyError, AttributeError):
pass
# Add reasoning_content (thinking process from models like qwen3.5)
reasoning_content = message.get("reasoning_content")
if reasoning_content:
openai_chunk["choices"][0]["delta"]["reasoning_content"] = reasoning_content
# Add content (multimodal models may return list of blocks)
content = message.get("content")
if isinstance(content, list):
content = "".join(
item.get("text", "") for item in content if isinstance(item, dict)
)
if content:
openai_chunk["choices"][0]["delta"]["content"] = content
# Add tool_calls if present
# DashScope's response object raises KeyError on hasattr() if attr doesn't exist
# So we use try-except instead
tool_calls = None
try:
if isinstance(message, dict):
tool_calls = message.get('tool_calls')
else:
tool_calls = message.tool_calls
except (KeyError, AttributeError):
pass
# Add tool_calls
tool_calls = message.get("tool_calls")
if tool_calls:
openai_chunk["choices"][0]["delta"]["tool_calls"] = self._convert_tool_calls_to_openai_format(tool_calls)
yield openai_chunk
except Exception as e:
logger.error(f"[DASHSCOPE] stream response error: {e}")
logger.error(f"[DASHSCOPE] stream response error: {e}", exc_info=True)
yield {
"error": True,
"message": str(e),
"status_code": 500
}
@staticmethod
def _response_to_dict(response) -> dict:
"""
Convert DashScope response object to a plain dict.
DashScope SDK wraps responses in proxy objects whose __getattr__
delegates to __getitem__, raising KeyError (not AttributeError)
when an attribute is missing. Standard hasattr / getattr only
catch AttributeError, so we must use try-except everywhere.
"""
_SENTINEL = object()
def _safe_getattr(obj, name, default=_SENTINEL):
"""getattr that also catches KeyError from DashScope proxy objects."""
try:
return getattr(obj, name)
except (AttributeError, KeyError, TypeError):
return default
def _has_attr(obj, name):
return _safe_getattr(obj, name) is not _SENTINEL
def _to_dict(obj):
if isinstance(obj, (str, int, float, bool, type(None))):
return obj
if isinstance(obj, dict):
return {k: _to_dict(v) for k, v in obj.items()}
if isinstance(obj, (list, tuple)):
return [_to_dict(i) for i in obj]
# DashScope response objects behave like dicts (have .keys())
if _has_attr(obj, "keys"):
try:
return {k: _to_dict(obj[k]) for k in obj.keys()}
except Exception:
pass
return obj
result = {}
# Extract known top-level fields safely
for attr in ("request_id", "status_code", "code", "message", "output", "usage"):
val = _safe_getattr(response, attr)
if val is _SENTINEL:
try:
val = response[attr]
except (KeyError, TypeError, IndexError):
continue
result[attr] = _to_dict(val)
return result
def _convert_tools_to_dashscope_format(self, tools):
"""
Convert tools from Claude format to DashScope format
@@ -424,6 +493,37 @@ class DashscopeBot(Bot):
return dashscope_tools
@staticmethod
def _prepare_messages_for_multimodal(messages: list) -> list:
"""
Ensure messages are compatible with MultiModalConversation API.
MultiModalConversation._preprocess_messages iterates every message
with ``content = message["content"]; for elem in content: ...``,
which means:
1. Every message MUST have a 'content' key.
2. 'content' MUST be an iterable (list), not a plain string.
The expected format is [{"text": "..."}, ...].
Meanwhile the DashScope API requires role='tool' messages to follow
assistant tool_calls, so we must NOT convert them to role='user'.
We just ensure they have a list-typed 'content'.
"""
result = []
for msg in messages:
msg = dict(msg) # shallow copy
# Normalize content to list format [{"text": "..."}]
content = msg.get("content")
if content is None or (isinstance(content, str) and content == ""):
msg["content"] = [{"text": ""}]
elif isinstance(content, str):
msg["content"] = [{"text": content}]
# If content is already a list, keep as-is (already in multimodal format)
result.append(msg)
return result
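# Illustrative example (assumed message shapes): plain-string content is wrapped,
# already-structured content passes through untouched:
#   {"role": "user", "content": "hi"}  -> {"role": "user", "content": [{"text": "hi"}]}
#   {"role": "tool", "content": [{"text": "ok"}], "tool_call_id": "x"}  -> unchanged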
def _convert_messages_to_dashscope_format(self, messages):
"""
Convert messages from Claude format to DashScope format


models/doubao/doubao_bot.py (new file)

@@ -0,0 +1,520 @@
# encoding:utf-8
import json
import time
import requests
from models.bot import Bot
from models.session_manager import SessionManager
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf, load_config
from .doubao_session import DoubaoSession
# Doubao (火山方舟 / Volcengine Ark) API Bot
class DoubaoBot(Bot):
def __init__(self):
super().__init__()
self.sessions = SessionManager(DoubaoSession, model=conf().get("model") or "doubao-seed-2-0-pro-260215")
model = conf().get("model") or "doubao-seed-2-0-pro-260215"
self.args = {
"model": model,
"temperature": conf().get("temperature", 0.8),
"top_p": conf().get("top_p", 1.0),
}
self.api_key = conf().get("ark_api_key")
self.base_url = conf().get("ark_base_url", "https://ark.cn-beijing.volces.com/api/v3")
# Ensure base_url does not end with /chat/completions
if self.base_url.endswith("/chat/completions"):
self.base_url = self.base_url.rsplit("/chat/completions", 1)[0]
if self.base_url.endswith("/"):
self.base_url = self.base_url.rstrip("/")
def reply(self, query, context=None):
# acquire reply content
if context.type == ContextType.TEXT:
logger.info("[DOUBAO] query={}".format(query))
session_id = context["session_id"]
reply = None
clear_memory_commands = conf().get("clear_memory_commands", ["#清除记忆"])
if query in clear_memory_commands:
self.sessions.clear_session(session_id)
reply = Reply(ReplyType.INFO, "记忆已清除")
elif query == "#清除所有":
self.sessions.clear_all_session()
reply = Reply(ReplyType.INFO, "所有人记忆已清除")
elif query == "#更新配置":
load_config()
reply = Reply(ReplyType.INFO, "配置已更新")
if reply:
return reply
session = self.sessions.session_query(query, session_id)
logger.debug("[DOUBAO] session query={}".format(session.messages))
model = context.get("doubao_model")
new_args = self.args.copy()
if model:
new_args["model"] = model
reply_content = self.reply_text(session, args=new_args)
logger.debug(
"[DOUBAO] new_query={}, session_id={}, reply_cont={}, completion_tokens={}".format(
session.messages,
session_id,
reply_content["content"],
reply_content["completion_tokens"],
)
)
if reply_content["completion_tokens"] == 0 and len(reply_content["content"]) > 0:
reply = Reply(ReplyType.ERROR, reply_content["content"])
elif reply_content["completion_tokens"] > 0:
self.sessions.session_reply(reply_content["content"], session_id, reply_content["total_tokens"])
reply = Reply(ReplyType.TEXT, reply_content["content"])
else:
reply = Reply(ReplyType.ERROR, reply_content["content"])
logger.debug("[DOUBAO] reply {} used 0 tokens.".format(reply_content))
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def reply_text(self, session: DoubaoSession, args=None, retry_count: int = 0) -> dict:
"""
Call Doubao chat completion API to get the answer
:param session: a conversation session
:param args: model args
:param retry_count: retry count
:return: {}
"""
try:
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer " + self.api_key
}
body = args.copy()
body["messages"] = session.messages
# Disable thinking by default for better efficiency
body["thinking"] = {"type": "disabled"}
res = requests.post(
f"{self.base_url}/chat/completions",
headers=headers,
json=body
)
if res.status_code == 200:
response = res.json()
return {
"total_tokens": response["usage"]["total_tokens"],
"completion_tokens": response["usage"]["completion_tokens"],
"content": response["choices"][0]["message"]["content"]
}
else:
response = res.json()
error = response.get("error", {})
logger.error(f"[DOUBAO] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}")
result = {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
need_retry = False
if res.status_code >= 500:
logger.warn(f"[DOUBAO] do retry, times={retry_count}")
need_retry = retry_count < 2
elif res.status_code == 401:
result["content"] = "授权失败请检查API Key是否正确"
elif res.status_code == 429:
result["content"] = "请求过于频繁,请稍后再试"
need_retry = retry_count < 2
else:
need_retry = False
if need_retry:
time.sleep(3)
return self.reply_text(session, args, retry_count + 1)
else:
return result
except Exception as e:
logger.exception(e)
need_retry = retry_count < 2
result = {"completion_tokens": 0, "content": "我现在有点累了,等会再来吧"}
if need_retry:
return self.reply_text(session, args, retry_count + 1)
else:
return result
# ==================== Agent mode support ====================
def call_with_tools(self, messages, tools=None, stream: bool = False, **kwargs):
"""
Call Doubao API with tool support for agent integration.
This method handles:
1. Format conversion (Claude format -> OpenAI format)
2. System prompt injection
3. Streaming SSE response with tool_calls
4. Thinking (reasoning) is disabled by default for efficiency
Args:
messages: List of messages (may be in Claude format from agent)
tools: List of tool definitions (may be in Claude format from agent)
stream: Whether to use streaming
**kwargs: Additional parameters (max_tokens, temperature, system, model, etc.)
Returns:
Generator yielding OpenAI-format chunks (for streaming)
"""
try:
# Convert messages from Claude format to OpenAI format
converted_messages = self._convert_messages_to_openai_format(messages)
# Inject system prompt if provided
system_prompt = kwargs.pop("system", None)
if system_prompt:
if not converted_messages or converted_messages[0].get("role") != "system":
converted_messages.insert(0, {"role": "system", "content": system_prompt})
else:
converted_messages[0] = {"role": "system", "content": system_prompt}
# Convert tools from Claude format to OpenAI format
converted_tools = None
if tools:
converted_tools = self._convert_tools_to_openai_format(tools)
# Resolve model / temperature
model = kwargs.pop("model", None) or self.args["model"]
max_tokens = kwargs.pop("max_tokens", None)
# Pop temperature and discard it - let the API use its own default
kwargs.pop("temperature", None)
# Build request body (omit temperature, let the API use its own default)
request_body = {
"model": model,
"messages": converted_messages,
"stream": stream,
}
if max_tokens is not None:
request_body["max_tokens"] = max_tokens
# Add tools
if converted_tools:
request_body["tools"] = converted_tools
request_body["tool_choice"] = "auto"
# Explicitly disable thinking to avoid reasoning_content issues
# in multi-turn tool calls
request_body["thinking"] = {"type": "disabled"}
logger.debug(f"[DOUBAO] API call: model={model}, "
f"tools={len(converted_tools) if converted_tools else 0}, stream={stream}")
if stream:
return self._handle_stream_response(request_body)
else:
return self._handle_sync_response(request_body)
except Exception as e:
logger.error(f"[DOUBAO] call_with_tools error: {e}")
import traceback
logger.error(traceback.format_exc())
def error_generator():
yield {"error": True, "message": str(e), "status_code": 500}
return error_generator()
# -------------------- streaming --------------------
def _handle_stream_response(self, request_body: dict):
"""Handle streaming SSE response from Doubao API and yield OpenAI-format chunks."""
try:
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}"
}
url = f"{self.base_url}/chat/completions"
response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=120)
if response.status_code != 200:
error_msg = response.text
logger.error(f"[DOUBAO] API error: status={response.status_code}, msg={error_msg}")
yield {"error": True, "message": error_msg, "status_code": response.status_code}
return
current_tool_calls = {}
finish_reason = None
for line in response.iter_lines():
if not line:
continue
line = line.decode("utf-8")
if not line.startswith("data: "):
continue
data_str = line[6:] # Remove "data: " prefix
if data_str.strip() == "[DONE]":
break
try:
chunk = json.loads(data_str)
except json.JSONDecodeError as e:
logger.warning(f"[DOUBAO] JSON decode error: {e}, data: {data_str[:200]}")
continue
# Check for error in chunk
if chunk.get("error"):
error_data = chunk["error"]
error_msg = error_data.get("message", "Unknown error") if isinstance(error_data, dict) else str(error_data)
logger.error(f"[DOUBAO] stream error: {error_msg}")
yield {"error": True, "message": error_msg, "status_code": 500}
return
if not chunk.get("choices"):
continue
choice = chunk["choices"][0]
delta = choice.get("delta", {})
# Skip reasoning_content (thinking) - don't log or forward
if delta.get("reasoning_content"):
continue
# Handle text content
if "content" in delta and delta["content"]:
yield {
"choices": [{
"index": 0,
"delta": {
"role": "assistant",
"content": delta["content"]
}
}]
}
# Handle tool_calls (streamed incrementally)
if "tool_calls" in delta:
for tool_call_chunk in delta["tool_calls"]:
index = tool_call_chunk.get("index", 0)
if index not in current_tool_calls:
current_tool_calls[index] = {
"id": tool_call_chunk.get("id", ""),
"type": "tool_use",
"name": tool_call_chunk.get("function", {}).get("name", ""),
"input": ""
}
# Accumulate arguments
if "function" in tool_call_chunk and "arguments" in tool_call_chunk["function"]:
current_tool_calls[index]["input"] += tool_call_chunk["function"]["arguments"]
# Yield OpenAI-format tool call delta
yield {
"choices": [{
"index": 0,
"delta": {
"tool_calls": [tool_call_chunk]
}
}]
}
# Capture finish_reason
if choice.get("finish_reason"):
finish_reason = choice["finish_reason"]
# Final chunk with finish_reason
yield {
"choices": [{
"index": 0,
"delta": {},
"finish_reason": finish_reason
}]
}
except requests.exceptions.Timeout:
logger.error("[DOUBAO] Request timeout")
yield {"error": True, "message": "Request timeout", "status_code": 500}
except Exception as e:
logger.error(f"[DOUBAO] stream response error: {e}")
import traceback
logger.error(traceback.format_exc())
yield {"error": True, "message": str(e), "status_code": 500}
# -------------------- sync --------------------
def _handle_sync_response(self, request_body: dict):
"""Handle synchronous API response and yield a single result dict."""
try:
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}"
}
request_body.pop("stream", None)
url = f"{self.base_url}/chat/completions"
response = requests.post(url, headers=headers, json=request_body, timeout=120)
if response.status_code != 200:
error_msg = response.text
logger.error(f"[DOUBAO] API error: status={response.status_code}, msg={error_msg}")
yield {"error": True, "message": error_msg, "status_code": response.status_code}
return
result = response.json()
message = result["choices"][0]["message"]
finish_reason = result["choices"][0]["finish_reason"]
response_data = {"role": "assistant", "content": []}
# Add text content
if message.get("content"):
response_data["content"].append({
"type": "text",
"text": message["content"]
})
# Add tool calls
if message.get("tool_calls"):
for tool_call in message["tool_calls"]:
response_data["content"].append({
"type": "tool_use",
"id": tool_call["id"],
"name": tool_call["function"]["name"],
"input": json.loads(tool_call["function"]["arguments"])
})
# Map finish_reason
if finish_reason == "tool_calls":
response_data["stop_reason"] = "tool_use"
elif finish_reason == "stop":
response_data["stop_reason"] = "end_turn"
else:
response_data["stop_reason"] = finish_reason
yield response_data
except requests.exceptions.Timeout:
logger.error("[DOUBAO] Request timeout")
yield {"error": True, "message": "Request timeout", "status_code": 500}
except Exception as e:
logger.error(f"[DOUBAO] sync response error: {e}")
import traceback
logger.error(traceback.format_exc())
yield {"error": True, "message": str(e), "status_code": 500}
# -------------------- format conversion --------------------
def _convert_messages_to_openai_format(self, messages):
"""
Convert messages from Claude format to OpenAI format.
Claude format uses content blocks: tool_use / tool_result / text
OpenAI format uses tool_calls in assistant, role=tool for results
"""
if not messages:
return []
converted = []
for msg in messages:
role = msg.get("role")
content = msg.get("content")
# Already a simple string - pass through
if isinstance(content, str):
converted.append(msg)
continue
if not isinstance(content, list):
converted.append(msg)
continue
if role == "user":
text_parts = []
tool_results = []
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "tool_result":
tool_call_id = block.get("tool_use_id") or ""
result_content = block.get("content", "")
if not isinstance(result_content, str):
result_content = json.dumps(result_content, ensure_ascii=False)
tool_results.append({
"role": "tool",
"tool_call_id": tool_call_id,
"content": result_content
})
# Tool results first (must come right after assistant with tool_calls)
for tr in tool_results:
converted.append(tr)
if text_parts:
converted.append({"role": "user", "content": "\n".join(text_parts)})
elif role == "assistant":
openai_msg = {"role": "assistant"}
text_parts = []
tool_calls = []
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "tool_use":
tool_calls.append({
"id": block.get("id"),
"type": "function",
"function": {
"name": block.get("name"),
"arguments": json.dumps(block.get("input", {}))
}
})
if text_parts:
openai_msg["content"] = "\n".join(text_parts)
elif not tool_calls:
openai_msg["content"] = ""
if tool_calls:
openai_msg["tool_calls"] = tool_calls
if not text_parts:
openai_msg["content"] = None
converted.append(openai_msg)
else:
converted.append(msg)
return converted
def _convert_tools_to_openai_format(self, tools):
"""
Convert tools from Claude format to OpenAI format.
Claude: {name, description, input_schema}
OpenAI: {type: "function", function: {name, description, parameters}}
"""
if not tools:
return None
converted = []
for tool in tools:
# Already in OpenAI format
if "type" in tool and tool["type"] == "function":
converted.append(tool)
else:
converted.append({
"type": "function",
"function": {
"name": tool.get("name"),
"description": tool.get("description"),
"parameters": tool.get("input_schema", {})
}
})
return converted
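# Illustrative conversion (assumed tool shape):
#   {"name": "read_file", "description": "Read a file", "input_schema": {"type": "object"}}
# becomes
#   {"type": "function",
#    "function": {"name": "read_file", "description": "Read a file",
#                 "parameters": {"type": "object"}}}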


@@ -0,0 +1,51 @@
from models.session_manager import Session
from common.log import logger
class DoubaoSession(Session):
def __init__(self, session_id, system_prompt=None, model="doubao-seed-2-0-pro-260215"):
super().__init__(session_id, system_prompt)
self.model = model
self.reset()
def discard_exceeding(self, max_tokens, cur_tokens=None):
precise = True
try:
cur_tokens = self.calc_tokens()
except Exception as e:
precise = False
if cur_tokens is None:
raise e
logger.debug("Exception when counting tokens precisely for query: {}".format(e))
while cur_tokens > max_tokens:
if len(self.messages) > 2:
self.messages.pop(1)
elif len(self.messages) == 2 and self.messages[1]["role"] == "assistant":
self.messages.pop(1)
if precise:
cur_tokens = self.calc_tokens()
else:
cur_tokens = cur_tokens - max_tokens
break
elif len(self.messages) == 2 and self.messages[1]["role"] == "user":
logger.warn("user message exceed max_tokens. total_tokens={}".format(cur_tokens))
break
else:
logger.debug("max_tokens={}, total_tokens={}, len(messages)={}".format(
max_tokens, cur_tokens, len(self.messages)))
break
if precise:
cur_tokens = self.calc_tokens()
else:
cur_tokens = cur_tokens - max_tokens
return cur_tokens
def calc_tokens(self):
return num_tokens_from_messages(self.messages, self.model)
def num_tokens_from_messages(messages, model):
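# Rough approximation: counts characters in each message rather than real tokens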
tokens = 0
for msg in messages:
tokens += len(msg["content"])
return tokens


@@ -6,11 +6,14 @@ Google gemini bot
"""
# encoding:utf-8
import base64
import json
import mimetypes
import os
import re
import time
import requests
from models.bot import Bot
import google.generativeai as genai
from models.session_manager import SessionManager
from bridge.context import ContextType, Context
from bridge.reply import Reply, ReplyType
@@ -18,7 +21,6 @@ from common.log import logger
from config import conf
from models.chatgpt.chat_gpt_session import ChatGPTSession
from models.baidu.baidu_wenxin_session import BaiduWenxinSession
from google.generativeai.types import HarmCategory, HarmBlockThreshold
# OpenAI对话模型API (可用)
@@ -43,6 +45,7 @@ class GoogleGeminiBot(Bot):
self.api_base = "https://generativelanguage.googleapis.com"
def reply(self, query, context: Context = None) -> Reply:
session_id = None
try:
if context.type != ContextType.TEXT:
logger.warn(f"[Gemini] Unsupported message type, type={context.type}")
@@ -50,43 +53,47 @@ class GoogleGeminiBot(Bot):
logger.info(f"[Gemini] query={query}")
session_id = context["session_id"]
session = self.sessions.session_query(query, session_id)
gemini_messages = self._convert_to_gemini_messages(self.filter_messages(session.messages))
logger.debug(f"[Gemini] messages={gemini_messages}")
genai.configure(api_key=self.api_key)
model = genai.GenerativeModel(self.model)
# Add safety settings
safety_settings = {
HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}
# Generate the reply with safety settings applied
response = model.generate_content(
gemini_messages,
safety_settings=safety_settings
filtered_messages = self.filter_messages(session.messages)
logger.debug(f"[Gemini] messages={filtered_messages}")
response = self.call_with_tools(
messages=filtered_messages,
tools=None,
stream=False,
model=self.model
)
if response.candidates and response.candidates[0].content:
reply_text = response.candidates[0].content.parts[0].text
logger.info(f"[Gemini] reply={reply_text}")
self.sessions.session_reply(reply_text, session_id)
return Reply(ReplyType.TEXT, reply_text)
else:
# No valid response content; it may have been blocked - log the safety ratings
logger.warning("[Gemini] No valid response generated. Checking safety ratings.")
if hasattr(response, 'candidates') and response.candidates:
for rating in response.candidates[0].safety_ratings:
logger.warning(f"Safety rating: {rating.category} - {rating.probability}")
error_message = "No valid response generated due to safety constraints."
if isinstance(response, dict) and response.get("error"):
error_message = response.get("message", "Failed to invoke [Gemini] api!")
logger.error(f"[Gemini] API error: {error_message}")
self.sessions.session_reply(error_message, session_id)
return Reply(ReplyType.ERROR, error_message)
choices = response.get("choices", []) if isinstance(response, dict) else []
if choices and choices[0].get("message"):
reply_text = choices[0]["message"].get("content")
if reply_text:
logger.info(f"[Gemini] reply={reply_text}")
self.sessions.session_reply(reply_text, session_id)
return Reply(ReplyType.TEXT, reply_text)
logger.warning("[Gemini] No valid response generated. Checking safety ratings.")
safety_ratings = response.get("safety_ratings", []) if isinstance(response, dict) else []
if safety_ratings:
for rating in safety_ratings:
category = rating.get("category", "UNKNOWN")
probability = rating.get("probability", "UNKNOWN")
logger.warning(f"[Gemini] Safety rating: {category} - {probability}")
error_message = "No valid response generated due to safety constraints."
self.sessions.session_reply(error_message, session_id)
return Reply(ReplyType.ERROR, error_message)
except Exception as e:
logger.error(f"[Gemini] Error generating response: {str(e)}", exc_info=True)
error_message = "Failed to invoke [Gemini] api!"
self.sessions.session_reply(error_message, session_id)
if session_id:
self.sessions.session_reply(error_message, session_id)
return Reply(ReplyType.ERROR, error_message)
def _convert_to_gemini_messages(self, messages: list):
@@ -127,6 +134,93 @@ class GoogleGeminiBot(Bot):
turn = "user"
return res
@staticmethod
def _extract_image_paths_from_text(content: str):
if not isinstance(content, str):
return "", []
pattern = r"\[图片:\s*([^\]]+)\]"
image_paths = [m.strip().strip("'\"") for m in re.findall(pattern, content) if m.strip()]
cleaned_text = re.sub(pattern, "", content)
cleaned_text = re.sub(r"\n{3,}", "\n\n", cleaned_text).strip()
return cleaned_text, image_paths
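# Illustrative example (hypothetical path): the marker is stripped from the text
# and its path returned separately:
#   _extract_image_paths_from_text("see this [图片: /tmp/a.png]")
#   -> ("see this", ["/tmp/a.png"])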
@staticmethod
def _build_image_inline_part(image_path: str):
if not image_path:
return None
try:
if image_path.startswith("file://"):
image_path = image_path[7:]
image_path = os.path.expanduser(image_path)
if not os.path.exists(image_path):
logger.warning(f"[Gemini] Image file not found: {image_path}")
return None
with open(image_path, "rb") as f:
image_bytes = f.read()
mime_type = mimetypes.guess_type(image_path)[0] or "image/png"
if not mime_type.startswith("image/"):
mime_type = "image/png"
return {
"inlineData": {
"mimeType": mime_type,
"data": base64.b64encode(image_bytes).decode("utf-8")
}
}
except Exception as e:
logger.warning(f"[Gemini] Failed to build inline image part from path={image_path}, err={e}")
return None
@staticmethod
def _build_inline_part_from_image_url(image_url):
if not image_url:
return None
if isinstance(image_url, dict):
image_url = image_url.get("url")
if not image_url or not isinstance(image_url, str):
return None
if image_url.startswith("data:"):
match = re.match(r"^data:([^;]+);base64,(.+)$", image_url, re.DOTALL)
if not match:
logger.warning("[Gemini] Invalid data URL for image block")
return None
return {
"inlineData": {
"mimeType": match.group(1),
"data": match.group(2).strip()
}
}
if image_url.startswith("file://") or os.path.exists(os.path.expanduser(image_url)):
return GoogleGeminiBot._build_image_inline_part(image_url)
if image_url.startswith("http://") or image_url.startswith("https://"):
try:
response = requests.get(image_url, timeout=20)
if response.status_code != 200:
logger.warning(f"[Gemini] Failed to fetch remote image: status={response.status_code}, url={image_url}")
return None
mime_type = response.headers.get("Content-Type", "image/png").split(";")[0].strip()
if not mime_type.startswith("image/"):
mime_type = "image/png"
return {
"inlineData": {
"mimeType": mime_type,
"data": base64.b64encode(response.content).decode("utf-8")
}
}
except Exception as e:
logger.warning(f"[Gemini] Failed to download remote image: url={image_url}, err={e}")
return None
logger.warning(f"[Gemini] Unsupported image URL format: {image_url[:120]}")
return None
def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
"""
Call Gemini API with tool support using REST API (following official docs)
@@ -145,6 +239,15 @@ class GoogleGeminiBot(Bot):
# Build REST API payload
payload = {"contents": []}
inline_image_count = 0
# Keep legacy behavior: disable Gemini safety blocking like old SDK path.
payload["safetySettings"] = [
{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]
# Extract and set system instruction
system_prompt = kwargs.get("system", "")
@@ -174,8 +277,19 @@ class GoogleGeminiBot(Bot):
parts = []
if isinstance(content, str):
# Simple text content
parts.append({"text": content})
# Text with optional [图片: /path/to/file] markers
cleaned_text, image_paths = self._extract_image_paths_from_text(content)
if cleaned_text:
parts.append({"text": cleaned_text})
image_added = False
for image_path in image_paths:
image_part = self._build_image_inline_part(image_path)
if image_part:
parts.append(image_part)
image_added = True
inline_image_count += 1
if not cleaned_text and not image_added and content:
parts.append({"text": content})
elif isinstance(content, list):
# List of content blocks (Claude format)
@@ -188,8 +302,39 @@ class GoogleGeminiBot(Bot):
block_type = block.get("type")
if block_type == "text":
# Text block
parts.append({"text": block.get("text", "")})
# Text block with optional image markers
block_text = block.get("text", "")
cleaned_text, image_paths = self._extract_image_paths_from_text(block_text)
if cleaned_text:
parts.append({"text": cleaned_text})
for image_path in image_paths:
image_part = self._build_image_inline_part(image_path)
if image_part:
parts.append(image_part)
elif block_type in ["image", "image_url"]:
# OpenAI format: {"type":"image_url","image_url":{"url":"..."}}
# Claude format: {"type":"image","source":{"type":"base64","media_type":"...","data":"..."}}
image_part = None
if block_type == "image":
source = block.get("source", {})
if isinstance(source, dict) and source.get("type") == "base64" and source.get("data"):
image_part = {
"inlineData": {
"mimeType": source.get("media_type", "image/png"),
"data": source.get("data")
}
}
elif block.get("image_url"):
image_part = self._build_inline_part_from_image_url(block.get("image_url"))
else:
image_part = self._build_inline_part_from_image_url(block.get("image_url"))
if image_part:
parts.append(image_part)
inline_image_count += 1
else:
logger.warning(f"[Gemini] Skip invalid image block: {str(block)[:200]}")
elif block_type == "tool_result":
# Convert Claude tool_result to Gemini functionResponse
@@ -237,6 +382,9 @@ class GoogleGeminiBot(Bot):
"role": gemini_role,
"parts": parts
})
if inline_image_count > 0:
logger.info(f"[Gemini] Multimodal request includes {inline_image_count} image part(s)")
# Generation config
gen_config = {}
@@ -363,15 +511,18 @@ class GoogleGeminiBot(Bot):
candidates = data.get("candidates", [])
if not candidates:
logger.warning("[Gemini] No candidates in response")
prompt_feedback = data.get("promptFeedback", {})
return {
"error": True,
"message": "No candidates in response",
"status_code": 500
"status_code": 500,
"safety_ratings": prompt_feedback.get("safetyRatings", [])
}
candidate = candidates[0]
content = candidate.get("content", {})
parts = content.get("parts", [])
safety_ratings = candidate.get("safetyRatings", [])
logger.debug(f"[Gemini] Candidate parts count: {len(parts)}")
@@ -419,7 +570,8 @@ class GoogleGeminiBot(Bot):
"message": message_dict,
"finish_reason": "tool_calls" if tool_calls else "stop"
}],
"usage": data.get("usageMetadata", {})
"usage": data.get("usageMetadata", {}),
"safety_ratings": safety_ratings
}
except Exception as e:


@@ -1,9 +1,9 @@
# encoding:utf-8
import json
import time
import openai
import openai.error
import requests
from models.bot import Bot
from models.session_manager import SessionManager
from bridge.context import ContextType
@@ -11,10 +11,9 @@ from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf, load_config
from .moonshot_session import MoonshotSession
import requests
# ZhipuAI chat model API
# Moonshot (Kimi) API Bot
class MoonshotBot(Bot):
def __init__(self):
super().__init__()
@@ -23,17 +22,22 @@ class MoonshotBot(Bot):
if model == "moonshot":
model = "moonshot-v1-32k"
self.args = {
"model": model, # 对话模型的名称
"temperature": conf().get("temperature", 0.3), # 如果设置,值域须为 [0, 1] 我们推荐 0.3,以达到较合适的效果。
"top_p": conf().get("top_p", 1.0), # 使用默认值
"model": model,
"temperature": conf().get("temperature", 0.3),
"top_p": conf().get("top_p", 1.0),
}
self.api_key = conf().get("moonshot_api_key")
self.base_url = conf().get("moonshot_base_url", "https://api.moonshot.cn/v1/chat/completions")
self.base_url = conf().get("moonshot_base_url", "https://api.moonshot.cn/v1")
# Ensure base_url does not end with /chat/completions (backward compat)
if self.base_url.endswith("/chat/completions"):
self.base_url = self.base_url.rsplit("/chat/completions", 1)[0]
if self.base_url.endswith("/"):
self.base_url = self.base_url.rstrip("/")
def reply(self, query, context=None):
# acquire reply content
if context.type == ContextType.TEXT:
logger.info("[MOONSHOT_AI] query={}".format(query))
logger.info("[MOONSHOT] query={}".format(query))
session_id = context["session_id"]
reply = None
@@ -50,19 +54,16 @@ class MoonshotBot(Bot):
if reply:
return reply
session = self.sessions.session_query(query, session_id)
logger.debug("[MOONSHOT_AI] session query={}".format(session.messages))
logger.debug("[MOONSHOT] session query={}".format(session.messages))
model = context.get("moonshot_model")
new_args = self.args.copy()
if model:
new_args["model"] = model
# if context.get('stream'):
# # reply in stream
# return self.reply_text_stream(query, new_query, session_id)
reply_content = self.reply_text(session, args=new_args)
logger.debug(
"[MOONSHOT_AI] new_query={}, session_id={}, reply_cont={}, completion_tokens={}".format(
"[MOONSHOT] new_query={}, session_id={}, reply_cont={}, completion_tokens={}".format(
session.messages,
session_id,
reply_content["content"],
@@ -76,17 +77,17 @@ class MoonshotBot(Bot):
reply = Reply(ReplyType.TEXT, reply_content["content"])
else:
reply = Reply(ReplyType.ERROR, reply_content["content"])
logger.debug("[MOONSHOT_AI] reply {} used 0 tokens.".format(reply_content))
logger.debug("[MOONSHOT] reply {} used 0 tokens.".format(reply_content))
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def reply_text(self, session: MoonshotSession, args=None, retry_count=0) -> dict:
def reply_text(self, session: MoonshotSession, args=None, retry_count: int = 0) -> dict:
"""
call openai's ChatCompletion to get the answer
Call Moonshot chat completion API to get the answer
:param session: a conversation session
:param args: model args
:param retry_count: retry count
:return: reply dict containing content and completion_tokens
"""
@@ -97,10 +98,8 @@ class MoonshotBot(Bot):
}
body = args
body["messages"] = session.messages
# logger.debug("[MOONSHOT_AI] response={}".format(response))
# logger.info("[MOONSHOT_AI] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
res = requests.post(
self.base_url,
f"{self.base_url}/chat/completions",
headers=headers,
json=body
)
@@ -114,14 +113,13 @@ class MoonshotBot(Bot):
else:
response = res.json()
error = response.get("error")
logger.error(f"[MOONSHOT_AI] chat failed, status_code={res.status_code}, "
logger.error(f"[MOONSHOT] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}")
result = {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
need_retry = False
if res.status_code >= 500:
# server error, need retry
logger.warn(f"[MOONSHOT_AI] do retry, times={retry_count}")
logger.warn(f"[MOONSHOT] do retry, times={retry_count}")
need_retry = retry_count < 2
elif res.status_code == 401:
result["content"] = "授权失败请检查API Key是否正确"
@@ -144,3 +142,380 @@ class MoonshotBot(Bot):
return self.reply_text(session, args, retry_count + 1)
else:
return result
# ==================== Agent mode support ====================
def call_with_tools(self, messages, tools=None, stream: bool = False, **kwargs):
"""
Call Moonshot API with tool support for agent integration.
This method handles:
1. Format conversion (Claude format -> OpenAI format)
2. System prompt injection
3. Streaming SSE response with tool_calls
4. Disabling thinking (reasoning) by default to avoid tool_choice conflicts
Args:
messages: List of messages (may be in Claude format from agent)
tools: List of tool definitions (may be in Claude format from agent)
stream: Whether to use streaming
**kwargs: Additional parameters (max_tokens, temperature, system, model, etc.)
Returns:
Generator yielding OpenAI-format chunks when streaming, or a single Claude-format response dict when stream=False
"""
try:
# Convert messages from Claude format to OpenAI format
converted_messages = self._convert_messages_to_openai_format(messages)
# Inject system prompt if provided
system_prompt = kwargs.pop("system", None)
if system_prompt:
if not converted_messages or converted_messages[0].get("role") != "system":
converted_messages.insert(0, {"role": "system", "content": system_prompt})
else:
converted_messages[0] = {"role": "system", "content": system_prompt}
# Convert tools from Claude format to OpenAI format
converted_tools = None
if tools:
converted_tools = self._convert_tools_to_openai_format(tools)
# Resolve model / temperature
model = kwargs.pop("model", None) or self.args["model"]
max_tokens = kwargs.pop("max_tokens", None)
# Drop any caller-supplied temperature; the API default is used instead
kwargs.pop("temperature", None)
# Build request body (omit temperature, let the API use its own default)
request_body = {
"model": model,
"messages": converted_messages,
"stream": stream,
}
if max_tokens is not None:
request_body["max_tokens"] = max_tokens
# Add tools
if converted_tools:
request_body["tools"] = converted_tools
request_body["tool_choice"] = "auto"
# Explicitly disable thinking to avoid reasoning_content issues in multi-turn tool calls.
# kimi-k2.5 may enable thinking by default; without preserving reasoning_content
# in conversation history the API will reject subsequent requests.
request_body["thinking"] = {"type": "disabled"}
logger.debug(f"[MOONSHOT] API call: model={model}, "
f"tools={len(converted_tools) if converted_tools else 0}, stream={stream}")
if stream:
return self._handle_stream_response(request_body)
else:
return self._handle_sync_response(request_body)
except Exception as e:
logger.error(f"[MOONSHOT] call_with_tools error: {e}")
import traceback
logger.error(traceback.format_exc())
def error_generator():
yield {"error": True, "message": str(e), "status_code": 500}
return error_generator()
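A minimal usage sketch for the new entry point; the tool definition and user message below are illustrative, not part of the change:

```python
bot = MoonshotBot()
tools = [{
    "name": "get_weather",  # hypothetical tool for illustration
    "description": "Query current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
messages = [{"role": "user", "content": "What's the weather in Beijing?"}]

for chunk in bot.call_with_tools(messages, tools=tools, stream=True):
    if chunk.get("error"):
        print("error:", chunk.get("message"))
        break
    for choice in chunk.get("choices", []):
        delta = choice.get("delta", {})
        if delta.get("content"):
            print(delta["content"], end="")
```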
# -------------------- streaming --------------------
def _handle_stream_response(self, request_body: dict):
"""Handle streaming SSE response from Moonshot API and yield OpenAI-format chunks."""
try:
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}"
}
url = f"{self.base_url}/chat/completions"
response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=120)
if response.status_code != 200:
error_msg = response.text
logger.error(f"[MOONSHOT] API error: status={response.status_code}, msg={error_msg}")
yield {"error": True, "message": error_msg, "status_code": response.status_code}
return
current_tool_calls = {}
finish_reason = None
for line in response.iter_lines():
if not line:
continue
line = line.decode("utf-8")
if not line.startswith("data: "):
continue
data_str = line[6:] # Remove "data: " prefix
if data_str.strip() == "[DONE]":
break
try:
chunk = json.loads(data_str)
except json.JSONDecodeError as e:
logger.warning(f"[MOONSHOT] JSON decode error: {e}, data: {data_str[:200]}")
continue
# Check for error in chunk
if chunk.get("error"):
error_data = chunk["error"]
error_msg = error_data.get("message", "Unknown error") if isinstance(error_data, dict) else str(error_data)
logger.error(f"[MOONSHOT] stream error: {error_msg}")
yield {"error": True, "message": error_msg, "status_code": 500}
return
if not chunk.get("choices"):
continue
choice = chunk["choices"][0]
delta = choice.get("delta", {})
# Skip reasoning_content (thinking); it is neither logged nor forwarded
if delta.get("reasoning_content"):
continue
# Handle text content
if "content" in delta and delta["content"]:
yield {
"choices": [{
"index": 0,
"delta": {
"role": "assistant",
"content": delta["content"]
}
}]
}
# Handle tool_calls (streamed incrementally)
if "tool_calls" in delta:
for tool_call_chunk in delta["tool_calls"]:
index = tool_call_chunk.get("index", 0)
if index not in current_tool_calls:
current_tool_calls[index] = {
"id": tool_call_chunk.get("id", ""),
"type": "tool_use",
"name": tool_call_chunk.get("function", {}).get("name", ""),
"input": ""
}
# Accumulate arguments
if "function" in tool_call_chunk and "arguments" in tool_call_chunk["function"]:
current_tool_calls[index]["input"] += tool_call_chunk["function"]["arguments"]
# Yield OpenAI-format tool call delta
yield {
"choices": [{
"index": 0,
"delta": {
"tool_calls": [tool_call_chunk]
}
}]
}
# Capture finish_reason
if choice.get("finish_reason"):
finish_reason = choice["finish_reason"]
# Final chunk with finish_reason
yield {
"choices": [{
"index": 0,
"delta": {},
"finish_reason": finish_reason
}]
}
except requests.exceptions.Timeout:
logger.error("[MOONSHOT] Request timeout")
yield {"error": True, "message": "Request timeout", "status_code": 500}
except Exception as e:
logger.error(f"[MOONSHOT] stream response error: {e}")
import traceback
logger.error(traceback.format_exc())
yield {"error": True, "message": str(e), "status_code": 500}
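Since tool-call `arguments` arrive as JSON fragments spread across chunks, a consumer must reassemble them before parsing; a sketch under that assumption, where `chunks` stands for the generator above:

```python
import json

pending = {}  # tool_call index -> accumulated id/name/arguments
for chunk in chunks:  # chunks: output of _handle_stream_response (assumed)
    for choice in chunk.get("choices", []):
        for tc in choice.get("delta", {}).get("tool_calls", []):
            slot = pending.setdefault(tc.get("index", 0),
                                      {"id": "", "name": "", "arguments": ""})
            slot["id"] = slot["id"] or tc.get("id", "")
            fn = tc.get("function", {})
            slot["name"] = slot["name"] or fn.get("name", "")
            slot["arguments"] += fn.get("arguments", "")

calls = [{"id": s["id"], "name": s["name"],
          "input": json.loads(s["arguments"] or "{}")}
         for s in pending.values()]
```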
# -------------------- sync --------------------
def _handle_sync_response(self, request_body: dict):
"""Handle synchronous API response and yield a single result dict."""
try:
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}"
}
request_body.pop("stream", None)
url = f"{self.base_url}/chat/completions"
response = requests.post(url, headers=headers, json=request_body, timeout=120)
if response.status_code != 200:
error_msg = response.text
logger.error(f"[MOONSHOT] API error: status={response.status_code}, msg={error_msg}")
yield {"error": True, "message": error_msg, "status_code": response.status_code}
return
result = response.json()
message = result["choices"][0]["message"]
finish_reason = result["choices"][0]["finish_reason"]
response_data = {"role": "assistant", "content": []}
# Add text content
if message.get("content"):
response_data["content"].append({
"type": "text",
"text": message["content"]
})
# Add tool calls
if message.get("tool_calls"):
for tool_call in message["tool_calls"]:
response_data["content"].append({
"type": "tool_use",
"id": tool_call["id"],
"name": tool_call["function"]["name"],
"input": json.loads(tool_call["function"]["arguments"])
})
# Map finish_reason
if finish_reason == "tool_calls":
response_data["stop_reason"] = "tool_use"
elif finish_reason == "stop":
response_data["stop_reason"] = "end_turn"
else:
response_data["stop_reason"] = finish_reason
yield response_data
except requests.exceptions.Timeout:
logger.error("[MOONSHOT] Request timeout")
yield {"error": True, "message": "Request timeout", "status_code": 500}
except Exception as e:
logger.error(f"[MOONSHOT] sync response error: {e}")
import traceback
logger.error(traceback.format_exc())
yield {"error": True, "message": str(e), "status_code": 500}
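For reference, the Claude-format dict yielded after a successful tool call has this shape (values illustrative):

```python
{
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Let me check the weather."},
        {"type": "tool_use", "id": "call_1",
         "name": "get_weather", "input": {"city": "Beijing"}},
    ],
    "stop_reason": "tool_use",  # mapped from finish_reason == "tool_calls"
}
```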
# -------------------- format conversion --------------------
def _convert_messages_to_openai_format(self, messages):
"""
Convert messages from Claude format to OpenAI format.
Claude format uses content blocks: tool_use / tool_result / text
OpenAI format uses tool_calls in assistant, role=tool for results
"""
if not messages:
return []
converted = []
for msg in messages:
role = msg.get("role")
content = msg.get("content")
# Already a plain string; pass it through unchanged
if isinstance(content, str):
converted.append(msg)
continue
if not isinstance(content, list):
converted.append(msg)
continue
if role == "user":
text_parts = []
tool_results = []
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "tool_result":
tool_call_id = block.get("tool_use_id") or ""
result_content = block.get("content", "")
if not isinstance(result_content, str):
result_content = json.dumps(result_content, ensure_ascii=False)
tool_results.append({
"role": "tool",
"tool_call_id": tool_call_id,
"content": result_content
})
# Tool results first (must come right after assistant with tool_calls)
for tr in tool_results:
converted.append(tr)
if text_parts:
converted.append({"role": "user", "content": "\n".join(text_parts)})
elif role == "assistant":
openai_msg = {"role": "assistant"}
text_parts = []
tool_calls = []
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "tool_use":
tool_calls.append({
"id": block.get("id"),
"type": "function",
"function": {
"name": block.get("name"),
"arguments": json.dumps(block.get("input", {}))
}
})
if text_parts:
openai_msg["content"] = "\n".join(text_parts)
elif not tool_calls:
openai_msg["content"] = ""
if tool_calls:
openai_msg["tool_calls"] = tool_calls
if not text_parts:
openai_msg["content"] = None
converted.append(openai_msg)
else:
converted.append(msg)
return converted
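A worked example of the mapping; message contents are illustrative:

```python
claude_history = [
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "call_1",
         "name": "get_weather", "input": {"city": "Beijing"}},
    ]},
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "call_1", "content": "Sunny, 20C"},
    ]},
]
# _convert_messages_to_openai_format(claude_history) produces:
[
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather",
                      "arguments": '{"city": "Beijing"}'}},
    ]},
    {"role": "tool", "tool_call_id": "call_1", "content": "Sunny, 20C"},
]
```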
def _convert_tools_to_openai_format(self, tools):
"""
Convert tools from Claude format to OpenAI format.
Claude: {name, description, input_schema}
OpenAI: {type: "function", function: {name, description, parameters}}
"""
if not tools:
return None
converted = []
for tool in tools:
# Already in OpenAI format
if "type" in tool and tool["type"] == "function":
converted.append(tool)
else:
converted.append({
"type": "function",
"function": {
"name": tool.get("name"),
"description": tool.get("description"),
"parameters": tool.get("input_schema", {})
}
})
return converted
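And the corresponding tool mapping, with an illustrative schema:

```python
claude_tool = {
    "name": "get_weather",
    "description": "Query current weather for a city",
    "input_schema": {"type": "object",
                     "properties": {"city": {"type": "string"}}},
}
# _convert_tools_to_openai_format([claude_tool]) returns:
[{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Query current weather for a city",
        "parameters": {"type": "object",
                       "properties": {"city": {"type": "string"}}},
    },
}]
```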

View File

@@ -310,13 +310,9 @@ class ZHIPUAIBot(Bot, ZhipuAIImage):
if hasattr(delta, 'content') and delta.content:
openai_chunk["choices"][0]["delta"]["content"] = delta.content
# Add reasoning_content if present (GLM-4.7 specific)
# Add reasoning_content as separate field if present (GLM-5/GLM-4.7 thinking)
if hasattr(delta, 'reasoning_content') and delta.reasoning_content:
# Store reasoning in content or metadata
if "content" not in openai_chunk["choices"][0]["delta"]:
openai_chunk["choices"][0]["delta"]["content"] = ""
# Prepend reasoning to content
openai_chunk["choices"][0]["delta"]["content"] = delta.reasoning_content + openai_chunk["choices"][0]["delta"].get("content", "")
openai_chunk["choices"][0]["delta"]["reasoning_content"] = delta.reasoning_content
# Add tool_calls if present
if hasattr(delta, 'tool_calls') and delta.tool_calls:

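With this change a GLM thinking delta travels in its own field rather than being prepended to the visible answer; an illustrative chunk:

```python
{
    "choices": [{
        "index": 0,
        "delta": {
            # model thinking, kept separate from the user-visible text
            "reasoning_content": "First, identify what the user is asking...",
        },
    }]
}
```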
View File

@@ -1,4 +1,5 @@
openai==0.27.8
aiohttp>=3.8.6,<3.10
HTMLParser>=0.0.2
PyQRCode==1.2.1
qrcode==7.4.2

80
run.sh
View File

@@ -270,24 +270,26 @@ select_model() {
echo -e "${CYAN}${BOLD}=========================================${NC}"
echo -e "${CYAN}${BOLD} Select AI Model${NC}"
echo -e "${CYAN}${BOLD}=========================================${NC}"
echo -e "${YELLOW}1) MiniMax (MiniMax-M2.1, MiniMax-M2.1-lightning, etc.)${NC}"
echo -e "${YELLOW}2) Zhipu AI (glm-4.7, glm-4.6, etc.)${NC}"
echo -e "${YELLOW}3) Qwen (qwen3-max, qwen-plus, qwq-plus, etc.)${NC}"
echo -e "${YELLOW}4) Claude (claude-sonnet-4-5, claude-opus-4-0, etc.)${NC}"
echo -e "${YELLOW}5) Gemini (gemini-3-flash-preview, gemini-2.5-pro, etc.)${NC}"
echo -e "${YELLOW}6) OpenAI GPT (gpt-5.2, gpt-4.1, etc.)${NC}"
echo -e "${YELLOW}7) LinkAI (access multiple models via one API)${NC}"
echo -e "${YELLOW}1) MiniMax (MiniMax-M2.5, MiniMax-M2.1, etc.)${NC}"
echo -e "${YELLOW}2) Zhipu AI (glm-5, glm-4.7, etc.)${NC}"
echo -e "${YELLOW}3) Kimi (kimi-k2.5, kimi-k2, etc.)${NC}"
echo -e "${YELLOW}4) Doubao (doubao-seed-2-0-code-preview-260215, etc.)${NC}"
echo -e "${YELLOW}5) Qwen (qwen3.5-plus, qwen3-max, qwq-plus, etc.)${NC}"
echo -e "${YELLOW}6) Claude (claude-sonnet-4-6, claude-opus-4-6, etc.)${NC}"
echo -e "${YELLOW}7) Gemini (gemini-3.1-pro-preview, gemini-3-flash-preview, etc.)${NC}"
echo -e "${YELLOW}8) OpenAI GPT (gpt-5.2, gpt-4.1, etc.)${NC}"
echo -e "${YELLOW}9) LinkAI (access multiple models via one API)${NC}"
echo ""
while true; do
read -p "Enter your choice [press Enter for default: 1 - MiniMax]: " model_choice
model_choice=${model_choice:-1}
case "$model_choice" in
1|2|3|4|5|6|7)
1|2|3|4|5|6|7|8|9)
break
;;
*)
echo -e "${RED}Invalid choice. Please enter 1-7.${NC}"
echo -e "${RED}Invalid choice. Please enter 1-9.${NC}"
;;
esac
done
@@ -300,8 +302,8 @@ configure_model() {
# MiniMax
echo -e "${GREEN}Configuring MiniMax...${NC}"
read -p "Enter MiniMax API Key: " minimax_key
read -p "Enter model name [press Enter for default: MiniMax-M2.1]: " model_name
model_name=${model_name:-MiniMax-M2.1}
read -p "Enter model name [press Enter for default: MiniMax-M2.5]: " model_name
model_name=${model_name:-MiniMax-M2.5}
MODEL_NAME="$model_name"
MINIMAX_KEY="$minimax_key"
@@ -310,28 +312,48 @@ configure_model() {
# Zhipu AI
echo -e "${GREEN}Configuring Zhipu AI...${NC}"
read -p "Enter Zhipu AI API Key: " zhipu_key
read -p "Enter model name [press Enter for default: glm-4.7]: " model_name
model_name=${model_name:-glm-4.7}
read -p "Enter model name [press Enter for default: glm-5]: " model_name
model_name=${model_name:-glm-5}
MODEL_NAME="$model_name"
ZHIPU_KEY="$zhipu_key"
;;
3)
# Kimi (Moonshot)
echo -e "${GREEN}Configuring Kimi (Moonshot)...${NC}"
read -p "Enter Moonshot API Key: " moonshot_key
read -p "Enter model name [press Enter for default: kimi-k2.5]: " model_name
model_name=${model_name:-kimi-k2.5}
MODEL_NAME="$model_name"
MOONSHOT_KEY="$moonshot_key"
;;
4)
# Doubao (Volcengine Ark)
echo -e "${GREEN}Configuring Doubao (Volcengine Ark)...${NC}"
read -p "Enter Ark API Key: " ark_key
read -p "Enter model name [press Enter for default: doubao-seed-2-0-code-preview-260215]: " model_name
model_name=${model_name:-doubao-seed-2-0-code-preview-260215}
MODEL_NAME="$model_name"
ARK_KEY="$ark_key"
;;
5)
# Qwen (DashScope)
echo -e "${GREEN}Configuring Qwen (DashScope)...${NC}"
read -p "Enter DashScope API Key: " dashscope_key
read -p "Enter model name [press Enter for default: qwen3-max]: " model_name
model_name=${model_name:-qwen3-max}
read -p "Enter model name [press Enter for default: qwen3.5-plus]: " model_name
model_name=${model_name:-qwen3.5-plus}
MODEL_NAME="$model_name"
DASHSCOPE_KEY="$dashscope_key"
;;
4)
6)
# Claude
echo -e "${GREEN}Configuring Claude...${NC}"
read -p "Enter Claude API Key: " claude_key
read -p "Enter model name [press Enter for default: claude-sonnet-4-5]: " model_name
model_name=${model_name:-claude-sonnet-4-5}
read -p "Enter model name [press Enter for default: claude-sonnet-4-6]: " model_name
model_name=${model_name:-claude-sonnet-4-6}
read -p "Enter API Base URL [press Enter for default: https://api.anthropic.com/v1]: " api_base
api_base=${api_base:-https://api.anthropic.com/v1}
@@ -339,12 +361,12 @@ configure_model() {
CLAUDE_KEY="$claude_key"
CLAUDE_BASE="$api_base"
;;
5)
7)
# Gemini
echo -e "${GREEN}Configuring Gemini...${NC}"
read -p "Enter Gemini API Key: " gemini_key
read -p "Enter model name [press Enter for default: gemini-3-flash-preview]: " model_name
model_name=${model_name:-gemini-3-flash-preview}
read -p "Enter model name [press Enter for default: gemini-3.1-pro-preview]: " model_name
model_name=${model_name:-gemini-3.1-pro-preview}
read -p "Enter API Base URL [press Enter for default: https://generativelanguage.googleapis.com]: " api_base
api_base=${api_base:-https://generativelanguage.googleapis.com}
@@ -352,7 +374,7 @@ configure_model() {
GEMINI_KEY="$gemini_key"
GEMINI_BASE="$api_base"
;;
6)
8)
# OpenAI
echo -e "${GREEN}Configuring OpenAI GPT...${NC}"
read -p "Enter OpenAI API Key: " openai_key
@@ -365,12 +387,12 @@ configure_model() {
OPENAI_KEY="$openai_key"
OPENAI_BASE="$api_base"
;;
7)
9)
# LinkAI
echo -e "${GREEN}Configuring LinkAI...${NC}"
read -p "Enter LinkAI API Key: " linkai_key
read -p "Enter model name [press Enter for default: MiniMax-M2.1]: " model_name
model_name=${model_name:-MiniMax-M2.1}
read -p "Enter model name [press Enter for default: MiniMax-M2.5]: " model_name
model_name=${model_name:-MiniMax-M2.5}
MODEL_NAME="$model_name"
USE_LINKAI="true"
@@ -483,6 +505,8 @@ create_config_file() {
"gemini_api_key": "${GEMINI_KEY:-}",
"gemini_api_base": "${GEMINI_BASE:-https://generativelanguage.googleapis.com}",
"zhipu_ai_api_key": "${ZHIPU_KEY:-}",
"moonshot_api_key": "${MOONSHOT_KEY:-}",
"ark_api_key": "${ARK_KEY:-}",
"dashscope_api_key": "${DASHSCOPE_KEY:-}",
"minimax_api_key": "${MINIMAX_KEY:-}",
"voice_to_text": "openai",
@@ -518,6 +542,8 @@ EOF
"gemini_api_key": "${GEMINI_KEY:-}",
"gemini_api_base": "${GEMINI_BASE:-https://generativelanguage.googleapis.com}",
"zhipu_ai_api_key": "${ZHIPU_KEY:-}",
"moonshot_api_key": "${MOONSHOT_KEY:-}",
"ark_api_key": "${ARK_KEY:-}",
"dashscope_api_key": "${DASHSCOPE_KEY:-}",
"minimax_api_key": "${MINIMAX_KEY:-}",
"voice_to_text": "openai",
@@ -552,6 +578,8 @@ EOF
"gemini_api_key": "${GEMINI_KEY:-}",
"gemini_api_base": "${GEMINI_BASE:-https://generativelanguage.googleapis.com}",
"zhipu_ai_api_key": "${ZHIPU_KEY:-}",
"moonshot_api_key": "${MOONSHOT_KEY:-}",
"ark_api_key": "${ARK_KEY:-}",
"dashscope_api_key": "${DASHSCOPE_KEY:-}",
"minimax_api_key": "${MINIMAX_KEY:-}",
"voice_to_text": "openai",
@@ -592,6 +620,8 @@ EOF
"gemini_api_key": "${GEMINI_KEY:-}",
"gemini_api_base": "${GEMINI_BASE:-https://generativelanguage.googleapis.com}",
"zhipu_ai_api_key": "${ZHIPU_KEY:-}",
"moonshot_api_key": "${MOONSHOT_KEY:-}",
"ark_api_key": "${ARK_KEY:-}",
"dashscope_api_key": "${DASHSCOPE_KEY:-}",
"minimax_api_key": "${MINIMAX_KEY:-}",
"voice_to_text": "openai",