Mirror of https://github.com/zhayujie/chatgpt-on-wechat.git (synced 2026-02-20 09:40:36 +08:00)

Commit: fix: minimax reasoning content optimization

README.md
@@ -18,7 +18,7 @@
 - ✅ **Long-term memory:** Conversation memory is automatically persisted to local files and a database, including global and per-day memory, with keyword and vector retrieval
 - ✅ **Skills system:** An engine for creating and running Skills, with a variety of built-in skills and support for developing custom Skills through natural-language conversation
 - ✅ **Multimodal messages:** Parse, process, generate, and send many message types, including text, images, voice, and files
-- ✅ **Multi-model access:** Supports OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Tongyi Qianwen, Kimi, and other mainstream model providers at home and abroad
+- ✅ **Multi-model access:** Supports OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, and other mainstream model providers at home and abroad
 - ✅ **Multi-platform deployment:** Runs on a local machine or server and can be integrated into web pages, Feishu, DingTalk, WeChat Official Accounts, and WeCom apps
 - ✅ **Knowledge base:** Integrated enterprise knowledge-base capabilities turn the Agent into a dedicated digital employee, built on the [LinkAI](https://link-ai.tech) platform
@@ -90,7 +90,7 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)

 The project supports model APIs from mainstream providers at home and abroad; see [模型说明](#模型说明) for the available models and their configuration.

-> Note: The following models are recommended in Agent mode; choose based on quality and cost: Claude (claude-sonnet-4-5, claude-sonnet-4-0), Gemini (gemini-3-flash-preview, gemini-3-pro-preview), GLM (glm-4.7), MiniMax (MiniMax-M2.1), Qwen (qwen3-max)
+> Note: The following models are recommended in Agent mode; choose based on quality and cost: MiniMax (MiniMax-M2.1), GLM (glm-4.7), Qwen (qwen3-max), Claude (claude-sonnet-4-5, claude-sonnet-4-0), Gemini (gemini-3-flash-preview, gemini-3-pro-preview)

 The **LinkAI platform** API is also supported, allowing flexible switching among OpenAI, Claude, Gemini, DeepSeek, Qwen, Kimi, and other common models, plus Agent capabilities such as knowledge bases, workflows, and plugins; see the [API docs](https://docs.link-ai.tech/platform/api).
@@ -136,16 +136,16 @@ pip3 install -r requirements-optional.txt

 # Example config.json contents
 {
   "channel_type": "web",  # Channel type; defaults to web, can be changed to: feishu, dingtalk, wechatcom_app, terminal, wechatmp, wechatmp_service
-  "model": "claude-sonnet-4-5",  # Model name
+  "model": "MiniMax-M2.1",  # Model name
+  "minimax_api_key": "",  # MiniMax API Key
+  "zhipu_ai_api_key": "",  # Zhipu GLM API Key
+  "dashscope_api_key": "",  # Bailian (Qwen) API Key
   "claude_api_key": "",  # Claude API Key
   "claude_api_base": "https://api.anthropic.com/v1",  # Claude API base URL; change it to use a third-party proxy platform
+  "open_ai_api_key": "",  # OpenAI API Key
+  "open_ai_api_base": "https://api.openai.com/v1",  # OpenAI API base URL
   "gemini_api_key": "",  # Gemini API Key
   "gemini_api_base": "https://generativelanguage.googleapis.com",  # Gemini API base URL
-  "zhipu_ai_api_key": "",  # Zhipu GLM API Key
-  "minimax_api_key": "",  # MiniMax API Key
-  "dashscope_api_key": "",  # Bailian (Qwen) API Key
-  "open_ai_api_key": "",  # OpenAI API Key
-  "open_ai_api_base": "https://api.openai.com/v1",  # OpenAI API base URL
   "linkai_api_key": "",  # LinkAI API Key
   "proxy": "",  # Proxy client IP and port; set this if your network needs a proxy, e.g. "127.0.0.1:7890"
   "speech_recognition": false,  # Whether to enable speech recognition
@@ -173,7 +173,7 @@ pip3 install -r requirements-optional.txt

 <details>
 <summary>2. Other settings</summary>

-+ `model`: Model name. In Agent mode, `claude-sonnet-4-5`, `claude-sonnet-4-0`, `gemini-3-flash-preview`, `gemini-3-pro-preview`, `glm-4.7`, `MiniMax-M2.1`, and `qwen3-max` are recommended; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for all model names
++ `model`: Model name. In Agent mode, `MiniMax-M2.1`, `glm-4.7`, `qwen3-max`, `claude-sonnet-4-5`, `claude-sonnet-4-0`, `gemini-3-flash-preview`, and `gemini-3-pro-preview` are recommended; see [common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) for all model names
 + `character_desc`: The bot's system prompt in normal chat mode. It has no effect in Agent mode, where the prompt is built from the files in the workspace.
 + `subscribe_msg`: Subscription greeting for the Official Account and WeCom channels, sent automatically when a user subscribes. Placeholders are supported; currently {trigger_prefix} is replaced with the bot's trigger word.
 </details>
@@ -302,6 +302,93 @@ volumes:
 + `model`: Leave the model field empty to use the agent's own model directly; it can be switched flexibly on the platform, and every model in the [model list](https://link-ai.tech/console/models) is available
 </details>

+<details>
+<summary>MiniMax</summary>
+
+Option 1: official API access (recommended):
+
+```json
+{
+    "model": "MiniMax-M2.1",
+    "minimax_api_key": ""
+}
+```
+- `model`: options include `MiniMax-M2.1`, `MiniMax-M2.1-lightning`, `MiniMax-M2`, `abab6.5-chat`, etc.
+- `minimax_api_key`: API key for the MiniMax platform, created in the [console](https://platform.minimaxi.com/user-center/basic-information/interface-key)
+
+Option 2: OpenAI-compatible access:
+```json
+{
+    "bot_type": "chatGPT",
+    "model": "MiniMax-M2.1",
+    "open_ai_api_base": "https://api.minimaxi.com/v1",
+    "open_ai_api_key": ""
+}
+```
+- `bot_type`: OpenAI-compatible mode
+- `model`: `MiniMax-M2.1`, `MiniMax-M2.1-lightning`, or `MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
+- `open_ai_api_base`: base URL of the MiniMax API
+- `open_ai_api_key`: API key for the MiniMax platform
+</details>
+
+<details>
+<summary>Zhipu AI (GLM)</summary>
+
+Option 1: official API access (recommended):
+
+```json
+{
+    "model": "glm-4.7",
+    "zhipu_ai_api_key": ""
+}
+```
+- `model`: options include `glm-4.7`, `glm-4-plus`, `glm-4-flash`, `glm-4-air`, `glm-4-airx`, `glm-4-long`, etc.; see the [glm-4 model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
+- `zhipu_ai_api_key`: API key for the Zhipu AI platform, created in the [console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys)
+
+Option 2: OpenAI-compatible access:
+```json
+{
+    "bot_type": "chatGPT",
+    "model": "glm-4.7",
+    "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
+    "open_ai_api_key": ""
+}
+```
+- `bot_type`: OpenAI-compatible mode
+- `model`: options include `glm-4.7`, `glm-4.6`, `glm-4-plus`, `glm-4-flash`, `glm-4-air`, `glm-4-airx`, `glm-4-long`, etc.
+- `open_ai_api_base`: base URL of the Zhipu AI platform
+- `open_ai_api_key`: API key for the Zhipu AI platform
+</details>
+
+<details>
+<summary>Tongyi Qianwen (Qwen)</summary>
+
+Option 1: official SDK access (recommended):
+
+```json
+{
+    "model": "qwen3-max",
+    "dashscope_api_key": "sk-qVxxxxG"
+}
+```
+- `model`: options include `qwen3-max`, `qwen-max`, `qwen-plus`, `qwen-turbo`, `qwen-long`, `qwq-plus`, etc.
+- `dashscope_api_key`: Qwen API key; see the [official docs](https://bailian.console.aliyun.com/?tab=api#/api) and create one in the [console](https://bailian.console.aliyun.com/?tab=model#/api-key)
+
+Option 2: OpenAI-compatible access:
+```json
+{
+    "bot_type": "chatGPT",
+    "model": "qwen3-max",
+    "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
+    "open_ai_api_key": "sk-qVxxxxG"
+}
+```
+- `bot_type`: OpenAI-compatible mode
+- `model`: all official models are supported; see the [model list](https://help.aliyun.com/zh/model-studio/models?spm=a2c4g.11186623.0.0.78d84823Kth5on#9f8890ce29g5u)
+- `open_ai_api_base`: base URL of the Qwen API
+- `open_ai_api_key`: Qwen API key
+</details>

 <details>
 <summary>Claude</summary>

@@ -354,93 +441,6 @@ API key creation: in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
 - `open_ai_api_base`: DeepSeek platform base URL
 </details>

-<details>
-<summary>Tongyi Qianwen (Qwen)</summary>
-
-Option 1: official SDK access (recommended):
-
-```json
-{
-    "model": "qwen3-max",
-    "dashscope_api_key": "sk-qVxxxxG"
-}
-```
-- `model`: options include `qwen3-max`, `qwen-max`, `qwen-plus`, `qwen-turbo`, `qwen-long`, `qwq-plus`, etc.
-- `dashscope_api_key`: Qwen API key; see the [official docs](https://bailian.console.aliyun.com/?tab=api#/api) and create one in the [console](https://bailian.console.aliyun.com/?tab=model#/api-key)
-
-Option 2: OpenAI-compatible access:
-```json
-{
-    "bot_type": "chatGPT",
-    "model": "qwen3-max",
-    "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
-    "open_ai_api_key": "sk-qVxxxxG"
-}
-```
-- `bot_type`: OpenAI-compatible mode
-- `model`: all official models are supported; see the [model list](https://help.aliyun.com/zh/model-studio/models?spm=a2c4g.11186623.0.0.78d84823Kth5on#9f8890ce29g5u)
-- `open_ai_api_base`: base URL of the Qwen API
-- `open_ai_api_key`: Qwen API key
-</details>
-
-<details>
-<summary>MiniMax</summary>
-
-Option 1: official API access (recommended):
-
-```json
-{
-    "model": "MiniMax-M2.1",
-    "minimax_api_key": ""
-}
-```
-- `model`: options include `MiniMax-M2.1`, `MiniMax-M2.1-lightning`, `MiniMax-M2`, `abab6.5-chat`, etc.
-- `minimax_api_key`: API key for the MiniMax platform, created in the [console](https://platform.minimaxi.com/user-center/basic-information/interface-key)
-
-Option 2: OpenAI-compatible access:
-```json
-{
-    "bot_type": "chatGPT",
-    "model": "MiniMax-M2.1",
-    "open_ai_api_base": "https://api.minimaxi.com/v1",
-    "open_ai_api_key": ""
-}
-```
-- `bot_type`: OpenAI-compatible mode
-- `model`: `MiniMax-M2.1`, `MiniMax-M2.1-lightning`, or `MiniMax-M2`; see the [API docs](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
-- `open_ai_api_base`: base URL of the MiniMax API
-- `open_ai_api_key`: API key for the MiniMax platform
-</details>
-
-<details>
-<summary>Zhipu AI (GLM)</summary>
-
-Option 1: official API access (recommended):
-
-```json
-{
-    "model": "glm-4.7",
-    "zhipu_ai_api_key": ""
-}
-```
-- `model`: options include `glm-4.7`, `glm-4-plus`, `glm-4-flash`, `glm-4-air`, `glm-4-airx`, `glm-4-long`, etc.; see the [glm-4 model codes](https://bigmodel.cn/dev/api/normal-model/glm-4)
-- `zhipu_ai_api_key`: API key for the Zhipu AI platform, created in the [console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys)
-
-Option 2: OpenAI-compatible access:
-```json
-{
-    "bot_type": "chatGPT",
-    "model": "glm-4.7",
-    "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
-    "open_ai_api_key": ""
-}
-```
-- `bot_type`: OpenAI-compatible mode
-- `model`: options include `glm-4.7`, `glm-4.6`, `glm-4-plus`, `glm-4-flash`, `glm-4-air`, `glm-4-airx`, `glm-4-long`, etc.
-- `open_ai_api_base`: base URL of the Zhipu AI platform
-- `open_ai_api_key`: API key for the Zhipu AI platform
-</details>

 <details>
 <summary>Kimi (Moonshot)</summary>
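The OpenAI-compatible configurations above differ only in `open_ai_api_base` and `model`. As a standalone sketch (the endpoint URLs are taken from the provider sections above; the helper function itself is illustrative, not part of the project):

```python
# Base URLs as documented in the provider sections above.
COMPAT_ENDPOINTS = {
    "minimax": "https://api.minimaxi.com/v1",
    "glm": "https://open.bigmodel.cn/api/paas/v4",
    "qwen": "https://dashscope.aliyuncs.com/compatible-mode/v1",
}

def openai_compat_config(provider: str, model: str, api_key: str) -> dict:
    """Build a config.json fragment for OpenAI-compatible access."""
    return {
        "bot_type": "chatGPT",  # selects the OpenAI-compatible bot
        "model": model,
        "open_ai_api_base": COMPAT_ENDPOINTS[provider],
        "open_ai_api_key": api_key,
    }

cfg = openai_compat_config("minimax", "MiniMax-M2.1", "sk-example")
print(cfg["open_ai_api_base"])  # → https://api.minimaxi.com/v1
```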

@@ -76,6 +76,20 @@ class AgentStreamExecutor:
             })
         except Exception as e:
             logger.error(f"Event callback error: {e}")

+    def _filter_think_tags(self, text: str) -> str:
+        """
+        Remove <think> and </think> tags but keep the content inside.
+        Some LLM providers (e.g., MiniMax) may return the thinking process wrapped in <think> tags.
+        We only remove the tags themselves, keeping the actual thinking content.
+        """
+        if not text:
+            return text
+        import re
+        # Remove only the <think> and </think> tags, keep the content
+        text = re.sub(r'<think>', '', text)
+        text = re.sub(r'</think>', '', text)
+        return text
+
    def _hash_args(self, args: dict) -> str:
        """Generate a simple hash for tool arguments"""
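The filter added in this hunk can be exercised in isolation. This standalone sketch mirrors its behavior on plain strings (the real method lives on `AgentStreamExecutor`; a single `</?think>` pattern is equivalent to the two substitutions above):

```python
import re

def filter_think_tags(text: str) -> str:
    """Strip <think>/</think> markers while keeping the enclosed reasoning text."""
    if not text:
        return text
    return re.sub(r'</?think>', '', text)

out = filter_think_tags("<think>weighing options</think>Final answer")
print(out)  # → weighing optionsFinal answer
```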

@@ -562,8 +576,11 @@ class AgentStreamExecutor:
             # Handle text content
             content_delta = delta.get("content") or ""
             if content_delta:
-                full_content += content_delta
-                self._emit_event("message_update", {"delta": content_delta})
+                # Filter out <think> tags from content
+                filtered_delta = self._filter_think_tags(content_delta)
+                full_content += filtered_delta
+                if filtered_delta:  # Only emit if there's content after filtering
+                    self._emit_event("message_update", {"delta": filtered_delta})

             # Handle tool calls
             if "tool_calls" in delta and delta["tool_calls"]:

@@ -707,6 +724,9 @@ class AgentStreamExecutor:
                 max_retries=max_retries
             )

+            # Filter full_content one more time (in case tags were split across chunks)
+            full_content = self._filter_think_tags(full_content)
+
             # Add assistant message to history (Claude format uses content blocks)
             assistant_msg = {"role": "assistant", "content": []}
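The second pass added above guards against a tag being split across stream chunks: per-delta filtering cannot match a `<think>` marker that arrives in pieces. A standalone sketch of the failure and the fix (the chunk boundary is hypothetical):

```python
import re

def filter_think_tags(text: str) -> str:
    """Strip <think>/</think> markers, keeping the enclosed text."""
    return re.sub(r'</?think>', '', text) if text else text

# A provider may split the "<think>" marker across two stream chunks.
chunks = ["<thi", "nk>plan steps</think>done"]

# Per-chunk filtering cannot remove the split opening tag...
accumulated = "".join(filter_think_tags(c) for c in chunks)
assert "<think>" in accumulated  # the reassembled tag survived

# ...so a final pass over the accumulated text is needed.
final = filter_think_tags(accumulated)
print(final)  # → plan stepsdone
```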

@@ -1,6 +1,6 @@
 {
     "channel_type": "web",
-    "model": "claude-sonnet-4-5",
+    "model": "MiniMax-M2.1",
     "claude_api_key": "",
     "claude_api_base": "https://api.anthropic.com/v1",
     "open_ai_api_key": "",

@@ -137,11 +137,11 @@ bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)

 The following models are recommended in Agent mode; choose based on quality and cost:

+- **MiniMax**: `MiniMax-M2.1`
+- **GLM**: `glm-4.7`
+- **Qwen**: `qwen3-max`
 - **Claude**: `claude-sonnet-4-5`, `claude-sonnet-4-0`
 - **Gemini**: `gemini-3-flash-preview`, `gemini-3-pro-preview`
-- **GLM**: `glm-4.7`
-- **MiniMax**: `MiniMax-M2.1`
-- **Qwen**: `qwen3-max`

 See the [README.md model notes](../README.md#模型说明) for detailed model configuration.

@@ -2,6 +2,7 @@

 import time
 import json
+from pydantic.types import T
 import requests

 from models.bot import Bot

@@ -235,7 +236,7 @@ class MinimaxBot(Bot):
         logger.debug(f"[MINIMAX] API call: model={model}, tools={len(converted_tools) if converted_tools else 0}, stream={stream}")

         # Check if we should show thinking process
-        show_thinking = kwargs.pop("show_thinking", conf().get("minimax_show_thinking", False))
+        show_thinking = kwargs.pop("show_thinking", conf().get("minimax_show_thinking", True))

         if stream:
             return self._handle_stream_response(request_body, show_thinking=show_thinking)

@@ -276,7 +277,7 @@ class MinimaxBot(Bot):
         if isinstance(content, list):
             # Extract text from content blocks
             text_parts = []
-            tool_result = None
+            tool_results = []

             for block in content:
                 if isinstance(block, dict):
@@ -284,11 +285,11 @@ class MinimaxBot(Bot):
                         text_parts.append(block.get("text", ""))
                     elif block.get("type") == "tool_result":
                         # Tool result should be a separate message with role="tool"
-                        tool_result = {
+                        tool_results.append({
                             "role": "tool",
                             "tool_call_id": block.get("tool_use_id"),
                             "content": str(block.get("content", ""))
-                        }
+                        })

             if text_parts:
                 converted.append({
@@ -296,7 +297,8 @@ class MinimaxBot(Bot):
                     "content": "\n".join(text_parts)
                 })

-            if tool_result:
+            # Add all tool results (not just the last one)
+            for tool_result in tool_results:
                 converted.append(tool_result)
         else:
             # Simple text content
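The change above makes every `tool_result` block survive the Claude-to-OpenAI message conversion; previously only the last one was kept. A minimal standalone sketch of the conversion loop (simplified from the method shown in the diff; the role assigned to the text message is illustrative):

```python
def convert_content_blocks(content: list) -> list:
    """Convert Claude-style content blocks into OpenAI-style messages,
    emitting one role="tool" message per tool_result block."""
    converted = []
    text_parts = []
    tool_results = []
    for block in content:
        if block.get("type") == "text":
            text_parts.append(block.get("text", ""))
        elif block.get("type") == "tool_result":
            tool_results.append({
                "role": "tool",
                "tool_call_id": block.get("tool_use_id"),
                "content": str(block.get("content", "")),
            })
    if text_parts:
        converted.append({"role": "user", "content": "\n".join(text_parts)})
    # Append all tool results, not just the last one
    converted.extend(tool_results)
    return converted

msgs = convert_content_blocks([
    {"type": "tool_result", "tool_use_id": "call_1", "content": "42"},
    {"type": "tool_result", "tool_use_id": "call_2", "content": "ok"},
])
print(len(msgs))  # → 2
```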

@@ -546,16 +548,14 @@ class MinimaxBot(Bot):

                     # Optionally yield thinking as visible content
                     if show_thinking:
-                        # Format thinking text for display
-                        formatted_thinking = f"💭 {reasoning_text}"
-
-                        # Yield as OpenAI-format content delta
+                        # Yield thinking text as-is (without emoji decoration)
+                        # The reasoning text will be displayed to users
                         yield {
                             "choices": [{
                                 "index": 0,
                                 "delta": {
                                     "role": "assistant",
-                                    "content": formatted_thinking
+                                    "content": reasoning_text
                                 }
                             }]
                         }
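The dict yielded in the hunk above follows the OpenAI streaming-chunk shape, so downstream code can accumulate reasoning and answer text uniformly. A small consumer sketch (the chunk contents are hypothetical):

```python
def make_delta(text: str) -> dict:
    """Build an OpenAI-style streaming chunk carrying a content delta."""
    return {"choices": [{"index": 0, "delta": {"role": "assistant", "content": text}}]}

# Hypothetical stream: a reasoning delta first, then the answer.
stream = [make_delta("Considering options... "), make_delta("The answer is 4.")]

full = ""
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    full += delta.get("content") or ""

print(full)  # → Considering options... The answer is 4.
```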