初始提交

Zylan
2025-04-23 13:30:10 +08:00
commit db26c07bb3
49 changed files with 40973 additions and 0 deletions

14
.gitignore vendored Normal file
View File

@@ -0,0 +1,14 @@
.*
!.gitignore
!.github/
*pyc
__pycache__
logs/
*.log
*.log.*
config.yaml
duel_ranks.json
data/

268
README.MD Normal file
View File

@@ -0,0 +1,268 @@
# Bubbles-WechatAI
> 我叫 泡泡Bubbles - 一个个人微信助手
![版本](https://img.shields.io/badge/版本-1.0.0-red)
![wcferry](https://img.shields.io/badge/wcferry-39.5.1-blue)
![Python](https://img.shields.io/badge/Python-3.8+-green)
![License](https://img.shields.io/badge/License-Apache2.0-yellow)
## 📝 项目简介
Bubbles 是一个功能丰富的微信机器人框架,基于 [wcferry](https://github.com/lich0821/wcferry) 开发,支持接入多种大型语言模型(LLM),提供丰富的交互功能和定时任务。该项目旨在将微信客户端转变为一个智能的个人助手,可以执行多种实用功能,带来便捷的用户体验。
## ✨ 核心特性
### 🤖 灵活的模型配置
- 支持为不同的群聊和私聊设置不同的 AI 模型和 system prompt
- 支持接入多种大型语言模型:
  - OpenAI (ChatGPT)
  - Google Gemini
  - 智谱 AI (ChatGLM)
  - 科大讯飞星火大模型
  - 阿里云通义千问
  - TigerBot
  - DeepSeek
  - Perplexity
  - Ollama (本地部署的模型)
### 🛠️ 丰富的命令系统
- 强大的命令路由系统,让功能新增无比简单
- 支持自定义命令及参数(定义方式见下方示例)
- 预设 [多种实用和娱乐命令](#可用命令)
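一个最小的命令定义示意(仅供参考:字段与 `commands/models.py` 中的 `Command` 数据类一致,处理函数通过 `MessageContext.send_text` 回复;实际的注册入口以 `commands/registry.py` 为准):

```python
import re

from commands.context import MessageContext
from commands.models import Command


def handle_ping(ctx: MessageContext, match: re.Match) -> bool:
    """示例处理函数:收到 ping 时回复 pong,返回 True 表示消息已处理"""
    return ctx.send_text("pong")


# 示例命令:群聊、私聊均可触发,群聊中需要 @ 机器人
ping_command = Command(
    name="ping",
    pattern=re.compile(r"^ping$"),
    scope="both",
    handler=handle_ping,
    need_at=True,
    priority=50,
    description="连通性测试,回复 pong",
)
```

其中 `priority` 数字越小越先匹配,`need_at=True` 表示在群聊中必须 @ 机器人才会触发。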
### 🎨 AI 图像生成
- 支持调用多种 AI 绘图模型生成图片
### ⏰ 定时任务与提醒功能
- 每日天气预报推送
- 每日新闻资讯推送
- 工作日报/周报/月报提醒
- 个人自定义提醒系统(通过自然语言设置定时提醒)
### 📊 对话管理
- 智能消息总结功能
- 处理各类微信消息(文本、图片、小程序、链接等)
### 🔧 实用工具
- 自动接受好友请求并打招呼
- 自动响应群聊和私聊消息
## 🛠️ 安装指南
### 系统要求
- Python 3.8 或更高版本(仓库内的版本检查要求 >= 3.9,建议使用 Python 3.9+)
- Windows 操作系统(wcferry 要求)
- 微信 PC 版客户端
### 安装步骤
1. **克隆仓库**
```bash
git clone https://github.com/zippland/Bubbles-WechatAI.git
cd Bubbles-WechatAI
```
2. **创建并激活虚拟环境(可选但推荐)**
```bash
python -m venv .venv
.venv\Scripts\activate
```
3. **安装依赖**
```bash
pip install -r requirements.txt
```
4. **配置项目**
```bash
# 复制配置模板
cp config.yaml.template config.yaml
# 编辑配置文件,填入您的 API 密钥等信息
notepad config.yaml
```
## ⚙️ 配置说明
配置文件 `config.yaml` 包含以下主要部分:
### AI 模型配置
每个 AI 模型都有自己的配置部分,例如:
```yaml
# ChatGPT 配置
CHATGPT:
  key: "your-openai-api-key"
  base_url: "https://api.openai.com/v1"
  model: "gpt-4o-mini"  # 可选:gpt-4, gpt-3.5-turbo 等
  temperature: 0.7
  max_tokens: 2000
  system_prompt: "你是一个有用的助手。"
  proxy: "http://127.0.0.1:7890"  # 可选:如需代理请填写
```
### 群组/私聊模型映射
您可以为不同的群聊或私聊指定不同的 AI 模型:
```yaml
# 群组模型配置
GROUP_MODELS:
  # 默认模型 ID
  default: 2  # 2 代表 CHATGPT
  # 群聊模型映射
  mapping:
    - room_id: "12345678@chatroom"  # 群聊 ID
      model: 2  # 2 代表 CHATGPT
  # 私聊模型映射
  private_mapping:
    - wxid: "wxid_abc123"  # 用户 wxid
      model: 8  # 8 代表 DeepSeek
```
### 功能开关
您可以启用或禁用各种功能:
```yaml
# 功能开关(true 开启,false 关闭)
news_report: true        # 每日新闻推送
weather_report: true     # 每日天气推送
report_reminder: true    # 日报/周报/月报提醒
image_generation: true   # AI 生图
goblin_gift: true        # 古灵阁妖精的馈赠
perplexity: true         # Perplexity 深度查询
```
## 🚀 使用方法
### 启动机器人
```bash
python main.py
```
### 可用命令
机器人支持多种命令,按功能分类如下:
#### 基础系统命令
- `info`、`帮助`、`指令` - 显示机器人的帮助信息
- `骂一下 @用户名` - 让机器人骂指定用户(仅群聊)
- `reset`、`重置` - 重置机器人缓存的上下文历史
#### Perplexity AI 命令
- `ask 问题内容` - 使用 Perplexity AI 进行深度查询(需@机器人)
#### 消息管理命令
- `summary`、`/总结` - 总结群聊最近的消息(仅群聊)
- `clearmessages`、`/清除历史` - 从数据库中清除群聊的历史消息记录(仅群聊)
#### 天气和新闻工具
- `天气预报 城市名`、`预报 城市名` - 查询指定城市未来几天的天气预报
- `天气 城市名`、`温度 城市名` - 查询指定城市的当前天气
- `新闻` - 获取最新新闻
#### 决斗系统命令
- `决斗 @用户名` - 发起决斗(仅群聊)
- `偷袭 @用户名`、`偷分 @用户名` - 偷袭其他玩家(仅群聊)
- `决斗排行`、`决斗排名`、`排行榜` - 查看决斗排行榜(仅群聊)
- `决斗战绩`、`我的战绩`、`战绩查询` - 查看决斗战绩(仅群聊)
- `我的装备`、`查看装备` - 查看自己的装备(仅群聊)
- `改名 旧名称 新名称` - 更改昵称(仅群聊)
#### 成语系统命令
- `#成语` 或 `?成语` - 成语接龙与查询
#### 提醒功能
- `提醒xxxxx` - 用自然语言设置一个提醒
- `查看提醒`、`我的提醒`、`提醒列表` - 查看您设置的所有提醒
- `删除提醒 ID:xxxx`、`取消提醒 all` - 删除指定的(或所有)提醒
## 🎮 游戏功能详解
### 决斗系统
Bubbles 内置了一个有趣的决斗游戏系统,用户可以在群聊中挑战其他成员:
- **开始决斗**:使用 `决斗 @用户` 开始一场决斗
- **偷袭玩家**:使用 `偷袭 @用户` 偷袭其他玩家
- **查看排名**:使用 `决斗排行` 查看全服决斗排行榜
- **个人统计**:使用 `决斗战绩` 查看个人决斗数据
- **查看装备**:使用 `我的装备` 查看自己当前的装备
- **更改名称**:使用 `改名 旧名称 新名称` 更改自己在决斗系统中的显示名称
### 成语接龙
使用 `#成语` 来查询成语或参与成语接龙游戏。还可以使用 `?成语` 查询成语的详细释义。
### 古灵阁妖精馈赠
这是一个随机事件系统,机器人会在聊天中随机触发"古灵阁妖精馈赠"事件,为用户提供惊喜奖励。
## 📋 项目结构
```
Bubbles-WechatAI/
├── ai_providers/ # AI 模型接口实现
├── commands/ # 命令系统实现
├── data/ # 数据文件
├── function/ # 功能模块
│ ├── func_chengyu.py # 成语功能
│ ├── func_duel.py # 决斗功能
│ ├── func_news.py # 新闻功能
│ ├── func_weather.py # 天气功能
│ └── ...
├── image/ # 图像生成相关
├── logs/ # 日志目录
├── config.yaml # 配置文件
├── config.yaml.template # 配置模板
├── constants.py # 常量定义
├── main.py # 入口文件
├── robot.py # 机器人核心实现
└── requirements.txt # 项目依赖
```
## 🤝 贡献指南
欢迎对本项目做出贡献!您可以通过以下方式参与:
1. **报告问题**:提交 issue 报告 bug 或提出功能建议
2. **提交代码**:通过 Pull Request 提交您的改进
3. **完善文档**:帮助改进项目文档
## 📄 许可证
本项目采用 Apache 2.0 许可证,详情请参阅 [LICENSE](LICENSE) 文件。
## 🙏 致谢
- [wcferry](https://github.com/lich0821/wcferry) - 提供微信机器人底层支持
- 所有贡献者和用户
## ❓ 常见问题
**Q: 如何获取群聊 ID?**
A: 在群聊中发送一条消息,机器人日志会显示该消息的来源群聊 ID。
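下面是一个最小示意(非项目原有实现,仅演示从 wcferry 回调的 `WxMsg` 对象中读取 `roomid` 字段),可自行在消息处理逻辑中加入类似日志:

```python
import logging

logger = logging.getLogger(__name__)


def log_room_id(msg) -> None:
    """示例:记录来源群聊 ID(msg 为 wcferry 的 WxMsg 对象)"""
    if msg.roomid:  # 群聊消息才带有 roomid
        logger.info("来源群聊 ID: %s", msg.roomid)
```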
**Q: 如何添加新的 AI 模型?**
A: 在 `ai_providers` 目录下创建新的模型接口实现,然后在 `robot.py` 中注册该模型。
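下面是一个最小的 provider 骨架(仅为示意:类名与文件名均为假设,接口形式参考仓库中 DeepSeek、ChatGPT 等实现,实际注册方式以 `robot.py` 代码为准):

```python
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
# 假设保存为 ai_providers/ai_example.py(示例文件名)
import logging


class ExampleAI:
    def __init__(self, conf: dict) -> None:
        self.LOG = logging.getLogger("ExampleAI")
        self.key = conf.get("key")
        self.prompt = conf.get("prompt")
        self.conversation_list = {}  # 可按 wxid 维护上下文

    def __repr__(self):
        return "ExampleAI"

    @staticmethod
    def value_check(conf: dict) -> bool:
        # 与其他 provider 的约定一致:配置齐全才启用
        return bool(conf and conf.get("key") and conf.get("prompt"))

    def get_answer(self, question: str, wxid: str) -> str:
        # 在这里调用实际的模型 API;示例中仅作回显
        return f"[ExampleAI] 收到:{question}"
```

保持 `value_check` 与 `get_answer(question, wxid)` 的接口形式,便于在 `robot.py` 中与现有模型统一接入。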
**Q: 出现 "AI 模型未响应" 错误怎么办?**
A: 检查相应 AI 模型的 API 密钥配置和网络连接,确保 API 可访问。
**Q: 机器人不回复消息怎么办?**
A: 检查 wcferry 服务是否正常运行,查看日志文件了解详细错误信息。
## 📞 联系方式
如有任何问题或建议,请通过以下方式联系我们:
- GitHub Issues: [提交问题](https://github.com/zippland/Bubbles-WechatAI/issues)
- Email: zylanjian@example.com
---
**注意**:本项目仅供学习和个人使用,请遵守微信使用条款,不要用于任何违反法律法规的活动。

5
ai_providers/__init__.py Normal file
View File

@@ -0,0 +1,5 @@
"""
AI Providers Module
这个包包含了与各种 AI 服务提供商的集成实现。
"""

44
ai_providers/ai_bard.py Normal file
View File

@@ -0,0 +1,44 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import google.generativeai as genai
class BardAssistant:
def __init__(self, conf: dict) -> None:
self._api_key = conf["api_key"]
self._model_name = conf["model_name"]
self._prompt = conf['prompt']
self._proxy = conf.get('proxy')  # proxy 为可选配置,缺省时为 None
genai.configure(api_key=self._api_key)
self._bard = genai.GenerativeModel(self._model_name)
def __repr__(self):
return 'BardAssistant'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
if conf.get("api_key") and conf.get("model_name") and conf.get("prompt"):
return True
return False
def get_answer(self, msg: str, sender: str = None) -> str:
response = self._bard.generate_content([{'role': 'user', 'parts': [msg]}])
return response.text
if __name__ == "__main__":
from configuration import Config
config = Config().BardAssistant
if not config:
exit(0)
bard_assistant = BardAssistant(config)
if bard_assistant._proxy:
os.environ['HTTP_PROXY'] = bard_assistant._proxy
os.environ['HTTPS_PROXY'] = bard_assistant._proxy
rsp = bard_assistant.get_answer(bard_assistant._prompt)
print(rsp)

199
ai_providers/ai_chatglm.py Normal file
View File

@@ -0,0 +1,199 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import os
import random
import logging
from datetime import datetime
from typing import Optional
import httpx
from openai import OpenAI
from ai_providers.chatglm.code_kernel import CodeKernel, execute
from ai_providers.chatglm.tool_registry import dispatch_tool, extract_code, get_tools
from wcferry import Wcf
# 获取模块级 logger
logger = logging.getLogger(__name__)
functions = get_tools()
class ChatGLM:
def __init__(self, config={}, wcf: Optional[Wcf] = None, max_retry=5) -> None:
key = config.get("key", 'empty')
api = config.get("api")
proxy = config.get("proxy")
if proxy:
self.client = OpenAI(api_key=key, base_url=api, http_client=httpx.Client(proxy=proxy))
else:
self.client = OpenAI(api_key=key, base_url=api)
self.conversation_list = {}
self.chat_type = {}
self.max_retry = max_retry
self.wcf = wcf
self.filePath = config["file_path"]
self.kernel = CodeKernel()
self.system_content_msg = {"chat": [{"role": "system", "content": config["prompt"]}],
"tool": [{"role": "system",
"content": "Answer the following questions as best as you can. You have access to the following tools:"}],
"code": [{"role": "system",
"content": "你是一位智能AI助手你叫ChatGLM你连接着一台电脑但请注意不能联网。在使用Python解决任务时你可以运行代码并得到结果如果运行结果有错误你需要尽可能对代码进行改进。你可以处理用户上传到电脑上的文件文件默认存储路径是{}".format(
self.filePath)}]}
def __repr__(self):
return 'ChatGLM'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
if conf.get("api") and conf.get("prompt") and conf.get("file_path"):
return True
return False
def get_answer(self, question: str, wxid: str) -> str:
# wxid或者roomid,个人时为微信id群消息时为群id
if '#帮助' == question:
return '本助手有三种模式,#聊天模式 = #1 #工具模式 = #2 #代码模式 = #3 , #清除模式会话 = #4 , #清除全部会话 = #5 可用发送#对应模式 或者 #编号 进行切换'
elif '#聊天模式' == question or '#1' == question:
self.chat_type[wxid] = 'chat'
return '已切换#聊天模式'
elif '#工具模式' == question or '#2' == question:
self.chat_type[wxid] = 'tool'
return '已切换#工具模式 \n工具有:查看天气,日期,新闻,comfyUI文生图。例如\n帮我生成一张小鸟的图片,提示词必须是英文'
elif '#代码模式' == question or '#3' == question:
self.chat_type[wxid] = 'code'
return '已切换#代码模式 \n代码模式可以用于写python代码例如\n用python画一个爱心'
elif '#清除模式会话' == question or '#4' == question:
self.conversation_list[wxid][self.chat_type[wxid]
] = self.system_content_msg[self.chat_type[wxid]]
return '已清除'
elif '#清除全部会话' == question or '#5' == question:
self.conversation_list[wxid] = self.system_content_msg
return '已清除'
self.updateMessage(wxid, question, "user")
try:
params = dict(model="chatglm3", temperature=1.0,
messages=self.conversation_list[wxid][self.chat_type[wxid]], stream=False)
if 'tool' == self.chat_type[wxid]:
params["tools"] = [dict(type='function', function=d) for d in functions.values()]
response = self.client.chat.completions.create(**params)
for _ in range(self.max_retry):
# openai v1 客户端返回的 message 是 pydantic 对象,没有 .get 方法,这里用 getattr 判断
if getattr(response.choices[0].message, "function_call", None):
function_call = response.choices[0].message.function_call
logger.debug(
f"Function Call Response: {function_call.model_dump()}")
function_args = json.loads(function_call.arguments)
observation = dispatch_tool(
function_call.name, function_args)
if isinstance(observation, dict):
res_type = observation['res_type'] if 'res_type' in observation else 'text'
res = observation['res'] if 'res_type' in observation else str(
observation)
if res_type == 'image':
filename = observation['filename']
filePath = os.path.join(self.filePath, filename)
res.save(filePath)
self.wcf and self.wcf.send_image(filePath, wxid)
tool_response = '[Image]' if res_type == 'image' else res
else:
tool_response = observation if isinstance(
observation, str) else str(observation)
logger.debug(f"Tool Call Response: {tool_response}")
params["messages"].append(response.choices[0].message)
params["messages"].append(
{
"role": "function",
"name": function_call.name,
"content": tool_response, # 调用函数返回结果
}
)
self.updateMessage(wxid, tool_response, "function")
response = self.client.chat.completions.create(**params)
elif response.choices[0].message.content.find('interpreter') != -1:
output_text = response.choices[0].message.content
code = extract_code(output_text)
self.wcf and self.wcf.send_text('代码如下:\n' + code, wxid)
self.wcf and self.wcf.send_text('执行代码...', wxid)
try:
res_type, res = execute(code, self.kernel)
except Exception as e:
rsp = f'代码执行错误: {e}'
break
if res_type == 'image':
filename = '{}.png'.format(''.join(random.sample(
'abcdefghijklmnopqrstuvwxyz1234567890', 8)))
filePath = os.path.join(self.filePath, filename)
res.save(filePath)
self.wcf and self.wcf.send_image(filePath, wxid)
else:
self.wcf and self.wcf.send_text("执行结果:\n" + res, wxid)
tool_response = '[Image]' if res_type == 'image' else res
logger.debug("Received: %s %s", res_type, res)
params["messages"].append(response.choices[0].message)
params["messages"].append(
{
"role": "function",
"name": "interpreter",
"content": tool_response, # 调用函数返回结果
}
)
self.updateMessage(wxid, tool_response, "function")
response = self.client.chat.completions.create(**params)
else:
rsp = response.choices[0].message.content
break
self.updateMessage(wxid, rsp, "assistant")
except Exception as e0:
rsp = "发生未知错误:" + str(e0)
return rsp
def updateMessage(self, wxid: str, question: str, role: str) -> None:
now_time = str(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# 初始化聊天记录,组装系统信息
if wxid not in self.conversation_list.keys():
self.conversation_list[wxid] = self.system_content_msg
if wxid not in self.chat_type.keys():
self.chat_type[wxid] = 'chat'
# 当前问题
content_question_ = {"role": role, "content": question}
self.conversation_list[wxid][self.chat_type[wxid]].append(
content_question_)
# 只存储10条记录超过滚动清除
i = len(self.conversation_list[wxid][self.chat_type[wxid]])
if i > 10:
logger.info("滚动清除微信记录:%s", wxid)
# 删除多余的记录,倒着删,且跳过第一个的系统消息
del self.conversation_list[wxid][self.chat_type[wxid]][1]
if __name__ == "__main__":
from configuration import Config
config = Config().CHATGLM
if not config:
exit(0)
chat = ChatGLM(config)
while True:
q = input(">>> ")
try:
time_start = datetime.now() # 记录开始时间
logger.info(chat.get_answer(q, "wxid"))
time_end = datetime.now() # 记录结束时间
# 计算的时间差为程序的执行时间,单位为秒/s
logger.info(f"{round((time_end - time_start).total_seconds(), 2)}s")
except Exception as e:
logger.error("错误: %s", str(e), exc_info=True)

268
ai_providers/ai_chatgpt.py Normal file
View File

@@ -0,0 +1,268 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import base64
import os
from datetime import datetime
import httpx
from openai import APIConnectionError, APIError, AuthenticationError, OpenAI
class ChatGPT():
def __init__(self, conf: dict) -> None:
key = conf.get("key")
api = conf.get("api")
proxy = conf.get("proxy")
prompt = conf.get("prompt")
self.model = conf.get("model", "gpt-3.5-turbo")
self.LOG = logging.getLogger("ChatGPT")
if proxy:
self.client = OpenAI(api_key=key, base_url=api, http_client=httpx.Client(proxy=proxy))
else:
self.client = OpenAI(api_key=key, base_url=api)
self.conversation_list = {}
self.system_content_msg = {"role": "system", "content": prompt}
# 确认是否使用支持视觉的模型
self.support_vision = self.model == "gpt-4-vision-preview" or self.model == "gpt-4o" or "-vision" in self.model
def __repr__(self):
return 'ChatGPT'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
if conf.get("key") and conf.get("api") and conf.get("prompt"):
return True
return False
def get_answer(self, question: str, wxid: str, system_prompt_override=None) -> str:
# wxid或者roomid,个人时为微信id群消息时为群id
# 检查是否是新对话
is_new_conversation = wxid not in self.conversation_list
# 保存临时系统提示的状态
temp_system_used = False
original_prompt = None
if system_prompt_override:
# 只有新对话才临时修改系统提示
if is_new_conversation:
# 临时保存原始系统提示,以便可以恢复
original_prompt = self.system_content_msg["content"]
# 设置临时系统提示
self.system_content_msg["content"] = system_prompt_override
temp_system_used = True
self.LOG.debug(f"为新对话 {wxid} 临时设置系统提示")
else:
# 对于已存在的对话我们将在API调用时临时使用覆盖提示而不修改对话历史
self.LOG.debug(f"对话 {wxid} 已存在系统提示覆盖将仅用于本次API调用")
# 添加用户问题到对话历史
self.updateMessage(wxid, question, "user")
# 如果修改了系统提示,现在恢复它
if temp_system_used and original_prompt is not None:
self.system_content_msg["content"] = original_prompt
self.LOG.debug(f"已恢复默认系统提示")
rsp = ""
try:
# 准备API调用的消息列表
api_messages = []
# 对于已存在的对话,临时应用系统提示覆盖(如果有)
if not is_new_conversation and system_prompt_override:
# 第一个消息可能是系统提示
has_system = self.conversation_list[wxid][0]["role"] == "system"
# 使用临时系统提示替代原始系统提示
if has_system:
# 复制除了系统提示外的所有消息
api_messages = [{"role": "system", "content": system_prompt_override}]
api_messages.extend(self.conversation_list[wxid][1:])
else:
# 如果没有系统提示,添加一个
api_messages = [{"role": "system", "content": system_prompt_override}]
api_messages.extend(self.conversation_list[wxid])
else:
# 对于新对话或没有临时系统提示的情况,使用原始对话历史
api_messages = self.conversation_list[wxid]
# o系列模型不支持自定义temperature只能使用默认值1
params = {
"model": self.model,
"messages": api_messages
}
# 只有非o系列模型才设置temperature
if not self.model.startswith("o"):
params["temperature"] = 0.2
ret = self.client.chat.completions.create(**params)
rsp = ret.choices[0].message.content
rsp = rsp[2:] if rsp.startswith("\n\n") else rsp
rsp = rsp.replace("\n\n", "\n")
self.updateMessage(wxid, rsp, "assistant")
except AuthenticationError:
self.LOG.error("OpenAI API 认证失败,请检查 API 密钥是否正确")
except APIConnectionError:
self.LOG.error("无法连接到 OpenAI API请检查网络连接")
except APIError as e1:
self.LOG.error(f"OpenAI API 返回了错误:{str(e1)}")
rsp = "无法从 ChatGPT 获得答案"
except Exception as e0:
self.LOG.error(f"发生未知错误:{str(e0)}")
rsp = "无法从 ChatGPT 获得答案"
return rsp
def encode_image_to_base64(self, image_path: str) -> str:
"""将图片文件转换为Base64编码
Args:
image_path (str): 图片文件路径
Returns:
str: Base64编码的图片数据
"""
try:
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode('utf-8')
except Exception as e:
self.LOG.error(f"图片编码失败: {str(e)}")
return ""
def get_image_description(self, image_path: str, prompt: str = "请详细描述这张图片中的内容") -> str:
"""使用GPT-4 Vision分析图片内容
Args:
image_path (str): 图片文件路径
prompt (str, optional): 提示词. 默认为"请详细描述这张图片中的内容"
Returns:
str: 模型对图片的描述
"""
if not self.support_vision:
self.LOG.error(f"当前模型 {self.model} 不支持图片理解请使用gpt-4-vision-preview或gpt-4o")
return "当前模型不支持图片理解功能请联系管理员配置支持视觉的模型如gpt-4-vision-preview或gpt-4o"
if not os.path.exists(image_path):
self.LOG.error(f"图片文件不存在: {image_path}")
return "无法读取图片文件"
try:
base64_image = self.encode_image_to_base64(image_path)
if not base64_image:
return "图片编码失败"
# 构建带有图片的消息
messages = [
{"role": "system", "content": "你是一个图片分析专家,擅长分析图片内容并提供详细描述。"},
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{base64_image}"
}
}
]
}
]
# 使用GPT-4 Vision模型
params = {
"model": self.model,
"messages": messages,
"max_tokens": 1000 # 限制输出长度
}
# 支持视觉的模型可能有不同参数要求
if not self.model.startswith("o"):
params["temperature"] = 0.7
response = self.client.chat.completions.create(**params)
description = response.choices[0].message.content
description = description[2:] if description.startswith("\n\n") else description
description = description.replace("\n\n", "\n")
return description
except AuthenticationError:
self.LOG.error("OpenAI API 认证失败,请检查 API 密钥是否正确")
return "API认证失败无法分析图片"
except APIConnectionError:
self.LOG.error("无法连接到 OpenAI API请检查网络连接")
return "网络连接错误,无法分析图片"
except APIError as e1:
self.LOG.error(f"OpenAI API 返回了错误:{str(e1)}")
return f"API错误{str(e1)}"
except Exception as e0:
self.LOG.error(f"分析图片时发生未知错误:{str(e0)}")
return f"处理图片时出错:{str(e0)}"
def updateMessage(self, wxid: str, content: str, role: str) -> None:
now_time = str(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
time_mk = "当需要回答时间时请直接参考回复:"
# 初始化聊天记录,组装系统信息
if wxid not in self.conversation_list.keys():
# 此时self.system_content_msg可能已经被get_answer临时修改
# 但这没关系因为在get_answer结束前会恢复
question_ = [
self.system_content_msg,
{"role": "system", "content": "" + time_mk + now_time}
]
self.conversation_list[wxid] = question_
# 当前问题或回答
content_message = {"role": role, "content": content}
self.conversation_list[wxid].append(content_message)
# 更新时间标记
for cont in self.conversation_list[wxid]:
if cont["role"] != "system":
continue
if cont["content"].startswith(time_mk):
cont["content"] = time_mk + now_time
# 控制对话历史长度
# 只存储10条记录超过滚动清除
max_history = 12 # 包括1个系统提示和1个时间标记
i = len(self.conversation_list[wxid])
if i > max_history:
# 计算需要删除多少条记录
if self.conversation_list[wxid][0]["role"] == "system" and self.conversation_list[wxid][1]["role"] == "system":
# 如果前两条都是系统消息,保留它们,删除较早的用户和助手消息
to_delete = i - max_history
del self.conversation_list[wxid][2:2+to_delete]
self.LOG.debug(f"滚动清除微信记录:{wxid},删除了{to_delete}条历史消息")
else:
# 如果结构不符合预期,简单地保留最近的消息
self.conversation_list[wxid] = self.conversation_list[wxid][-max_history:]
self.LOG.debug(f"滚动清除微信记录:{wxid},只保留最近{max_history}条消息")
if __name__ == "__main__":
from configuration import Config
config = Config().CHATGPT
if not config:
exit(0)
chat = ChatGPT(config)
while True:
q = input(">>> ")
try:
time_start = datetime.now() # 记录开始时间
print(chat.get_answer(q, "wxid"))
time_end = datetime.now() # 记录结束时间
print(f"{round((time_end - time_start).total_seconds(), 2)}s") # 计算的时间差为程序的执行时间,单位为秒/s
except Exception as e:
print(e)

164
ai_providers/ai_deepseek.py Normal file
View File

@@ -0,0 +1,164 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
from datetime import datetime
import httpx
from openai import APIConnectionError, APIError, AuthenticationError, OpenAI
class DeepSeek():
def __init__(self, conf: dict) -> None:
key = conf.get("key")
api = conf.get("api", "https://api.deepseek.com")
proxy = conf.get("proxy")
prompt = conf.get("prompt")
self.model = conf.get("model", "deepseek-chat")
self.LOG = logging.getLogger("DeepSeek")
self.reasoning_supported = (self.model == "deepseek-reasoner")
if conf.get("enable_reasoning", False) and not self.reasoning_supported:
self.LOG.warning("思维链功能只在使用 deepseek-reasoner 模型时可用,当前模型不支持此功能")
self.enable_reasoning = conf.get("enable_reasoning", False) and self.reasoning_supported
self.show_reasoning = conf.get("show_reasoning", False) and self.enable_reasoning
if proxy:
self.client = OpenAI(api_key=key, base_url=api, http_client=httpx.Client(proxy=proxy))
else:
self.client = OpenAI(api_key=key, base_url=api)
self.conversation_list = {}
self.system_content_msg = {"role": "system", "content": prompt}
def __repr__(self):
return 'DeepSeek'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
if conf.get("key") and conf.get("prompt"):
return True
return False
def get_answer(self, question: str, wxid: str, system_prompt_override=None) -> str:
if question == "#清除对话":
if wxid in self.conversation_list.keys():
del self.conversation_list[wxid]
return "已清除上下文"
if question.lower() in ["#开启思维链", "#enable reasoning"]:
if not self.reasoning_supported:
return "当前模型不支持思维链功能,请使用 deepseek-reasoner 模型"
self.enable_reasoning = True
self.show_reasoning = True
return "已开启思维链模式,将显示完整的推理过程"
if question.lower() in ["#关闭思维链", "#disable reasoning"]:
if not self.reasoning_supported:
return "当前模型不支持思维链功能,无需关闭"
self.enable_reasoning = False
self.show_reasoning = False
return "已关闭思维链模式"
if question.lower() in ["#隐藏思维链", "#hide reasoning"]:
if not self.enable_reasoning:
return "思维链功能未开启,无法设置隐藏/显示"
self.show_reasoning = False
return "已设置隐藏思维链,但模型仍会进行深度思考"
if question.lower() in ["#显示思维链", "#show reasoning"]:
if not self.enable_reasoning:
return "思维链功能未开启,无法设置隐藏/显示"
self.show_reasoning = True
return "已设置显示思维链"
# 初始化对话历史(只在首次时添加系统提示)
if wxid not in self.conversation_list:
self.conversation_list[wxid] = []
# 只有在这里才添加默认的系统提示到对话历史中
if self.system_content_msg["content"]:
self.conversation_list[wxid].append(self.system_content_msg)
# 添加用户问题到对话历史
self.conversation_list[wxid].append({"role": "user", "content": question})
try:
# 准备API调用的消息列表
api_messages = []
# 检查是否需要使用临时系统提示
if system_prompt_override:
# 如果提供了临时系统提示在API调用时使用它不修改对话历史
api_messages.append({"role": "system", "content": system_prompt_override})
# 添加除了系统提示外的所有历史消息
for msg in self.conversation_list[wxid]:
if msg["role"] != "system":
api_messages.append({"role": msg["role"], "content": msg["content"]})
else:
# 如果没有临时系统提示,使用完整的对话历史
for msg in self.conversation_list[wxid]:
api_messages.append({"role": msg["role"], "content": msg["content"]})
response = self.client.chat.completions.create(
model=self.model,
messages=api_messages,
stream=False
)
if self.reasoning_supported and self.enable_reasoning:
# deepseek-reasoner模型返回的特殊字段: reasoning_content和content
# 单独处理思维链模式的响应
reasoning_content = getattr(response.choices[0].message, "reasoning_content", None)
content = response.choices[0].message.content
if self.show_reasoning and reasoning_content:
final_response = f"🤔思考过程:\n{reasoning_content}\n\n🎉最终答案:\n{content}"
#最好不要删除表情,因为微信内的信息没有办法做自定义显示,这里是为了做两个分隔,来区分思考过程和最终答案!💡
else:
final_response = content
self.conversation_list[wxid].append({"role": "assistant", "content": content})
else:
final_response = response.choices[0].message.content
self.conversation_list[wxid].append({"role": "assistant", "content": final_response})
# 控制对话长度,保留最近的历史记录
# 系统消息(如果有) + 最近9轮对话(问答各算一轮)
max_history = 11
if len(self.conversation_list[wxid]) > max_history:
has_system = self.conversation_list[wxid][0]["role"] == "system"
if has_system:
self.conversation_list[wxid] = [self.conversation_list[wxid][0]] + self.conversation_list[wxid][-(max_history-1):]
else:
self.conversation_list[wxid] = self.conversation_list[wxid][-max_history:]
return final_response
except (APIConnectionError, APIError, AuthenticationError) as e1:
self.LOG.error(f"DeepSeek API 返回了错误:{str(e1)}")
return f"DeepSeek API 返回了错误:{str(e1)}"
except Exception as e0:
self.LOG.error(f"发生未知错误:{str(e0)}")
return "抱歉,处理您的请求时出现了错误"
if __name__ == "__main__":
from configuration import Config
config = Config().DEEPSEEK
if not config:
exit(0)
chat = DeepSeek(config)
while True:
q = input(">>> ")
try:
time_start = datetime.now()
print(chat.get_answer(q, "wxid"))
time_end = datetime.now()
print(f"{round((time_end - time_start).total_seconds(), 2)}s")
except Exception as e:
print(e)

81
ai_providers/ai_ollama.py Normal file
View File

@@ -0,0 +1,81 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
from datetime import datetime
import re
import ollama
class Ollama():
def __init__(self, conf: dict) -> None:
enable = conf.get("enable")
self.model = conf.get("model")
self.prompt = conf.get("prompt")
self.LOG = logging.getLogger("Ollama")
self.conversation_list = {}
def __repr__(self):
return 'Ollama'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
if conf.get("enable") and conf.get("model") and conf.get("prompt"):
return True
return False
def get_answer(self, question: str, wxid: str) -> str:
try:
self.conversation_list[wxid]
except KeyError:
res=ollama.generate(model=self.model, prompt=self.prompt, keep_alive="30m")
self.updateMessage(wxid, res["context"], "assistant")
# wxid或者roomid,个人时为微信id群消息时为群id
rsp = ""
try:
res=ollama.generate(model=self.model, prompt=question, context=self.conversation_list[wxid], keep_alive="30m")
self.updateMessage(wxid, res["context"], "user")
res_message = res["response"]
# 去除<think>标签对与内部内容
# res_message = res_message.split("</think>")[-1]
# 去除开头的\n和空格
# return res_message[2:]
return res_message
except Exception as e0:
self.LOG.error(f"发生未知错误:{str(e0)}", exc_info=True)
return rsp
def updateMessage(self, wxid: str, context: str, role: str) -> None:
# 当前问题
self.conversation_list[wxid] = context
if __name__ == "__main__":
from configuration import Config
# 设置测试用的日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s'
)
config = Config().OLLAMA
if not config:
exit(0)
chat = Ollama(config)
while True:
q = input(">>> ")
try:
time_start = datetime.now() # 记录开始时间
logger = logging.getLogger(__name__)
logger.info(chat.get_answer(q, "wxid"))
time_end = datetime.now() # 记录结束时间
logger.info(f"{round((time_end - time_start).total_seconds(), 2)}s") # 计算的时间差为程序的执行时间,单位为秒/s
except Exception as e:
logger.error(f"错误: {e}", exc_info=True)

View File

@@ -0,0 +1,448 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import logging
import re
import time
from typing import Optional, Dict, Callable, List
import os
from threading import Thread, Lock
from openai import OpenAI
class PerplexityThread(Thread):
"""处理Perplexity请求的线程"""
def __init__(self, perplexity_instance, prompt, chat_id, send_text_func, receiver, at_user=None):
"""初始化Perplexity处理线程
Args:
perplexity_instance: Perplexity实例
prompt: 查询内容
chat_id: 聊天ID
send_text_func: 发送消息的函数,接受(消息内容, 接收者ID, @用户ID)参数
receiver: 接收消息的ID
at_user: 被@的用户ID
"""
super().__init__(daemon=True)
self.perplexity = perplexity_instance
self.prompt = prompt
self.chat_id = chat_id
self.send_text_func = send_text_func
self.receiver = receiver
self.at_user = at_user
self.LOG = logging.getLogger("PerplexityThread")
# 检查是否使用reasoning模型
self.is_reasoning_model = False
if hasattr(self.perplexity, 'config'):
model_name = self.perplexity.config.get('model', 'sonar').lower()
self.is_reasoning_model = 'reasoning' in model_name
self.LOG.info(f"Perplexity使用模型: {model_name}, 是否为reasoning模型: {self.is_reasoning_model}")
def run(self):
"""线程执行函数"""
try:
self.LOG.info(f"开始处理Perplexity请求: {self.prompt[:30]}...")
# 获取回答
response = self.perplexity.get_answer(self.prompt, self.chat_id)
# 处理sonar-reasoning和sonar-reasoning-pro模型的<think>标签
if response:
# 只有对reasoning模型才应用清理逻辑
if self.is_reasoning_model:
response = self.remove_thinking_content(response)
# 移除Markdown格式符号
response = self.remove_markdown_formatting(response)
self.send_text_func(response, at_list=self.at_user)
else:
self.send_text_func("无法从Perplexity获取回答", at_list=self.at_user)
self.LOG.info(f"Perplexity请求处理完成: {self.prompt[:30]}...")
except Exception as e:
self.LOG.error(f"处理Perplexity请求时出错: {e}")
self.send_text_func(f"处理请求时出错: {e}", at_list=self.at_user)
def remove_thinking_content(self, text):
"""移除<think></think>标签之间的思考内容
Args:
text: 原始响应文本
Returns:
str: 处理后的文本
"""
try:
# 检查是否包含思考标签
has_thinking = '<think>' in text or '</think>' in text
if has_thinking:
self.LOG.info("检测到思考内容标签,准备移除...")
# 导入正则表达式库
import re
# 移除不完整的标签对情况
if text.count('<think>') != text.count('</think>'):
self.LOG.warning(f"检测到不匹配的思考标签: <think>数量={text.count('<think>')}, </think>数量={text.count('</think>')}")
# 提取思考内容用于日志记录
thinking_pattern = re.compile(r'<think>(.*?)</think>', re.DOTALL)
thinking_matches = thinking_pattern.findall(text)
if thinking_matches:
for i, thinking in enumerate(thinking_matches):
short_thinking = thinking[:100] + '...' if len(thinking) > 100 else thinking
self.LOG.debug(f"思考内容 #{i+1}: {short_thinking}")
# 替换所有的<think>...</think>内容 - 使用非贪婪模式
cleaned_text = re.sub(r'<think>.*?</think>', '', text, flags=re.DOTALL)
# 处理不完整的标签
cleaned_text = re.sub(r'<think>.*?$', '', cleaned_text, flags=re.DOTALL) # 处理未闭合的开始标签
cleaned_text = re.sub(r'^.*?</think>', '', cleaned_text, flags=re.DOTALL) # 处理未开始的闭合标签
# 处理可能的多余空行
cleaned_text = re.sub(r'\n{3,}', '\n\n', cleaned_text)
# 移除前后空白
cleaned_text = cleaned_text.strip()
self.LOG.info(f"思考内容已移除,原文本长度: {len(text)} -> 清理后: {len(cleaned_text)}")
# 如果清理后文本为空,返回一个提示信息
if not cleaned_text:
return "回答内容为空,可能是模型仅返回了思考过程。请重新提问。"
return cleaned_text
else:
return text # 没有思考标签,直接返回原文本
except Exception as e:
self.LOG.error(f"清理思考内容时出错: {e}")
return text # 出错时返回原始文本
def remove_markdown_formatting(self, text):
"""移除Markdown格式符号如*和#
Args:
text: 包含Markdown格式的文本
Returns:
str: 移除Markdown格式后的文本
"""
try:
# 导入正则表达式库
import re
self.LOG.info("开始移除Markdown格式符号...")
# 保存原始文本长度
original_length = len(text)
# 移除标题符号 (#)
# 替换 # 开头的标题,保留文本内容
cleaned_text = re.sub(r'^\s*#{1,6}\s+(.+)$', r'\1', text, flags=re.MULTILINE)
# 移除强调符号 (*)
# 替换 **粗体** 和 *斜体* 格式,保留文本内容
cleaned_text = re.sub(r'\*\*(.*?)\*\*', r'\1', cleaned_text)
cleaned_text = re.sub(r'\*(.*?)\*', r'\1', cleaned_text)
# 处理可能的多余空行
cleaned_text = re.sub(r'\n{3,}', '\n\n', cleaned_text)
# 移除前后空白
cleaned_text = cleaned_text.strip()
self.LOG.info(f"Markdown格式符号已移除原文本长度: {original_length} -> 清理后: {len(cleaned_text)}")
return cleaned_text
except Exception as e:
self.LOG.error(f"移除Markdown格式符号时出错: {e}")
return text # 出错时返回原始文本
class PerplexityManager:
"""管理Perplexity请求线程的类"""
def __init__(self):
self.threads = {}
self.lock = Lock()
self.LOG = logging.getLogger("PerplexityManager")
def start_request(self, perplexity_instance, prompt, chat_id, send_text_func, receiver, at_user=None):
"""启动Perplexity请求线程
Args:
perplexity_instance: Perplexity实例
prompt: 查询内容
chat_id: 聊天ID
send_text_func: 发送消息的函数
receiver: 接收消息的ID
at_user: 被@的用户ID
Returns:
bool: 是否成功启动线程
"""
thread_key = f"{receiver}_{chat_id}"
with self.lock:
# 检查是否已有正在处理的相同请求
if thread_key in self.threads and self.threads[thread_key].is_alive():
send_text_func("⚠️ 已有一个Perplexity请求正在处理中请稍后再试", at_list=at_user)
return False
# 发送等待消息
send_text_func("正在使用Perplexity进行深度研究请稍候...", at_list=at_user)
# 创建并启动新线程处理请求
perplexity_thread = PerplexityThread(
perplexity_instance=perplexity_instance,
prompt=prompt,
chat_id=chat_id,
send_text_func=send_text_func,
receiver=receiver,
at_user=at_user
)
# 添加线程完成回调,自动清理线程
def thread_finished_callback():
with self.lock:
if thread_key in self.threads:
del self.threads[thread_key]
self.LOG.info(f"已清理Perplexity线程: {thread_key}")
# 保存线程引用
self.threads[thread_key] = perplexity_thread
# 启动线程
perplexity_thread.start()
self.LOG.info(f"已启动Perplexity请求线程: {thread_key}")
return True
def cleanup_threads(self):
"""清理所有Perplexity线程"""
with self.lock:
active_threads = []
for thread_key, thread in self.threads.items():
if thread.is_alive():
active_threads.append(thread_key)
if active_threads:
self.LOG.info(f"等待{len(active_threads)}个Perplexity线程结束: {active_threads}")
# 等待所有线程结束但最多等待10秒
for i in range(10):
active_count = 0
for thread_key, thread in self.threads.items():
if thread.is_alive():
active_count += 1
if active_count == 0:
break
time.sleep(1)
# 记录未能结束的线程
still_active = [thread_key for thread_key, thread in self.threads.items() if thread.is_alive()]
if still_active:
self.LOG.warning(f"以下Perplexity线程在退出时仍在运行: {still_active}")
# 清空线程字典
self.threads.clear()
self.LOG.info("Perplexity线程管理已清理")
class Perplexity:
def __init__(self, config):
self.config = config
self.api_key = config.get('key')
self.api_base = config.get('api', 'https://api.perplexity.ai')
self.proxy = config.get('proxy')
self.prompt = config.get('prompt', '你是智能助手Perplexity')
self.trigger_keyword = config.get('trigger_keyword', 'ask')
self.fallback_prompt = config.get('fallback_prompt', "请像 Perplexity 一样以专业、客观、信息丰富的方式回答问题。不要使用任何tex或者md格式,纯文本输出。")
self.LOG = logging.getLogger('Perplexity')
# 权限控制 - 允许使用Perplexity的群聊和个人ID
self.allowed_groups = config.get('allowed_groups', [])
self.allowed_users = config.get('allowed_users', [])
# 可选的全局白名单模式 - 如果为True则允许所有群聊和用户使用Perplexity
self.allow_all = config.get('allow_all', False)
# 设置编码环境变量确保处理Unicode字符
os.environ["PYTHONIOENCODING"] = "utf-8"
# 创建线程管理器
self.thread_manager = PerplexityManager()
# 创建OpenAI客户端
self.client = None
if self.api_key:
try:
self.client = OpenAI(
api_key=self.api_key,
base_url=self.api_base
)
# 如果有代理设置
if self.proxy:
# OpenAI客户端不直接支持代理设置需要通过环境变量
os.environ["HTTPS_PROXY"] = self.proxy
os.environ["HTTP_PROXY"] = self.proxy
self.LOG.info("Perplexity 客户端已初始化")
# 记录权限配置信息
if self.allow_all:
self.LOG.info("Perplexity配置为允许所有群聊和用户访问")
else:
self.LOG.info(f"Perplexity允许的群聊: {len(self.allowed_groups)}")
self.LOG.info(f"Perplexity允许的用户: {len(self.allowed_users)}")
except Exception as e:
self.LOG.error(f"初始化Perplexity客户端失败: {str(e)}")
else:
self.LOG.warning("未配置Perplexity API密钥")
def is_allowed(self, chat_id, sender, from_group):
"""检查是否允许使用Perplexity功能
Args:
chat_id: 聊天ID群ID或用户ID
sender: 发送者ID
from_group: 是否来自群聊
Returns:
bool: 是否允许使用Perplexity
"""
# 全局白名单模式
if self.allow_all:
return True
# 群聊消息
if from_group:
return chat_id in self.allowed_groups
# 私聊消息
else:
return sender in self.allowed_users
@staticmethod
def value_check(args: dict) -> bool:
if args:
return all(value is not None for key, value in args.items() if key != 'proxy')
return False
def get_answer(self, prompt, session_id=None):
"""获取Perplexity回答
Args:
prompt: 用户输入的问题
session_id: 会话ID用于区分不同会话
Returns:
str: Perplexity的回答
"""
try:
if not self.api_key or not self.client:
return "Perplexity API key 未配置或客户端初始化失败"
# 构建消息列表
messages = [
{"role": "system", "content": self.prompt},
{"role": "user", "content": prompt}
]
# 获取模型
model = self.config.get('model', 'sonar')
# 使用json序列化确保正确处理Unicode
self.LOG.info(f"发送到Perplexity的消息: {json.dumps(messages, ensure_ascii=False)}")
# 创建聊天完成
response = self.client.chat.completions.create(
model=model,
messages=messages
)
# 返回回答内容
return response.choices[0].message.content
except Exception as e:
self.LOG.error(f"调用Perplexity API时发生错误: {str(e)}")
return f"发生错误: {str(e)}"
def process_message(self, content, chat_id, sender, roomid, from_group, send_text_func):
"""处理可能包含Perplexity触发词的消息
Args:
content: 消息内容
chat_id: 聊天ID
sender: 发送者ID
roomid: 群聊ID如果是群聊
from_group: 是否来自群聊
send_text_func: 发送消息的函数
Returns:
tuple[bool, Optional[str]]:
- bool: 是否已处理该消息
- Optional[str]: 无权限时的备选prompt其他情况为None
"""
# 检查是否包含触发词
if content.startswith(self.trigger_keyword):
# 检查权限
if not self.is_allowed(chat_id, sender, from_group):
# 不在允许列表中返回False让普通AI处理请求
# 但同时返回备选 prompt
self.LOG.info(f"用户/群聊 {chat_id} 无Perplexity权限将使用 fallback_prompt 转由普通AI处理")
# 获取实际要问的问题内容
prompt = content[len(self.trigger_keyword):].strip()
if prompt: # 确保确实有提问内容
return False, self.fallback_prompt # 返回 False 表示未处理,并带上备选 prompt
else:
# 如果只有触发词没有问题,还是按原逻辑处理(发送提示消息)
send_text_func(f"请在{self.trigger_keyword}后面添加您的问题",
roomid if from_group else sender,
sender if from_group else None)
return True, None # 已处理(发送了错误提示)
prompt = content[len(self.trigger_keyword):].strip()
if prompt:
# 确定接收者和@用户
receiver = roomid if from_group else sender
at_user = sender if from_group else None
# 启动请求处理
request_started = self.thread_manager.start_request(
perplexity_instance=self,
prompt=prompt,
chat_id=chat_id,
send_text_func=send_text_func,
receiver=receiver,
at_user=at_user
)
return request_started, None # 返回启动结果无备选prompt
else:
# 触发词后没有内容
send_text_func(f"请在{self.trigger_keyword}后面添加您的问题",
roomid if from_group else sender,
sender if from_group else None)
return True, None # 已处理(发送了错误提示)
# 不包含触发词
return False, None # 未处理无备选prompt
def cleanup(self):
"""清理所有资源"""
self.thread_manager.cleanup_threads()
def __str__(self):
return "Perplexity"

View File

@@ -0,0 +1,49 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import requests
from random import randint
class TigerBot:
def __init__(self, tbconf=None) -> None:
self.LOG = logging.getLogger(__file__)
self.tburl = "https://api.tigerbot.com/bot-service/ai_service/gpt"
self.tbheaders = {"Authorization": "Bearer " + tbconf["key"]}
self.tbmodel = tbconf["model"]
self.fallback = ["", "快滚", "赶紧滚"]
def __repr__(self):
return 'TigerBot'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
return all(conf.values())
return False
def get_answer(self, msg: str, sender: str = None) -> str:
payload = {
"text": msg,
"modelVersion": self.tbmodel
}
rsp = ""
try:
rsp = requests.post(self.tburl, headers=self.tbheaders, json=payload).json()
rsp = rsp["data"]["result"][0]
except Exception as e:
self.LOG.error(f"{e}: {payload}\n{rsp}")
idx = randint(0, len(self.fallback) - 1)
rsp = self.fallback[idx]
return rsp
if __name__ == "__main__":
from configuration import Config
c = Config()
tbot = TigerBot(c.TIGERBOT)
rsp = tbot.get_answer("你还活着?")
print(rsp)

View File

@@ -0,0 +1,38 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from sparkdesk_web.core import SparkWeb
class XinghuoWeb:
def __init__(self, xhconf=None) -> None:
self._sparkWeb = SparkWeb(
cookie=xhconf["cookie"],
fd=xhconf["fd"],
GtToken=xhconf["GtToken"],
)
self._chat = self._sparkWeb.create_continuous_chat()
# 如果有提示词
if xhconf["prompt"]:
self._chat.chat(xhconf["prompt"])
def __repr__(self):
return 'XinghuoWeb'
@staticmethod
def value_check(conf: dict) -> bool:
if conf:
return all(conf.values())
return False
def get_answer(self, msg: str, sender: str = None) -> str:
answer = self._chat.chat(msg)
return answer
if __name__ == "__main__":
from configuration import Config
c = Config()
xinghuo = XinghuoWeb(c.XINGHUO_WEB)
rsp = xinghuo.get_answer("你还活着?")
print(rsp)

46
ai_providers/ai_zhipu.py Normal file
View File

@@ -0,0 +1,46 @@
from zhipuai import ZhipuAI
class ZhiPu():
def __init__(self, conf: dict) -> None:
self.api_key = conf.get("api_key")
self.model = conf.get("model", "glm-4") # 默认使用 glm-4 模型
self.client = ZhipuAI(api_key=self.api_key)
self.converstion_list = {}
@staticmethod
def value_check(conf: dict) -> bool:
if conf and conf.get("api_key"):
return True
return False
def __repr__(self):
return 'ZhiPu'
def get_answer(self, msg: str, wxid: str, **args) -> str:
self._update_message(wxid, str(msg), "user")
response = self.client.chat.completions.create(
model=self.model,
messages=self.converstion_list[wxid]
)
resp_msg = response.choices[0].message
answer = resp_msg.content
self._update_message(wxid, answer, "assistant")
return answer
def _update_message(self, wxid: str, msg: str, role: str) -> None:
if wxid not in self.converstion_list.keys():
self.converstion_list[wxid] = []
content = {"role": role, "content": str(msg)}
self.converstion_list[wxid].append(content)
if __name__ == "__main__":
from configuration import Config
config = Config().ZHIPU
if not config:
exit(0)
zhipu = ZhiPu(config)
rsp = zhipu.get_answer("你好")
print(rsp)

View File

@@ -0,0 +1,45 @@
# ChatGLM3 集成使用说明
1. 需要取消配置中 chatglm 的注释,并填写对应信息。使用 [ChatGLM3](https://github.com/THUDM/ChatGLM3) 时,运行最新版 ChatGLM3 根目录下的 openai_api.py 以获取 api 地址:
```yaml
# 如果要使用 chatglm,取消下面的注释并填写相关内容
chatglm:
  key: sk-012345678901234567890123456789012345678901234567  # 根据需要自己做 key 校验
  api: http://localhost:8000/v1  # 根据自己的 chatglm 地址修改
  proxy:  # 如果你在国内,可能需要魔法,大概长这样:http://域名或者IP地址:端口号
  prompt: 你是智能聊天机器人,你叫小薇  # 根据需要对角色进行设定
  file_path: F:/Pictures/temp  # 设定生成图片和代码使用的文件夹路径
```
2. 修改 chatglm/tool_registry.py 里的以下配置(如 comfyUI 地址),或根据需要自行添加工具:函数名上需要加 @register_tool 装饰器,函数体内需要写 '''函数描述''' 文档字符串,参数需要用 Annotated[str, '参数说明', True] 修饰(依次为类型、参数说明、是否必填),并用 -> 标注对应的返回类型
```python
@register_tool
def get_confyui_image(prompt: Annotated[str, '要生成图片的提示词,注意必须是英文', True]) -> dict:
    '''
    生成图片
    '''
    with open("ai_providers/chatglm/base.json", "r", encoding="utf-8") as f:
        data2 = json.load(f)
    data2['prompt']['3']['inputs']['seed'] = ''.join(
        random.sample('123456789012345678901234567890', 14))
    # 模型名称
    data2['prompt']['4']['inputs']['ckpt_name'] = 'chilloutmix_NiPrunedFp32Fix.safetensors'
    data2['prompt']['6']['inputs']['text'] = prompt  # 正向提示词
    # data2['prompt']['7']['inputs']['text'] = ''  # 反向提示词
    cfui = ComfyUIApi(server_address="127.0.0.1:8188")  # 根据自己 comfyUI 地址修改
    images = cfui.get_images(data2['prompt'])
    return {'res': images[0]['image'], 'res_type': 'image', 'filename': images[0]['filename']}
```
3. 使用 Code Interpreter 还需要安装 Jupyter 内核,默认名称叫 chatglm3
```
ipython kernel install --name chatglm3 --user
```
如果需要自定义内核名称,可以配置系统环境变量 IPYKERNEL,或者修改 chatglm/code_kernel.py:
```
IPYKERNEL = os.environ.get('IPYKERNEL', 'chatglm3')
```
4. 启动后,发送 #帮助 可以查看 模式和常用指令

View File

@@ -0,0 +1,13 @@
import sys
class UnsupportedPythonVersionError(Exception):
def __init__(self, error_msg: str):
super().__init__(error_msg)
python_version_info = sys.version_info
if not sys.version_info >= (3, 9):
msg = "当前Python版本: " + ".".join(map(str, python_version_info[:3])) + (', 需要python版本 >= 3.9, 前往下载: '
'https://www.python.org/downloads/')
raise UnsupportedPythonVersionError(msg)

88
ai_providers/chatglm/base.json Normal file
View File

@@ -0,0 +1,88 @@
{
"prompt": {
"3": {
"inputs": {
"seed": 1000573256060686,
"steps": 20,
"cfg": 8,
"sampler_name": "euler",
"scheduler": "normal",
"denoise": 1,
"model": [
"4",
0
],
"positive": [
"6",
0
],
"negative": [
"7",
0
],
"latent_image": [
"5",
0
]
},
"class_type": "KSampler"
},
"4": {
"inputs": {
"ckpt_name": "(修复)512-inpainting-ema.safetensors"
},
"class_type": "CheckpointLoaderSimple"
},
"5": {
"inputs": {
"width": 512,
"height": 512,
"batch_size": 1
},
"class_type": "EmptyLatentImage"
},
"6": {
"inputs": {
"text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,dress, ",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode"
},
"7": {
"inputs": {
"text": "text, watermark",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode"
},
"8": {
"inputs": {
"samples": [
"3",
0
],
"vae": [
"4",
2
]
},
"class_type": "VAEDecode"
},
"9": {
"inputs": {
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
},
"class_type": "SaveImage"
}
}
}

203
ai_providers/chatglm/code_kernel.py Normal file
View File

@@ -0,0 +1,203 @@
import base64
import os
import queue
import re
import logging
from io import BytesIO
from subprocess import PIPE
from typing import Optional, Union
import jupyter_client
from PIL import Image
# 获取模块级 logger
logger = logging.getLogger(__name__)
IPYKERNEL = os.environ.get('IPYKERNEL', 'chatglm3')
class CodeKernel(object):
def __init__(self,
kernel_name='kernel',
kernel_id=None,
kernel_config_path="",
python_path=None,
ipython_path=None,
init_file_path="./startup.py",
verbose=1):
self.kernel_name = kernel_name
self.kernel_id = kernel_id
self.kernel_config_path = kernel_config_path
self.python_path = python_path
self.ipython_path = ipython_path
self.init_file_path = init_file_path
self.verbose = verbose
if python_path is None and ipython_path is None:
env = None
else:
env = {"PATH": self.python_path + ":$PATH",
"PYTHONPATH": self.python_path}
# Initialize the backend kernel
self.kernel_manager = jupyter_client.KernelManager(kernel_name=IPYKERNEL,
connection_file=self.kernel_config_path,
exec_files=[
self.init_file_path],
env=env)
if self.kernel_config_path:
self.kernel_manager.load_connection_file()
self.kernel_manager.start_kernel(stdout=PIPE, stderr=PIPE)
logger.info("Backend kernel started with the configuration: %s",
self.kernel_config_path)
else:
self.kernel_manager.start_kernel(stdout=PIPE, stderr=PIPE)
logger.info("Backend kernel started with the configuration: %s",
self.kernel_manager.connection_file)
if verbose:
logger.debug(self.kernel_manager.get_connection_info())
# Initialize the code kernel
self.kernel = self.kernel_manager.blocking_client()
# self.kernel.load_connection_file()
self.kernel.start_channels()
logger.info("Code kernel started.")
def execute(self, code):
self.kernel.execute(code)
try:
shell_msg = self.kernel.get_shell_msg(timeout=40)
io_msg_content = self.kernel.get_iopub_msg(timeout=40)['content']
while True:
msg_out = io_msg_content
# Poll the message
try:
io_msg_content = self.kernel.get_iopub_msg(timeout=40)[
'content']
if 'execution_state' in io_msg_content and io_msg_content['execution_state'] == 'idle':
break
except queue.Empty:
break
return shell_msg, msg_out
except Exception as e:
logger.error("执行代码时出错: %s", str(e), exc_info=True)
return None
def execute_interactive(self, code, verbose=False):
shell_msg = self.kernel.execute_interactive(code)
if shell_msg is queue.Empty:
if verbose:
logger.warning("Timeout waiting for shell message.")
self.check_msg(shell_msg, verbose=verbose)
return shell_msg
def inspect(self, code, verbose=False):
msg_id = self.kernel.inspect(code)
shell_msg = self.kernel.get_shell_msg(timeout=30)
if shell_msg is queue.Empty:
if verbose:
logger.warning("Timeout waiting for shell message.")
self.check_msg(shell_msg, verbose=verbose)
return shell_msg
def get_error_msg(self, msg, verbose=False) -> Optional[str]:
if msg['content']['status'] == 'error':
try:
error_msg = msg['content']['traceback']
except BaseException:
try:
error_msg = msg['content']['traceback'][-1].strip()
except BaseException:
error_msg = "Traceback Error"
if verbose:
logger.error("Error: %s", error_msg)
return error_msg
return None
def check_msg(self, msg, verbose=False):
status = msg['content']['status']
if status == 'ok':
if verbose:
logger.info("Execution succeeded.")
elif status == 'error':
for line in msg['content']['traceback']:
if verbose:
logger.error(line)
def shutdown(self):
# Shutdown the backend kernel
self.kernel_manager.shutdown_kernel()
logger.info("Backend kernel shutdown.")
# Shutdown the code kernel
self.kernel.shutdown()
logger.info("Code kernel shutdown.")
def restart(self):
# Restart the backend kernel
self.kernel_manager.restart_kernel()
# logger.info("Backend kernel restarted.")
def interrupt(self):
# Interrupt the backend kernel
self.kernel_manager.interrupt_kernel()
# logger.info("Backend kernel interrupted.")
def is_alive(self):
return self.kernel.is_alive()
def b64_2_img(data):
buff = BytesIO(base64.b64decode(data))
return Image.open(buff)
def clean_ansi_codes(input_string):
ansi_escape = re.compile(r'(\x9B|\x1B\[|\u001b\[)[0-?]*[ -/]*[@-~]')
return ansi_escape.sub('', input_string)
def execute(code, kernel: CodeKernel) -> tuple[str, Union[str, Image.Image]]:
res = ""
res_type = None
code = code.replace("<|observation|>", "")
code = code.replace("<|assistant|>interpreter", "")
code = code.replace("<|assistant|>", "")
code = code.replace("<|user|>", "")
code = code.replace("<|system|>", "")
msg, output = kernel.execute(code)
if msg['metadata']['status'] == "timeout":
return res_type, 'Timed out'
elif msg['metadata']['status'] == 'error':
return res_type, clean_ansi_codes('\n'.join(kernel.get_error_msg(msg, verbose=True)))
if 'text' in output:
res_type = "text"
res = output['text']
elif 'data' in output:
for key in output['data']:
if 'image/png' in key:
res_type = "image"
res = output['data'][key]
break
elif 'text/plain' in key:
res_type = "text"
res = output['data'][key]
if res_type == "image":
return res_type, b64_2_img(res)
elif res_type == "text" or res_type == "traceback":
res = res
return res_type, res
def extract_code(text: str) -> str:
pattern = r'```([^\n]*)\n(.*?)```'
matches = re.findall(pattern, text, re.DOTALL)
return matches[-1][1]

186
ai_providers/chatglm/comfyUI_api.py Normal file
View File

@@ -0,0 +1,186 @@
# This is an example that uses the websockets api to know when a prompt execution is done
# Once the prompt execution is done it downloads the images using the /history endpoint
import io
import json
import random
import urllib
import uuid
import requests
# NOTE: websocket-client (https://github.com/websocket-client/websocket-client)
import websocket
from PIL import Image
class ComfyUIApi():
def __init__(self, server_address="127.0.0.1:8188"):
self.server_address = server_address
self.client_id = str(uuid.uuid4())
self.ws = websocket.WebSocket()
self.ws.connect(
"ws://{}/ws?clientId={}".format(server_address, self.client_id))
def queue_prompt(self, prompt):
p = {"prompt": prompt, "client_id": self.client_id}
data = json.dumps(p).encode('utf-8')
req = requests.post(
"http://{}/prompt".format(self.server_address), data=data)
print(req.text)
return json.loads(req.text)
def get_image(self, filename, subfolder, folder_type):
data = {"filename": filename,
"subfolder": subfolder, "type": folder_type}
url_values = urllib.parse.urlencode(data)
with requests.get("http://{}/view?{}".format(self.server_address, url_values)) as response:
image = Image.open(io.BytesIO(response.content))
return image
def get_image_url(self, filename, subfolder, folder_type):
data = {"filename": filename,
"subfolder": subfolder, "type": folder_type}
url_values = urllib.parse.urlencode(data)
return "http://{}/view?{}".format(self.server_address, url_values)
def get_history(self, prompt_id):
with requests.get("http://{}/history/{}".format(self.server_address, prompt_id)) as response:
return json.loads(response.text)
def get_images(self, prompt, isUrl=False):
prompt_id = self.queue_prompt(prompt)['prompt_id']
output_images = []
while True:
out = self.ws.recv()
if isinstance(out, str):
message = json.loads(out)
if message['type'] == 'executing':
data = message['data']
if data['node'] is None and data['prompt_id'] == prompt_id:
break # Execution is done
else:
continue # previews are binary data
history = self.get_history(prompt_id)[prompt_id]
for o in history['outputs']:
for node_id in history['outputs']:
node_output = history['outputs'][node_id]
if 'images' in node_output:
for image in node_output['images']:
image_data = self.get_image_url(image['filename'], image['subfolder'], image['type']) if isUrl else self.get_image(
image['filename'], image['subfolder'], image['type'])
image['image'] = image_data
output_images.append(image)
return output_images
prompt_text = """
{
"3": {
"class_type": "KSampler",
"inputs": {
"cfg": 8,
"denoise": 1,
"latent_image": [
"5",
0
],
"model": [
"4",
0
],
"negative": [
"7",
0
],
"positive": [
"6",
0
],
"sampler_name": "euler",
"scheduler": "normal",
"seed": 8566257,
"steps": 20
}
},
"4": {
"class_type": "CheckpointLoaderSimple",
"inputs": {
"ckpt_name": "chilloutmix_NiPrunedFp32Fix.safetensors"
}
},
"5": {
"class_type": "EmptyLatentImage",
"inputs": {
"batch_size": 1,
"height": 512,
"width": 512
}
},
"6": {
"class_type": "CLIPTextEncode",
"inputs": {
"clip": [
"4",
1
],
"text": "masterpiece best quality girl"
}
},
"7": {
"class_type": "CLIPTextEncode",
"inputs": {
"clip": [
"4",
1
],
"text": "bad hands"
}
},
"8": {
"class_type": "VAEDecode",
"inputs": {
"samples": [
"3",
0
],
"vae": [
"4",
2
]
}
},
"9": {
"class_type": "SaveImage",
"inputs": {
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
}
}
}
"""
if __name__ == '__main__':
prompt = json.loads(prompt_text)
# set the text prompt for our positive CLIPTextEncode
prompt["6"]["inputs"]["text"] = "masterpiece best quality man"
# set the seed for our KSampler node
prompt["3"]["inputs"]["seed"] = ''.join(
random.sample('123456789012345678901234567890', 14))
cfui = ComfyUIApi()
images = cfui.get_images(prompt)
# Commented out code to display the output images:
# for node_id in images:
#     for image_data in images[node_id]:
#         import io
#         from PIL import Image
#         image = Image.open(io.BytesIO(image_data))
#         image.show()

167
ai_providers/chatglm/tool_registry.py Normal file
View File

@@ -0,0 +1,167 @@
import inspect
import json
import random
import re
import traceback
from copy import deepcopy
from datetime import datetime
from types import GenericAlias
from typing import Annotated, get_origin
from ai_providers.chatglm.comfyUI_api import ComfyUIApi
from function.func_news import News
from zhdate import ZhDate
_TOOL_HOOKS = {}
_TOOL_DESCRIPTIONS = {}
def extract_code(text: str) -> str:
pattern = r'```([^\n]*)\n(.*?)```'
matches = re.findall(pattern, text, re.DOTALL)
return matches[-1][1]
def register_tool(func: callable):
tool_name = func.__name__
tool_description = inspect.getdoc(func).strip()
python_params = inspect.signature(func).parameters
tool_params = []
for name, param in python_params.items():
annotation = param.annotation
if annotation is inspect.Parameter.empty:
raise TypeError(f"Parameter `{name}` missing type annotation")
if get_origin(annotation) != Annotated:
raise TypeError(
f"Annotation type for `{name}` must be typing.Annotated")
typ, (description, required) = annotation.__origin__, annotation.__metadata__
typ: str = str(typ) if isinstance(typ, GenericAlias) else typ.__name__
if not isinstance(description, str):
raise TypeError(f"Description for `{name}` must be a string")
if not isinstance(required, bool):
raise TypeError(f"Required for `{name}` must be a bool")
tool_params.append({
"name": name,
"description": description,
"type": typ,
"required": required
})
tool_def = {
"name": tool_name,
"description": tool_description,
"parameters": tool_params
}
# print("[registered tool] " + pformat(tool_def))
_TOOL_HOOKS[tool_name] = func
_TOOL_DESCRIPTIONS[tool_name] = tool_def
return func
def dispatch_tool(tool_name: str, tool_params: dict) -> str:
if tool_name not in _TOOL_HOOKS:
return f"Tool `{tool_name}` not found. Please use a provided tool."
tool_call = _TOOL_HOOKS[tool_name]
try:
ret = tool_call(**tool_params)
except BaseException:
ret = traceback.format_exc()
return ret
def get_tools() -> dict:
return deepcopy(_TOOL_DESCRIPTIONS)
# Tool Definitions
# @register_tool
# def random_number_generator(
# seed: Annotated[int, 'The random seed used by the generator', True],
# range: Annotated[tuple[int, int], 'The range of the generated numbers', True],
# ) -> int:
# """
# Generates a random number x, s.t. range[0] <= x < range[1]
# """
# if not isinstance(seed, int):
# raise TypeError("Seed must be an integer")
# if not isinstance(range, tuple):
# raise TypeError("Range must be a tuple")
# if not isinstance(range[0], int) or not isinstance(range[1], int):
# raise TypeError("Range must be a tuple of integers")
# import random
# return random.Random(seed).randint(*range)
@register_tool
def get_weather(
city_name: Annotated[str, 'The name of the city to be queried', True],
) -> str:
"""
Get the current weather for `city_name`
"""
if not isinstance(city_name, str):
raise TypeError("City name must be a string")
key_selection = {
"current_condition": ["temp_C", "FeelsLikeC", "humidity", "weatherDesc", "observation_time"],
}
import requests
try:
resp = requests.get(f"https://wttr.in/{city_name}?format=j1")
resp.raise_for_status()
resp = resp.json()
ret = {k: {_v: resp[k][0][_v] for _v in v}
for k, v in key_selection.items()}
except BaseException:
import traceback
ret = "Error encountered while fetching weather data!\n" + traceback.format_exc()
return str(ret)
@register_tool
def get_confyui_image(prompt: Annotated[str, '要生成图片的提示词,注意必须是英文', True]) -> dict:
'''
生成图片
'''
with open("ai_providers/chatglm/base.json", "r", encoding="utf-8") as f:
data2 = json.load(f)
data2['prompt']['3']['inputs']['seed'] = ''.join(
random.sample('123456789012345678901234567890', 14))
# 模型名称
data2['prompt']['4']['inputs']['ckpt_name'] = 'chilloutmix_NiPrunedFp32Fix.safetensors'
data2['prompt']['6']['inputs']['text'] = prompt # 正向提示词
# data2['prompt']['7']['inputs']['text']='' #反向提示词
cfui = ComfyUIApi(server_address="127.0.0.1:8188") # 根据自己comfyUI地址修改
images = cfui.get_images(data2['prompt'])
return {'res': images[0]['image'], 'res_type': 'image', 'filename': images[0]['filename']}
@register_tool
def get_news() -> str:
'''
获取最新新闻
'''
news = News()
return news.get_important_news()
@register_tool
def get_time() -> str:
'''
获取当前日期,时间,农历日期,星期几
'''
time = datetime.now()
date2 = ZhDate.from_datetime(time)
week_list = ["星期一", "星期二", "星期三", "星期四", "星期五", "星期六", "星期日"]
return '{} {} {}'.format(time.strftime("%Y年%m月%d日 %H:%M:%S"), week_list[time.weekday()], '农历:' + date2.chinese())
if __name__ == "__main__":
print(dispatch_tool("get_weather", {"city_name": "beijing"}))
print(get_tools())

11
commands/__init__.py Normal file
View File

@@ -0,0 +1,11 @@
# commands package
"""
命令路由系统包
此包包含了命令路由系统的所有组件:
- context: 消息上下文类
- models: 命令数据模型
- router: 命令路由器
- registry: 命令注册表
- handlers: 命令处理函数
"""

89
commands/context.py Normal file
View File

@@ -0,0 +1,89 @@
import re
from dataclasses import dataclass, field
from typing import Dict, Optional, Any
@dataclass
class MessageContext:
"""
消息上下文,封装消息及其处理所需的所有信息
"""
# 原始参数
msg: Any # 原始 WxMsg 对象
wcf: Any # Wcf 实例,方便 handler 调用 API
config: Any # Config 实例,方便 handler 读取配置
all_contacts: Dict[str, str] # 所有联系人信息
robot_wxid: str # 机器人自身的 wxid
robot: Any = None # Robot 实例,用于访问其方法和属性
logger: Any = None # 日志记录器
# 预处理字段
text: str = "" # 预处理后的纯文本消息 (去@, 去空格)
is_group: bool = False # 是否群聊消息
is_at_bot: bool = False # 是否在群聊中 @ 了机器人
sender_name: str = "未知用户" # 发送者昵称 (群内或私聊)
# 懒加载字段
_room_members: Optional[Dict[str, str]] = field(default=None, init=False, repr=False)
@property
def room_members(self) -> Dict[str, str]:
"""获取群成员列表 (仅群聊有效,懒加载)"""
if not self.is_group:
return {}
if self._room_members is None:
try:
self._room_members = self.wcf.get_chatroom_members(self.msg.roomid)
except Exception as e:
if self.logger:
self.logger.error(f"获取群 {self.msg.roomid} 成员失败: {e}")
else:
print(f"获取群 {self.msg.roomid} 成员失败: {e}")
self._room_members = {} # 出错时返回空字典
return self._room_members
def get_sender_alias_or_name(self) -> str:
"""获取发送者在群里的昵称,如果获取失败或私聊,则返回其微信昵称"""
if self.is_group:
try:
# 尝试获取群昵称
alias = self.wcf.get_alias_in_chatroom(self.msg.sender, self.msg.roomid)
if alias and alias.strip():
return alias
except Exception as e:
if self.logger:
self.logger.error(f"获取群 {self.msg.roomid} 成员 {self.msg.sender} 昵称失败: {e}")
else:
print(f"获取群 {self.msg.roomid} 成员 {self.msg.sender} 昵称失败: {e}")
# 群昵称获取失败或私聊,返回通讯录昵称
return self.all_contacts.get(self.msg.sender, self.msg.sender) # 兜底返回 wxid
def get_receiver(self) -> str:
"""获取应答接收者ID (群聊返回群ID私聊返回用户ID)"""
return self.msg.roomid if self.is_group else self.msg.sender
def send_text(self, content: str, at_list: str = "") -> bool:
"""
发送文本消息
:param content: 消息内容
:param at_list: 要@的用户列表,多个用逗号分隔
:return: 是否发送成功
"""
if self.robot and hasattr(self.robot, "sendTextMsg"):
receiver = self.get_receiver()
try:
self.robot.sendTextMsg(content, receiver, at_list)
return True
except Exception as e:
if self.logger:
self.logger.error(f"发送消息失败: {e}")
else:
print(f"发送消息失败: {e}")
return False
else:
if self.logger:
self.logger.error("Robot实例不存在或没有sendTextMsg方法")
else:
print("Robot实例不存在或没有sendTextMsg方法")
return False

1267
commands/handlers.py Normal file

File diff suppressed because it is too large Load Diff

38
commands/models.py Normal file
View File

@@ -0,0 +1,38 @@
import re
from dataclasses import dataclass
from typing import Pattern, Callable, Literal, Optional, Any, Union, Match
# 导入 MessageContext使用前向引用避免循环导入
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .context import MessageContext
@dataclass
class Command:
"""
命令定义类,封装命令的匹配条件和处理函数
"""
name: str # 命令名称,用于日志和调试
pattern: Union[Pattern, Callable[['MessageContext'], Optional[Match]]] # 匹配规则:正则表达式或自定义匹配函数
scope: Literal["group", "private", "both"] # 生效范围: "group"-仅群聊, "private"-仅私聊, "both"-两者都可
handler: Callable[['MessageContext', Optional[Match]], bool] # 处理函数
need_at: bool = False # 在群聊中是否必须@机器人才能触发
priority: int = 100 # 优先级,数字越小越先匹配
description: str = "" # 命令的描述,用于生成帮助信息
def __post_init__(self):
"""验证命令配置的有效性"""
if self.scope not in ["group", "private", "both"]:
raise ValueError(f"无效的作用域: {self.scope},必须是 'group', 'private''both'")
# 检查pattern是否为正则表达式或可调用对象
if not isinstance(self.pattern, (Pattern, Callable)):
# 如果是字符串,尝试转换为正则表达式
if isinstance(self.pattern, str):
try:
self.pattern = re.compile(self.pattern)
except re.error:
raise ValueError(f"无效的正则表达式: {self.pattern}")
else:
raise TypeError(f"pattern 必须是正则表达式或可调用对象,而不是 {type(self.pattern)}")

226
commands/registry.py Normal file
View File

@@ -0,0 +1,226 @@
import re
from .models import Command
from .handlers import (
handle_help, handle_duel, handle_sneak_attack, handle_duel_rank,
handle_duel_stats, handle_check_equipment, handle_reset_memory,
handle_summary, handle_clear_messages, handle_news_request,
handle_rename, handle_chengyu, handle_chitchat, handle_insult,
handle_perplexity_ask, handle_reminder, handle_list_reminders, handle_delete_reminder,
handle_weather, handle_weather_forecast
)
# 命令列表,按优先级排序
# 优先级越小越先匹配
COMMANDS = [
# ======== 基础系统命令 ========
Command(
name="help",
pattern=re.compile(r"^(info|帮助|指令)$", re.IGNORECASE),
scope="both", # 群聊和私聊都支持
need_at=False, # 不需要@机器人
priority=10, # 优先级较高
handler=handle_help,
description="显示机器人的帮助信息"
),
# 添加骂人命令
Command(
name="insult",
pattern=re.compile(r"骂一下\s*@([^\s@]+)"),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=15, # 优先级较高
handler=handle_insult,
description="骂指定用户"
),
Command(
name="reset_memory",
pattern=re.compile(r"^(reset|重置)$", re.IGNORECASE),
scope="both", # 群聊和私聊都支持
need_at=True, # 需要@机器人
priority=20, # 优先级较高
handler=handle_reset_memory,
description="重置机器人缓存里的上下文历史"
),
# ======== Perplexity AI 命令 ========
Command(
name="perplexity_ask",
pattern=re.compile(r"^ask\s*(.+)", re.IGNORECASE | re.DOTALL),
scope="both", # 群聊和私聊都支持
need_at=True, # 需要@机器人
priority=25, # 较高优先级,确保在闲聊之前处理
handler=handle_perplexity_ask,
description="使用 Perplexity AI 进行深度查询"
),
# ======== 消息管理命令 ========
Command(
name="summary",
pattern=re.compile(r"^(summary|总结)$", re.IGNORECASE),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=30, # 优先级一般
handler=handle_summary,
description="总结群聊最近的消息"
),
Command(
name="clear_messages",
pattern=re.compile(r"^(clearmessages|清除历史)$", re.IGNORECASE),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=31, # 优先级一般
handler=handle_clear_messages,
description="从数据库中清除群聊的历史消息记录"
),
# ======== 新闻和实用工具 ========
Command(
name="weather_forecast",
pattern=re.compile(r"^(?:天气预报|预报)\s+(.+)$"), # 匹配 天气预报/预报 城市名
scope="both", # 群聊和私聊都支持
need_at=True, # 需要@机器人
priority=38, # 优先级比天气高一点
handler=handle_weather_forecast,
description="查询指定城市未来几天的天气预报 (例如:天气预报 北京)"
),
Command(
name="weather",
pattern=re.compile(r"^(?:天气|温度)\s+(.+)$"), # 匹配 天气/温度 城市名
scope="both", # 群聊和私聊都支持
need_at=True, # 需要@机器人
priority=39, # 优先级设置在新闻命令前
handler=handle_weather,
description="查询指定城市的天气 (例如:天气 北京)"
),
Command(
name="news",
pattern=re.compile(r"^新闻$"),
scope="both", # 群聊和私聊都支持
need_at=True, # 需要@机器人
priority=40, # 优先级一般
handler=handle_news_request,
description="获取最新新闻"
),
# ======== 决斗系统命令 ========
Command(
name="duel",
pattern=re.compile(r"决斗.*?(?:@|[与和])\s*([^\s@]+)"),
scope="group", # 仅群聊支持
need_at=False, # 不需要@机器人
priority=50, # 优先级较低
handler=handle_duel,
description="发起决斗"
),
Command(
name="sneak_attack",
pattern=re.compile(r"(?:偷袭|偷分).*?@([^\s@]+)"),
scope="group", # 仅群聊支持
need_at=False, # 不需要@机器人
priority=51, # 优先级较低
handler=handle_sneak_attack,
description="偷袭其他玩家"
),
Command(
name="duel_rank",
pattern=re.compile(r"^(决斗排行|决斗排名|排行榜)$"),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=52, # 优先级较低
handler=handle_duel_rank,
description="查看决斗排行榜"
),
Command(
name="duel_stats",
pattern=re.compile(r"^(决斗战绩|我的战绩|战绩查询)(.*)$"),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=53, # 优先级较低
handler=handle_duel_stats,
description="查看决斗战绩"
),
Command(
name="check_equipment",
pattern=re.compile(r"^(我的装备|查看装备)$"),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=54, # 优先级较低
handler=handle_check_equipment,
description="查看我的装备"
),
Command(
name="rename",
pattern=re.compile(r"^改名\s+([^\s]+)\s+([^\s]+)$"),
scope="group", # 仅群聊支持
need_at=True, # 需要@机器人
priority=55, # 优先级较低
handler=handle_rename,
description="更改昵称"
),
# ======== 成语系统命令 ========
Command(
name="chengyu",
pattern=re.compile(r"^([#?])(.+)$"),
scope="both", # 群聊和私聊都支持
need_at=False, # 不需要@机器人
priority=60, # 优先级较低
handler=handle_chengyu,
description="成语接龙与查询"
),
# ======== 提醒功能 ========
Command(
name="reminder",
pattern=re.compile(r"^(提醒\s*.+)$", re.IGNORECASE | re.DOTALL), # 匹配"提醒"开头(可无空格),捕获包括"提醒"在内的完整内容
scope="both", # 支持群聊和私聊
need_at=True, # 在群聊中需要@机器人
priority=35, # 优先级适中,在基础命令后,复杂功能或闲聊前
handler=handle_reminder,
description="设置一个提醒 (例如:提醒 明天下午3点 开会 或 提醒我早上七点起床)"
),
Command(
name="list_reminders",
pattern=re.compile(r"^(查看提醒|我的提醒|提醒列表)$", re.IGNORECASE),
scope="both", # 支持群聊和私聊
need_at=True, # 在群聊中需要@机器人
priority=36, # 优先级略低于设置提醒
handler=handle_list_reminders,
description="查看您设置的所有提醒"
),
Command(
name="delete_reminder",
# 匹配 "删除提醒 " 后跟任意内容,用于删除特定提醒
pattern=re.compile(r"^(删除提醒|取消提醒)\s+(.+)$", re.IGNORECASE | re.DOTALL),
scope="both", # 支持群聊和私聊
need_at=True, # 在群聊中需要@机器人
priority=37,
handler=handle_delete_reminder,
description="删除指定的提醒 (例如:删除提醒 ID:xxxxxx)"
),
]
# 可以添加一个函数,获取命令列表的简单描述
def get_commands_info():
"""获取所有命令的简要信息,用于调试"""
info = []
for i, cmd in enumerate(COMMANDS):
scope_str = {"group": "仅群聊", "private": "仅私聊", "both": "群聊私聊"}[cmd.scope]
at_str = "需要@" if cmd.need_at else "不需@"
info.append(f"{i+1}. [{cmd.priority}] {cmd.name} ({scope_str},{at_str}) - {cmd.description or '无描述'}")
return "\n".join(info)
# 导出所有命令
__all__ = ["COMMANDS", "get_commands_info"]

117
commands/router.py Normal file
View File

@@ -0,0 +1,117 @@
import re
import logging
from typing import List, Optional, Any, Dict, Match
import traceback
from .models import Command
from .context import MessageContext
# 获取模块级 logger
logger = logging.getLogger(__name__)
class CommandRouter:
"""
命令路由器,负责将消息路由到对应的命令处理函数
"""
def __init__(self, commands: List[Command], robot_instance: Optional[Any] = None):
# 按优先级排序命令列表,数字越小优先级越高
self.commands = sorted(commands, key=lambda cmd: cmd.priority)
self.robot_instance = robot_instance
# 分析并输出命令注册信息,便于调试
scope_count = {"group": 0, "private": 0, "both": 0}
for cmd in commands:
scope_count[cmd.scope] += 1
logger.info(f"命令路由器初始化成功,共加载 {len(commands)} 个命令")
logger.info(f"命令作用域分布: 仅群聊 {scope_count['group']},仅私聊 {scope_count['private']},两者均可 {scope_count['both']}")
# 按优先级输出命令信息
for i, cmd in enumerate(self.commands[:10]): # 只输出前10个
logger.info(f"{i+1}. [{cmd.priority}] {cmd.name} - {cmd.description or '无描述'}")
if len(self.commands) > 10:
logger.info(f"... 共 {len(self.commands)} 个命令")
def dispatch(self, ctx: MessageContext) -> bool:
"""
根据消息上下文分发命令
:param ctx: 消息上下文对象
:return: 是否有命令成功处理
"""
# 确保context可以访问到robot实例
if self.robot_instance and not ctx.robot:
ctx.robot = self.robot_instance
# 如果robot有logger属性且ctx没有logger则使用robot的logger
if hasattr(self.robot_instance, 'LOG') and not ctx.logger:
ctx.logger = self.robot_instance.LOG
# 记录日志,便于调试
if ctx.logger:
ctx.logger.debug(f"开始路由消息: '{ctx.text}', 来自: {ctx.sender_name}, 群聊: {ctx.is_group}, @机器人: {ctx.is_at_bot}")
# 遍历命令列表,按优先级顺序匹配
for cmd in self.commands:
# 1. 检查作用域 (scope)
if cmd.scope != "both":
if (cmd.scope == "group" and not ctx.is_group) or \
(cmd.scope == "private" and ctx.is_group):
continue # 作用域不匹配,跳过
# 2. 检查是否需要 @ (need_at) - 仅在群聊中有效
if ctx.is_group and cmd.need_at and not ctx.is_at_bot:
continue # 需要@机器人但未被@,跳过
# 3. 执行匹配逻辑
match_result = None
try:
# 根据pattern类型执行匹配
if callable(cmd.pattern):
# 自定义匹配函数
match_result = cmd.pattern(ctx)
else:
# 正则表达式匹配
match_obj = cmd.pattern.search(ctx.text)
match_result = match_obj
# 匹配失败,尝试下一个命令
if match_result is None:
continue
# 匹配成功,记录日志
if ctx.logger:
ctx.logger.info(f"命令 '{cmd.name}' 匹配成功,准备处理")
# 4. 执行命令处理函数
try:
result = cmd.handler(ctx, match_result)
if result:
if ctx.logger:
ctx.logger.info(f"命令 '{cmd.name}' 处理成功")
return True
else:
if ctx.logger:
ctx.logger.warning(f"命令 '{cmd.name}' 处理返回False尝试下一个命令")
except Exception as e:
if ctx.logger:
ctx.logger.error(f"执行命令 '{cmd.name}' 处理函数时出错: {e}")
ctx.logger.error(traceback.format_exc())
else:
logger.error(f"执行命令 '{cmd.name}' 处理函数时出错: {e}", exc_info=True)
# 出错后继续尝试下一个命令
except Exception as e:
# 匹配过程出错,记录并继续
if ctx.logger:
ctx.logger.error(f"匹配命令 '{cmd.name}' 时出错: {e}")
else:
logger.error(f"匹配命令 '{cmd.name}' 时出错: {e}", exc_info=True)
continue
# 所有命令都未匹配或处理失败
if ctx.logger:
ctx.logger.debug("所有命令匹配失败或处理失败")
return False
def get_command_descriptions(self) -> Dict[str, str]:
"""获取所有命令的描述,用于生成帮助信息"""
return {cmd.name: cmd.description for cmd in self.commands if cmd.description}
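除了正则表达式,pattern 也可以是接收 MessageContext 的可调用对象(对应上面 dispatch 中的 callable 分支),返回非 None 即视为匹配成功。以下是一个示意写法(命令名与阈值均为假设):
from commands.models import Command

def match_long_text(ctx):
    # 自定义匹配:消息超过 200 字时触发,返回任意非 None 值即可
    return ctx.text if len(ctx.text) > 200 else None

long_text_cmd = Command(
    name="long_text_notice",
    pattern=match_long_text,
    scope="group",
    need_at=False,
    priority=90,
    handler=lambda ctx, m: ctx.send_text("消息有点长,我先帮你记下来啦~"),
    description="长消息提示(示例)",
)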

190
config.yaml.template Normal file
View File

@@ -0,0 +1,190 @@
logging:
version: 1
disable_existing_loggers: False
formatters:
simple:
format: "%(asctime)s %(message)s"
datefmt: "%Y-%m-%d %H:%M:%S"
error:
format: "%(asctime)s %(name)s %(levelname)s %(filename)s::%(funcName)s[%(lineno)d]:%(message)s"
handlers:
console:
class: logging.StreamHandler
level: INFO
formatter: simple
stream: ext://sys.stdout
info_file_handler:
class: logging.handlers.RotatingFileHandler
level: INFO
formatter: simple
filename: wx_info.log
maxBytes: 10485760 # 10MB
backupCount: 20
encoding: utf8
warning_file_handler:
class: logging.handlers.RotatingFileHandler
level: WARNING
formatter: simple
filename: wx_warning.log
maxBytes: 10485760 # 10MB
backupCount: 20
encoding: utf8
error_file_handler:
class: logging.handlers.RotatingFileHandler
level: ERROR
formatter: error
filename: wx_error.log
maxBytes: 10485760 # 10MB
backupCount: 20
encoding: utf8
root:
level: INFO
handlers: [console, info_file_handler, error_file_handler]
groups:
enable: [example12345@chatroom,example12345@chatroom] # 允许响应的群 roomId大概长这样2xxxxxxxxx3@chatroom
welcome_msg: "欢迎 {new_member} 加入群聊!\n请简单介绍一下自己吧~\n如果想和我聊天可以@我" # 新人入群欢迎消息,可使用{new_member}和{inviter}变量
# 群聊与AI模型映射如果不配置则使用默认模型
models:
# 模型ID参考
# 0: 自动选择第一个可用模型
# 1: TigerBot
# 2: ChatGPT
# 3: 讯飞星火
# 4: ChatGLM
# 5: BardAssistant/Gemini
# 6: 智谱ZhiPu
# 7: Ollama
# 8: DeepSeek
# 9: Perplexity
default: 0 # 默认模型ID0表示自动选择第一个可用模型
# 群聊映射
mapping:
- room_id: example12345@chatroom
model: 2 # 对应ChatType.CHATGPT
- room_id: example12345@chatroom
model: 7 # 对应ChatType.OLLAMA
# 私聊映射
private_mapping:
- wxid: filehelper
model: 2 # 对应ChatType.CHATGPT
- wxid: wxid_example12345
model: 8 # 对应ChatType.DEEPSEEK
news:
receivers: ["filehelper"] # 定时新闻接收人roomid 或者 wxid
report_reminder:
receivers: [] # 定时日报周报月报提醒roomid 或者 wxid
# 消息发送速率限制一分钟内最多发送6条消息
send_rate_limit: 6
weather: # -----天气提醒配置这行不填-----
city_code: 101010100 # 北京城市代码如若需要其他城市可参考base/main_city.json或者自寻城市代码填写
receivers: ["filehelper"] # 天气提醒接收人roomid 或者 wxid
chatgpt: # -----chatgpt配置这行不填-----
key: # 填写你 ChatGPT 的 key
api: https://api.openai.com/v1 # 如果你不知道这是干嘛的,就不要改
model: gpt-3.5-turbo # 可选gpt-3.5-turbo、gpt-4、gpt-4-turbo、gpt-4.1-mini、o4-mini
proxy: # 如果你在国内你可能需要魔法大概长这样http://域名或者IP地址:端口号
prompt: 你是智能聊天机器人,你叫 wcferry # 根据需要对角色进行设定
chatglm: # -----chatglm配置这行不填-----
key: # 这个应该不用动
api: http://localhost:8000/v1 # 根据自己的chatglm地址修改
proxy: # 如果你在国内你可能需要魔法大概长这样http://域名或者IP地址:端口号
prompt: 你是智能聊天机器人,你叫小薇 # 根据需要对角色进行设定
file_path: F:/Pictures/temp #设定生成图片和代码使用的文件夹路径
ollama: # -----ollama配置这行不填-----
enable: true # 是否启用 ollama
model: deepseek-r1:1.5b # ollama-7b-sft
prompt: 你是智能聊天机器人,你叫 梅好事 # 根据需要对角色进行设定
file_path: d:/pictures/temp #设定生成图片和代码使用的文件夹路径
tigerbot: # -----tigerbot配置这行不填-----
key: # key
model: # tigerbot-7b-sft
xinghuo_web: # -----讯飞星火web模式api配置这行不填 抓取方式详见文档https://www.bilibili.com/read/cv27066577-----
cookie: # cookie
fd: # fd
GtToken: # GtToken
prompt: 你是智能聊天机器人,你叫 wcferry。请用这个角色回答我的问题 # 根据需要对角色进行设定
bard: # -----bard配置这行不填-----
api_key: # api-key 创建地址https://ai.google.dev/pricing?hl=en创建后复制过来即可
model_name: gemini-pro # 新模型上线后可以选择模型
proxy: http://127.0.0.1:7890 # 如果你在国内你可能需要魔法大概长这样http://域名或者IP地址:端口号
# 提示词尽可能用英文bard对中文提示词的效果不是很理想下方提示词为英语老师的示例请按实际需要修改,默认设置的提示词为谷歌创造的AI大语言模型
# I want you to act as a spoken English teacher and improver. I will speak to you in English and you will reply to me in English to practice my spoken English. I want you to keep your reply neat, limiting the reply to 100 words. I want you to strictly correct my grammar mistakes, typos, and factual errors. I want you to ask me a question in your reply. Now let's start practicing, you could ask me a question first. Remember, I want you to strictly correct my grammar mistakes, typos, and factual errors.
prompt: You are a large language model, trained by Google.
zhipu: # -----zhipu配置这行不填-----
api_key: #api key
model: # 模型类型
deepseek: # -----deepseek配置这行不填-----
#思维链相关功能默认关闭开启后会增加响应时间和消耗更多的token
key: # 填写你的 DeepSeek API Key API Key的格式为sk-xxxxxxxxxxxxxxx
api: https://api.deepseek.com # DeepSeek API 地址
model: deepseek-chat # 可选: deepseek-chat (DeepSeek-V3), deepseek-reasoner (DeepSeek-R1)
prompt: 你是智能聊天机器人,你叫 DeepSeek 助手 # 根据需要对角色进行设定
enable_reasoning: false # 是否启用思维链功能,仅在使用 deepseek-reasoner 模型时有效
show_reasoning: false # 是否在回复中显示思维过程,仅在启用思维链功能时有效
cogview: # -----智谱AI图像生成配置这行不填-----
# 此API请参考 https://www.bigmodel.cn/dev/api/image-model/cogview
enable: False # 是否启用图像生成功能默认关闭将False替换为true则开启此模型可和其他模型同时运行。
api_key: # 智谱API密钥请填入您的API Key
model: cogview-4-250304 # 模型编码可选cogview-4-250304、cogview-4、cogview-3-flash
quality: standard # 生成质量可选standard快速、hd高清
size: 1024x1024 # 图片尺寸,可自定义,需符合条件
trigger_keyword: 牛智谱 # 触发图像生成的关键词
temp_dir: # 临时文件存储目录留空则默认使用项目目录下的zhipuimg文件夹如果要更改例如 D:/Pictures/temp 或 /home/user/temp
fallback_to_chat: true # 当未启用绘画功能时true=将请求发给聊天模型处理false=回复固定的未启用提示信息
aliyun_image: # -----如果要使用阿里云文生图,取消下面的注释并填写相关内容,模型到阿里云百炼找通义万相-文生图2.1-Turbo-----
enable: true # 是否启用阿里文生图功能false为关闭默认开启如果未配置则会将消息发送给聊天大模型
api_key: sk-xxxxxxxxxxxxxxxxxxxxxxxx # 替换为你的DashScope API密钥
model: wanx2.1-t2i-turbo # 模型名称默认使用wanx2.1-t2i-turbo(快),wanx2.1-t2i-plus,wanx-v1会给用户不同的提示
size: 1024*1024 # 图像尺寸,格式为宽*高
n: 1 # 生成图像的数量
temp_dir: ./temp # 临时文件存储路径
trigger_keyword: 牛阿里 # 触发词,默认为"牛阿里"
fallback_to_chat: true # 当服务不可用时是否转发给聊天模型处理
gemini_image: # -----谷歌AI画图配置这行不填-----
enable: true # 是否启用谷歌AI画图功能
api_key: # 谷歌Gemini API密钥必填
model: gemini-2.0-flash-exp-image-generation # 模型名称,建议保持默认,只有这一个模型可以进行绘画
temp_dir: ./geminiimg # 图片保存目录,可选
trigger_keyword: 牛谷歌 # 触发词,默认为"牛谷歌"
fallback_to_chat: false # 未启用时是否回退到聊天模式
proxy: http://127.0.0.1:7890 # 使用Clash代理格式为http://域名或者IP地址:端口号
perplexity: # -----perplexity配置这行不填-----
key: # 填写你的Perplexity API Key
api: https://api.perplexity.ai # API地址
proxy: # 如果你在国内你可能需要魔法大概长这样http://域名或者IP地址:端口号
model: mixtral-8x7b-instruct # 可选模型包括sonar-small-chat, sonar-medium-chat, sonar-pro, mixtral-8x7b-instruct等
prompt: 你是Perplexity AI助手请用专业、准确、有帮助的方式回答问题 # 角色设定
trigger_keyword: ask # 触发Perplexity服务的前置词
allow_all: false # 是否允许所有群聊和用户使用Perplexity设为true时忽略下面的白名单配置
allowed_groups: [] # 允许使用Perplexity的群聊ID列表例如["123456789@chatroom", "123456789@chatroom"]
allowed_users: [] # 允许使用Perplexity的用户ID列表例如["wxid_123456789", "filehelper"]
goblin_gift: # -----古灵阁妖精的馈赠配置这行不填-----
enable: false # 是否全局启用古灵阁妖精的馈赠功能,默认关闭
probability: 0.01 # 触发概率默认为1%
min_points: 10 # 最小奖励积分
max_points: 100 # 最大奖励积分
allowed_groups: [] # 允许使用馈赠功能的群聊ID列表例如["123456789@chatroom", "123456789@chatroom"],留空表示不启用

51
configuration.py Normal file
View File

@@ -0,0 +1,51 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import logging.config
import os
import shutil
from typing import Dict, List
import yaml
class Config(object):
def __init__(self) -> None:
self.reload()
def _load_config(self) -> dict:
pwd = os.path.dirname(os.path.abspath(__file__))
try:
with open(f"{pwd}/config.yaml", "rb") as fp:
yconfig = yaml.safe_load(fp)
except FileNotFoundError:
shutil.copyfile(f"{pwd}/config.yaml.template", f"{pwd}/config.yaml")
with open(f"{pwd}/config.yaml", "rb") as fp:
yconfig = yaml.safe_load(fp)
return yconfig
def reload(self) -> None:
yconfig = self._load_config()
logging.config.dictConfig(yconfig["logging"])
self.CITY_CODE = yconfig["weather"]["city_code"]
self.WEATHER = yconfig["weather"]["receivers"]
self.GROUPS = yconfig["groups"]["enable"]
self.WELCOME_MSG = yconfig["groups"].get("welcome_msg", "欢迎 {new_member} 加入群聊!")
self.GROUP_MODELS = yconfig["groups"].get("models", {"default": 0, "mapping": []})
self.NEWS = yconfig["news"]["receivers"]
self.REPORT_REMINDERS = yconfig["report_reminder"]["receivers"]
self.CHATGPT = yconfig.get("chatgpt", {})
self.OLLAMA = yconfig.get("ollama", {})
self.TIGERBOT = yconfig.get("tigerbot", {})
self.XINGHUO_WEB = yconfig.get("xinghuo_web", {})
self.CHATGLM = yconfig.get("chatglm", {})
self.BardAssistant = yconfig.get("bard", {})
self.ZhiPu = yconfig.get("zhipu", {})
self.DEEPSEEK = yconfig.get("deepseek", {})
self.PERPLEXITY = yconfig.get("perplexity", {})
self.COGVIEW = yconfig.get("cogview", {})
self.ALIYUN_IMAGE = yconfig.get("aliyun_image", {})
self.GEMINI_IMAGE = yconfig.get("gemini_image", {})
self.SEND_RATE_LIMIT = yconfig.get("send_rate_limit", 0)

29
constants.py Normal file
View File

@@ -0,0 +1,29 @@
from enum import IntEnum, unique
@unique
class ChatType(IntEnum):
# UnKnown = 0 # 未知, 即未设置
TIGER_BOT = 1 # TigerBot
CHATGPT = 2 # ChatGPT
XINGHUO_WEB = 3 # 讯飞星火
CHATGLM = 4 # ChatGLM
BardAssistant = 5 # Google Bard
ZhiPu = 6 # ZhiPu
OLLAMA = 7 # Ollama
DEEPSEEK = 8 # DeepSeek
PERPLEXITY = 9 # Perplexity
@staticmethod
def is_in_chat_types(chat_type: int) -> bool:
if chat_type in [ChatType.TIGER_BOT.value, ChatType.CHATGPT.value,
ChatType.XINGHUO_WEB.value, ChatType.CHATGLM.value,
ChatType.BardAssistant.value, ChatType.ZhiPu.value,
ChatType.OLLAMA.value, ChatType.DEEPSEEK.value,
ChatType.PERPLEXITY.value]:
return True
return False
@staticmethod
def help_hint() -> str:
return str({member.value: member.name for member in ChatType}).replace('{', '').replace('}', '')

30903
function/chengyu.csv Normal file

File diff suppressed because it is too large Load Diff

89
function/func_chengyu.py Normal file
View File

@@ -0,0 +1,89 @@
# -*- coding: utf-8 -*-
import os
import random
import logging
import pandas as pd
# 获取模块级 logger
logger = logging.getLogger(__name__)
class Chengyu(object):
def __init__(self) -> None:
root = os.path.dirname(os.path.abspath(__file__))
self.df = pd.read_csv(f"{root}/chengyu.csv", delimiter="\t")
self.cys, self.zis, self.yins = self._build_data()
def _build_data(self):
df = self.df.copy()
df["shouzi"] = df["chengyu"].apply(lambda x: x[0])
df["mozi"] = df["chengyu"].apply(lambda x: x[-1])
df["shouyin"] = df["pingyin"].apply(lambda x: x.split(" ")[0])
df["moyin"] = df["pingyin"].apply(lambda x: x.split(" ")[-1])
cys = dict(zip(df["chengyu"], df["moyin"]))
zis = df.groupby("shouzi").agg({"chengyu": set})["chengyu"].to_dict()
yins = df.groupby("shouyin").agg({"chengyu": set})["chengyu"].to_dict()
return cys, zis, yins
def isChengyu(self, cy: str) -> bool:
return self.cys.get(cy, None) is not None
def getNext(self, cy: str, tongyin: bool = True) -> str:
"""获取下一个成语
cy: 当前成语
tongyin: 是否允许同音字
"""
zi = cy[-1]
ansers = list(self.zis.get(zi, {}))
try:
ansers.remove(cy) # 移除当前成语
except Exception as e:
pass # Just ignore...
if ansers:
return random.choice(ansers)
# 如果找不到同字,允许同音
if tongyin:
yin = self.cys.get(cy)
ansers = list(self.yins.get(yin, {}))
try:
ansers.remove(cy) # 移除当前成语
except Exception as e:
pass # Just ignore...
if ansers:
return random.choice(ansers)
return None
def getMeaning(self, cy: str) -> str:
ress = self.df[self.df["chengyu"] == cy].to_dict(orient="records")
if ress:
res = ress[0]
rsp = res["chengyu"] + "\n" + res["pingyin"] + "\n" + res["jieshi"]
if res["chuchu"] and res["chuchu"] != "":
rsp += "\n出处:" + res["chuchu"]
if res["lizi"] and res["lizi"] != "":
rsp += "\n例子:" + res["lizi"]
return rsp
return None
cy = Chengyu()
if __name__ == "__main__":
# 设置测试用的日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s'
)
answer = cy.getNext("便宜行事")
logger.info(answer)

1546
function/func_duel.py Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,102 @@
import random
from typing import TYPE_CHECKING, Callable, Any
from wcferry import WxMsg
from function.func_duel import DuelRankSystem
if TYPE_CHECKING:
from logging import Logger
from wcferry import Wcf
from typing import Dict
class GoblinGiftManager:
"""管理古灵阁妖精的馈赠事件"""
def __init__(self, config: Any, wcf: 'Wcf', log: 'Logger', send_text_msg: Callable):
"""初始化馈赠管理器
Args:
config: 配置对象包含GOBLIN_GIFT配置项
wcf: WCF实例用于获取群聊昵称等信息
log: 日志记录器
send_text_msg: 发送文本消息的函数
"""
self.config = config
self.wcf = wcf
self.LOG = log
self.sendTextMsg = send_text_msg
def try_trigger(self, msg: WxMsg) -> None:
"""尝试触发古灵阁妖精的馈赠事件
Args:
msg: 微信消息对象
"""
# 检查配置是否存在
if not hasattr(self.config, 'GOBLIN_GIFT'):
return
# 检查全局开关
if not self.config.GOBLIN_GIFT.get('enable', False):
return
# 检查群聊白名单
allowed_groups = self.config.GOBLIN_GIFT.get('allowed_groups', [])
if not allowed_groups or msg.roomid not in allowed_groups:
return
# 只在群聊中才触发
if not msg.from_group():
return
# 获取触发概率默认1%
probability = self.config.GOBLIN_GIFT.get('probability', 0.01)
# 按概率触发
if random.random() < probability:
try:
# 获取玩家昵称
player_name = self.wcf.get_alias_in_chatroom(msg.sender, msg.roomid)
if not player_name:
player_name = msg.sender # 如果获取不到昵称用wxid代替
# 初始化对应群聊的积分系统
rank_system = DuelRankSystem(group_id=msg.roomid)
# 获取配置的积分范围默认10-100
min_points = self.config.GOBLIN_GIFT.get('min_points', 10)
max_points = self.config.GOBLIN_GIFT.get('max_points', 100)
# 随机增加积分
points_added = random.randint(min_points, max_points)
# 更新玩家数据
player_data = rank_system.get_player_data(player_name)
player_data['score'] += points_added
# 保存数据
rank_system._save_ranks()
# 准备随机馈赠消息
gift_sources = [
f"✨ 一只迷路的家养小精灵往 {player_name} 口袋里塞了什么东西!",
f"💰 古灵阁的妖精似乎格外青睐 {player_name},留下了一袋金加隆(折合积分)!",
f"🦉 一只送信的猫头鹰丢错了包裹,{player_name} 意外发现了一笔“意外之财”!",
f"🍀 {player_name} 踩到了一株幸运四叶草,好运带来了额外的积分!",
f"🍄 在禁林的边缘,{player_name} 发现了一簇闪闪发光的魔法蘑菇!",
f"{player_name} 捡到了一个有求必应屋掉出来的神秘物品!",
f"🔮 временами удача улыбается {player_name}!", # 偶尔来点不一样的语言增加神秘感
f"🎉 费尔奇打瞌睡时掉了一小袋没收来的积分,刚好被 {player_name} 捡到!",
f"📜 一张古老的藏宝图碎片指引 {player_name} 找到了一些失落的积分!",
f"🧙‍♂️ 邓布利多教授对 {player_name} 的行为表示赞赏,特批“为学院加分”!",
f"🧪 {player_name} 的魔药课作业获得了斯拉格霍恩教授的额外加分!",
f"🌟 一颗流星划过霍格沃茨上空,{player_name} 许下的愿望成真了!"
]
gift_message = random.choice(gift_sources)
final_message = f"{gift_message}\n获得积分: +{points_added} 分!"
# 发送馈赠通知 (@发送者)
self.sendTextMsg(final_message, msg.roomid, msg.sender)
self.LOG.info(f"古灵阁馈赠触发: 群 {msg.roomid}, 用户 {player_name}, 获得 {points_added} 积分")
except Exception as e:
self.LOG.error(f"触发古灵阁馈赠时出错: {e}")

164
function/func_insult.py Normal file
View File

@@ -0,0 +1,164 @@
import random
import re
from wcferry import Wcf
from typing import Callable, Optional
class InsultGenerator:
"""
生成贴吧风格的骂人话术
"""
# 贴吧风格骂人话术模板
INSULT_TEMPLATES = [
"{target},你这想法属实有点抽象,建议回炉重造。",
"不是吧,{target},这都能说出来?大脑是用来思考的,不是用来长个儿的。",
"乐,{target} 你成功逗笑了我,就像看猴戏一样。",
"我说 {target} 啊,网上吵架没赢过,现实打架没输过是吧?",
"{target},听君一席话,浪费十分钟。",
"给你个梯子,{target},下个台阶吧,别搁这丢人现眼了。",
"就这?{target},就这?我还以为多大事呢。",
"{target},你是不是网线直连马桶的?味儿有点冲。",
"讲道理,{target},你这发言水平,在贴吧都活不过三楼。",
"{target},建议你去买两斤猪脑子煲汤喝,补补智商。",
"说真的,{target},你这智商要是放在好声音能把那四把椅子都转回来。",
"{target},放着好端端的智商不用,非得秀下限是吧?",
"我看你是典型的脑子搭错弦,{target},说话一套一套的。",
"{target},别整天搁这儿水经验了,你这水平也就适合到幼儿园门口卖糖水。",
"你这句话水平跟你智商一样,{target},都在地平线以下。",
"就你这个水平,{target},看王者荣耀的视频都能让你买错装备。",
"{target},整天叫唤啥呢?我没看《西游记》的时候真不知道猴子能说人话。",
"我听懂了,{target},你说的都对,可是能不能先把脑子装回去再说话?",
"{target}鼓个掌,成功把我逗乐了,这么多年的乐子人,今天是栽你手里了。",
"{target},我看你是孔子放屁——闻(文)所未闻(闻)啊。",
"收敛点吧,{target},你这智商余额明显不足了。",
"{target},你要是没话说可以咬个打火机,大家爱看那个。",
"{target},知道你急,但你先别急,喝口水慢慢说。",
"{target},你这发言跟你长相一样,突出一个随心所欲。",
"不是,{target},你这脑回路是盘山公路吗?九曲十八弯啊?",
"{target},太平洋没加盖,觉得委屈可以跳下去。",
"搁这儿装啥大尾巴狼呢 {target}?尾巴都快摇断了吧?",
"{target},我看你不是脑子进水,是脑子被驴踢了吧?",
"给你脸了是吧 {target}?真以为自己是个人物了?",
"{target},少在这里狺狺狂吠,影响市容。",
"你这智商,{target},二维码扫出来都得是付款码。",
"乐死我了,{target},哪来的自信在这里指点江山?",
"{target},回去多读两年书吧,省得出来丢人现眼。",
"赶紧爬吧 {target},别在这污染空气了。",
"我看你是没挨过打,{target},这么嚣张。",
"给你个键盘,{target},你能敲出一部《圣经》来是吧?",
"脑子是个好东西,{target},希望你也有一个。",
"{target},少在这里秀你的智商下限。",
"就这?{target}?我还以为多牛逼呢,原来是个憨批。",
"{target},你这理解能力,怕不是胎教没做好。",
"{target},我看你像个小丑,上蹿下跳的。",
"你这逻辑,{target},体育老师教的吧?",
"你这发言,{target},堪称当代迷惑行为大赏。",
"{target},你这狗叫声能不能小点?",
"你是猴子请来的救兵吗?{target}",
"{target},你这脑容量,怕是连条草履虫都不如。",
"给你个杆子你就往上爬是吧?{target}",
"{target},你这嘴跟开了光似的,叭叭个没完。",
"省省吧 {target},你的智商税已经交得够多了。",
"{target},你这发言如同老太太的裹脚布,又臭又长。",
"{target},我看你是真的皮痒了。",
"少在这里妖言惑众,{target},滚回你的老鼠洞去。",
"{target},你就像个苍蝇一样,嗡嗡嗡烦死人。"
]
@staticmethod
def generate_insult(target_name: str) -> str:
"""
随机生成一句针对目标用户的骂人话术(贴吧风格)
Args:
target_name (str): 被骂的人的名字
Returns:
str: 生成的骂人语句
"""
if not target_name or target_name.strip() == "":
target_name = "那个谁" # 兜底,防止名字为空
template = random.choice(InsultGenerator.INSULT_TEMPLATES)
return template.format(target=target_name)
def generate_random_insult(target_name: str) -> str:
"""
随机生成一句针对目标用户的骂人话术(贴吧风格)
函数封装,方便直接调用
Args:
target_name (str): 被骂的人的名字
Returns:
str: 生成的骂人语句
"""
return InsultGenerator.generate_insult(target_name)
def handle_insult_request(
wcf: Wcf,
logger,
bot_wxid: str,
send_text_func: Callable[[str, str, Optional[str]], None],
trigger_goblin_gift_func: Callable[[object], None],
msg,
target_mention_name: str
) -> bool:
"""
处理群聊中的"骂一下"请求。
Args:
wcf: Wcf 实例。
logger: 日志记录器。
bot_wxid: 机器人自身的 wxid。
send_text_func: 发送文本消息的函数 (content, receiver, at_list=None)。
trigger_goblin_gift_func: 触发哥布林馈赠的函数。
msg: 原始消息对象 (需要 .roomid 属性)。
target_mention_name: 从消息中提取的被@用户的名称。
Returns:
bool: 如果处理了该请求(无论成功失败),返回 True否则返回 False。
"""
logger.info(f"群聊 {msg.roomid} 中处理骂人指令,提及目标:{target_mention_name}")
actual_target_name = target_mention_name
target_wxid = None
try:
room_members = wcf.get_chatroom_members(msg.roomid)
found = False
for wxid, name in room_members.items():
if target_mention_name == name:
target_wxid = wxid
actual_target_name = name
found = True
break
if not found:
for wxid, name in room_members.items():
if target_mention_name in name and wxid != bot_wxid:
target_wxid = wxid
actual_target_name = name
logger.info(f"部分匹配到用户: {name} ({wxid})")
break
except Exception as e:
logger.error(f"查找群成员信息时出错: {e}")
if target_wxid and target_wxid == bot_wxid:
send_text_func("😅 不行,我不能骂我自己。", msg.roomid)
return True
try:
insult_text = generate_random_insult(actual_target_name)
send_text_func(insult_text, msg.roomid)
logger.info(f"已发送骂人消息至群 {msg.roomid},目标: {actual_target_name}")
if trigger_goblin_gift_func:
trigger_goblin_gift_func(msg)
except Exception as e:
logger.error(f"生成或发送骂人消息时出错: {e}")
send_text_func("呃,我想骂但出错了...", msg.roomid)
return True

76
function/func_news.py Normal file
View File

@@ -0,0 +1,76 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import re
import logging
import time
from datetime import datetime
import requests
from lxml import etree
class News(object):
def __init__(self) -> None:
self.LOG = logging.getLogger(__name__)
self.week = {0: "周一", 1: "周二", 2: "周三", 3: "周四", 4: "周五", 5: "周六", 6: "周日"}
self.headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0"}
def get_important_news(self):
"""
获取重要新闻。
返回一个元组 (is_today, news_content)。
is_today: 布尔值True表示是当天新闻False表示是旧闻或获取失败。
news_content: 格式化后的新闻字符串,或在失败时为空字符串。
"""
url = "https://www.cls.cn/api/sw?app=CailianpressWeb&os=web&sv=7.7.5"
data = {"type": "telegram", "keyword": "你需要知道的隔夜全球要闻", "page": 0,
"rn": 1, "os": "web", "sv": "7.7.5", "app": "CailianpressWeb"}
try:
rsp = requests.post(url=url, headers=self.headers, data=data)
data = json.loads(rsp.text)["data"]["telegram"]["data"][0]
news = data["descr"]
timestamp = data["time"]
ts = time.localtime(timestamp)
weekday_news = datetime(*ts[:6]).weekday()
# 格式化新闻内容
fmt_time = time.strftime("%Y年%m月%d", ts)
news = re.sub(r"(\d{1,2}、)", r"\n\1", news)
fmt_news = "".join(etree.HTML(news).xpath(" // text()"))
fmt_news = re.sub(r"周[一|二|三|四|五|六|日]你需要知道的", r"", fmt_news)
formatted_news = f"{fmt_time} {self.week[weekday_news]}\n{fmt_news}"
# 检查是否是当天新闻
weekday_now = datetime.now().weekday()
date_news_str = time.strftime("%Y%m%d", ts)
date_now_str = time.strftime("%Y%m%d", time.localtime())
# 使用日期字符串比较,而不是仅比较星期
is_today = (date_news_str == date_now_str)
if is_today:
return (True, formatted_news) # 当天新闻
else:
self.LOG.info(f"获取到的是旧闻 (发布于 {fmt_time})")
return (False, formatted_news) # 旧闻
except Exception as e:
self.LOG.error(e)
return (False, "") # 获取失败
if __name__ == "__main__":
# 设置测试用的日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s'
)
logger = logging.getLogger(__name__)
news = News()
is_today, content = news.get_important_news()
logger.info(f"Is Today: {is_today}")
logger.info(content)

398
function/func_reminder.py Normal file
View File

@@ -0,0 +1,398 @@
# -*- coding: utf-8 -*-
import sqlite3
import uuid
import time
import schedule
from datetime import datetime, timedelta
import logging
import threading
from typing import Optional, Dict, Tuple # 添加类型提示导入
# 获取 Logger 实例
logger = logging.getLogger("ReminderManager")
class ReminderManager:
# 使用线程锁确保数据库操作的线程安全
_db_lock = threading.Lock()
def __init__(self, robot, db_path: str, check_interval_minutes=1):
"""
初始化 ReminderManager。
:param robot: Robot 实例,用于发送消息。
:param db_path: SQLite 数据库文件路径。
:param check_interval_minutes: 检查提醒任务的频率(分钟)。
"""
self.robot = robot
self.db_path = db_path
self._create_table() # 初始化时确保表存在
# 注册周期性检查任务
schedule.every(check_interval_minutes).minutes.do(self.check_and_trigger_reminders)
logger.info(f"提醒管理器已初始化,连接到数据库 '{db_path}',每 {check_interval_minutes} 分钟检查一次。")
def _get_db_conn(self) -> sqlite3.Connection:
"""获取数据库连接"""
try:
# connect_timeout 增加等待时间check_same_thread=False 允许其他线程使用 (配合锁)
conn = sqlite3.connect(self.db_path, timeout=10, check_same_thread=False)
conn.row_factory = sqlite3.Row # 让查询结果可以像字典一样访问列
return conn
except sqlite3.Error as e:
logger.error(f"无法连接到 SQLite 数据库 '{self.db_path}': {e}", exc_info=True)
raise # 连接失败是严重问题,直接抛出异常
def _create_table(self):
"""创建 reminders 表(如果不存在)"""
sql = """
CREATE TABLE IF NOT EXISTS reminders (
id TEXT PRIMARY KEY,
wxid TEXT NOT NULL,
type TEXT NOT NULL CHECK(type IN ('once', 'daily', 'weekly')),
time_str TEXT NOT NULL,
content TEXT NOT NULL,
created_at TEXT NOT NULL,
last_triggered_at TEXT,
weekday INTEGER,
roomid TEXT
);
"""
# 创建索引的 SQL
index_sql_wxid = "CREATE INDEX IF NOT EXISTS idx_reminders_wxid ON reminders (wxid);"
index_sql_type = "CREATE INDEX IF NOT EXISTS idx_reminders_type ON reminders (type);"
index_sql_roomid = "CREATE INDEX IF NOT EXISTS idx_reminders_roomid ON reminders (roomid);"
try:
with self._db_lock: # 加锁保护数据库连接和操作
with self._get_db_conn() as conn:
cursor = conn.cursor()
# 1. 先确保表存在
cursor.execute(sql)
# 2. 尝试添加新列(如果表已存在且没有该列)
try:
# 检查列是否存在
cursor.execute("PRAGMA table_info(reminders);")
columns = [col['name'] for col in cursor.fetchall()]
# 添加 weekday 列(如果不存在)
if 'weekday' not in columns:
cursor.execute("ALTER TABLE reminders ADD COLUMN weekday INTEGER;")
logger.info("成功添加 'weekday' 列到 'reminders' 表。")
# 添加 roomid 列(如果不存在)
if 'roomid' not in columns:
cursor.execute("ALTER TABLE reminders ADD COLUMN roomid TEXT;")
logger.info("成功添加 'roomid' 列到 'reminders' 表。")
except sqlite3.OperationalError as e:
# 如果列已存在,会报错误,可以忽略
logger.warning(f"尝试添加列时发生错误: {e}")
# 3. 创建索引
cursor.execute(index_sql_wxid)
cursor.execute(index_sql_type)
cursor.execute(index_sql_roomid)
conn.commit()
logger.info("数据库表 'reminders' 检查/创建 完成。")
except sqlite3.Error as e:
logger.error(f"创建/检查数据库表 'reminders' 失败: {e}", exc_info=True)
# --- 对外接口 ---
def add_reminder(self, wxid: str, data: dict, roomid: Optional[str] = None) -> Tuple[bool, str]:
"""
将解析后的提醒数据添加到数据库。
:param wxid: 用户的微信 ID。
:param data: 包含 type, time, content 的字典。
:param roomid: 群聊ID如果在群聊中设置提醒则不为空
:return: (是否成功, 提醒 ID 或 错误信息)
"""
reminder_id = str(uuid.uuid4())
created_at_iso = datetime.now().isoformat()
# 校验数据 (基本)
required_keys = {"type", "time", "content"}
if not required_keys.issubset(data.keys()):
return False, "AI 返回的 JSON 缺少必要字段 (type, time, content)"
if data["type"] not in ["once", "daily", "weekly"]:
return False, f"不支持的提醒类型: {data['type']}"
# 进一步校验时间格式 (根据类型)
weekday_val = None # 初始化 weekday
try:
if data["type"] == "once":
# 尝试解析,确保格式正确,并且是未来的时间
trigger_dt = datetime.strptime(data["time"], "%Y-%m-%d %H:%M")
if trigger_dt <= datetime.now():
return False, f"一次性提醒时间 ({data['time']}) 必须是未来的时间"
elif data["type"] == "daily":
datetime.strptime(data["time"], "%H:%M") # 只校验格式
elif data["type"] == "weekly":
datetime.strptime(data["time"], "%H:%M") # 校验时间格式
if "weekday" not in data or not isinstance(data["weekday"], int) or not (0 <= data["weekday"] <= 6):
return False, "每周提醒必须提供有效的 weekday 字段 (0-6)"
weekday_val = data["weekday"] # 获取 weekday 值
except ValueError as e:
return False, f"时间格式错误 ({data['time']}),需要 'YYYY-MM-DD HH:MM' (once) 或 'HH:MM' (daily/weekly): {e}"
# 准备插入数据库
sql = """
INSERT INTO reminders (id, wxid, type, time_str, content, created_at, last_triggered_at, weekday, roomid)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
"""
params = (
reminder_id,
wxid,
data["type"],
data["time"],
data["content"],
created_at_iso,
None, # last_triggered_at 初始为 NULL
weekday_val, # weekday 字段
roomid # 新增roomid 参数
)
try:
with self._db_lock: # 加锁
with self._get_db_conn() as conn:
cursor = conn.cursor()
cursor.execute(sql, params)
conn.commit()
# 记录日志时包含群聊信息
log_target = f"用户 {wxid}" + (f" 在群聊 {roomid}" if roomid else "")
logger.info(f"成功添加提醒 {reminder_id} for {log_target} 到数据库。")
return True, reminder_id
except sqlite3.IntegrityError as e: # 例如,如果 UUID 冲突 (极不可能)
logger.error(f"添加提醒失败 (数据冲突): {e}", exc_info=True)
return False, f"添加提醒失败 (数据冲突): {e}"
except sqlite3.Error as e:
logger.error(f"添加提醒到数据库失败: {e}", exc_info=True)
return False, f"数据库错误: {e}"
# --- 核心检查逻辑 ---
def check_and_trigger_reminders(self):
"""由 schedule 定期调用。检查数据库,触发到期的提醒。"""
now = datetime.now()
now_iso = now.isoformat()
current_weekday = now.weekday() # 获取今天是周几 (0-6)
current_hm = now.strftime("%H:%M") # 当前时分
reminders_to_delete = [] # 存储需要删除的 once 提醒 ID
reminders_to_update = [] # 存储需要更新 last_triggered_at 的 daily/weekly 提醒 ID
try:
with self._db_lock: # 加锁
with self._get_db_conn() as conn:
cursor = conn.cursor()
# 1. 查询到期的一次性提醒
sql_once = """
SELECT id, wxid, content, roomid FROM reminders
WHERE type = 'once' AND time_str <= ?
"""
cursor.execute(sql_once, (now.strftime("%Y-%m-%d %H:%M"),))
due_once_reminders = cursor.fetchall()
for reminder in due_once_reminders:
self._send_reminder(reminder["wxid"], reminder["content"], reminder["id"], reminder["roomid"])
reminders_to_delete.append(reminder["id"])
logger.info(f"一次性提醒 {reminder['id']} 已触发并标记删除。")
# 2. 查询到期的每日提醒
# a. 获取当前时间 HH:MM
# b. 查询所有 daily 提醒
sql_daily_all = "SELECT id, wxid, content, time_str, last_triggered_at, roomid FROM reminders WHERE type = 'daily'"
cursor.execute(sql_daily_all)
all_daily_reminders = cursor.fetchall()
for reminder in all_daily_reminders:
# 检查时间是否到达或超过 daily 设置的 HH:MM
if current_hm >= reminder["time_str"]:
last_triggered_dt = None
if reminder["last_triggered_at"]:
try:
last_triggered_dt = datetime.fromisoformat(reminder["last_triggered_at"])
except ValueError:
logger.warning(f"无法解析 daily 提醒 {reminder['id']} 的 last_triggered_at: {reminder['last_triggered_at']}")
# 计算今天应该触发的时间点 (用于比较)
trigger_hm_dt = datetime.strptime(reminder["time_str"], "%H:%M").time()
today_trigger_dt = now.replace(hour=trigger_hm_dt.hour, minute=trigger_hm_dt.minute, second=0, microsecond=0)
# 如果从未触发过,或者上次触发是在今天的触发时间点之前,则应该触发
if last_triggered_dt is None or last_triggered_dt < today_trigger_dt:
self._send_reminder(reminder["wxid"], reminder["content"], reminder["id"], reminder["roomid"])
reminders_to_update.append(reminder["id"])
logger.info(f"每日提醒 {reminder['id']} 已触发并标记更新触发时间。")
# 3. 查询并处理到期的 'weekly' 提醒
sql_weekly = """
SELECT id, wxid, content, time_str, last_triggered_at, roomid FROM reminders
WHERE type = 'weekly' AND weekday = ? AND time_str <= ?
"""
cursor.execute(sql_weekly, (current_weekday, current_hm))
due_weekly_reminders = cursor.fetchall()
for reminder in due_weekly_reminders:
last_triggered_dt = None
if reminder["last_triggered_at"]:
try:
last_triggered_dt = datetime.fromisoformat(reminder["last_triggered_at"])
except ValueError:
logger.warning(f"无法解析 weekly 提醒 {reminder['id']} 的 last_triggered_at")
# 计算今天应该触发的时间点 (用于比较)
trigger_hm_dt = datetime.strptime(reminder["time_str"], "%H:%M").time()
today_trigger_dt = now.replace(hour=trigger_hm_dt.hour, minute=trigger_hm_dt.minute, second=0, microsecond=0)
# 如果今天是设定的星期几,时间已到,且今天还未触发过
if last_triggered_dt is None or last_triggered_dt < today_trigger_dt:
self._send_reminder(reminder["wxid"], reminder["content"], reminder["id"], reminder["roomid"])
reminders_to_update.append(reminder["id"]) # 每周提醒也需要更新触发时间
logger.info(f"每周提醒 {reminder['id']} (周{current_weekday+1}) 已触发并标记更新触发时间。")
# 4. 在事务中执行删除和更新
if reminders_to_delete:
# 使用 executemany 提高效率
sql_delete = "DELETE FROM reminders WHERE id = ?"
cursor.executemany(sql_delete, [(rid,) for rid in reminders_to_delete])
logger.info(f"从数据库删除了 {len(reminders_to_delete)} 条一次性提醒。")
if reminders_to_update:
sql_update = "UPDATE reminders SET last_triggered_at = ? WHERE id = ?"
cursor.executemany(sql_update, [(now_iso, rid) for rid in reminders_to_update])
logger.info(f"更新了 {len(reminders_to_update)} 条提醒的最后触发时间。")
# 提交事务
if reminders_to_delete or reminders_to_update:
conn.commit()
except sqlite3.Error as e:
logger.error(f"检查并触发提醒时数据库出错: {e}", exc_info=True)
except Exception as e: # 捕获其他潜在错误
logger.error(f"检查并触发提醒时发生意外错误: {e}", exc_info=True)
def _send_reminder(self, wxid: str, content: str, reminder_id: str, roomid: Optional[str] = None):
"""
安全地发送提醒消息。
根据roomid是否存在决定发送方式
- 如果roomid存在则发送到群聊并@用户
- 如果roomid不存在则发送私聊消息
"""
try:
message = f"⏰ 提醒:{content}"
if roomid:
# 群聊提醒: 发送到群聊并@设置提醒的用户
self.robot.sendTextMsg(message, roomid, wxid)
logger.info(f"已尝试发送群聊提醒 {reminder_id} 到群 {roomid} @ 用户 {wxid}")
else:
# 私聊提醒: 直接发送给用户
self.robot.sendTextMsg(message, wxid)
logger.info(f"已尝试发送私聊提醒 {reminder_id} 给用户 {wxid}")
except Exception as e:
target = f"{roomid} @ 用户 {wxid}" if roomid else f"用户 {wxid}"
logger.error(f"发送提醒 {reminder_id}{target} 失败: {e}", exc_info=True)
# --- 查看和删除提醒功能 ---
def list_reminders(self, wxid: str) -> list:
"""列出用户的所有提醒(包括私聊和群聊中设置的),按类型和时间排序"""
reminders = []
try:
with self._db_lock:
with self._get_db_conn() as conn:
cursor = conn.cursor()
# 按类型(once->daily->weekly),再按时间排序
sql = """
SELECT id, type, time_str, content, created_at, last_triggered_at, weekday, roomid
FROM reminders
WHERE wxid = ?
ORDER BY
CASE type
WHEN 'once' THEN 1
WHEN 'daily' THEN 2
WHEN 'weekly' THEN 3
ELSE 4 END ASC,
time_str ASC
"""
cursor.execute(sql, (wxid,))
results = cursor.fetchall()
# 将 sqlite3.Row 对象转换为普通字典列表
reminders = [dict(row) for row in results]
logger.info(f"为用户 {wxid} 查询到 {len(reminders)} 条提醒。")
return reminders
except sqlite3.Error as e:
logger.error(f"为用户 {wxid} 列出提醒时数据库出错: {e}", exc_info=True)
return [] # 出错返回空列表
def delete_reminder(self, wxid: str, reminder_id: str) -> Tuple[bool, str]:
"""
删除用户的特定提醒。
用户可以删除自己的任何提醒,无论是在私聊还是群聊中设置的。
:return: (是否成功, 消息)
"""
try:
with self._db_lock:
with self._get_db_conn() as conn:
cursor = conn.cursor()
# 确保用户只能删除自己的提醒
sql_check = "SELECT COUNT(*), roomid FROM reminders WHERE id = ? AND wxid = ? GROUP BY roomid"
cursor.execute(sql_check, (reminder_id, wxid))
result = cursor.fetchone()
if not result or result[0] == 0:
logger.warning(f"用户 {wxid} 尝试删除不存在或不属于自己的提醒 {reminder_id}")
return False, f"未找到 ID 为 {reminder_id[:6]}... 的提醒,或该提醒不属于您。"
# 获取roomid用于日志记录
roomid = result[1] if len(result) > 1 else None
sql_delete = "DELETE FROM reminders WHERE id = ? AND wxid = ?"
cursor.execute(sql_delete, (reminder_id, wxid))
conn.commit()
# 在日志中记录位置信息
location_info = f"在群聊 {roomid}" if roomid else "在私聊"
logger.info(f"用户 {wxid} 成功删除了{location_info}设置的提醒 {reminder_id}")
return True, f"已成功删除提醒 (ID: {reminder_id[:6]}...)"
except sqlite3.Error as e:
logger.error(f"用户 {wxid} 删除提醒 {reminder_id} 时数据库出错: {e}", exc_info=True)
return False, f"删除提醒时发生数据库错误: {e}"
except Exception as e:
logger.error(f"用户 {wxid} 删除提醒 {reminder_id} 时发生意外错误: {e}", exc_info=True)
return False, f"删除提醒时发生未知错误: {e}"
def delete_all_reminders(self, wxid: str) -> Tuple[bool, str, int]:
"""
删除用户的所有提醒(包括群聊和私聊中设置的)。
:param wxid: 用户的微信ID
:return: (是否成功, 消息, 删除的提醒数量)
"""
try:
with self._db_lock:
with self._get_db_conn() as conn:
cursor = conn.cursor()
# 先查询用户有多少条提醒
count_sql = "SELECT COUNT(*) FROM reminders WHERE wxid = ?"
cursor.execute(count_sql, (wxid,))
count = cursor.fetchone()[0]
if count == 0:
return False, "您当前没有任何提醒。", 0
# 删除用户的所有提醒
delete_sql = "DELETE FROM reminders WHERE wxid = ?"
cursor.execute(delete_sql, (wxid,))
conn.commit()
logger.info(f"用户 {wxid} 删除了其所有 {count} 条提醒")
return True, f"已成功删除您的所有提醒(共 {count} 条)。", count
except sqlite3.Error as e:
logger.error(f"用户 {wxid} 删除所有提醒时数据库出错: {e}", exc_info=True)
return False, f"删除提醒时发生数据库错误: {e}", 0
except Exception as e:
logger.error(f"用户 {wxid} 删除所有提醒时发生意外错误: {e}", exc_info=True)
return False, f"删除提醒时发生未知错误: {e}", 0

View File

@@ -0,0 +1,62 @@
import calendar
import datetime
from chinese_calendar import is_workday
from robot import Robot
class ReportReminder:
@staticmethod
def remind(robot: Robot) -> None:
receivers = robot.config.REPORT_REMINDERS
if not receivers:
receivers = ["filehelper"]
# 日报周报月报提醒
for receiver in receivers:
today = datetime.datetime.now().date()
# 如果是非工作日
if not is_workday(today):
#robot.sendTextMsg("休息日快乐", receiver)
pass
# 如果是工作日
if is_workday(today):
robot.sendTextMsg("该发日报啦", receiver)
# 如果是本周最后一个工作日
if ReportReminder.last_work_day_of_week(today) == today:
robot.sendTextMsg("该发周报啦", receiver)
# 如果本日是本月最后一整周的最后一个工作日:
if ReportReminder.last_work_friday_of_month(today) == today:
robot.sendTextMsg("该发月报啦", receiver)
# 计算本月最后一个周的最后一个工作日
@staticmethod
def last_work_friday_of_month(d: datetime.date) -> datetime.date:
days_in_month = calendar.monthrange(d.year, d.month)[1]
weekday = calendar.weekday(d.year, d.month, days_in_month)
if weekday == 4:
last_friday_of_month = datetime.date(
d.year, d.month, days_in_month)
else:
if weekday >= 5:
last_friday_of_month = datetime.date(d.year, d.month, days_in_month) - \
datetime.timedelta(days=(weekday - 4))
else:
last_friday_of_month = datetime.date(d.year, d.month, days_in_month) - \
datetime.timedelta(days=(weekday + 3))
while not is_workday(last_friday_of_month):
last_friday_of_month = last_friday_of_month - datetime.timedelta(days=1)
return last_friday_of_month
# 计算本周最后一个工作日
@staticmethod
def last_work_day_of_week(d: datetime.date) -> datetime.date:
weekday = calendar.weekday(d.year, d.month, d.day)
last_work_day_of_week = datetime.date(
d.year, d.month, d.day) + datetime.timedelta(days=(6 - weekday))
while not is_workday(last_work_day_of_week):
last_work_day_of_week = last_work_day_of_week - \
datetime.timedelta(days=1)
return last_work_day_of_week

462
function/func_summary.py Normal file
View File

@@ -0,0 +1,462 @@
# -*- coding: utf-8 -*-
import logging
import time
import re
from collections import deque
# from threading import Lock # 不再需要锁使用SQLite的事务机制
import sqlite3 # 添加sqlite3模块
import os # 用于处理文件路径
from function.func_xml_process import XmlProcessor # 导入XmlProcessor
class MessageSummary:
"""消息总结功能类 (使用SQLite持久化)
用于记录、管理和生成聊天历史消息的总结
"""
def __init__(self, max_history=300, db_path="data/message_history.db"):
"""初始化消息总结功能
Args:
max_history: 每个聊天保存的最大消息数量
db_path: SQLite数据库文件路径
"""
self.LOG = logging.getLogger("MessageSummary")
self.max_history = max_history
self.db_path = db_path
# 实例化XML处理器用于提取引用消息
self.xml_processor = XmlProcessor(self.LOG)
# 移除旧的内存存储相关代码
# self._msg_history = {} # 使用字典以群ID或用户ID为键
# self._msg_history_lock = Lock() # 添加锁以保证线程安全
try:
# 确保数据库文件所在的目录存在
db_dir = os.path.dirname(self.db_path)
if db_dir and not os.path.exists(db_dir):
os.makedirs(db_dir)
self.LOG.info(f"创建数据库目录: {db_dir}")
# 连接到数据库 (如果文件不存在会自动创建)
# check_same_thread=False 允许在不同线程中使用此连接
# 这在多线程机器人应用中是必要的,但要注意事务管理
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.cursor = self.conn.cursor()
self.LOG.info(f"已连接到 SQLite 数据库: {self.db_path}")
# 创建消息表 (如果不存在)
# 使用 INTEGER PRIMARY KEY AUTOINCREMENT 作为 rowid 的别名,方便管理
# timestamp_float 用于排序和限制数量
# timestamp_str 用于显示
self.cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chat_id TEXT NOT NULL,
sender TEXT NOT NULL,
content TEXT NOT NULL,
timestamp_float REAL NOT NULL,
timestamp_str TEXT NOT NULL
)
""")
# 为 chat_id 和 timestamp_float 创建索引,提高查询效率
self.cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_chat_time ON messages (chat_id, timestamp_float)
""")
self.conn.commit() # 提交更改
self.LOG.info("消息表已准备就绪")
except sqlite3.Error as e:
self.LOG.error(f"数据库初始化失败: {e}")
# 如果数据库连接失败,抛出异常或进行其他错误处理
raise ConnectionError(f"无法连接或初始化数据库: {e}") from e
except OSError as e:
self.LOG.error(f"创建数据库目录失败: {e}")
raise OSError(f"无法创建数据库目录: {e}") from e
def close_db(self):
"""关闭数据库连接"""
if hasattr(self, 'conn') and self.conn:
try:
self.conn.commit() # 确保所有更改都已保存
self.conn.close()
self.LOG.info("数据库连接已关闭")
except sqlite3.Error as e:
self.LOG.error(f"关闭数据库连接时出错: {e}")
def record_message(self, chat_id, sender_name, content, timestamp=None):
"""记录单条消息到数据库
Args:
chat_id: 聊天ID群ID或用户ID
sender_name: 发送者名称
content: 消息内容
timestamp: 时间戳,默认为当前时间
"""
try:
# 生成浮点数时间戳用于排序
current_time_float = time.time()
# 生成或使用传入的时间字符串
if not timestamp:
timestamp_str = time.strftime("%H:%M", time.localtime(current_time_float))
else:
timestamp_str = timestamp
# 插入新消息
self.cursor.execute("""
INSERT INTO messages (chat_id, sender, content, timestamp_float, timestamp_str)
VALUES (?, ?, ?, ?, ?)
""", (chat_id, sender_name, content, current_time_float, timestamp_str))
# 删除超出 max_history 的旧消息
# 使用子查询找到要保留的最新 N 条消息的 id然后删除不在这个列表中的该 chat_id 的其他消息
self.cursor.execute("""
DELETE FROM messages
WHERE chat_id = ? AND id NOT IN (
SELECT id
FROM messages
WHERE chat_id = ?
ORDER BY timestamp_float DESC
LIMIT ?
)
""", (chat_id, chat_id, self.max_history))
self.conn.commit() # 提交事务
except sqlite3.Error as e:
self.LOG.error(f"记录消息到数据库时出错: {e}")
# 可以考虑回滚事务
try:
self.conn.rollback()
except:
pass
def clear_message_history(self, chat_id):
"""清除指定聊天的消息历史记录
Args:
chat_id: 聊天ID群ID或用户ID
Returns:
bool: 是否成功清除
"""
try:
# 删除指定chat_id的所有消息
self.cursor.execute("DELETE FROM messages WHERE chat_id = ?", (chat_id,))
rows_deleted = self.cursor.rowcount # 获取删除的行数
self.conn.commit()
self.LOG.info(f"为 chat_id={chat_id} 清除了 {rows_deleted} 条历史消息")
return True # 删除0条也视为成功完成操作
except sqlite3.Error as e:
self.LOG.error(f"清除消息历史时出错 (chat_id={chat_id}): {e}")
return False
def get_message_count(self, chat_id):
"""获取指定聊天的消息数量
Args:
chat_id: 聊天ID群ID或用户ID
Returns:
int: 消息数量
"""
try:
# 使用COUNT查询获取消息数量
self.cursor.execute("SELECT COUNT(*) FROM messages WHERE chat_id = ?", (chat_id,))
result = self.cursor.fetchone() # fetchone() 返回一个元组,例如 (5,)
return result[0] if result else 0
except sqlite3.Error as e:
self.LOG.error(f"获取消息数量时出错 (chat_id={chat_id}): {e}")
return 0
def get_messages(self, chat_id):
"""获取指定聊天的所有消息 (按时间升序)
Args:
chat_id: 聊天ID群ID或用户ID
Returns:
list: 消息列表,格式为 [{"sender": ..., "content": ..., "time": ...}]
"""
messages = []
try:
# 查询需要的字段,按浮点时间戳升序排序,限制数量
self.cursor.execute("""
SELECT sender, content, timestamp_str
FROM messages
WHERE chat_id = ?
ORDER BY timestamp_float ASC
LIMIT ?
""", (chat_id, self.max_history))
rows = self.cursor.fetchall() # fetchall() 返回包含元组的列表
# 将数据库行转换为期望的字典列表格式
for row in rows:
messages.append({
"sender": row[0],
"content": row[1],
"time": row[2] # 使用存储的 timestamp_str
})
except sqlite3.Error as e:
self.LOG.error(f"获取消息列表时出错 (chat_id={chat_id}): {e}")
# 出错时返回空列表,保持与原逻辑一致
return messages
def _basic_summarize(self, messages):
"""基本的消息总结逻辑不使用AI
Args:
messages: 消息列表
Returns:
str: 消息总结
"""
if not messages:
return "没有可以总结的历史消息。"
# 构建总结
res = ["以下是近期聊天记录摘要:\n"]
for msg in messages:
res.append(f"[{msg['time']}]{msg['sender']}: {msg['content']}")
return "\n".join(res)
def _ai_summarize(self, messages, chat_model, chat_id):
"""使用AI模型生成消息总结
Args:
messages: 消息列表
chat_model: AI聊天模型对象
chat_id: 聊天ID
Returns:
str: 消息总结
"""
if not messages:
return "没有可以总结的历史消息。"
# 构建用于AI总结的消息格式
formatted_msgs = []
for msg in messages:
formatted_msgs.append(f"[{msg['time']}]{msg['sender']}: {msg['content']}")
# 构建提示词 - 更加客观、中立
prompt = (
"请仔细阅读并分析以下聊天记录,生成一简要的、结构清晰且抓住重点的摘要。\n\n"
"摘要格式要求:\n"
"1. 使用数字编号列表 (例如 1., 2., 3.) 来组织内容每个编号代表一个独立的主要讨论主题不要超过3个主题。\n"
"2. 在每个编号的主题下,写成一段不带格式的文字,每个主题单独成段并空行,需包含以下内容:\n"
" - 这个讨论的核心的简要描述。\n"
" - 该讨论的关键成员 (用括号 [用户名] 格式) 和他们的关键发言内容、成员之间的关键互动。\n"
" - 该讨论的讨论结果。\n"
"3. 总结需客观、精炼、简短精悍,直接呈现最核心且精简的事实,尽量不要添加额外的评论或分析。\n"
"4. 不要暴露出格式不要说核心是xxx参与者是xxx结果是xxx自然一点。\n\n"
"聊天记录如下:\n" + "\n".join(formatted_msgs)
)
# 使用AI模型生成总结 - 创建一个临时的聊天会话ID避免污染正常对话上下文
try:
# 对于支持新会话参数的模型,使用特殊标记告知这是独立的总结请求
if hasattr(chat_model, 'get_answer_with_context') and callable(getattr(chat_model, 'get_answer_with_context')):
# 使用带上下文参数的方法
summary = chat_model.get_answer_with_context(prompt, "summary_" + chat_id, clear_context=True)
else:
# 普通方法使用特殊会话ID
summary = chat_model.get_answer(prompt, "summary_" + chat_id)
if not summary:
return self._basic_summarize(messages)
return summary
except Exception as e:
self.LOG.error(f"使用AI生成总结失败: {e}")
return self._basic_summarize(messages)
def summarize_messages(self, chat_id, chat_model=None):
"""生成消息总结
Args:
chat_id: 聊天ID群ID或用户ID
chat_model: AI聊天模型对象如果为None则使用基础总结
Returns:
str: 消息总结
"""
messages = self.get_messages(chat_id)
if not messages:
return "没有可以总结的历史消息。"
# 根据是否提供了AI模型决定使用哪种总结方式
if chat_model:
return self._ai_summarize(messages, chat_model, chat_id)
else:
return self._basic_summarize(messages)
def process_message_from_wxmsg(self, msg, wcf, all_contacts, bot_wxid=None):
"""从微信消息对象中处理并记录与总结相关的文本消息
使用 XmlProcessor 提取用户实际输入的新内容或卡片标题。
Args:
msg: 微信消息对象(WxMsg)
wcf: 微信接口对象
all_contacts: 所有联系人字典
bot_wxid: 机器人自己的wxid用于检测@机器人的消息
"""
# 1. 基本筛选只记录群聊中的、非自己发送的文本消息或App消息
if not msg.from_group():
return
if msg.type != 0x01 and msg.type != 49: # 只记录文本消息和App消息(包括引用消息)
return
if msg.from_self():
return
chat_id = msg.roomid
# 2. 检查是否 @机器人 (如果提供了 bot_wxid)
original_content = msg.content # 获取原始content用于检测@和后续处理
if bot_wxid:
# 获取机器人在群里的昵称
bot_name_in_group = wcf.get_alias_in_chatroom(bot_wxid, chat_id)
if not bot_name_in_group:
# 如果获取不到群昵称,使用通讯录中的名称或默认名称
bot_name_in_group = all_contacts.get(bot_wxid, "泡泡") # 默认使用"泡泡"
# 检查消息中任意位置是否@机器人(含特殊空格\u2005
mention_pattern = f"@{bot_name_in_group}"
if mention_pattern in original_content:
# 消息提及了机器人,不记录
self.LOG.debug(f"跳过包含@机器人的消息: {original_content[:30]}...")
return
# 使用正则表达式匹配更复杂的情况(考虑特殊空格)
if re.search(rf"@{re.escape(bot_name_in_group)}(\u2005|\s|$)", original_content):
self.LOG.debug(f"通过正则跳过包含@机器人的消息: {original_content[:30]}...")
return
# 3. 使用 XmlProcessor 提取消息详情
try:
extracted_data = self.xml_processor.extract_quoted_message(msg)
except Exception as e:
self.LOG.error(f"使用XmlProcessor提取消息内容时出错 (msg.id={msg.id}): {e}")
return # 出错时,保守起见,不记录
# 4. 确定要记录的内容 (content_to_record)
content_to_record = ""
source_info = "未知来源"
# 优先使用提取到的新内容 (来自回复或普通文本或<title>)
if extracted_data.get("new_content", "").strip():
content_to_record = extracted_data["new_content"].strip()
source_info = "来自 new_content (回复/文本/标题)"
# 如果是引用类型消息,添加引用标记和引用内容的简略信息
if extracted_data.get("has_quote", False):
quoted_sender = extracted_data.get("quoted_sender", "")
quoted_content = extracted_data.get("quoted_content", "")
# 处理被引用内容
if quoted_content:
# 对较长的引用内容进行截断
max_quote_length = 30
if len(quoted_content) > max_quote_length:
quoted_content = quoted_content[:max_quote_length] + "..."
# 如果被引用的是卡片,则使用标准卡片格式
if extracted_data.get("quoted_is_card", False):
quoted_card_title = extracted_data.get("quoted_card_title", "")
quoted_card_type = extracted_data.get("quoted_card_type", "")
# 根据卡片类型确定内容类型
card_type = "卡片"
if "链接" in quoted_card_type or "消息" in quoted_card_type:
card_type = "链接"
elif "视频" in quoted_card_type or "音乐" in quoted_card_type:
card_type = "媒体"
elif "位置" in quoted_card_type:
card_type = "位置"
elif "图片" in quoted_card_type:
card_type = "图片"
elif "文件" in quoted_card_type:
card_type = "文件"
# 整个卡片内容包裹在【】中
quoted_content = f"{card_type}: {quoted_card_title}"
# 根据是否有被引用者信息构建引用前缀
if quoted_sender:
# 添加带引用人的引用格式,将新内容放在前面,引用内容放在后面
content_to_record = f"{content_to_record} 【回复 {quoted_sender}{quoted_content}"
else:
# 仅添加引用内容,将新内容放在前面,引用内容放在后面
content_to_record = f"{content_to_record} 【回复:{quoted_content}"
# 其次,如果新内容为空,但这是一个卡片且有标题,则使用卡片标题
elif extracted_data.get("is_card") and extracted_data.get("card_title", "").strip():
# 卡片消息使用固定格式,包含标题和描述
card_title = extracted_data.get("card_title", "").strip()
card_description = extracted_data.get("card_description", "").strip()
card_type = extracted_data.get("card_type", "")
card_source = extracted_data.get("card_appname") or extracted_data.get("card_sourcedisplayname", "")
# 构建格式化的卡片内容,包含标题和描述
# 根据卡片类型进行特殊处理
if "链接" in card_type or "消息" in card_type:
content_type = "链接"
elif "视频" in card_type or "音乐" in card_type:
content_type = "媒体"
elif "位置" in card_type:
content_type = "位置"
elif "图片" in card_type:
content_type = "图片"
elif "文件" in card_type:
content_type = "文件"
else:
content_type = "卡片"
# 构建完整卡片内容
card_content = f"{content_type}: {card_title}"
# 添加描述内容(如果有)
if card_description:
# 对较长的描述进行截断
max_desc_length = 50
if len(card_description) > max_desc_length:
card_description = card_description[:max_desc_length] + "..."
card_content += f" - {card_description}"
# 添加来源信息(如果有)
if card_source:
card_content += f" (来自:{card_source})"
# 将整个卡片内容包裹在【】中
content_to_record = f"{card_content}"
source_info = "来自 卡片(标题+描述)"
# 普通文本消息的保底处理
elif msg.type == 0x01 and not ("<" in original_content and ">" in original_content):
content_to_record = original_content.strip()
source_info = "来自 纯文本消息"
# 5. 如果最终没有提取到有效内容,则不记录
if not content_to_record:
self.LOG.debug(f"XmlProcessor未能提取到有效文本内容跳过记录 (msg.id={msg.id}) - Quote: {extracted_data.get('has_quote', False)}, IsCard: {extracted_data.get('is_card', False)}")
return
# 6. 获取发送者昵称
sender_name = wcf.get_alias_in_chatroom(msg.sender, msg.roomid)
if not sender_name: # 如果没有群昵称,尝试获取微信昵称
sender_data = all_contacts.get(msg.sender)
sender_name = sender_data if sender_data else msg.sender # 最后使用wxid
# 获取当前时间(只用于记录,不再打印)
current_time_str = time.strftime("%H:%M", time.localtime())
# 8. 记录提取到的有效内容
self.LOG.debug(f"记录消息 (来源: {source_info}): '[{current_time_str}]{sender_name}: {content_to_record}' (来自 msg.id={msg.id})")
self.record_message(chat_id, sender_name, content_to_record, current_time_str)
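MessageSummary 的最小用法示意如下(chat_model 为假设的、实现了 get_answer(question, session_id) 的模型实例):record_message 逐条入库并自动裁剪到 max_history 条,summarize_messages 在传入模型时生成结构化摘要,否则退回按时间罗列的基础总结。
summary = MessageSummary(max_history=300, db_path="data/message_history.db")

summary.record_message("12345678@chatroom", "张三", "明早十点开会,记得带方案")
summary.record_message("12345678@chatroom", "李四", "收到,我把上周的数据也带上")

print(summary.summarize_messages("12345678@chatroom"))                # 无模型:基础总结
# print(summary.summarize_messages("12345678@chatroom", chat_model))  # 有模型:AI 摘要

summary.close_db()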

105
function/func_weather.py Normal file
View File

@@ -0,0 +1,105 @@
import requests, json
import logging
import re # 导入正则表达式模块,用于提取数字
class Weather:
def __init__(self, city_code: str) -> None:
self.city_code = city_code
self.LOG = logging.getLogger("Weather")
def _extract_temp(self, temp_str: str) -> str:
"""从高温/低温字符串中提取温度数值"""
if not temp_str:
return ""
# 匹配温度数字部分
match = re.search(r"(\d+(?:\.\d+)?)", temp_str)
if match:
return match.group(1)
return ""
def get_weather(self, include_forecast: bool = False) -> str:
# api地址
url = 'http://t.weather.sojson.com/api/weather/city/'
# 网络请求传入请求api+城市代码
self.LOG.info(f"获取天气: {url + str(self.city_code)}")
try:
response = requests.get(url + str(self.city_code), timeout=30)  # 设置超时,避免网络异常时请求挂起
self.LOG.info(f"获取天气成功: 状态码={response.status_code}")
if response.status_code != 200:
self.LOG.error(f"API返回非200状态码: {response.status_code}")
return f"获取天气失败: 服务器返回状态码 {response.status_code}"
except Exception as e:
self.LOG.error(f"获取天气失败: {str(e)}")
return "由于网络原因,获取天气失败"
try:
# 将数据以json形式返回这个d就是返回的json数据
d = response.json()
except json.JSONDecodeError as e:
self.LOG.error(f"解析JSON失败: {str(e)}")
return "获取天气失败: 返回数据格式错误"
# 当返回状态码为 200 时,输出天气状况
if d.get('status') == 200:
city_info = d.get('cityInfo', {})
data = d.get('data', {})
forecast = data.get('forecast', [])
if not forecast:
self.LOG.warning("API返回的数据中没有forecast字段")
return "获取天气失败: 数据不完整"
today = forecast[0] if forecast else {}
# 提取今日温度
low_temp = self._extract_temp(today.get('low', ''))
high_temp = self._extract_temp(today.get('high', ''))
temp_range = f"{low_temp}~{high_temp}" if low_temp and high_temp else "N/A"
# 基础天气信息(当天)
result = [
f"城市:{city_info.get('parent', '')}/{city_info.get('city', '')}",
f"时间:{d.get('time', '')} {today.get('week', '')}",
f"温度:{temp_range}",
f"天气:{today.get('type', '')}"
]
# 如果需要预报信息,添加未来几天的天气
if include_forecast and len(forecast) > 1:
result.append("\n📅 天气预报:") # 修改标题
# 显示未来4天的预报 (索引 1, 2, 3, 4)
for day in forecast[1:5]: # 增加到4天预报
# 提取星期的最后一个字
week_day = day.get('week', '')
week_char = week_day[-1] if week_day else ''
# 提取温度数值
low_temp = self._extract_temp(day.get('low', ''))
high_temp = self._extract_temp(day.get('high', ''))
temp_range = f"{low_temp}~{high_temp}" if low_temp and high_temp else "N/A"
# 天气类型
weather_type = day.get('type', '未知')
# 简化格式:只显示周几、温度范围和天气类型
result.append(f"- 周{week_char} {temp_range} {weather_type}")
return "\n".join(result)
else:
return "获取天气失败"
if __name__ == "__main__":
# 设置测试用的日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s'
)
logger = logging.getLogger(__name__)
# 测试当天天气
w = Weather("101010100") # 北京
logger.info(w.get_weather()) # 不带预报
# 测试天气预报
logger.info(w.get_weather(include_forecast=True)) # 带预报
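# —— 示意:get_weather 所解析的接口返回结构(根据上方代码推断的最小字段集,字段值为假设示例,实际以 sojson 接口为准)——
example_response = {
    "status": 200,
    "time": "2025-04-23",
    "cityInfo": {"parent": "北京", "city": "北京市"},
    "data": {
        "forecast": [
            {"week": "星期三", "low": "低温 12℃", "high": "高温 25℃", "type": "晴"},
            {"week": "星期四", "low": "低温 13℃", "high": "高温 24℃", "type": "多云"},
        ]
    },
}
# _extract_temp 会从 "低温 12℃" / "高温 25℃" 这类字符串中提取数字,拼成 "温度:12~25℃" 一行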

856
function/func_xml_process.py Normal file
View File

@@ -0,0 +1,856 @@
import logging
import re
import html
import time
import xml.etree.ElementTree as ET
from wcferry import WxMsg
class XmlProcessor:
"""处理微信消息XML解析的工具类"""
def __init__(self, logger=None):
"""初始化XML处理器
Args:
logger: 日志对象,如果不提供则创建一个新的
"""
self.logger = logger or logging.getLogger("XmlProcessor")
def extract_quoted_message(self, msg: WxMsg) -> dict:
"""从微信消息中提取引用内容
Args:
msg: 微信消息对象
Returns:
dict: {
"new_content": "", # 用户新发送的内容
"quoted_content": "", # 引用的内容
"quoted_sender": "", # 被引用消息的发送者
"media_type": "", # 媒体类型(文本/图片/视频/链接等)
"has_quote": False, # 是否包含引用
"is_card": False, # 是否为卡片消息
"card_type": "", # 卡片类型
"card_title": "", # 卡片标题
"card_description": "", # 卡片描述
"card_url": "", # 卡片链接
"card_appname": "", # 卡片来源应用
"card_sourcedisplayname": "", # 来源显示名称
"quoted_is_card": False, # 被引用的内容是否为卡片
"quoted_card_type": "", # 被引用的卡片类型
"quoted_card_title": "", # 被引用的卡片标题
"quoted_card_description": "", # 被引用的卡片描述
"quoted_card_url": "", # 被引用的卡片链接
"quoted_card_appname": "", # 被引用的卡片来源应用
"quoted_card_sourcedisplayname": "" # 被引用的来源显示名称
}
"""
result = {
"new_content": "",
"quoted_content": "",
"quoted_sender": "",
"media_type": "文本",
"has_quote": False,
"is_card": False,
"card_type": "",
"card_title": "",
"card_description": "",
"card_url": "",
"card_appname": "",
"card_sourcedisplayname": "",
"quoted_is_card": False,
"quoted_card_type": "",
"quoted_card_title": "",
"quoted_card_description": "",
"quoted_card_url": "",
"quoted_card_appname": "",
"quoted_card_sourcedisplayname": ""
}
try:
# 检查消息类型
if msg.type != 0x01 and msg.type != 49: # 普通文本消息或APP消息
return result
self.logger.info(f"处理群聊消息: 类型={msg.type}, 发送者={msg.sender}")
# 检查是否为引用消息类型 (type 57)
is_quote_msg = False
appmsg_type_match = re.search(r'<appmsg.*?type="(\d+)"', msg.content, re.DOTALL)
if appmsg_type_match and appmsg_type_match.group(1) == "57":
is_quote_msg = True
self.logger.info("检测到引用类型消息 (type 57)")
# 检查是否包含refermsg标签
has_refermsg = "<refermsg>" in msg.content
# 确定是否是引用操作
is_referring = is_quote_msg or has_refermsg
# 处理App类型消息类型49
if msg.type == 49:
if not is_referring:
# 如果不是引用消息,按普通卡片处理
card_details = self.extract_card_details(msg.content)
result.update(card_details)
# 根据卡片类型更新媒体类型
if card_details["is_card"] and card_details["card_type"]:
result["media_type"] = card_details["card_type"]
# 引用消息情况下我们不立即更新result的卡片信息因为外层appmsg是引用容器
# 处理用户新输入内容
# 优先检查是否有<title>标签内容
title_match = re.search(r'<title>(.*?)</title>', msg.content)
if title_match:
# 对于引用消息从title标签提取用户新输入
if is_referring:
result["new_content"] = title_match.group(1).strip()
self.logger.info(f"引用消息中的新内容: {result['new_content']}")
else:
# 对于普通卡片消息避免将card_title重复设为new_content
extracted_title = title_match.group(1).strip()
if not (result["is_card"] and result["card_title"] == extracted_title):
result["new_content"] = extracted_title
self.logger.info(f"从title标签提取到用户新消息: {result['new_content']}")
elif msg.type == 0x01: # 纯文本消息
# 检查是否有XML标签如果没有则视为普通消息
if not ("<" in msg.content and ">" in msg.content):
result["new_content"] = msg.content
return result
# 如果是引用消息处理refermsg部分
if is_referring:
result["has_quote"] = True
# 提取refermsg内容
refer_data = self.extract_refermsg(msg.content)
result["quoted_sender"] = refer_data.get("sender", "")
result["quoted_content"] = refer_data.get("content", "")
# 从raw_content尝试解析被引用内容的卡片信息
raw_content = refer_data.get("raw_content", "")
if raw_content and "<appmsg" in raw_content:
quoted_card_details = self.extract_card_details(raw_content)
# 将引用的卡片详情存储到quoted_前缀的字段
result["quoted_is_card"] = quoted_card_details["is_card"]
result["quoted_card_type"] = quoted_card_details["card_type"]
result["quoted_card_title"] = quoted_card_details["card_title"]
result["quoted_card_description"] = quoted_card_details["card_description"]
result["quoted_card_url"] = quoted_card_details["card_url"]
result["quoted_card_appname"] = quoted_card_details["card_appname"]
result["quoted_card_sourcedisplayname"] = quoted_card_details["card_sourcedisplayname"]
# 如果没有提取到有效内容使用卡片标题作为quoted_content
if not result["quoted_content"] and quoted_card_details["card_title"]:
result["quoted_content"] = quoted_card_details["card_title"]
self.logger.info(f"成功从引用内容中提取卡片信息: {quoted_card_details['card_type']}")
else:
# 如果未发现卡片特征尝试fallback方法
if not result["quoted_content"]:
fallback_content = self.extract_quoted_fallback(msg.content)
if fallback_content:
if fallback_content.startswith("引用内容:") or fallback_content.startswith("相关内容:"):
result["quoted_content"] = fallback_content.split(":", 1)[1].strip()
else:
result["quoted_content"] = fallback_content
# 设置媒体类型
if result["is_card"] and result["card_type"]:
result["media_type"] = result["card_type"]
elif is_referring and result["quoted_is_card"]:
# 如果当前消息是引用,且引用的是卡片,则媒体类型设为"引用消息"
result["media_type"] = "引用消息"
else:
# 普通消息,使用群聊消息类型识别
result["media_type"] = self.identify_message_type(msg.content)
return result
except Exception as e:
self.logger.error(f"处理群聊引用消息时出错: {e}")
return result
def extract_private_quoted_message(self, msg: WxMsg) -> dict:
"""专门处理私聊引用消息,返回结构化数据
Args:
msg: 微信消息对象
Returns:
dict: {
"new_content": "", # 用户新发送的内容
"quoted_content": "", # 引用的内容
"quoted_sender": "", # 被引用消息的发送者
"media_type": "", # 媒体类型(文本/图片/视频/链接等)
"has_quote": False, # 是否包含引用
"is_card": False, # 是否为卡片消息
"card_type": "", # 卡片类型
"card_title": "", # 卡片标题
"card_description": "", # 卡片描述
"card_url": "", # 卡片链接
"card_appname": "", # 卡片来源应用
"card_sourcedisplayname": "", # 来源显示名称
"quoted_is_card": False, # 被引用的内容是否为卡片
"quoted_card_type": "", # 被引用的卡片类型
"quoted_card_title": "", # 被引用的卡片标题
"quoted_card_description": "", # 被引用的卡片描述
"quoted_card_url": "", # 被引用的卡片链接
"quoted_card_appname": "", # 被引用的卡片来源应用
"quoted_card_sourcedisplayname": "" # 被引用的来源显示名称
}
"""
result = {
"new_content": "",
"quoted_content": "",
"quoted_sender": "",
"media_type": "文本",
"has_quote": False,
"is_card": False,
"card_type": "",
"card_title": "",
"card_description": "",
"card_url": "",
"card_appname": "",
"card_sourcedisplayname": "",
"quoted_is_card": False,
"quoted_card_type": "",
"quoted_card_title": "",
"quoted_card_description": "",
"quoted_card_url": "",
"quoted_card_appname": "",
"quoted_card_sourcedisplayname": ""
}
try:
# 检查消息类型
if msg.type != 0x01 and msg.type != 49: # 普通文本消息或APP消息
return result
self.logger.info(f"处理私聊消息: 类型={msg.type}, 发送者={msg.sender}")
# 检查是否为引用消息类型 (type 57)
is_quote_msg = False
appmsg_type_match = re.search(r'<appmsg.*?type="(\d+)"', msg.content, re.DOTALL)
if appmsg_type_match and appmsg_type_match.group(1) == "57":
is_quote_msg = True
self.logger.info("检测到引用类型消息 (type 57)")
# 检查是否包含refermsg标签
has_refermsg = "<refermsg>" in msg.content
# 确定是否是引用操作
is_referring = is_quote_msg or has_refermsg
# 处理App类型消息类型49
if msg.type == 49:
if not is_referring:
# 如果不是引用消息,按普通卡片处理
card_details = self.extract_card_details(msg.content)
result.update(card_details)
# 根据卡片类型更新媒体类型
if card_details["is_card"] and card_details["card_type"]:
result["media_type"] = card_details["card_type"]
# 引用消息情况下我们不立即更新result的卡片信息因为外层appmsg是引用容器
# 处理用户新输入内容
# 优先检查是否有<title>标签内容
title_match = re.search(r'<title>(.*?)</title>', msg.content)
if title_match:
# 对于引用消息从title标签提取用户新输入
if is_referring:
result["new_content"] = title_match.group(1).strip()
self.logger.info(f"引用消息中的新内容: {result['new_content']}")
else:
# 对于普通卡片消息避免将card_title重复设为new_content
extracted_title = title_match.group(1).strip()
if not (result["is_card"] and result["card_title"] == extracted_title):
result["new_content"] = extracted_title
self.logger.info(f"从title标签提取到用户新消息: {result['new_content']}")
elif msg.type == 0x01: # 纯文本消息
# 检查是否有XML标签如果没有则视为普通消息
if not ("<" in msg.content and ">" in msg.content):
result["new_content"] = msg.content
return result
# 如果是引用消息处理refermsg部分
if is_referring:
result["has_quote"] = True
# 提取refermsg内容
refer_data = self.extract_private_refermsg(msg.content)
result["quoted_sender"] = refer_data.get("sender", "")
result["quoted_content"] = refer_data.get("content", "")
# 从raw_content尝试解析被引用内容的卡片信息
raw_content = refer_data.get("raw_content", "")
if raw_content and "<appmsg" in raw_content:
quoted_card_details = self.extract_card_details(raw_content)
# 将引用的卡片详情存储到quoted_前缀的字段
result["quoted_is_card"] = quoted_card_details["is_card"]
result["quoted_card_type"] = quoted_card_details["card_type"]
result["quoted_card_title"] = quoted_card_details["card_title"]
result["quoted_card_description"] = quoted_card_details["card_description"]
result["quoted_card_url"] = quoted_card_details["card_url"]
result["quoted_card_appname"] = quoted_card_details["card_appname"]
result["quoted_card_sourcedisplayname"] = quoted_card_details["card_sourcedisplayname"]
# 如果没有提取到有效内容使用卡片标题作为quoted_content
if not result["quoted_content"] and quoted_card_details["card_title"]:
result["quoted_content"] = quoted_card_details["card_title"]
self.logger.info(f"成功从引用内容中提取卡片信息: {quoted_card_details['card_type']}")
else:
# 如果未发现卡片特征尝试fallback方法
if not result["quoted_content"]:
fallback_content = self.extract_quoted_fallback(msg.content)
if fallback_content:
if fallback_content.startswith("引用内容:") or fallback_content.startswith("相关内容:"):
result["quoted_content"] = fallback_content.split(":", 1)[1].strip()
else:
result["quoted_content"] = fallback_content
# 设置媒体类型
if result["is_card"] and result["card_type"]:
result["media_type"] = result["card_type"]
elif is_referring and result["quoted_is_card"]:
# 如果当前消息是引用,且引用的是卡片,则媒体类型设为"引用消息"
result["media_type"] = "引用消息"
else:
# 普通消息,使用私聊消息类型识别
result["media_type"] = self.identify_private_message_type(msg.content)
return result
except Exception as e:
self.logger.error(f"处理私聊引用消息时出错: {e}")
return result
def extract_refermsg(self, content: str) -> dict:
"""专门提取群聊refermsg节点内容包括HTML解码
Args:
content: 消息内容
Returns:
dict: {
"sender": "", # 发送者
"content": "", # 引用内容
"raw_content": "" # 解码后的原始XML内容用于后续解析
}
"""
result = {"sender": "", "content": "", "raw_content": ""}
try:
# 使用正则表达式精确提取refermsg内容避免完整XML解析
refermsg_match = re.search(r'<refermsg>(.*?)</refermsg>', content, re.DOTALL)
if not refermsg_match:
return result
refermsg_content = refermsg_match.group(1)
# 提取发送者
displayname_match = re.search(r'<displayname>(.*?)</displayname>', refermsg_content, re.DOTALL)
if displayname_match:
result["sender"] = displayname_match.group(1).strip()
# 提取内容并进行HTML解码
content_match = re.search(r'<content>(.*?)</content>', refermsg_content, re.DOTALL)
if content_match:
# 获取引用的原始内容可能是HTML编码的XML
extracted_content = content_match.group(1)
# 保存解码后的原始内容,用于后续解析
decoded_content = html.unescape(extracted_content)
result["raw_content"] = decoded_content
# 清理内容中的HTML标签用于文本展示
cleaned_content = re.sub(r'<.*?>', '', extracted_content)
# 清理HTML实体编码和多余空格
cleaned_content = re.sub(r'\s+', ' ', cleaned_content).strip()
# 解码HTML实体
cleaned_content = html.unescape(cleaned_content)
result["content"] = cleaned_content
return result
except Exception as e:
self.logger.error(f"提取群聊refermsg内容时出错: {e}")
return result
def extract_private_refermsg(self, content: str) -> dict:
"""专门提取私聊refermsg节点内容包括HTML解码
Args:
content: 消息内容
Returns:
dict: {
"sender": "", # 发送者
"content": "", # 引用内容
"raw_content": "" # 解码后的原始XML内容用于后续解析
}
"""
result = {"sender": "", "content": "", "raw_content": ""}
try:
# 使用正则表达式精确提取refermsg内容避免完整XML解析
refermsg_match = re.search(r'<refermsg>(.*?)</refermsg>', content, re.DOTALL)
if not refermsg_match:
return result
refermsg_content = refermsg_match.group(1)
# 提取发送者
displayname_match = re.search(r'<displayname>(.*?)</displayname>', refermsg_content, re.DOTALL)
if displayname_match:
result["sender"] = displayname_match.group(1).strip()
# 提取内容并进行HTML解码
content_match = re.search(r'<content>(.*?)</content>', refermsg_content, re.DOTALL)
if content_match:
# 获取引用的原始内容可能是HTML编码的XML
extracted_content = content_match.group(1)
# 保存解码后的原始内容,用于后续解析
decoded_content = html.unescape(extracted_content)
result["raw_content"] = decoded_content
# 清理内容中的HTML标签用于文本展示
cleaned_content = re.sub(r'<.*?>', '', extracted_content)
# 清理HTML实体编码和多余空格
cleaned_content = re.sub(r'\s+', ' ', cleaned_content).strip()
# 解码HTML实体
cleaned_content = html.unescape(cleaned_content)
result["content"] = cleaned_content
return result
except Exception as e:
self.logger.error(f"提取私聊refermsg内容时出错: {e}")
return result
def identify_message_type(self, content: str) -> str:
"""识别群聊消息的媒体类型
Args:
content: 消息内容
Returns:
str: 媒体类型描述
"""
try:
if "<appmsg type=\"2\"" in content:
return "图片"
elif "<appmsg type=\"5\"" in content:
return "文件"
elif "<appmsg type=\"4\"" in content:
return "链接分享"
elif "<appmsg type=\"3\"" in content:
return "音频"
elif "<appmsg type=\"6\"" in content:
return "视频"
elif "<appmsg type=\"8\"" in content:
return "动画表情"
elif "<appmsg type=\"1\"" in content:
return "文本卡片"
elif "<appmsg type=\"7\"" in content:
return "位置分享"
elif "<appmsg type=\"17\"" in content:
return "实时位置分享"
elif "<appmsg type=\"19\"" in content:
return "频道消息"
elif "<appmsg type=\"33\"" in content:
return "小程序"
elif "<appmsg type=\"57\"" in content:
return "引用消息"
else:
return "文本"
except Exception as e:
self.logger.error(f"识别消息类型时出错: {e}")
return "文本"
def identify_private_message_type(self, content: str) -> str:
"""识别私聊消息的媒体类型
Args:
content: 消息内容
Returns:
str: 媒体类型描述
"""
try:
if "<appmsg type=\"2\"" in content:
return "图片"
elif "<appmsg type=\"5\"" in content:
return "文件"
elif "<appmsg type=\"4\"" in content:
return "链接分享"
elif "<appmsg type=\"3\"" in content:
return "音频"
elif "<appmsg type=\"6\"" in content:
return "视频"
elif "<appmsg type=\"8\"" in content:
return "动画表情"
elif "<appmsg type=\"1\"" in content:
return "文本卡片"
elif "<appmsg type=\"7\"" in content:
return "位置分享"
elif "<appmsg type=\"17\"" in content:
return "实时位置分享"
elif "<appmsg type=\"19\"" in content:
return "频道消息"
elif "<appmsg type=\"33\"" in content:
return "小程序"
elif "<appmsg type=\"57\"" in content:
return "引用消息"
else:
return "文本"
except Exception as e:
self.logger.error(f"识别消息类型时出错: {e}")
return "文本"
def extract_quoted_fallback(self, content: str) -> str:
"""当XML解析失败时的后备提取方法
Args:
content: 原始消息内容
Returns:
str: 提取的引用内容,如果未找到返回空字符串
"""
try:
# 使用正则表达式直接从内容中提取
# 查找<content>标签内容
content_match = re.search(r'<content>(.*?)</content>', content, re.DOTALL)
if content_match:
extracted = content_match.group(1)
# 清理可能存在的XML标签
extracted = re.sub(r'<.*?>', '', extracted)
# 去除换行符和多余空格
extracted = re.sub(r'\s+', ' ', extracted).strip()
# 解码HTML实体
extracted = html.unescape(extracted)
return extracted
# 查找displayname和content的组合
display_name_match = re.search(r'<displayname>(.*?)</displayname>', content, re.DOTALL)
content_match = re.search(r'<content>(.*?)</content>', content, re.DOTALL)
if display_name_match and content_match:
name = re.sub(r'<.*?>', '', display_name_match.group(1))
text = re.sub(r'<.*?>', '', content_match.group(1))
# 去除换行符和多余空格
text = re.sub(r'\s+', ' ', text).strip()
# 解码HTML实体
name = html.unescape(name)
text = html.unescape(text)
return f"{name}: {text}"
# 查找引用或回复的关键词
if "引用" in content or "回复" in content:
# 寻找引用关键词后的内容
match = re.search(r'(?:引用|回复).*?[:](.*?)(?:<|$)', content, re.DOTALL)  # 用非捕获分组匹配"引用/回复"关键词
if match:
text = match.group(1).strip()
text = re.sub(r'<.*?>', '', text)
# 去除换行符和多余空格
text = re.sub(r'\s+', ' ', text).strip()
# 解码HTML实体
text = html.unescape(text)
return text
return ""
except Exception as e:
self.logger.error(f"后备提取引用内容时出错: {e}")
return ""
def extract_card_details(self, content: str) -> dict:
"""从消息内容中提取卡片详情 (使用 ElementTree 解析)
Args:
content: 消息内容 (XML 字符串)
Returns:
dict: 包含卡片详情的字典
"""
result = {
"is_card": False,
"card_type": "",
"card_title": "",
"card_description": "",
"card_url": "",
"card_appname": "",
"card_sourcedisplayname": ""
}
try:
# 1. 定位并提取 <appmsg> 标签内容
# 正则表达式用于精确找到 <appmsg>...</appmsg> 部分,避免解析整个消息体可能引入的错误
appmsg_match = re.search(r'<appmsg.*?>(.*?)</appmsg>', content, re.DOTALL | re.IGNORECASE)
if not appmsg_match:
# 有些简单的 appmsg 可能没有闭合标签,尝试匹配自闭合或非标准格式
appmsg_match_simple = re.search(r'(<appmsg[^>]*>)', content, re.IGNORECASE)
if not appmsg_match_simple:
# 尝试查找 <msg> 下的 <appmsg> 作为根
msg_match = re.search(r'<msg>(.*?)</msg>', content, re.DOTALL | re.IGNORECASE)
if msg_match:
inner_content = msg_match.group(1)
try:
# 尝试将<msg>内的内容解析为根然后查找appmsg
# 为了容错,添加一个虚拟根标签
root = ET.fromstring(f"<root>{inner_content}</root>")
appmsg_node = root.find('.//appmsg')
if appmsg_node is None:
self.logger.debug("在 <msg> 内未找到 <appmsg> 标签")
return result # 未找到 appmsg不是标准卡片
# 将 Element 对象转回字符串以便后续统一处理(或直接使用 Element对象查找
# 为简化后续流程我们还是转回字符串交给下面的ET.fromstring处理
# 注意:这里需要重新构造 appmsg 标签本身ET.tostring只包含内容
appmsg_xml_str = ET.tostring(appmsg_node, encoding='unicode', method='xml')
except ET.ParseError as parse_error:
self.logger.debug(f"解析 <msg> 内容时出错: {parse_error}")
return result # 解析失败
else:
self.logger.debug("未找到 <appmsg> 标签")
return result # 未找到 appmsg不是标准卡片
else:
# 对于 <appmsg ... /> 这种简单情况,可能无法提取内部标签,但也标记为卡片
appmsg_xml_str = appmsg_match_simple.group(1)
result["is_card"] = True # 标记为卡片,即使可能无法提取详细信息
else:
# 需要重新包含 <appmsg ...> 标签本身来解析属性
appmsg_outer_match = re.search(r'(<appmsg[^>]*>).*?</appmsg>', content, re.DOTALL | re.IGNORECASE)
if not appmsg_outer_match:
# 如果上面的正则失败,尝试简单匹配开始标签
appmsg_outer_match = re.search(r'(<appmsg[^>]*>)', content, re.IGNORECASE)
if appmsg_outer_match:
appmsg_tag_start = appmsg_outer_match.group(1)
appmsg_inner_content = appmsg_match.group(1)
appmsg_xml_str = f"{appmsg_tag_start}{appmsg_inner_content}</appmsg>"
else:
self.logger.warning("无法提取完整的 <appmsg> 标签结构")
return result # 结构不完整
# 2. 使用 ElementTree 解析 <appmsg> 内容
try:
# 尝试解析提取出的 <appmsg> XML 字符串
# 使用 XML 而不是 fromstring因为它对根元素要求更宽松
appmsg_root = ET.XML(appmsg_xml_str)
result["is_card"] = True # 解析成功,确认是卡片
# 3. 提取卡片类型 (来自 <appmsg> 标签的 type 属性)
card_type_num = appmsg_root.get('type', '') # 安全获取属性
if card_type_num:
result["card_type"] = self.get_card_type_name(card_type_num)
else:
# 尝试从内部 <type> 标签获取 (兼容旧格式或特殊格式)
type_node = appmsg_root.find('./type')
if type_node is not None and type_node.text:
result["card_type"] = self.get_card_type_name(type_node.text.strip())
# 4. 提取标题 (<title>)
title = appmsg_root.findtext('./title', default='').strip()
if title:
result["card_title"] = html.unescape(title)
# 5. 提取描述 (<des>)
description = appmsg_root.findtext('./des', default='').strip()
if description:
cleaned_desc = re.sub(r'<.*?>', '', description) # 清理HTML标签
result["card_description"] = html.unescape(cleaned_desc)
# 6. 提取链接 (<url>)
url = appmsg_root.findtext('./url', default='').strip()
if url:
result["card_url"] = html.unescape(url)
# 7. 提取应用名称 (<appinfo/appname> 或 <sourcedisplayname>)
# 优先尝试 <appinfo><appname>
appname_node = appmsg_root.find('./appinfo/appname')
if appname_node is not None and appname_node.text:
appname = appname_node.text.strip()
result["card_appname"] = html.unescape(appname)
# 如果没找到,或者为空,尝试 <sourcedisplayname>
sourcedisplayname_node = appmsg_root.find('./sourcedisplayname')
if sourcedisplayname_node is not None and sourcedisplayname_node.text:
sourcedisplayname = sourcedisplayname_node.text.strip()
result["card_sourcedisplayname"] = html.unescape(sourcedisplayname)
# 如果 appname 为空,使用 sourcedisplayname 作为 appname
if not result["card_appname"]:
result["card_appname"] = result["card_sourcedisplayname"]
# 兼容直接在 appmsg 下的 appname
if not result["card_appname"]:
appname_direct = appmsg_root.findtext('./appname', default='').strip()
if appname_direct:
result["card_appname"] = html.unescape(appname_direct)
# 记录提取结果用于调试
self.logger.debug(f"ElementTree 解析结果: type={result['card_type']}, title={result['card_title']}, desc_len={len(result['card_description'])}, url_len={len(result['card_url'])}, app={result['card_appname']}, source={result['card_sourcedisplayname']}")
except ET.ParseError as e:
self.logger.error(f"使用 ElementTree 解析 <appmsg> 时出错: {e}\nXML 内容片段: {appmsg_xml_str[:500]}...", exc_info=True)
# 即使解析<appmsg>出错,如果正则找到了<appmsg>,仍然标记为卡片
if result["is_card"] == False and ('<appmsg' in content or '<msg>' in content):
result["is_card"] = True # 基本判断是卡片,但细节提取失败
# 尝试用正则提取基础信息作为后备
type_match_fallback = re.search(r'<type>(\d+)</type>', content)
title_match_fallback = re.search(r'<title>(.*?)</title>', content, re.DOTALL)
if type_match_fallback:
result["card_type"] = self.get_card_type_name(type_match_fallback.group(1))
if title_match_fallback:
result["card_title"] = html.unescape(title_match_fallback.group(1).strip())
self.logger.warning("ElementTree 解析失败,已尝试正则后备提取基础信息")
except Exception as e:
self.logger.error(f"提取卡片详情时发生意外错误: {e}", exc_info=True)
# 尽量判断是否是卡片
if not result["is_card"] and ('<appmsg' in content or '<msg>' in content):
result["is_card"] = True
return result
def get_card_type_name(self, type_num: str) -> str:
"""根据卡片类型编号获取类型名称
Args:
type_num: 类型编号
Returns:
str: 类型名称
"""
card_types = {
"1": "文本卡片",
"2": "图片",
"3": "音频",
"4": "视频",
"5": "链接",
"6": "文件",
"7": "位置",
"8": "表情动画",
"17": "实时位置",
"19": "频道消息",
"33": "小程序",
"36": "转账",
"50": "视频号",
"51": "直播间",
"57": "引用消息",
"62": "视频号直播",
"63": "视频号商品",
"87": "群收款",
"88": "语音通话"
}
return card_types.get(type_num, f"未知类型({type_num})")
def format_message_for_ai(self, msg_data: dict, sender_name: str) -> str:
"""将提取的消息数据格式化为发送给AI的最终文本
Args:
msg_data: 提取的消息数据
sender_name: 发送者名称
Returns:
str: 格式化后的文本
"""
result = []
current_time = time.strftime("%H:%M", time.localtime())
# 添加用户新消息
if msg_data["new_content"]:
result.append(f"[{current_time}] {sender_name}: {msg_data['new_content']}")
# 处理当前消息的卡片信息(如果不是引用消息而是直接分享的卡片)
if msg_data["is_card"] and not msg_data["has_quote"]:
card_info = []
card_info.append(f"[卡片信息]")
if msg_data["card_type"]:
card_info.append(f"类型: {msg_data['card_type']}")
if msg_data["card_title"]:
card_info.append(f"标题: {msg_data['card_title']}")
if msg_data["card_description"]:
# 如果描述过长,截取一部分
description = msg_data["card_description"]
if len(description) > 100:
description = description[:97] + "..."
card_info.append(f"描述: {description}")
if msg_data["card_appname"] or msg_data["card_sourcedisplayname"]:
source = msg_data["card_appname"] or msg_data["card_sourcedisplayname"]
card_info.append(f"来源: {source}")
if msg_data["card_url"]:
# 如果URL过长截取一部分
url = msg_data["card_url"]
if len(url) > 80:
url = url[:77] + "..."
card_info.append(f"链接: {url}")
# 只有当有实质性内容时才添加卡片信息
if len(card_info) > 1: # 不只有[卡片信息]这一行
result.append("\n".join(card_info))
# 添加引用内容(如果有)
if msg_data["has_quote"]:
quoted_header = f"[用户引用]"
if msg_data["quoted_sender"]:
quoted_header += f" {msg_data['quoted_sender']}"
# 检查被引用内容是否为卡片
if msg_data["quoted_is_card"]:
# 格式化被引用的卡片信息
quoted_info = [quoted_header]
if msg_data["quoted_card_type"]:
quoted_info.append(f"类型: {msg_data['quoted_card_type']}")
if msg_data["quoted_card_title"]:
quoted_info.append(f"标题: {msg_data['quoted_card_title']}")
if msg_data["quoted_card_description"]:
# 如果描述过长,截取一部分
description = msg_data["quoted_card_description"]
if len(description) > 100:
description = description[:97] + "..."
quoted_info.append(f"描述: {description}")
if msg_data["quoted_card_appname"] or msg_data["quoted_card_sourcedisplayname"]:
source = msg_data["quoted_card_appname"] or msg_data["quoted_card_sourcedisplayname"]
quoted_info.append(f"来源: {source}")
if msg_data["quoted_card_url"]:
# 如果URL过长截取一部分
url = msg_data["quoted_card_url"]
if len(url) > 80:
url = url[:77] + "..."
quoted_info.append(f"链接: {url}")
result.append("\n".join(quoted_info))
elif msg_data["quoted_content"]:
# 如果是普通文本引用
result.append(f"{quoted_header}: {msg_data['quoted_content']}")
# 如果没有任何内容,但有媒体类型,添加基本信息
if not result and msg_data["media_type"] and msg_data["media_type"] != "文本":
result.append(f"[{current_time}] {sender_name} 发送了 [{msg_data['media_type']}]")
# 如果完全没有内容,返回一个默认消息
if not result:
result.append(f"[{current_time}] {sender_name} 发送了消息")
return "\n\n".join(result)

448
function/main_city.json Normal file
View File

@@ -0,0 +1,448 @@
{
"七台河": "101051002",
"万宁": "101310215",
"万州天城": "101041200",
"万州龙宝": "101041300",
"万盛": "101040600",
"三亚": "101310201",
"三明": "101230801",
"三门峡": "101181701",
"上海": "101020100",
"上饶": "101240301",
"东丽": "101030400",
"东方": "101310202",
"东莞": "101281601",
"东营": "101121201",
"中卫": "101170501",
"中山": "101281701",
"丰台": "101010900",
"丰都": "101043000",
"临夏": "101161101",
"临汾": "101100701",
"临沂": "101120901",
"临沧": "101291101",
"临河": "101080801",
"临高": "101310203",
"丹东": "101070601",
"丽水": "101210801",
"丽江": "101291401",
"乌兰浩特": "101081101",
"乌海": "101080301",
"乌鲁木齐": "101130101",
"乐东": "101310221",
"乐山": "101271401",
"九江": "101240201",
"云浮": "101281401",
"云阳": "101041700",
"五指山": "101310222",
"亳州": "101220901",
"仙桃": "101201601",
"伊宁": "101131001",
"伊春": "101050801",
"佛山": "101280800",
"佛爷顶": "101011700",
"佳木斯": "101050401",
"保亭": "101310214",
"保定": "101090201",
"保山": "101290501",
"信阳": "101180601",
"儋州": "101310205",
"克拉玛依": "101130201",
"八达岭": "101011600",
"六安": "101221501",
"六盘水": "101260801",
"兰州": "101160101",
"兴义": "101260906",
"内江": "101271201",
"凉山": "101271601",
"凯里": "101260501",
"包头": "101080201",
"北京": "101010100",
"北京城区": "101012200",
"北海": "101301301",
"北碚": "101040800",
"北辰": "101030600",
"十堰": "101201101",
"南京": "101190101",
"南充": "101270501",
"南宁": "101300101",
"南川": "101040400",
"南平": "101230901",
"南昌": "101240101",
"南汇": "101020600",
"南沙岛": "101310220",
"南通": "101190501",
"南阳": "101180701",
"博乐": "101131601",
"厦门": "101230201",
"双鸭山": "101051301",
"台中": "101340401",
"台北县": "101340101",
"台州": "101210601",
"合作": "101161201",
"合川": "101040300",
"合肥": "101220101",
"吉安": "101240601",
"吉林": "101060201",
"吉首": "101251501",
"吐鲁番": "101130501",
"吕梁": "101101100",
"吴忠": "101170301",
"周口": "101181401",
"呼伦贝尔": "101081000",
"呼和浩特": "101080101",
"和田": "101131301",
"咸宁": "101200701",
"咸阳": "101110200",
"哈密": "101131201",
"哈尔滨": "101050101",
"唐山": "101090501",
"商丘": "101181001",
"商洛": "101110601",
"喀什": "101130901",
"嘉兴": "101210301",
"嘉定": "101020500",
"嘉峪关": "101161401",
"四平": "101060401",
"固原": "101170401",
"垫江": "101042200",
"城口": "101041600",
"塔城": "101131101",
"塘沽": "101031100",
"大兴": "101011100",
"大兴安岭": "101050701",
"大同": "101100201",
"大庆": "101050901",
"大港": "101031200",
"大理": "101290201",
"大足": "101042600",
"大连": "101070201",
"天水": "101160901",
"天津": "101030100",
"天门": "101201501",
"太原": "101100101",
"奉节": "101041900",
"奉贤": "101021000",
"威海": "101121301",
"娄底": "101250801",
"孝感": "101200401",
"宁德": "101230301",
"宁河": "101030700",
"宁波": "101210401",
"安庆": "101220601",
"安康": "101110701",
"安阳": "101180201",
"安顺": "101260301",
"定安": "101310209",
"定西": "101160201",
"宜宾": "101271101",
"宜昌": "101200901",
"宜春": "101240501",
"宝坻": "101030300",
"宝山": "101020300",
"宝鸡": "101110901",
"宣城": "101221401",
"宿州": "101220701",
"宿迁": "101191301",
"密云": "101011300",
"密云上甸子": "101011900",
"屯昌": "101310210",
"山南": "101140301",
"岳阳": "101251001",
"崇左": "101300201",
"崇明": "101021100",
"巢湖": "101221601",
"巫山": "101042000",
"巫溪": "101041800",
"巴中": "101270901",
"巴南": "101040900",
"常州": "101191101",
"常德": "101250601",
"平凉": "101160301",
"平谷": "101011500",
"平顶山": "101180501",
"广元": "101272101",
"广安": "101270801",
"广州": "101280101",
"庆阳": "101160401",
"库尔勒": "101130601",
"廊坊": "101090601",
"延吉": "101060301",
"延安": "101110300",
"延庆": "101010800",
"开县": "101041500",
"开封": "101180801",
"张家口": "101090301",
"张家界": "101251101",
"张掖": "101160701",
"彭水": "101043200",
"徐家汇": "101021200",
"徐州": "101190801",
"德宏": "101291501",
"德州": "101120401",
"德阳": "101272001",
"忠县": "101042400",
"忻州": "101101001",
"怀化": "101251201",
"怀柔": "101010500",
"怒江": "101291201",
"恩施": "101201001",
"惠州": "101280301",
"成都": "101270101",
"房山": "101011200",
"扬州": "101190601",
"承德": "101090402",
"抚州": "101240401",
"抚顺": "101070401",
"拉萨": "101140101",
"揭阳": "101281901",
"攀枝花": "101270201",
"文山": "101290601",
"文昌": "101310212",
"斋堂": "101012000",
"新乡": "101180301",
"新余": "101241001",
"无锡": "101190201",
"日喀则": "101140201",
"日照": "101121501",
"昆明": "101290101",
"昌吉": "101130401",
"昌平": "101010700",
"昌江": "101310206",
"昌都": "101140501",
"昭通": "101291001",
"晋中": "101100401",
"晋城": "101100601",
"晋江": "101230509",
"普洱": "101290901",
"景德镇": "101240801",
"景洪": "101291601",
"曲靖": "101290401",
"朔州": "101100901",
"朝阳": "101071201",
"本溪": "101070501",
"来宾": "101300401",
"杭州": "101210101",
"松原": "101060801",
"松江": "101020900",
"林芝": "101140401",
"果洛": "101150501",
"枣庄": "101121401",
"柳州": "101300301",
"株洲": "101250301",
"桂林": "101300501",
"梁平": "101042300",
"梅州": "101280401",
"梧州": "101300601",
"楚雄": "101290801",
"榆林": "101110401",
"武威": "101160501",
"武汉": "101200101",
"武清": "101030200",
"武都": "101161001",
"武隆": "101043100",
"毕节": "101260701",
"永川": "101040200",
"永州": "101251401",
"汉中": "101110801",
"汉沽": "101030800",
"汕头": "101280501",
"汕尾": "101282101",
"江津": "101040500",
"江门": "101281101",
"池州": "101221701",
"汤河口": "101011800",
"沈阳": "101070101",
"沙坪坝": "101043700",
"沧州": "101090701",
"河池": "101301201",
"河源": "101281201",
"泉州": "101230501",
"泰安": "101120801",
"泰州": "101191201",
"泸州": "101271001",
"洛阳": "101180901",
"津南": "101031000",
"济南": "101120101",
"济宁": "101120701",
"济源": "101181801",
"浦东": "101021300",
"海东": "101150201",
"海北": "101150801",
"海南": "101150401",
"海口": "101310101",
"海淀": "101010200",
"海西": "101150701",
"涪陵": "101041400",
"淄博": "101120301",
"淮北": "101221201",
"淮南": "101220401",
"淮安": "101190901",
"深圳": "101280601",
"清远": "101281301",
"渝北": "101040700",
"温州": "101210701",
"渭南": "101110501",
"湖州": "101210201",
"湘潭": "101250201",
"湛江": "101281001",
"滁州": "101221101",
"滨州": "101121101",
"漯河": "101181501",
"漳州": "101230601",
"潍坊": "101120601",
"潜江": "101201701",
"潮州": "101281501",
"潼南": "101042100",
"澄迈": "101310204",
"濮阳": "101181301",
"烟台": "101120501",
"焦作": "101181101",
"牡丹江": "101050301",
"玉林": "101300901",
"玉树": "101150601",
"玉溪": "101290701",
"珠海": "101280701",
"琼中": "101310208",
"琼山": "101310102",
"琼海": "101310211",
"璧山": "101042900",
"甘孜": "101271801",
"白城": "101060601",
"白山": "101060901",
"白沙": "101310207",
"白银": "101161301",
"百色": "101301001",
"益阳": "101250700",
"盐城": "101190701",
"盘锦": "101071301",
"眉山": "101271501",
"石嘴山": "101170201",
"石家庄": "101090101",
"石景山": "101011000",
"石柱": "101042500",
"石河子": "101130301",
"神农架": "101201201",
"福州": "101230101",
"秀山": "101043600",
"秦皇岛": "101091101",
"綦江": "101043300",
"红河": "101290301",
"绍兴": "101210501",
"绥化": "101050501",
"绵阳": "101270401",
"聊城": "101121701",
"肇庆": "101280901",
"自贡": "101270301",
"舟山": "101211101",
"芜湖": "101220301",
"苏州": "101190401",
"茂名": "101282001",
"荆州": "101200801",
"荆门": "101201401",
"荣昌": "101042700",
"莆田": "101230401",
"莱芜": "101121601",
"菏泽": "101121001",
"萍乡": "101240901",
"营口": "101070801",
"葫芦岛": "101071401",
"蓟县": "101031400",
"蚌埠": "101220201",
"衡水": "101090801",
"衡阳": "101250401",
"衢州": "101211001",
"襄樊": "101200201",
"西宁": "101150101",
"西安": "101110101",
"西沙": "101310217",
"西青": "101030500",
"许昌": "101180401",
"贵港": "101300801",
"贵阳": "101260101",
"贺州": "101300701",
"资阳": "101271301",
"赣州": "101240701",
"赤峰": "101080601",
"辽源": "101060701",
"辽阳": "101071001",
"达州": "101270601",
"运城": "101100801",
"连云港": "101191001",
"通化": "101060501",
"通州": "101010600",
"通辽": "101080501",
"遂宁": "101270701",
"遵义": "101260201",
"邢台": "101090901",
"那曲": "101140601",
"邯郸": "101091001",
"邵阳": "101250901",
"郑州": "101180101",
"郴州": "101250501",
"都匀": "101260401",
"鄂尔多斯": "101080701",
"鄂州": "101200301",
"酉阳": "101043400",
"酒泉": "101160801",
"重庆": "101040100",
"金华": "101210901",
"金山": "101020700",
"金昌": "101160601",
"钦州": "101301101",
"铁岭": "101071101",
"铜仁": "101260601",
"铜川": "101111001",
"铜梁": "101042800",
"铜陵": "101221301",
"银川": "101170101",
"锡林浩特": "101080901",
"锦州": "101070701",
"镇江": "101190301",
"长寿": "101041000",
"长春": "101060101",
"长沙": "101250101",
"长治": "101100501",
"门头沟": "101011400",
"闵行": "101020200",
"阜新": "101070901",
"阜阳": "101220801",
"防城港": "101301401",
"阳江": "101281801",
"阳泉": "101100301",
"阿克苏": "101130801",
"阿勒泰": "101131401",
"阿图什": "101131501",
"阿坝": "101271901",
"阿拉善左旗": "101081201",
"阿拉尔": "101130701",
"阿里": "101140701",
"陵水": "101310216",
"随州": "101201301",
"雅安": "101271701",
"集宁": "101080401",
"霞云岭": "101012100",
"青岛": "101120201",
"青浦": "101020800",
"静海": "101030900",
"鞍山": "101070301",
"韶关": "101280201",
"顺义": "101010400",
"香格里拉": "101291301",
"马鞍山": "101220501",
"驻马店": "101181601",
"高雄": "101340201",
"鸡西": "101051101",
"鹤壁": "101181201",
"鹤岗": "101051201",
"鹰潭": "101241101",
"黄冈": "101200501",
"黄南": "101150301",
"黄山": "101221001",
"黄石": "101200601",
"黑河": "101050601",
"黔江": "101041100",
"黔阳": "101251301",
"齐齐哈尔": "101050201",
"龙岩": "101230701"
}

13
image/__init__.py Normal file
View File

@@ -0,0 +1,13 @@
"""图像生成功能模块
包含以下功能:
- CogView: 智谱AI文生图
- AliyunImage: 阿里云文生图
- GeminiImage: 谷歌Gemini文生图
"""
from .img_cogview import CogView
from .img_aliyun_image import AliyunImage
from .img_gemini_image import GeminiImage
__all__ = ['CogView', 'AliyunImage', 'GeminiImage']

108
image/img_aliyun_image.py Normal file
View File

@@ -0,0 +1,108 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import os
import time
from http import HTTPStatus
from urllib.parse import urlparse, unquote
from pathlib import PurePosixPath
import requests
from dashscope import ImageSynthesis
import dashscope
class AliyunImage():
"""阿里文生图API调用
"""
@staticmethod
def value_check(args: dict) -> bool:
try:
return bool(args and args.get("api_key", "") and args.get("model", ""))
except Exception:
return False
def __init__(self, config={}) -> None:
self.LOG = logging.getLogger("AliyunImage")
if not config:
raise Exception("缺少配置信息")
self.api_key = config.get("api_key", "")
self.model = config.get("model", "wanx2.1-t2i-turbo")
self.size = config.get("size", "1024*1024")
self.enable = config.get("enable", True)
self.n = config.get("n", 1)
self.temp_dir = config.get("temp_dir", "./temp")
# 确保临时目录存在
if not os.path.exists(self.temp_dir):
os.makedirs(self.temp_dir)
# 设置API密钥
dashscope.api_key = self.api_key
# 不要记录初始化日志
def generate_image(self, prompt: str) -> str:
"""生成图像并返回图像URL
Args:
prompt (str): 图像描述
Returns:
str: 生成的图像URL或错误信息
"""
if not self.enable or not self.api_key:
return "阿里文生图功能未启用或API密钥未配置"
try:
rsp = ImageSynthesis.call(
api_key=self.api_key,
model=self.model,
prompt=prompt,
n=self.n,
size=self.size
)
if rsp.status_code == HTTPStatus.OK and rsp.output and rsp.output.results:
return rsp.output.results[0].url
else:
self.LOG.error(f"图像生成失败: {rsp.code}, {rsp.message}")
return f"图像生成失败: {rsp.message}"
except Exception as e:
error_str = str(e)
self.LOG.error(f"图像生成出错: {error_str}")
if "Error code: 500" in error_str or "HTTP/1.1 500" in error_str:
self.LOG.warning(f"检测到违规内容请求: {prompt}")
return "很抱歉,您的请求可能包含违规内容,无法生成图像"
return "图像生成失败,请调整您的描述后重试"
def download_image(self, image_url: str) -> str:
"""
下载图片并返回本地文件路径
Args:
image_url (str): 图片URL
Returns:
str: 本地图片文件路径下载失败则返回None
"""
try:
response = requests.get(image_url, stream=True, timeout=30)
if response.status_code == 200:
file_path = os.path.join(self.temp_dir, f"aliyun_image_{int(time.time())}.jpg")
with open(file_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
self.LOG.info(f"图片已下载到: {file_path}")
return file_path
else:
self.LOG.error(f"下载图片失败,状态码: {response.status_code}")
return None
except Exception as e:
self.LOG.error(f"下载图片过程出错: {str(e)}")
return None

99
image/img_cogview.py Normal file
View File

@@ -0,0 +1,99 @@
import logging
import os
import requests
import tempfile
import time
from zhipuai import ZhipuAI
class CogView():
def __init__(self, conf: dict) -> None:
self.api_key = conf.get("api_key")
self.model = conf.get("model", "cogview-4-250304") # 默认使用最新模型
self.quality = conf.get("quality", "standard")
self.size = conf.get("size", "1024x1024")
self.enable = conf.get("enable", True)
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
default_img_dir = os.path.join(project_dir, "zhipuimg")
self.temp_dir = conf.get("temp_dir", default_img_dir)
self.LOG = logging.getLogger("CogView")
if self.api_key:
self.client = ZhipuAI(api_key=self.api_key)
else:
self.LOG.warning("未配置智谱API密钥图像生成功能无法使用")
self.client = None
os.makedirs(self.temp_dir, exist_ok=True)
@staticmethod
def value_check(conf: dict) -> bool:
if conf and conf.get("api_key") and conf.get("enable", True):
return True
return False
def __repr__(self):
return 'CogView'
def generate_image(self, prompt: str) -> str:
"""
生成图像并返回图像URL
Args:
prompt (str): 图像描述
Returns:
str: 生成的图像URL或错误信息
"""
if not self.client or not self.enable:
return "图像生成功能未启用或API密钥未配置"
try:
response = self.client.images.generations(
model=self.model,
prompt=prompt,
quality=self.quality,
size=self.size,
)
if response and response.data and len(response.data) > 0:
return response.data[0].url
else:
return "图像生成失败,未收到有效响应"
except Exception as e:
error_str = str(e)
self.LOG.error(f"图像生成出错: {error_str}")
if "Error code: 500" in error_str or "HTTP/1.1 500" in error_str or "code\":\"1234\"" in error_str:
self.LOG.warning(f"检测到违规内容请求: {prompt}")
return "很抱歉,您的请求可能包含违规内容,无法生成图像"
return "图像生成失败,请调整您的描述后重试"
def download_image(self, image_url: str) -> str:
"""
下载图片并返回本地文件路径
Args:
image_url (str): 图片URL
Returns:
str: 本地图片文件路径下载失败则返回None
"""
try:
response = requests.get(image_url, stream=True, timeout=30)
if response.status_code == 200:
file_path = os.path.join(self.temp_dir, f"cogview_{int(time.time())}.jpg")
with open(file_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
self.LOG.info(f"图片已下载到: {file_path}")
return file_path
else:
self.LOG.error(f"下载图片失败,状态码: {response.status_code}")
return None
except Exception as e:
self.LOG.error(f"下载图片过程出错: {str(e)}")
return None

113
image/img_gemini_image.py Normal file
View File

@@ -0,0 +1,113 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import os
import mimetypes
import time
import random
from google import genai
from google.genai import types
class GeminiImage:
"""谷歌AI画图API调用
"""
def __init__(self, config={}) -> None:
self.LOG = logging.getLogger("GeminiImage")
self.enable = config.get("enable", True)
self.api_key = config.get("api_key", "") or os.environ.get("GEMINI_API_KEY", "")
self.model = config.get("model", "gemini-2.0-flash-exp-image-generation")
self.proxy = config.get("proxy", "")
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
self.temp_dir = config.get("temp_dir", os.path.join(project_dir, "geminiimg"))
if not os.path.exists(self.temp_dir):
os.makedirs(self.temp_dir)
if not self.api_key:
self.enable = False
return
try:
# 设置代理
if self.proxy:
os.environ["HTTP_PROXY"] = self.proxy
os.environ["HTTPS_PROXY"] = self.proxy
# 初始化客户端
self.client = genai.Client(api_key=self.api_key)
except Exception:
self.enable = False
def generate_image(self, prompt: str) -> str:
"""生成图像并返回图像文件路径
"""
try:
# 设置代理
if self.proxy:
os.environ["HTTP_PROXY"] = self.proxy
os.environ["HTTPS_PROXY"] = self.proxy
image_prompt = f"生成一张高质量的图片: {prompt}。请直接提供图像,不需要描述。"
# 发送请求
response = self.client.models.generate_content(
model=self.model,
contents=image_prompt,
config=types.GenerateContentConfig(
response_modalities=['Text', 'Image']
)
)
# 处理响应
if hasattr(response, 'candidates') and response.candidates:
for candidate in response.candidates:
if hasattr(candidate, 'content') and candidate.content:
for part in candidate.content.parts:
if hasattr(part, 'inline_data') and part.inline_data:
# 保存图像
file_name = f"gemini_image_{int(time.time())}_{random.randint(1000, 9999)}"
file_extension = mimetypes.guess_extension(part.inline_data.mime_type) or ".png"
file_path = os.path.join(self.temp_dir, f"{file_name}{file_extension}")
with open(file_path, "wb") as f:
f.write(part.inline_data.data)
return file_path
# 如果没有找到图像,尝试获取文本响应
try:
text_content = response.text
if text_content:
return f"模型未能生成图像: {text_content[:100]}..."
except (AttributeError, TypeError):
pass
return "图像生成失败,可能需要更新模型或调整提示词"
except Exception as e:
error_str = str(e)
self.LOG.error(f"图像生成出错: {error_str}")
# 处理500错误
if "500 INTERNAL" in error_str:
self.LOG.error("遇到谷歌服务器内部错误")
return "谷歌AI服务器临时故障请稍后再试。这是谷歌服务器的问题不是你的请求有误。"
if "timeout" in error_str.lower():
return "图像生成超时,请检查网络或代理设置"
if "violated" in error_str.lower() or "policy" in error_str.lower():
return "请求包含违规内容,无法生成图像"
# 其他常见错误类型处理
if "quota" in error_str.lower() or "rate" in error_str.lower():
return "API使用配额已用尽或请求频率过高请稍后再试"
if "authentication" in error_str.lower() or "auth" in error_str.lower():
return "API密钥验证失败请联系管理员检查配置"
return f"图像生成失败,错误原因: {error_str.split('.')[-1] if '.' in error_str else error_str}"

189
image/img_manager.py Normal file
View File

@@ -0,0 +1,189 @@
import logging
import os
import random
import shutil
import time
from wcferry import Wcf
from configuration import Config
from image import CogView, AliyunImage, GeminiImage
class ImageGenerationManager:
"""图像生成管理器
封装所有图像生成服务和相关功能的管理类,使主程序代码更简洁。
"""
def __init__(self, config: Config, wcf: Wcf, logger: logging.Logger, send_text_callback: callable):
"""
初始化图像生成管理器。
Args:
config: 配置对象
wcf: Wcf 实例,用于发送图片
logger: 日志记录器
send_text_callback: 发送文本消息的回调函数 (如 Robot.sendTextMsg)
"""
self.config = config
self.wcf = wcf
self.LOG = logger
self.send_text = send_text_callback
# 初始化图像生成服务
self.cogview = None
self.aliyun_image = None
self.gemini_image = None
self.LOG.info("开始初始化图像生成服务...")
# 初始化Gemini图像生成服务
try:
if hasattr(self.config, 'GEMINI_IMAGE'):
self.gemini_image = GeminiImage(self.config.GEMINI_IMAGE)
else:
self.gemini_image = GeminiImage({})
if getattr(self.gemini_image, 'enable', False):
self.LOG.info("谷歌Gemini图像生成功能已启用")
except Exception as e:
self.LOG.error(f"初始化谷歌Gemini图像生成服务失败: {e}")
# 初始化CogView服务
if hasattr(self.config, 'COGVIEW') and self.config.COGVIEW.get('enable', False):
try:
self.cogview = CogView(self.config.COGVIEW)
self.LOG.info("智谱CogView文生图功能已初始化")
except Exception as e:
self.LOG.error(f"初始化智谱CogView文生图服务失败: {str(e)}")
# 初始化AliyunImage服务
if hasattr(self.config, 'ALIYUN_IMAGE') and self.config.ALIYUN_IMAGE.get('enable', False):
try:
self.aliyun_image = AliyunImage(self.config.ALIYUN_IMAGE)
self.LOG.info("阿里Aliyun功能已初始化")
except Exception as e:
self.LOG.error(f"初始化阿里云文生图服务失败: {str(e)}")
def handle_image_generation(self, service_type, prompt, receiver, at_user=None):
"""处理图像生成请求的通用函数
:param service_type: 服务类型,'cogview'/'aliyun'/'gemini'
:param prompt: 图像生成提示词
:param receiver: 接收者ID
:param at_user: 被@的用户ID用于群聊
:return: 处理状态True成功False失败
"""
if service_type == 'cogview':
if not self.cogview or not hasattr(self.config, 'COGVIEW') or not self.config.COGVIEW.get('enable', False):
self.LOG.info(f"收到智谱文生图请求但功能未启用: {prompt}")
fallback_to_chat = self.config.COGVIEW.get('fallback_to_chat', False) if hasattr(self.config, 'COGVIEW') else False
if not fallback_to_chat:
self.send_text("报一丝,智谱文生图功能没有开启,请联系管理员开启此功能。(可以贿赂他开启)", receiver, at_user)
return True
return False
service = self.cogview
wait_message = "正在生成图像,请稍等..."
elif service_type == 'aliyun':
if not self.aliyun_image or not hasattr(self.config, 'ALIYUN_IMAGE') or not self.config.ALIYUN_IMAGE.get('enable', False):
self.LOG.info(f"收到阿里文生图请求但功能未启用: {prompt}")
fallback_to_chat = self.config.ALIYUN_IMAGE.get('fallback_to_chat', False) if hasattr(self.config, 'ALIYUN_IMAGE') else False
if not fallback_to_chat:
self.send_text("报一丝,阿里文生图功能没有开启,请联系管理员开启此功能。(可以贿赂他开启)", receiver, at_user)
return True
return False
service = self.aliyun_image
model_type = self.config.ALIYUN_IMAGE.get('model', '')
if model_type == 'wanx2.1-t2i-plus':
wait_message = "当前模型为阿里PLUS模型生成速度较慢请耐心等候..."
elif model_type == 'wanx-v1':
wait_message = "当前模型为阿里V1模型生成速度非常慢可能需要等待较长时间请耐心等候..."
else:
wait_message = "正在生成图像,请稍等..."
elif service_type == 'gemini':
if not self.gemini_image or not getattr(self.gemini_image, 'enable', False):
self.send_text("谷歌文生图服务未启用", receiver, at_user)
return True
service = self.gemini_image
wait_message = "正在通过谷歌AI生成图像请稍等..."
else:
self.LOG.error(f"未知的图像生成服务类型: {service_type}")
return False
self.LOG.info(f"收到图像生成请求 [{service_type}]: {prompt}")
self.send_text(wait_message, receiver, at_user)
image_url = service.generate_image(prompt)
if image_url and (image_url.startswith("http") or os.path.exists(image_url)):
try:
self.LOG.info(f"开始处理图片: {image_url}")
# 谷歌API直接返回本地文件路径无需下载
image_path = image_url if service_type == 'gemini' else service.download_image(image_url)
if image_path:
# 创建一个临时副本,避免文件占用问题
temp_dir = os.path.dirname(image_path)
file_ext = os.path.splitext(image_path)[1]
temp_copy = os.path.join(
temp_dir,
f"temp_{service_type}_{int(time.time())}_{random.randint(1000, 9999)}{file_ext}"
)
try:
# 创建文件副本
shutil.copy2(image_path, temp_copy)
self.LOG.info(f"创建临时副本: {temp_copy}")
# 发送临时副本
self.LOG.info(f"发送图片到 {receiver}: {temp_copy}")
self.wcf.send_image(temp_copy, receiver)
# 等待一小段时间确保微信API完成处理
time.sleep(1.5)
except Exception as e:
self.LOG.error(f"创建或发送临时副本失败: {str(e)}")
# 如果副本处理失败,尝试直接发送原图
self.LOG.info(f"尝试直接发送原图: {image_path}")
self.wcf.send_image(image_path, receiver)
# 安全删除文件
self._safe_delete_file(image_path)
if os.path.exists(temp_copy):
self._safe_delete_file(temp_copy)
else:
self.LOG.warning(f"图片下载失败发送URL链接作为备用: {image_url}")
self.send_text(f"图像已生成,但无法自动显示,点链接也能查看:\n{image_url}", receiver, at_user)
except Exception as e:
self.LOG.error(f"发送图片过程出错: {str(e)}")
self.send_text(f"图像已生成,但发送过程出错,点链接也能查看:\n{image_url}", receiver, at_user)
else:
self.LOG.error(f"图像生成失败: {image_url}")
self.send_text(f"图像生成失败: {image_url}", receiver, at_user)
return True
def _safe_delete_file(self, file_path, max_retries=3, retry_delay=1.0):
"""安全删除文件,带有重试机制
:param file_path: 要删除的文件路径
:param max_retries: 最大重试次数
:param retry_delay: 重试间隔(秒)
:return: 是否成功删除
"""
if not os.path.exists(file_path):
return True
for attempt in range(max_retries):
try:
os.remove(file_path)
self.LOG.info(f"成功删除文件: {file_path}")
return True
except Exception as e:
if attempt < max_retries - 1:
self.LOG.warning(f"删除文件 {file_path} 失败, 将在 {retry_delay} 秒后重试: {str(e)}")
time.sleep(retry_delay)
else:
self.LOG.error(f"无法删除文件 {file_path} 经过 {max_retries} 次尝试: {str(e)}")
return False

View File

@@ -0,0 +1,72 @@
# 图像生成配置说明
#### 文生图相关功能的加入,可在此说明文件内加入贡献者的 GitHub 链接,方便以后的更新以及 BUG 的修改
智谱AI绘画[JiQingzhe2004 (JiQingzhe)](https://github.com/JiQingzhe2004)
阿里云AI绘画[JiQingzhe2004 (JiQingzhe)](https://github.com/JiQingzhe2004)
谷歌AI绘画[JiQingzhe2004 (JiQingzhe)](https://github.com/JiQingzhe2004)
------
在`config.yaml`中进行以下配置才可以调用:
```yaml
cogview: # -----智谱AI图像生成配置这行不填-----
# 此API请参考 https://www.bigmodel.cn/dev/api/image-model/cogview
enable: False # 是否启用图像生成功能,默认关闭;将 False 替换为 true 则开启,此模型可和其他模型同时运行。
api_key: # 智谱API密钥请填入您的API Key
model: cogview-4-250304 # 模型编码可选cogview-4-250304、cogview-4、cogview-3-flash
quality: standard # 生成质量可选standard快速、hd高清
size: 1024x1024 # 图片尺寸,可自定义,需符合条件
trigger_keyword: 牛智谱 # 触发图像生成的关键词
temp_dir: # 临时文件存储目录,留空则默认使用项目目录下的 zhipuimg 文件夹;如果要更改,例如 D:/Pictures/temp 或 /home/user/temp
fallback_to_chat: true # 当未启用绘画功能时true=将请求发给聊天模型处理false=回复固定的未启用提示信息
aliyun_image: # -----如果要使用阿里云文生图,取消下面的注释并填写相关内容,模型到阿里云百炼找通义万相-文生图2.1-Turbo-----
enable: true # 是否启用阿里文生图功能false 为关闭,默认开启;如果未配置,则会将消息发送给聊天大模型
api_key: sk-xxxxxxxxxxxxxxxxxxxxxxxx # 替换为你的DashScope API密钥
model: wanx2.1-t2i-turbo # 模型名称默认使用wanx2.1-t2i-turbo(快),wanx2.1-t2i-plus,wanx-v1会给用户不同的提示
size: 1024*1024 # 图像尺寸,格式为宽*高
n: 1 # 生成图像的数量
temp_dir: ./temp # 临时文件存储路径
trigger_keyword: 牛阿里 # 触发词,默认为"牛阿里"
fallback_to_chat: true # 当未启用绘画功能时true=将请求发给聊天模型处理false=回复固定的未启用提示信息
gemini_image: # -----谷歌AI画图配置这行不填-----
enable: true # 是否启用谷歌AI画图功能
api_key: your-api-key-here # 谷歌Gemini API密钥必填
model: gemini-2.0-flash-exp-image-generation # 模型名称,建议保持默认,只有这一个模型可以进行绘画
temp_dir: ./geminiimg # 图片保存目录,可选
trigger_keyword: 牛谷歌 # 触发词,默认为"牛谷歌"
fallback_to_chat: false # 当未启用绘画功能时true=将请求发给聊天模型处理false=回复固定的未启用提示信息
```
## 如何获取API密钥
1. 访问 [Google AI Studio](https://aistudio.google.com/)
2. 创建一个账号或登录
3. 访问 [API Keys](https://aistudio.google.com/app/apikeys) 页面
4. 创建一个新的API密钥
5. 复制API密钥并填入配置文件
## 使用方法
直接发送消息或在群聊中@机器人,使用触发词加提示词,例如:
### 单人聊天的使用
```
牛智谱 一只可爱的猫咪在阳光下玩耍
牛阿里 一只可爱的猫咪在阳光下玩耍
牛谷歌 一只可爱的猫咪在阳光下玩耍
```
### 群组的使用方法
```
@ 牛图图 一只可爱的猫咪在阳光下玩耍
```
(需要接入机器人的微信名称叫做"牛图图")
生成的图片会自动发送到聊天窗口。
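
下面是一个把触发词映射到具体文生图服务的最小示意(假设 `manager` 为已初始化的 `ImageGenerationManager` 实例,触发词取自上方配置;函数与变量名均为示例,并非项目中的实际路由实现):

```python
# 示意:根据触发词把消息分发给对应的文生图服务
TRIGGERS = {
    "牛智谱": "cogview",
    "牛阿里": "aliyun",
    "牛谷歌": "gemini",
}

def dispatch_image_request(manager, text: str, receiver: str, at_user: str = None) -> bool:
    """若消息以触发词开头,则调用对应服务生成图片;返回是否已处理该消息。"""
    for keyword, service_type in TRIGGERS.items():
        if text.startswith(keyword):
            prompt = text[len(keyword):].strip()
            if prompt:
                return manager.handle_image_generation(service_type, prompt, receiver, at_user)
    return False
```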

94
job_mgmt.py Normal file
View File

@@ -0,0 +1,94 @@
# -*- coding: utf-8 -*-
import time
import logging
from typing import Any, Callable, List, Union
import schedule
# 获取模块级 logger
logger = logging.getLogger(__name__)
class Job(object):
def __init__(self) -> None:
pass
def onEverySeconds(self, seconds: int, task: Callable[..., Any], *args, **kwargs) -> None:
"""
每 seconds 秒执行
:param seconds: 间隔,秒
:param task: 定时执行的方法
:return: None
"""
schedule.every(seconds).seconds.do(task, *args, **kwargs)
def onEveryMinutes(self, minutes: int, task: Callable[..., Any], *args, **kwargs) -> None:
"""
每 minutes 分钟执行
:param minutes: 间隔,分钟
:param task: 定时执行的方法
:return: None
"""
schedule.every(minutes).minutes.do(task, *args, **kwargs)
def onEveryHours(self, hours: int, task: Callable[..., Any], *args, **kwargs) -> None:
"""
每 hours 小时执行
:param hours: 间隔,小时
:param task: 定时执行的方法
:return: None
"""
schedule.every(hours).hours.do(task, *args, **kwargs)
def onEveryDays(self, days: int, task: Callable[..., Any], *args, **kwargs) -> None:
"""
每 days 天执行
:param days: 间隔,天
:param task: 定时执行的方法
:return: None
"""
schedule.every(days).days.do(task, *args, **kwargs)
def onEveryTime(self, times: Union[str, List[str]], task: Callable[..., Any], *args, **kwargs) -> None:
"""
每天定时执行
:param times: 时间字符串列表,格式:
- For daily jobs -> HH:MM:SS or HH:MM
- For hourly jobs -> MM:SS or :MM
- For minute jobs -> :SS
:param task: 定时执行的方法
:return: None
例子: times=["10:30", "10:45", "11:00"]
"""
if not isinstance(times, list):
times = [times]
for t in times:
schedule.every(1).days.at(t).do(task, *args, **kwargs)
def runPendingJobs(self) -> None:
schedule.run_pending()
if __name__ == "__main__":
# 设置测试用的日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s'
)
def printStr(s):
logger.info(s)
job = Job()
job.onEverySeconds(59, printStr, "onEverySeconds 59")
job.onEveryMinutes(59, printStr, "onEveryMinutes 59")
job.onEveryHours(23, printStr, "onEveryHours 23")
job.onEveryDays(1, printStr, "onEveryDays 1")
job.onEveryTime("23:59", printStr, "onEveryTime 23:59")
while True:
job.runPendingJobs()
time.sleep(1)

110
main.py Normal file
View File

@@ -0,0 +1,110 @@
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import signal
import logging
import sys # 导入 sys 模块
import os
from argparse import ArgumentParser
# 确保日志目录存在
log_dir = "logs"
if not os.path.exists(log_dir):
os.makedirs(log_dir)
# 配置 logging
log_format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s'
logging.basicConfig(
level=logging.WARNING,  # 提高默认日志级别为 WARNING,只显示警告和错误信息
format=log_format,
handlers=[
# logging.FileHandler(os.path.join(log_dir, "app.log"), encoding='utf-8'), # 将所有日志写入文件
# logging.StreamHandler(sys.stdout) # 同时输出到控制台
]
)
# 为特定模块设置更具体的日志级别
logging.getLogger("requests").setLevel(logging.ERROR) # 提高为 ERROR
logging.getLogger("urllib3").setLevel(logging.ERROR) # 提高为 ERROR
logging.getLogger("httpx").setLevel(logging.ERROR) # 提高为 ERROR
# 常见的自定义模块日志设置,按需修改
logging.getLogger("Weather").setLevel(logging.WARNING)
logging.getLogger("ai_providers").setLevel(logging.WARNING)
logging.getLogger("commands").setLevel(logging.WARNING)
from function.func_report_reminder import ReportReminder
from configuration import Config
from constants import ChatType
from robot import Robot, __version__
from wcferry import Wcf
def main(chat_type: int):
config = Config()
wcf = Wcf(debug=False) # 将 debug 设置为 False 减少 wcf 的调试输出
# 定义全局变量robot使其在handler中可访问
global robot
robot = Robot(config, wcf, chat_type)
def handler(sig, frame):
# 先清理机器人资源(包括关闭数据库连接)
if 'robot' in globals() and robot:
robot.LOG.info("程序退出,开始清理资源...")
robot.cleanup()
# 再清理wcf环境
wcf.cleanup() # 退出前清理环境
exit(0)
signal.signal(signal.SIGINT, handler)
robot.LOG.info(f"WeChatRobot【{__version__}】成功启动···")
# 机器人启动发送测试消息
robot.sendTextMsg("机器人启动成功!", "filehelper")
# 接收消息
# robot.enableRecvMsg() # 可能会丢消息?
robot.enableReceivingMsg() # 加队列
# 每天 7 点发送天气预报
robot.onEveryTime("07:00", robot.weatherReport)
# 每天 7:30 发送新闻
robot.onEveryTime("07:30", robot.newsReport)
# 每天 17:00 提醒发日报周报月报
robot.onEveryTime("17:00", ReportReminder.remind, robot=robot)
# 让机器人一直跑
robot.keepRunningAndBlockProcess()
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument('-c', type=int, default=0,
help=f'选择默认模型参数序号: {ChatType.help_hint()}(可通过配置文件为不同群指定模型)')
parser.add_argument('-d', '--debug', action='store_true',
help='启用调试模式,输出更详细的日志信息')
parser.add_argument('-q', '--quiet', action='store_true',
help='安静模式,只输出错误信息')
parser.add_argument('-v', '--verbose', action='store_true',
help='详细输出模式,显示所有信息日志')
args = parser.parse_args()
# 处理日志级别参数
if args.debug:
# 调试模式优先级最高
logging.getLogger().setLevel(logging.DEBUG)
print("已启用调试模式,将显示所有详细日志信息")
elif args.quiet:
# 安静模式,控制台只显示错误
logging.getLogger().setLevel(logging.ERROR)
print("已启用安静模式,控制台只显示错误信息")
elif args.verbose:
# 详细模式,显示所有 INFO 级别日志
logging.getLogger().setLevel(logging.INFO)
print("已启用详细模式,将显示所有信息日志")
main(args.c)

20
requirements.txt Normal file
View File

@@ -0,0 +1,20 @@
chinese_calendar
lxml
openai>1.0.0
pandas
pyyaml
requests
schedule
pyhandytools
sparkdesk-api==1.3.0
wcferry==39.5.*
websocket
pillow
jupyter_client
zhdate
ipykernel
google-generativeai>=0.3.0
zhipuai>=1.0.0
ollama
dashscope
google-genai

600
robot.py Normal file
View File

@@ -0,0 +1,600 @@
# -*- coding: utf-8 -*-
import logging
import re
import time
import xml.etree.ElementTree as ET
from queue import Empty
from threading import Thread
import os
import random
import shutil
from ai_providers.ai_zhipu import ZhiPu
from image import CogView, AliyunImage, GeminiImage
from image.img_manager import ImageGenerationManager
from wcferry import Wcf, WxMsg
from ai_providers.ai_bard import BardAssistant
from ai_providers.ai_chatglm import ChatGLM
from ai_providers.ai_ollama import Ollama
from ai_providers.ai_chatgpt import ChatGPT
from ai_providers.ai_deepseek import DeepSeek
from ai_providers.ai_perplexity import Perplexity
from function.func_chengyu import cy
from function.func_weather import Weather
from function.func_news import News
from ai_providers.ai_tigerbot import TigerBot
from ai_providers.ai_xinghuo_web import XinghuoWeb
from function.func_duel import start_duel, get_rank_list, get_player_stats, change_player_name, DuelManager, attempt_sneak_attack
from function.func_summary import MessageSummary # 导入新的MessageSummary类
from function.func_reminder import ReminderManager # 导入ReminderManager类
from configuration import Config
from constants import ChatType
from job_mgmt import Job
from function.func_xml_process import XmlProcessor
from function.func_goblin_gift import GoblinGiftManager
# 导入命令路由系统
from commands.context import MessageContext
from commands.router import CommandRouter
from commands.registry import COMMANDS, get_commands_info
from commands.handlers import handle_chitchat # 导入闲聊处理函数
__version__ = "39.2.4.0"
class Robot(Job):
"""个性化自己的机器人
"""
def __init__(self, config: Config, wcf: Wcf, chat_type: int) -> None:
# 调用父类构造函数
super().__init__()
self.wcf = wcf
self.config = config
self.LOG = logging.getLogger("Robot")
self.wxid = self.wcf.get_self_wxid()
self.allContacts = self.getAllContacts()
self._msg_timestamps = []
# 创建决斗管理器
self.duel_manager = DuelManager(self.sendDuelMsg)
# 初始化消息总结功能
self.message_summary = MessageSummary(max_history=200)
# 初始化XML处理器
self.xml_processor = XmlProcessor(self.LOG)
# 初始化所有可能需要的AI模型实例
self.chat_models = {}
self.LOG.info("开始初始化各种AI模型...")
# 初始化TigerBot
if TigerBot.value_check(self.config.TIGERBOT):
self.chat_models[ChatType.TIGER_BOT.value] = TigerBot(self.config.TIGERBOT)
self.LOG.info(f"已加载 TigerBot 模型")
# 初始化ChatGPT
if ChatGPT.value_check(self.config.CHATGPT):
self.chat_models[ChatType.CHATGPT.value] = ChatGPT(self.config.CHATGPT)
self.LOG.info(f"已加载 ChatGPT 模型")
# 初始化讯飞星火
if XinghuoWeb.value_check(self.config.XINGHUO_WEB):
self.chat_models[ChatType.XINGHUO_WEB.value] = XinghuoWeb(self.config.XINGHUO_WEB)
self.LOG.info(f"已加载 讯飞星火 模型")
# 初始化ChatGLM
if ChatGLM.value_check(self.config.CHATGLM):
try:
# 检查key是否有实际内容而不只是存在
if self.config.CHATGLM.get('key') and self.config.CHATGLM.get('key').strip():
self.chat_models[ChatType.CHATGLM.value] = ChatGLM(self.config.CHATGLM)
self.LOG.info(f"已加载 ChatGLM 模型")
else:
self.LOG.warning("ChatGLM 配置中缺少有效的API密钥跳过初始化")
except Exception as e:
self.LOG.error(f"初始化 ChatGLM 模型时出错: {str(e)}")
# 初始化BardAssistant
if BardAssistant.value_check(self.config.BardAssistant):
self.chat_models[ChatType.BardAssistant.value] = BardAssistant(self.config.BardAssistant)
self.LOG.info(f"已加载 BardAssistant 模型")
# 初始化ZhiPu
if ZhiPu.value_check(self.config.ZhiPu):
self.chat_models[ChatType.ZhiPu.value] = ZhiPu(self.config.ZhiPu)
self.LOG.info(f"已加载 智谱 模型")
# 初始化Ollama
if Ollama.value_check(self.config.OLLAMA):
self.chat_models[ChatType.OLLAMA.value] = Ollama(self.config.OLLAMA)
self.LOG.info(f"已加载 Ollama 模型")
# 初始化DeepSeek
if DeepSeek.value_check(self.config.DEEPSEEK):
self.chat_models[ChatType.DEEPSEEK.value] = DeepSeek(self.config.DEEPSEEK)
self.LOG.info(f"已加载 DeepSeek 模型")
# 初始化Perplexity
if Perplexity.value_check(self.config.PERPLEXITY):
self.chat_models[ChatType.PERPLEXITY.value] = Perplexity(self.config.PERPLEXITY)
self.perplexity = self.chat_models[ChatType.PERPLEXITY.value] # 单独保存一个引用用于特殊处理
self.LOG.info(f"已加载 Perplexity 模型")
# 根据chat_type参数选择默认模型
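# 优先级:命令行 -c 指定且已加载 > 配置文件 GROUP_MODELS.default > 任意一个已加载的模型 > 无可用模型(chat 为 None)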
if chat_type > 0 and chat_type in self.chat_models:
self.chat = self.chat_models[chat_type]
self.default_model_id = chat_type
else:
# 如果没有指定chat_type或指定的模型不可用尝试使用配置文件中指定的默认模型
self.default_model_id = self.config.GROUP_MODELS.get('default', 0)
if self.default_model_id in self.chat_models:
self.chat = self.chat_models[self.default_model_id]
elif self.chat_models: # 如果有任何可用模型,使用第一个
self.default_model_id = list(self.chat_models.keys())[0]
self.chat = self.chat_models[self.default_model_id]
else:
self.LOG.warning("未配置任何可用的模型")
self.chat = None
self.default_model_id = 0
self.LOG.info(f"默认模型: {self.chat}模型ID: {self.default_model_id}")
# 显示群组-模型映射信息
if hasattr(self.config, 'GROUP_MODELS'):
# 显示群聊映射信息
if self.config.GROUP_MODELS.get('mapping'):
self.LOG.info("群聊-模型映射配置:")
for mapping in self.config.GROUP_MODELS.get('mapping', []):
room_id = mapping.get('room_id', '')
model_id = mapping.get('model', 0)
if room_id and model_id in self.chat_models:
model_name = self.chat_models[model_id].__class__.__name__
self.LOG.info(f" 群聊 {room_id} -> 模型 {model_name}(ID:{model_id})")
elif room_id:
self.LOG.warning(f" 群聊 {room_id} 配置的模型ID {model_id} 不可用")
# 显示私聊映射信息
if self.config.GROUP_MODELS.get('private_mapping'):
self.LOG.info("私聊-模型映射配置:")
for mapping in self.config.GROUP_MODELS.get('private_mapping', []):
wxid = mapping.get('wxid', '')
model_id = mapping.get('model', 0)
if wxid and model_id in self.chat_models:
model_name = self.chat_models[model_id].__class__.__name__
contact_name = self.allContacts.get(wxid, wxid)
self.LOG.info(f" 私聊用户 {contact_name}({wxid}) -> 模型 {model_name}(ID:{model_id})")
elif wxid:
self.LOG.warning(f" 私聊用户 {wxid} 配置的模型ID {model_id} 不可用")
# 初始化图像生成管理器
self.image_manager = ImageGenerationManager(self.config, self.wcf, self.LOG, self.sendTextMsg)
# 初始化古灵阁妖精馈赠管理器
self.goblin_gift_manager = GoblinGiftManager(self.config, self.wcf, self.LOG, self.sendTextMsg)
# 初始化命令路由器
self.command_router = CommandRouter(COMMANDS, robot_instance=self)
self.LOG.info(f"命令路由系统初始化完成,共加载 {len(COMMANDS)} 条命令")
# 初始化提醒管理器
try:
# 使用与MessageSummary相同的数据库路径
db_path = getattr(self.message_summary, 'db_path', "data/message_history.db")
self.reminder_manager = ReminderManager(self, db_path)
self.LOG.info("提醒管理器已初始化,与消息历史使用相同数据库。")
except Exception as e:
self.LOG.error(f"初始化提醒管理器失败: {e}", exc_info=True)
# 输出命令列表信息,便于调试
# self.LOG.debug(get_commands_info()) # 如果需要在日志中输出所有命令信息,取消本行注释
@staticmethod
def value_check(args: dict) -> bool:
if args:
return all(value is not None for key, value in args.items() if key != 'proxy')
return False
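# 示例(仅作说明,非真实配置):
#   value_check({"key": "sk-xxx", "model": "gpt-4o-mini", "proxy": None})  -> True   # proxy 允许为空
#   value_check({"key": None, "model": "gpt-4o-mini"})                     -> False  # 其余字段不能为 None
#   value_check({})                                                        -> False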
def processMsg(self, msg: WxMsg) -> None:
"""
处理收到的微信消息
:param msg: 微信消息对象
"""
try:
# 1. 使用MessageSummary记录消息(保持不变)
self.message_summary.process_message_from_wxmsg(msg, self.wcf, self.allContacts, self.wxid)
# 2. 根据消息来源选择使用的AI模型
self._select_model_for_message(msg)
# 3. 预处理消息生成MessageContext
ctx = self.preprocess(msg)
# 确保context能访问到当前选定的chat模型
setattr(ctx, 'chat', self.chat)
# 4. 使用命令路由器分发处理消息
handled = self.command_router.dispatch(ctx)
# 5. 如果没有命令处理器处理,则进行特殊逻辑处理
if not handled:
# 5.1 好友请求自动处理
if msg.type == 37: # 好友请求
self.autoAcceptFriendRequest(msg)
return
# 5.2 系统消息处理
elif msg.type == 10000:
# 5.2.1 处理新成员入群
if "加入了群聊" in msg.content and msg.from_group():
new_member_match = re.search(r'"(.+?)"邀请"(.+?)"加入了群聊', msg.content)
if new_member_match:
inviter = new_member_match.group(1) # 邀请人
new_member = new_member_match.group(2) # 新成员
# 使用配置文件中的欢迎语,支持变量替换
welcome_msg = self.config.WELCOME_MSG.format(new_member=new_member, inviter=inviter)
self.sendTextMsg(welcome_msg, msg.roomid)
self.LOG.info(f"已发送欢迎消息给新成员 {new_member} 在群 {msg.roomid}")
return
# 5.2.2 处理新好友添加
elif "你已添加了" in msg.content:
self.sayHiToNewFriend(msg)
return
# 5.3 群聊消息,且配置了响应该群
if msg.from_group() and msg.roomid in self.config.GROUPS:
# 如果在群里被@了,但命令路由器没有处理,则进行闲聊
if msg.is_at(self.wxid):
# 调用handle_chitchat函数处理闲聊
handle_chitchat(ctx, None)
else:
# 处理成语等不需要@的功能
# 成语功能已经通过命令路由器处理,这里不需要再处理
pass
# 5.4 私聊消息,未被命令处理,进行闲聊
elif not msg.from_group() and not msg.from_self():
# 检查是否是文本消息(type 1)或者是包含用户输入的类型49消息
if msg.type == 1 or (msg.type == 49 and ctx.text):
self.LOG.info(f"准备回复私聊消息: 类型={msg.type}, 文本内容='{ctx.text}'")
# 调用handle_chitchat函数处理闲聊
handle_chitchat(ctx, None)
except Exception as e:
self.LOG.error(f"处理消息时发生错误: {str(e)}", exc_info=True)
def enableRecvMsg(self) -> None:
self.wcf.enable_recv_msg(self.onMsg)
def enableReceivingMsg(self) -> None:
def innerProcessMsg(wcf: Wcf):
while wcf.is_receiving_msg():
try:
msg = wcf.get_msg()
self.LOG.info(msg)
self.processMsg(msg)
except Empty:
continue # Empty message
except Exception as e:
self.LOG.error(f"Receiving message error: {e}")
self.wcf.enable_receiving_msg()
Thread(target=innerProcessMsg, name="GetMessage", args=(self.wcf,), daemon=True).start()
def sendTextMsg(self, msg: str, receiver: str, at_list: str = "") -> None:
""" 发送消息
:param msg: 消息字符串
:param receiver: 接收人wxid或者群id
:param at_list: 要@的wxid, @所有人的wxid为notify@all
"""
# 随机延迟约 0.3~1.3 秒,并对一分钟内的发送条数做限制
time.sleep(float(str(time.time()).split('.')[-1][-2:]) / 100.0 + 0.3)
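# 上一行用时间戳小数部分的末两位(00~99)除以 100 得到 0.00~0.99 秒,再加 0.3 秒,等效于约 0.3~1.3 秒的伪随机延迟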
now = time.time()
if self.config.SEND_RATE_LIMIT > 0:
# 清除超过1分钟的记录
self._msg_timestamps = [t for t in self._msg_timestamps if now - t < 60]
if len(self._msg_timestamps) >= self.config.SEND_RATE_LIMIT:
self.LOG.warning(f"发送消息过快,已达到每分钟{self.config.SEND_RATE_LIMIT}条上限。")
return
self._msg_timestamps.append(now)
# msg 中需要有 @ 名单中一样数量的 @
ats = ""
if at_list:
if at_list == "notify@all": # @所有人
ats = " @所有人"
else:
wxids = at_list.split(",")
for wxid in wxids:
# 根据 wxid 查找群昵称
ats += f" @{self.wcf.get_alias_in_chatroom(wxid, receiver)}"
# 实际发送格式为 "{ats}\n\n{msg}",即 @ 名单在前、消息内容在后,例如:"@张三\n\n北京天气情况为xxx"
if ats == "":
self.LOG.info(f"To {receiver}: {msg}")
self.wcf.send_text(f"{msg}", receiver, at_list)
else:
self.LOG.info(f"To {receiver}:\n{ats}\n{msg}")
self.wcf.send_text(f"{ats}\n\n{msg}", receiver, at_list)
def getAllContacts(self) -> dict:
"""
获取联系人(包括好友、公众号、服务号、群成员……)
格式: {"wxid": "NickName"}
"""
contacts = self.wcf.query_sql("MicroMsg.db", "SELECT UserName, NickName FROM Contact;")
return {contact["UserName"]: contact["NickName"] for contact in contacts}
def keepRunningAndBlockProcess(self) -> None:
"""
保持机器人运行,不让进程退出
"""
while True:
self.runPendingJobs()
time.sleep(1)
def autoAcceptFriendRequest(self, msg: WxMsg) -> None:
try:
xml = ET.fromstring(msg.content)
v3 = xml.attrib["encryptusername"]
v4 = xml.attrib["ticket"]
scene = int(xml.attrib["scene"])
self.wcf.accept_new_friend(v3, v4, scene)
except Exception as e:
self.LOG.error(f"同意好友出错:{e}")
def sayHiToNewFriend(self, msg: WxMsg) -> None:
nickName = re.findall(r"你已添加了(.*),现在可以开始聊天了。", msg.content)
if nickName:
# 添加了好友,更新好友列表
self.allContacts[msg.sender] = nickName[0]
self.sendTextMsg(f"Hi {nickName[0]},我是泡泡,我自动通过了你的好友请求。", msg.sender)
def newsReport(self) -> None:
receivers = self.config.NEWS
if not receivers:
self.LOG.info("未配置定时新闻接收人,跳过。")
return
self.LOG.info("开始执行定时新闻推送任务...")
# 获取新闻,解包返回的元组
is_today, news_content = News().get_important_news()
# 必须是当天的新闻 (is_today=True) 并且有有效内容 (news_content非空) 才发送
if is_today and news_content:
self.LOG.info(f"成功获取当天新闻,准备推送给 {len(receivers)} 个接收人...")
for r in receivers:
self.sendTextMsg(news_content, r)
self.LOG.info("定时新闻推送完成。")
else:
# 记录没有发送的原因
if not is_today and news_content:
self.LOG.warning("获取到的是旧闻,定时推送已跳过。")
elif not news_content:
self.LOG.warning("获取新闻内容失败或为空,定时推送已跳过。")
else: # 理论上不会执行到这里
self.LOG.warning("获取新闻失败(未知原因),定时推送已跳过。")
def weatherReport(self, receivers: list = None) -> None:
if receivers is None:
receivers = self.config.WEATHER
if not receivers or not self.config.CITY_CODE:
self.LOG.warning("未配置天气城市代码或接收人")
return
report = Weather(self.config.CITY_CODE).get_weather()
for r in receivers:
self.sendTextMsg(report, r)
def sendDuelMsg(self, msg: str, receiver: str) -> None:
"""发送决斗消息,不受消息频率限制,不记入历史记录
:param msg: 消息字符串
:param receiver: 接收人wxid或者群id
"""
try:
self.wcf.send_text(f"{msg}", receiver, "")
except Exception as e:
self.LOG.error(f"发送决斗消息失败: {e}")
def cleanup_perplexity_threads(self):
"""清理所有Perplexity线程"""
# 如果已初始化Perplexity实例调用其清理方法
perplexity_instance = self.get_perplexity_instance()
if perplexity_instance:
perplexity_instance.cleanup()
# 检查并等待决斗线程结束
if hasattr(self, 'duel_manager') and self.duel_manager.is_duel_running():
self.LOG.info("等待决斗线程结束...")
# 最多等待5秒
for i in range(5):
if not self.duel_manager.is_duel_running():
break
time.sleep(1)
if self.duel_manager.is_duel_running():
self.LOG.warning("决斗线程在退出时仍在运行")
else:
self.LOG.info("决斗线程已结束")
def cleanup(self):
"""清理所有资源,在程序退出前调用"""
self.LOG.info("开始清理机器人资源...")
# 清理Perplexity线程
self.cleanup_perplexity_threads()
# 关闭消息历史数据库连接
if hasattr(self, 'message_summary') and self.message_summary:
self.LOG.info("正在关闭消息历史数据库...")
self.message_summary.close_db()
self.LOG.info("机器人资源清理完成")
def get_perplexity_instance(self):
"""获取Perplexity实例
Returns:
Perplexity: Perplexity实例如果未配置则返回None
"""
# 检查是否已有Perplexity实例
if hasattr(self, 'perplexity'):
return self.perplexity
# 检查config中是否有Perplexity配置
if hasattr(self.config, 'PERPLEXITY') and Perplexity.value_check(self.config.PERPLEXITY):
self.perplexity = Perplexity(self.config.PERPLEXITY)
return self.perplexity
# 检查chat是否是Perplexity类型
if isinstance(self.chat, Perplexity):
return self.chat
# 如果存在chat_models字典尝试从中获取
if hasattr(self, 'chat_models') and ChatType.PERPLEXITY.value in self.chat_models:
return self.chat_models[ChatType.PERPLEXITY.value]
return None
def try_trigger_goblin_gift(self, msg: WxMsg) -> None:
"""尝试触发古灵阁妖精的馈赠事件
用户与机器人互动时,有概率获得随机积分
根据配置决定是否启用及在哪些群聊启用
Args:
msg: 微信消息对象
"""
# 调用管理器的触发方法
self.goblin_gift_manager.try_trigger(msg)
def _select_model_for_message(self, msg: WxMsg) -> None:
"""根据消息来源选择对应的AI模型
:param msg: 接收到的消息
"""
if not hasattr(self, 'chat_models') or not self.chat_models:
return # 没有可用模型,无需切换
# 获取消息来源ID
source_id = msg.roomid if msg.from_group() else msg.sender
# 检查配置
if not hasattr(self.config, 'GROUP_MODELS'):
# 没有配置,使用默认模型
if self.default_model_id in self.chat_models:
self.chat = self.chat_models[self.default_model_id]
return
# 群聊消息处理
if msg.from_group():
model_mappings = self.config.GROUP_MODELS.get('mapping', [])
for mapping in model_mappings:
if mapping.get('room_id') == source_id:
model_id = mapping.get('model')
if model_id in self.chat_models:
# 切换到指定模型
if self.chat != self.chat_models[model_id]:
self.chat = self.chat_models[model_id]
self.LOG.info(f"已为群 {source_id} 切换到模型: {self.chat.__class__.__name__}")
else:
self.LOG.warning(f"{source_id} 配置的模型ID {model_id} 不可用,使用默认模型")
if self.default_model_id in self.chat_models:
self.chat = self.chat_models[self.default_model_id]
return
# 私聊消息处理
else:
private_mappings = self.config.GROUP_MODELS.get('private_mapping', [])
for mapping in private_mappings:
if mapping.get('wxid') == source_id:
model_id = mapping.get('model')
if model_id in self.chat_models:
# 切换到指定模型
if self.chat != self.chat_models[model_id]:
self.chat = self.chat_models[model_id]
self.LOG.info(f"已为私聊用户 {source_id} 切换到模型: {self.chat.__class__.__name__}")
else:
self.LOG.warning(f"私聊用户 {source_id} 配置的模型ID {model_id} 不可用,使用默认模型")
if self.default_model_id in self.chat_models:
self.chat = self.chat_models[self.default_model_id]
return
# 如果没有找到对应配置,使用默认模型
if self.default_model_id in self.chat_models:
self.chat = self.chat_models[self.default_model_id]
def onMsg(self, msg: WxMsg) -> int:
try:
self.LOG.info(msg)
self.processMsg(msg)
except Exception as e:
self.LOG.error(e)
return 0
def preprocess(self, msg: WxMsg) -> MessageContext:
"""
预处理消息生成MessageContext对象
:param msg: 微信消息对象
:return: MessageContext对象
"""
is_group = msg.from_group()
is_at_bot = False
pure_text = msg.content # 默认使用原始内容
# 处理引用消息等特殊情况
if msg.type == 49 and ("<title>" in msg.content or "<appmsg" in msg.content):
# 尝试提取引用消息中的文本
if is_group:
msg_data = self.xml_processor.extract_quoted_message(msg)
else:
msg_data = self.xml_processor.extract_private_quoted_message(msg)
if msg_data and msg_data.get("new_content"):
pure_text = msg_data["new_content"]
# 检查是否包含@机器人
if is_group and pure_text.startswith(f"@{self.allContacts.get(self.wxid, '')}"):
is_at_bot = True
pure_text = re.sub(r"^@.*?[\u2005|\s]", "", pure_text).strip()
elif "<title>" in msg.content:
# 备选直接从title标签提取
title_match = re.search(r'<title>(.*?)</title>', msg.content)
if title_match:
pure_text = title_match.group(1).strip()
# 检查是否@机器人
if is_group and pure_text.startswith(f"@{self.allContacts.get(self.wxid, '')}"):
is_at_bot = True
pure_text = re.sub(r"^@.*?[\u2005|\s]", "", pure_text).strip()
# 处理文本消息
elif msg.type == 1: # 文本消息
# 检查是否@机器人
if is_group and msg.is_at(self.wxid):
is_at_bot = True
# 移除@前缀
pure_text = re.sub(r"^@.*?[\u2005|\s]", "", msg.content).strip()
else:
pure_text = msg.content.strip()
# 构造上下文对象
ctx = MessageContext(
msg=msg,
wcf=self.wcf,
config=self.config,
all_contacts=self.allContacts,
robot_wxid=self.wxid,
robot=self, # 传入Robot实例本身便于handlers访问其方法
logger=self.LOG,
text=pure_text,
is_group=is_group,
is_at_bot=is_at_bot or (is_group and msg.is_at(self.wxid)), # 确保is_at_bot正确
)
# 获取发送者昵称
ctx.sender_name = ctx.get_sender_alias_or_name()
self.LOG.debug(f"预处理消息: text='{ctx.text}', is_group={ctx.is_group}, is_at_bot={ctx.is_at_bot}, sender='{ctx.sender_name}'")
return ctx