fix: agent skills

zhayujie
2026-02-28 16:46:49 +08:00
parent e9c57ddf4d
commit 6ed85029c5
7 changed files with 77 additions and 328 deletions

View File

@@ -100,6 +100,12 @@ SAFETY:
else:
logger.debug(f"[Bash] Process User: {os.environ.get('USERNAME', os.environ.get('USER', 'unknown'))}")
# On Windows, set console codepage to UTF-8 and prepend chcp for shell commands
if sys.platform == "win32":
env["PYTHONIOENCODING"] = "utf-8"
if command and not command.strip().lower().startswith("chcp"):
command = f"chcp 65001 >nul 2>&1 && {command}"
# Execute command with inherited environment variables
result = subprocess.run(
command,
@@ -108,6 +114,8 @@ SAFETY:
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
encoding="utf-8",
errors="replace",
timeout=timeout,
env=env
)
@@ -131,6 +139,8 @@ SAFETY:
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
encoding="utf-8",
errors="replace",
timeout=timeout,
env=env
)
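As a standalone sketch (not the tool's actual code), the decoding settings added above can be demonstrated like this: a child process emits a byte that is not valid UTF-8, and `errors="replace"` keeps `subprocess.run` from raising `UnicodeDecodeError`.

```python
import subprocess
import sys

# Hypothetical demo: the child writes an invalid UTF-8 byte to stdout.
# With encoding="utf-8" and errors="replace", decoding substitutes U+FFFD
# instead of raising UnicodeDecodeError inside subprocess.run.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'ok \\xff end')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
    encoding="utf-8",
    errors="replace",
    timeout=30,
)
print(result.stdout)  # the stray byte arrives as the U+FFFD replacement character
```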

View File

@@ -240,8 +240,8 @@ class Read(BaseTool):
"message": f"文件过大 ({format_size(file_size)} > 50MB),无法读取内容。文件路径: {absolute_path}"
})
# Read file
with open(absolute_path, 'r', encoding='utf-8') as f:
# Read file (utf-8-sig strips BOM automatically on Windows)
with open(absolute_path, 'r', encoding='utf-8-sig') as f:
content = f.read()
# Truncate content if too long (20K characters max for model context)
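A hypothetical standalone demo of the encoding change above: Windows editors often prepend a UTF-8 byte-order mark, which `utf-8-sig` strips while plain `utf-8` leaks into the text.

```python
import os
import tempfile

# Write a file with a UTF-8 BOM, as Windows Notepad would.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "wb") as f:
    f.write(b"\xef\xbb\xbfhello")

with open(path, "r", encoding="utf-8") as f:
    raw = f.read()        # "\ufeffhello" -- the BOM leaks into the text
with open(path, "r", encoding="utf-8-sig") as f:
    clean = f.read()      # "hello"      -- the BOM is stripped transparently
os.remove(path)
```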

View File

@@ -8,8 +8,7 @@ The [LinkAI](https://link-ai.tech) platform lets you flexibly switch between Ope
```json
{
"use_linkai": true,
"linkai_api_key": "YOUR_API_KEY",
"linkai_app_code": "YOUR_APP_CODE"
"linkai_api_key": "YOUR_API_KEY"
}
```
@@ -17,7 +16,6 @@ The [LinkAI](https://link-ai.tech) platform lets you flexibly switch between Ope
| --- | --- |
| `use_linkai` | Set to `true` to enable LinkAI interface |
| `linkai_api_key` | Create at [LinkAI Console](https://link-ai.tech/console/interface) |
| `linkai_app_code` | Optional. Code of the LinkAI agent (app or workflow) |
| `model` | Leave empty to use the agent's default model. Can be switched flexibly on the platform. All models in the [model list](https://link-ai.tech/console/models) are supported |
See the [API documentation](https://docs.link-ai.tech/platform/api) for more details.

View File

@@ -8,8 +8,7 @@ description: 通过 LinkAI 平台统一接入多种模型
```json
{
"use_linkai": true,
"linkai_api_key": "YOUR_API_KEY",
"linkai_app_code": "YOUR_APP_CODE"
"linkai_api_key": "YOUR_API_KEY"
}
```
@@ -17,7 +16,6 @@ description: 通过 LinkAI 平台统一接入多种模型
| --- | --- |
| `use_linkai` | 设为 `true` 启用 LinkAI 接口 |
| `linkai_api_key` | 在 [控制台](https://link-ai.tech/console/interface) 创建 |
| `linkai_app_code` | LinkAI 智能体(应用或工作流)的 code,选填 |
| `model` | 留空则使用智能体默认模型,可在平台中灵活切换,[模型列表](https://link-ai.tech/console/models) 中的全部模型均可使用 |
参考 [接口文档](https://docs.link-ai.tech/platform/api) 了解更多。

View File

@@ -59,7 +59,7 @@ cp config.json.template config.json
### 3. 调用应用
```bash
bash scripts/call.sh "G7z6vKwp" "What is artificial intelligence?"
bash(command='curl -sS --max-time 120 -X POST "https://api.link-ai.tech/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $LINKAI_API_KEY" -d "{\"app_code\":\"G7z6vKwp\",\"messages\":[{\"role\":\"user\",\"content\":\"What is artificial intelligence?\"}],\"stream\":false}"', timeout=130)
```
## 使用示例
@@ -67,46 +67,32 @@ bash scripts/call.sh "G7z6vKwp" "What is artificial intelligence?"
### 基础调用
```bash
# 调用默认模型
bash scripts/call.sh "G7z6vKwp" "解释一下量子计算"
# 调用默认模型 (通过 bash + curl)
bash(command='curl -sS --max-time 120 -X POST "https://api.link-ai.tech/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $LINKAI_API_KEY" -d "{\"app_code\":\"G7z6vKwp\",\"messages\":[{\"role\":\"user\",\"content\":\"解释一下量子计算\"}],\"stream\":false}"', timeout=130)
```
### 指定模型
```bash
# 使用 GPT-4.1 模型
bash scripts/call.sh "G7z6vKwp" "写一篇关于AI的文章" "LinkAI-4.1"
在 JSON body 中添加 `model` 字段:
# 使用 DeepSeek 模型
bash scripts/call.sh "G7z6vKwp" "帮我写代码" "deepseek-chat"
# 使用 Claude 模型
bash scripts/call.sh "G7z6vKwp" "分析这段文本" "claude-4-sonnet"
```json
{
"app_code": "G7z6vKwp",
"model": "LinkAI-4.1",
"messages": [{"role": "user", "content": "写一篇关于AI的文章"}],
"stream": false
}
```
### 调用工作流
```bash
# 工作流会按照配置的节点顺序执行
bash scripts/call.sh "workflow_code" "输入数据或问题"
```
工作流的 app_code 从 LinkAI 控制台获取,调用方式与普通应用相同。
## ⚠️ 重要提示
### 超时配置
LinkAI 应用(特别是视频/图片生成、复杂工作流)可能需要较长时间处理。
**脚本内置超时**:
- 默认:120 秒(适合大多数场景)
- 可通过第 5 个参数自定义:`bash scripts/call.sh <app_code> <question> "" "false" "180"`
**推荐超时时间**:
- **文本问答**:120 秒(默认)
- **图片生成**:120-180 秒
- **视频生成**:180-300 秒
Agent 调用时会自动设置合适的超时时间。
LinkAI 应用(特别是视频/图片生成、复杂工作流)可能需要较长时间处理。在 curl 命令中加入 `--max-time 180`,并相应增加 bash 工具的 `timeout` 参数。
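下面给出一个与上文 curl 等价的 Python 草图(仅作示意,非本技能的实际实现;假设已通过环境变量设置 `LINKAI_API_KEY`),演示请求体与超时的设置方式:

```python
import json
import os
import urllib.request

# 示意代码:构造与上文 curl 命令等价的请求。
api_key = os.environ.get("LINKAI_API_KEY", "Link_placeholder")  # 占位符,实际需设置环境变量
body = json.dumps({
    "app_code": "G7z6vKwp",
    "messages": [{"role": "user", "content": "解释一下量子计算"}],
    "stream": False,
}, ensure_ascii=False).encode("utf-8")
req = urllib.request.Request(
    "https://api.link-ai.tech/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
)
# 实际发送时调用 urllib.request.urlopen(req, timeout=180),对应 curl 的 --max-time 180
```

注意将 bash 工具自身的 `timeout` 参数设置得略大于请求超时,避免请求尚未返回就被工具层中断。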
## 配置说明
@@ -125,38 +111,6 @@ Agent 调用时会自动设置合适的超时时间。
3. 选择要集成的应用/工作流
4. 在应用详情页找到 `app_code`
## 支持的模型
LinkAI 支持多种主流 AI 模型:
**OpenAI 系列:**
- `LinkAI-4.1` - GPT-4.1 (1000K 上下文)
- `LinkAI-4.1-mini` - GPT-4.1 mini (1000K)
- `LinkAI-4.1-nano` - GPT-4.1 nano (1000K)
- `LinkAI-4o` - GPT-4o (128K)
- `LinkAI-4o-mini` - GPT-4o mini (128K)
**DeepSeek 系列:**
- `deepseek-chat` - DeepSeek-V3 对话模型 (64K)
- `deepseek-reasoner` - DeepSeek-R1 推理模型 (64K)
**Claude 系列:**
- `claude-4-sonnet` - Claude 4 Sonnet (200K)
- `claude-3-7-sonnet` - Claude 3.7 (200K)
- `claude-3-5-sonnet` - Claude 3.5 (200K)
**Google 系列:**
- `gemini-2.5-pro` - Gemini 2.5 Pro (1000K)
- `gemini-2.0-flash` - Gemini 2.0 Flash (1000K)
**国产模型:**
- `qwen3` - 通义千问3 (128K)
- `wenxin-4.5` - 文心一言4.5 (8K)
- `doubao-1.5-pro-256k` - 豆包1.5 (256K)
- `glm-4-plus` - 智谱GLM-4-Plus (4K)
完整模型列表:https://link-ai.tech/console/models
## 应用类型
### 1. 普通应用
@@ -185,10 +139,16 @@ LinkAI 支持多种主流 AI 模型:
### 成功响应
API 返回 OpenAI 兼容格式,从 `choices[0].message.content` 获取回复内容:
```json
{
"app_code": "G7z6vKwp",
"content": "人工智能AI是计算机科学的一个分支...",
"choices": [{
"message": {
"role": "assistant",
"content": "人工智能AI是计算机科学的一个分支..."
}
}],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 150,
@@ -201,9 +161,10 @@ LinkAI 支持多种主流 AI 模型:
```json
{
"error": "LinkAI API error",
"message": "应用不存在",
"response": { ... }
"error": {
"message": "应用不存在",
"code": "xxx"
}
}
```
@@ -259,7 +220,7 @@ Agent 看到所有可用应用的完整信息
Agent 根据描述选择合适的应用
调用 call.sh <app_code> <question>
通过 bash + curl 调用 LinkAI API
LinkAI API 处理并返回结果
```

View File

@@ -1,6 +1,6 @@
---
name: linkai-agent
description: Call LinkAI applications and workflows. Use bash command to execute like 'bash <base_dir>/scripts/call.sh <app_code> <question>'.
description: Call LinkAI applications and workflows. Use bash with curl to invoke the chat completions API.
homepage: https://link-ai.tech
metadata:
emoji: 🤖
@@ -10,110 +10,61 @@ metadata:
primaryEnv: "LINKAI_API_KEY"
---
# LinkAI Agent Caller
# LinkAI Agent
Call LinkAI applications and workflows through API. Supports multiple apps/workflows configured in config.json.
The available apps are dynamically loaded from `config.json` at skill loading time.
Call LinkAI applications and workflows through the chat completions API. Available apps are loaded from config.json.
## Setup
This skill requires a LinkAI API key. If not configured:
This skill requires a LinkAI API key.
1. Get your API key from https://link-ai.tech/console/api-keys
2. Set the key using: `env_config(action="set", key="LINKAI_API_KEY", value="your-key")`
1. Get your API key from [LinkAI Console](https://link-ai.tech/console/interface)
2. Set the environment variable: `export LINKAI_API_KEY=Link_xxxxxxxxxxxx` (or use env_config tool)
## Configuration
1. Copy `config.json.template` to `config.json`
2. Configure your apps/workflows:
```json
{
"apps": [
{
"app_code": "your_app_code",
"app_name": "App Name",
"app_description": "What this app does"
}
]
}
```
3. The skill description will be automatically updated when the agent loads this skill
2. Add your apps/workflows in config.json. The skill description is auto-generated from this config when loaded.
## Usage
**Important**: Scripts are located relative to this skill's base directory.
When you see this skill in `<available_skills>`, note the `<base_dir>` path.
**CRITICAL**: Always use `bash` command to execute the script:
Use the bash tool with curl to call the API. **Prefer curl** to avoid encoding issues on Windows PowerShell.
```bash
# General pattern (MUST start with bash):
bash "<base_dir>/scripts/call.sh" "<app_code>" "<question>" [model] [stream] [timeout]
# DO NOT execute the script directly like this (WRONG):
# "<base_dir>/scripts/call.sh" ...
# Parameters:
# - app_code: LinkAI app or workflow code (required)
# - question: User question (required)
# - model: Override model (optional, uses app default if not specified)
# - stream: Enable streaming (true/false, default: false)
# - timeout: curl timeout in seconds (default: 120, recommended for video/image generation)
curl -X POST "https://api.link-ai.tech/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $LINKAI_API_KEY" \
-d '{
"app_code": "<app_code>",
"messages": [{"role": "user", "content": "<question>"}],
"stream": false
}'
```
**IMPORTANT - Timeout Configuration**:
- The script has a **default timeout of 120 seconds** (suitable for most cases)
- For complex tasks (video generation, large workflows), pass a longer timeout as the 5th parameter
- The bash tool also needs sufficient timeout - set its `timeout` parameter accordingly
- Example: `bash(command="bash <script> <app_code> <question> '' 'false' 180", timeout=200)`
**Optional parameters**:
## Examples
- Add `--max-time 120` to curl for long-running tasks (video/image generation)
**On Windows cmd**: Use `%LINKAI_API_KEY%` instead of `$LINKAI_API_KEY`.
**Example** (via bash tool):
### Call an app (uses default 60s timeout)
```bash
bash(command='bash "<base_dir>/scripts/call.sh" "G7z6vKwp" "What is AI?"', timeout=60)
bash(command='curl -sS --max-time 120 -X POST "https://api.link-ai.tech/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer $LINKAI_API_KEY" -d "{\"app_code\":\"G7z6vKwp\",\"messages\":[{\"role\":\"user\",\"content\":\"What is AI?\"}],\"stream\":false}"', timeout=130)
```
### Call an app with specific model
```bash
bash(command='bash "<base_dir>/scripts/call.sh" "G7z6vKwp" "Explain machine learning" "LinkAI-4.1"', timeout=60)
```
## Response
### Call a workflow with custom timeout (video generation)
```bash
# Pass timeout as 5th parameter to script, and set bash timeout slightly longer
bash(command='bash "<base_dir>/scripts/call.sh" "workflow_code" "Generate a sunset video" "" "false" "180"', timeout=180)
```
```bash
bash "<base_dir>/scripts/call.sh" "workflow_code" "Analyze this data: ..."
```
Success (extract `choices[0].message.content` from JSON):
## Supported Models
You can specify any LinkAI supported model:
- `LinkAI-4.1` - Latest GPT-4.1 model (1000K context)
- `LinkAI-4.1-mini` - GPT-4.1 mini (1000K context)
- `LinkAI-4o` - GPT-4o model (128K context)
- `LinkAI-4o-mini` - GPT-4o mini (128K context)
- `deepseek-chat` - DeepSeek-V3 (64K context)
- `deepseek-reasoner` - DeepSeek-R1 reasoning model
- `claude-4-sonnet` - Claude 4 Sonnet (200K context)
- `gemini-2.5-pro` - Gemini 2.5 Pro (1000K context)
- And many more...
Full model list: https://link-ai.tech/console/models
## Response Format
Success response:
```json
{
"app_code": "G7z6vKwp",
"content": "AI stands for Artificial Intelligence...",
"choices": [{
"message": {
"role": "assistant",
"content": "AI stands for Artificial Intelligence..."
}
}],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 50,
@@ -122,44 +73,13 @@ Success response:
}
```
Error response:
Error:
```json
{
"error": "Error description",
"message": "Detailed error message"
"error": {
"message": "Error description",
"code": "error_code"
}
}
```
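As a hedged sketch (a hypothetical helper, not part of the skill itself), the two response shapes above can be told apart like this:

```python
import json

def extract_reply(payload: str) -> str:
    # Hypothetical helper: return the assistant text on success, or raise
    # with the API's code/message on the documented error shape.
    data = json.loads(payload)
    if "error" in data:
        err = data["error"]
        raise RuntimeError(f"{err.get('code')}: {err.get('message')}")
    return data["choices"][0]["message"]["content"]
```

Running curl output through a check like this keeps an agent from treating an error body as a normal answer.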
## Features
- **Multiple Apps**: Configure and call multiple LinkAI apps/workflows
- **Dynamic Loading**: Apps are loaded from config.json at runtime
- **Model Override**: Optionally specify model per request
- **Streaming Support**: Enable streaming output
- **Knowledge Base**: Apps can use configured knowledge bases
- **Plugins**: Apps can use enabled plugins (image recognition, web search, etc.)
- **Workflows**: Execute complex multi-step workflows
## Notes
- Each app/workflow maintains its own configuration (prompt, model, temperature, etc.)
- Apps can have knowledge bases attached for domain-specific Q&A
- Workflows execute from start node to end node and return final output
- Token usage and costs depend on the model used
- See LinkAI documentation for pricing: https://link-ai.tech/console/funds
- The skill description is automatically generated from config.json when loaded
## Troubleshooting
**"LINKAI_API_KEY environment variable is not set"**
- Use env_config tool to set the API key
**"app_code is required"**
- Make sure you're passing the app_code as the first parameter
**"应用不存在" (App not found)**
- Check that the app_code is correct
- Ensure you have access to the app
**"账号积分额度不足" (Insufficient credits)**
- Top up your LinkAI account credits

View File

@@ -1,138 +0,0 @@
#!/usr/bin/env bash
# LinkAI Agent Caller
# API Docs: https://api.link-ai.tech/v1/chat/completions
set -euo pipefail
app_code="${1:-}"
question="${2:-}"
model="${3:-}"
stream="${4:-false}"
timeout="${5:-120}" # Default 120 seconds for video/image generation
if [ -z "$app_code" ]; then
echo '{"error": "app_code is required", "usage": "bash call.sh <app_code> <question> [model] [stream] [timeout]"}'
exit 1
fi
if [ -z "$question" ]; then
echo '{"error": "question is required", "usage": "bash call.sh <app_code> <question> [model] [stream] [timeout]"}'
exit 1
fi
if [ -z "${LINKAI_API_KEY:-}" ]; then
echo '{"error": "LINKAI_API_KEY environment variable is not set", "help": "Use env_config to set LINKAI_API_KEY"}'
exit 1
fi
# API endpoint
api_url="https://api.link-ai.tech/v1/chat/completions"
# Build JSON request body
if [ -n "$model" ]; then
request_body=$(cat <<EOF
{
"app_code": "$app_code",
"model": "$model",
"messages": [
{
"role": "user",
"content": "$question"
}
],
"stream": $stream
}
EOF
)
else
request_body=$(cat <<EOF
{
"app_code": "$app_code",
"messages": [
{
"role": "user",
"content": "$question"
}
],
"stream": $stream
}
EOF
)
fi
# Call LinkAI API
response=$(curl -sS --max-time "$timeout" \
-X POST \
-H "Authorization: Bearer $LINKAI_API_KEY" \
-H "Content-Type: application/json" \
-d "$request_body" \
"$api_url" 2>&1)
curl_exit_code=$?
if [ $curl_exit_code -ne 0 ]; then
echo "{\"error\": \"Failed to call LinkAI API\", \"details\": \"$response\"}"
exit 1
fi
# Simple JSON validation
if [[ ! "$response" =~ ^[[:space:]]*[\{\[] ]]; then
echo "{\"error\": \"Invalid JSON response from API\", \"response\": \"$response\"}"
exit 1
fi
# Check for API error (top-level error only, not content_filter_result)
if echo "$response" | grep -q '^[[:space:]]*{[[:space:]]*"error"[[:space:]]*:' || echo "$response" | grep -q '"error"[[:space:]]*:[[:space:]]*{[^}]*"code"[[:space:]]*:[[:space:]]*"[^"]*"[^}]*"message"'; then
# Make sure it's not just content_filter_result inside choices
if ! echo "$response" | grep -q '"choices"[[:space:]]*:[[:space:]]*\['; then
# Extract error message
error_msg=$(echo "$response" | grep -o '"message"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/"message"[[:space:]]*:[[:space:]]*"\(.*\)"/\1/' | head -1)
error_code=$(echo "$response" | grep -o '"code"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/"code"[[:space:]]*:[[:space:]]*"\(.*\)"/\1/' | head -1)
if [ -z "$error_msg" ]; then
error_msg="Unknown API error"
fi
# Provide friendly error message for content filter
if [ "$error_code" = "content_filter_error" ] || echo "$error_msg" | grep -qi "content.*filter"; then
echo "{\"error\": \"内容安全审核\", \"message\": \"您的问题或应用返回的内容触发了LinkAI的安全审核机制请换一种方式提问或检查应用配置\", \"details\": \"$error_msg\"}"
else
echo "{\"error\": \"LinkAI API error\", \"message\": \"$error_msg\", \"code\": \"$error_code\"}"
fi
exit 1
fi
fi
# For non-stream mode, extract and format the response
if [ "$stream" = "false" ]; then
# Extract content from response
content=$(echo "$response" | grep -o '"content"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/"content"[[:space:]]*:[[:space:]]*"\(.*\)"/\1/' | head -1)
# Extract usage information
prompt_tokens=$(echo "$response" | grep -o '"prompt_tokens"[[:space:]]*:[[:space:]]*[0-9]*' | grep -o '[0-9]*' | head -1)
completion_tokens=$(echo "$response" | grep -o '"completion_tokens"[[:space:]]*:[[:space:]]*[0-9]*' | grep -o '[0-9]*' | head -1)
total_tokens=$(echo "$response" | grep -o '"total_tokens"[[:space:]]*:[[:space:]]*[0-9]*' | grep -o '[0-9]*' | head -1)
if [ -n "$content" ]; then
# Unescape JSON content
content=$(echo "$content" | sed 's/\\n/\n/g' | sed 's/\\"/"/g')
cat <<EOF
{
"app_code": "$app_code",
"content": "$content",
"usage": {
"prompt_tokens": ${prompt_tokens:-0},
"completion_tokens": ${completion_tokens:-0},
"total_tokens": ${total_tokens:-0}
}
}
EOF
else
# Return full response if we can't extract content
echo "$response"
fi
else
# For stream mode, return raw response (caller needs to handle streaming)
echo "$response"
fi