Compare commits


No commits in common. "52a9d085f737e93257f7116cfeaef3753b5a920c" and "a3622ce26d302de4ec80c44cfaaf53abd852e4b0" have entirely different histories.

15 changed files with 89 additions and 292 deletions

.gitignore (vendored, 8 changes)
View File

@@ -70,11 +70,3 @@ dmypy.json
# Ruff
.ruff_cache/
# Runtime data (sessions, audit logs, scheduled jobs)
data/
# Legacy paths (pre-consolidation)
sessions.json
scheduled_jobs.json
audit/

README.md (119 changes)
View File

@@ -4,55 +4,27 @@ Feishu bot that lets users control Claude Code CLI from their phone.
## Architecture
PhoneWork uses a **Router + Host Client** architecture that supports both single-machine and multi-host deployments:
```
┌─────────────────┐ ┌──────────┐ WebSocket ┌────────────────────────────────────┐
│ Feishu App │ │ Feishu │◄────────────►│ Router (public VPS) │
│ (User's Phone) │◄───────►│ Cloud │ │ - Feishu event handler │
└─────────────────┘ └──────────┘ │ - Router LLM (routing only) │
│ - Node registry + active node map │
└───────────┬────────────────────────┘
│ WebSocket (host clients connect in)
┌───────────┴────────────────────────┐
│ │
┌──────────▼──────────┐ ┌────────────▼────────┐
│ Host Client A │ │ Host Client B │
│ (home-pc) │ │ (work-server) │
│ - Mailboy LLM │ │ - Mailboy LLM │
│ - CC sessions │ │ - CC sessions │
│ - Shell / files │ │ - Shell / files │
└─────────────────────┘ └─────────────────────┘
┌─────────────┐ WebSocket ┌──────────────┐ LangChain ┌─────────────┐
│ Feishu │ ◄──────────────► │ FastAPI │ ◄──────────────► │ LLM API │
│ (client) │ │ (server) │ │ (ZhipuAI) │
└─────────────┘ └──────────────┘ └─────────────┘
┌─────────────┐
│ Claude Code │
│ (headless) │
└─────────────┘
```
**Key design decisions:**
- Host clients connect TO the router (outbound WebSocket) — NAT-transparent
- A user can be registered on multiple nodes simultaneously
- The **router LLM** decides *which node* to route each message to
- The **node mailboy LLM** handles the full orchestration loop
- Each node maintains its own conversation history per user
**Deployment modes:**
- **Standalone (`python standalone.py`):** Runs the router and host client together on localhost. Same architecture, simpler setup for single-machine use.
- **Multi-host:** Router on a public VPS, host clients behind NAT on different machines.
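The router-host wire protocol lives in `shared/protocol.py`, which this diff does not show. A minimal sketch of what a JSON envelope with typed messages could look like (the class and field names here are assumptions for illustration, not the project's actual protocol):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical message types -- the real shared/protocol.py may differ.
@dataclass
class Heartbeat:
    type: str = "ping"  # "ping" from the sender, "pong" in reply

@dataclass
class ForwardRequest:
    user_id: str
    chat_id: str
    text: str
    type: str = "forward_request"

def encode(msg) -> str:
    """Serialize a message to a JSON string for the WebSocket."""
    return json.dumps(asdict(msg), ensure_ascii=False)

def decode(data: str):
    """Parse an incoming JSON string back into a typed message."""
    obj = json.loads(data)
    if obj.get("type") in ("ping", "pong"):
        return Heartbeat(type=obj["type"])
    if obj.get("type") == "forward_request":
        return ForwardRequest(**obj)
    raise ValueError(f"Unknown message type: {obj.get('type')}")
```

A tagged-union over JSON keeps both sides debuggable with plain logging, at the cost of schema checks happening at runtime.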
**Components:**
| Module | Purpose |
|--------|---------|
| `standalone.py` | Single-process entry point: runs router + host client together |
| `main.py` | FastAPI entry point for router-only mode |
| `shared/protocol.py` | Wire protocol for router-host communication |
| `router/main.py` | FastAPI app factory, mounts `/ws/node` endpoint |
| `router/nodes.py` | Node registry, connection management, user-to-node mapping |
| `router/ws.py` | WebSocket endpoint for host clients, heartbeat, message routing |
| `router/rpc.py` | Request correlation with asyncio.Future, timeout handling |
| `router/routing_agent.py` | Single-shot routing LLM to decide which node handles each message |
| `host_client/main.py` | WebSocket client connecting to router, message handling, reconnection |
| `host_client/config.py` | Host client configuration loader |
| `main.py` | FastAPI entry point, starts WebSocket client + session manager + scheduler |
| `bot/handler.py` | Receives Feishu events via long-connection WebSocket |
| `bot/feishu.py` | Sends text/file replies back to Feishu |
| `bot/commands.py` | Slash command handler (`/new`, `/status`, `/shell`, `/remind`, `/tasks`, `/nodes`, `/node`) |
| `bot/feishu.py` | Sends text/file/card replies back to Feishu |
| `bot/commands.py` | Slash command handler (`/new`, `/status`, `/shell`, `/remind`, `/tasks`, etc.) |
| `orchestrator/agent.py` | LangChain agent with per-user history + direct/smart mode + direct Q&A |
| `orchestrator/tools.py` | Tools: session management, shell, file ops, web search, scheduler, task status |
| `agent/manager.py` | Session registry with persistence, idle timeout, and auto-background tasks |
@@ -150,33 +122,6 @@ ALLOWED_OPEN_IDS:
# Optional: 秘塔AI Search API key for web search functionality
# Get your key at: https://metaso.cn/search-api/api-keys
METASO_API_KEY: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Optional: Multi-host mode configuration
# Set ROUTER_MODE to true to enable router mode (deploy on public VPS)
ROUTER_MODE: false
ROUTER_SECRET: your-shared-secret-for-router-host-auth
```
### Host Client Configuration (for multi-host mode)
Create `host_config.yaml` for each host client:
```yaml
NODE_ID: home-pc
DISPLAY_NAME: Home PC
ROUTER_URL: wss://router.example.com/ws/node
ROUTER_SECRET: <shared_secret>
OPENAI_BASE_URL: https://open.bigmodel.cn/api/paas/v4/
OPENAI_API_KEY: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
OPENAI_MODEL: glm-4.7
WORKING_DIR: C:/Users/me/projects
METASO_API_KEY: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Which Feishu open_ids this node serves
SERVES_USERS:
- ou_abc123def456
```
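A sketch of how a loader might map the uppercase YAML keys above to a typed config. The field handling and defaults are assumptions, not the actual `host_client/config.py`; in practice `raw` would come from `yaml.safe_load` over `host_config.yaml`:

```python
from dataclasses import dataclass, field

@dataclass
class HostConfig:
    node_id: str
    display_name: str
    router_url: str
    router_secret: str
    working_dir: str = "."
    serves_users: list = field(default_factory=list)

def host_config_from_dict(raw: dict) -> HostConfig:
    """Map the uppercase YAML keys to a config object.
    Missing required keys raise KeyError early; optional keys get defaults."""
    return HostConfig(
        node_id=raw["NODE_ID"],
        display_name=raw.get("DISPLAY_NAME", raw["NODE_ID"]),
        router_url=raw["ROUTER_URL"],
        router_secret=raw["ROUTER_SECRET"],
        working_dir=raw.get("WORKING_DIR", "."),
        serves_users=raw.get("SERVES_USERS", []),
    )
```

Failing fast on missing required keys at startup is cheaper than a host client that connects and then cannot authenticate.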
---
@@ -217,8 +162,6 @@ Active sessions: `GET /sessions`
| `/shell <cmd>` | Run a shell command directly (bypasses LLM) |
| `/remind <time> <msg>` | Set a reminder (e.g., `/remind 10m check build`) |
| `/tasks` | List background tasks with status |
| `/nodes` | List connected host nodes (multi-host mode) |
| `/node <name>` | Switch active node (multi-host mode) |
| `/help` | Show command reference |
### Message Routing Modes
@@ -297,39 +240,3 @@ Claude Code slash commands (like `/help`, `/clear`, `/compact`, `/cost`) are pas
- Schedule recurring reminders
- Notifications delivered to Feishu
- Persistent across server restarts
### Multi-Host Architecture (Milestone 3)
#### Deployment Options
**Single-Machine Mode:**
```bash
python standalone.py
```
Runs both router and host client in one process. Identical UX to pre-M3 setup.
**Router Mode (Public VPS):**
```bash
# Set ROUTER_MODE: true in keyring.yaml
python main.py
```
Runs only the router: Feishu handler + routing LLM + node registry.
**Host Client Mode (Behind NAT):**
```bash
# Create host_config.yaml with ROUTER_URL and ROUTER_SECRET
python -m host_client.main
```
Connects to router via WebSocket, runs full mailboy stack locally.
#### Node Management
- `/nodes` — View all connected host nodes with status
- `/node <name>` — Switch active node for your user
- Automatic routing: LLM decides which node handles each message
- Health monitoring: Router tracks node heartbeats
- Reconnection: Host clients auto-reconnect on disconnect
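Elsewhere in this diff the host client starts its reconnect delay at 1.0s, caps it at 60.0s, and resets it after a successful connect. A sketch of that backoff rule (the helper names are mine, not the project's):

```python
import asyncio

def next_backoff(delay: float, max_backoff: float = 60.0) -> float:
    """Double the reconnect delay after a failure, capped at max_backoff."""
    return min(delay * 2, max_backoff)

async def run_with_reconnect(connect, serve) -> None:
    """Keep a client connected: reset the delay on success, back off on failure.
    `connect` and `serve` stand in for the host client's own coroutines."""
    delay = 1.0
    while True:
        try:
            await connect()
            delay = 1.0       # a successful connect resets the backoff
            await serve()     # returns (or raises) when the connection drops
        except Exception:
            pass
        await asyncio.sleep(delay)
        delay = next_backoff(delay)
```

Capping the delay keeps a long outage from pushing reconnect attempts hours apart.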
#### Security
- Shared secret authentication between router and host clients
- User isolation: Each node only serves configured users
- Path sandboxing: Sessions restricted to WORKING_DIR
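The router-side shared-secret check can be as small as this sketch. The header name is a hypothetical stand-in, and `hmac.compare_digest` keeps the string comparison constant-time:

```python
import hmac

def check_node_auth(headers: dict, router_secret: str) -> bool:
    """Reject a host client unless it presents the shared secret.
    "X-Router-Secret" is a hypothetical header name for illustration."""
    presented = headers.get("X-Router-Secret", "")
    return hmac.compare_digest(presented, router_secret)
```

On a mismatch the router would refuse the WebSocket handshake, consistent with the "wrong `ROUTER_SECRET` rejected with 401" behavior in the M3 checklist.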

View File

@@ -1,9 +1,9 @@
# PhoneWork — Roadmap
## Milestone 2: Mailboy as a Versatile Assistant (COMPLETED)
## Milestone 2: Mailboy as a Versatile Assistant
**Goal:** Elevate the mailboy (GLM-4.7 orchestrator) from a mere Claude Code relay into a
fully capable phone assistant. Users can control their machine, manage files,
fully capable phone assistant. Users should be able to control their machine, manage files,
search the web, get direct answers, and track long-running tasks — all without necessarily
opening a Claude Code session.
@@ -177,21 +177,21 @@ args: action ("remind" | "repeat"), delay_seconds (int), interval_seconds (int),
## Verification Checklist
- [x] M2.1: Ask "what is a Python generator?" — mailboy replies directly, no tool call
- [x] M2.2: Send "check git status in todo_app" — `ShellTool` runs, output returned
- [x] M2.2: Send "rm -rf /" — blocked by safety guard
- [x] M2.3: Send "show me the last 50 lines of audit/abc123.jsonl" — file content returned
- [x] M2.3: Send "send me the sessions.json file" — file arrives in Feishu chat
- [x] M2.4: Start a long CC task (e.g. `--timeout 120`) — bot replies immediately, notifies on finish
- [x] M2.4: `/tasks` — lists running task with elapsed time
- [x] M2.5: "Python 3.13 有哪些新特性?" — `web ask` returns RAG answer from metaso
- [x] M2.5: "帮我读取这个URL: https://example.com" — page content extracted as markdown
- [x] M2.6: `/remind 10m deploy check` — 10 min later, message arrives in Feishu
- [ ] M2.1: Ask "what is a Python generator?" — mailboy replies directly, no tool call
- [ ] M2.2: Send "check git status in todo_app" — `ShellTool` runs, output returned
- [ ] M2.2: Send "rm -rf /" — blocked by safety guard
- [ ] M2.3: Send "show me the last 50 lines of audit/abc123.jsonl" — file content returned
- [ ] M2.3: Send "send me the sessions.json file" — file arrives in Feishu chat
- [ ] M2.4: Start a long CC task (e.g. `--timeout 120`) — bot replies immediately, notifies on finish
- [ ] M2.4: `/tasks` — lists running task with elapsed time
- [ ] M2.5: "Python 3.13 有哪些新特性?" — `web ask` returns RAG answer from metaso
- [ ] M2.5: "帮我读取这个URL: https://example.com" — page content extracted as markdown
- [ ] M2.6: `/remind 10m deploy check` — 10 min later, message arrives in Feishu
---
---
## Milestone 3: Multi-Host Architecture (Router / Host Client Split) (COMPLETED)
## Milestone 3: Multi-Host Architecture (Router / Host Client Split)
**Goal:** Split PhoneWork into two deployable components — a public-facing **Router** and
one or more **Host Clients** behind NAT. A user can be served by multiple nodes simultaneously.
@@ -519,16 +519,16 @@ PhoneWork/
## M3 Verification Checklist
- [x] `python standalone.py` — works identically to current `python main.py`
- [x] Router starts, host client connects, registration logged
- [x] Feishu message → routing LLM selects node → forwarded → reply returned
- [x] `/nodes` shows all connected nodes with active marker
- [x] `/node work-server` — switches active node, confirmed in next message
- [x] Two nodes serving same user — message routed to active node
- [x] Kill host client → router marks offline, user sees "Node home-pc is offline"
- [x] Host client reconnects → re-registered, messages flow again
- [x] Long CC task on node finishes → router forwards completion notification to Feishu
- [x] Wrong `ROUTER_SECRET` → connection rejected with 401
- [ ] `python standalone.py` — works identically to current `python main.py`
- [ ] Router starts, host client connects, registration logged
- [ ] Feishu message → routing LLM selects node → forwarded → reply returned
- [ ] `/nodes` shows all connected nodes with active marker
- [ ] `/node work-server` — switches active node, confirmed in next message
- [ ] Two nodes serving same user — message routed to active node
- [ ] Kill host client → router marks offline, user sees "Node home-pc is offline"
- [ ] Host client reconnects → re-registered, messages flow again
- [ ] Long CC task on node finishes → router forwards completion notification to Feishu
- [ ] Wrong `ROUTER_SECRET` → connection rejected with 401
---

View File

@@ -10,7 +10,7 @@ from typing import Optional
logger = logging.getLogger(__name__)
AUDIT_DIR = Path(__file__).parent.parent / "data" / "audit"
AUDIT_DIR = Path(__file__).parent.parent / "audit"
def _ensure_audit_dir() -> None:

View File

@@ -17,7 +17,7 @@ logger = logging.getLogger(__name__)
DEFAULT_IDLE_TIMEOUT = 30 * 60
DEFAULT_CC_TIMEOUT = 300.0
PERSISTENCE_FILE = Path(__file__).parent.parent / "data" / "sessions.json"
PERSISTENCE_FILE = Path(__file__).parent.parent / "sessions.json"
@dataclass
@@ -105,7 +105,7 @@ class SessionManager:
if cc_timeout > 60:
from agent.task_runner import task_runner
from orchestrator.tools import get_current_chat, set_current_chat, set_current_user
from orchestrator.tools import get_current_chat
chat_id = get_current_chat()
@@ -126,29 +126,10 @@ class SessionManager:
)
return output
async def on_task_complete(task) -> None:
if not chat_id or not user_id or not task.result:
return
set_current_user(user_id)
set_current_chat(chat_id)
from orchestrator.agent import agent
follow_up = (
f"CC task completed. Output:\n{task.result}\n\n"
f"Original request was: {message}\n\n"
"If the user asked you to send a file, use send_file now. "
"Otherwise just acknowledge completion."
)
reply = await agent.run(user_id, follow_up)
if reply:
from bot.feishu import send_text
await send_text(chat_id, "chat_id", reply)
task_id = await task_runner.submit(
run_task(),
run_task,
description=f"CC session {conv_id}: {message[:50]}",
notify_chat_id=chat_id,
user_id=user_id,
on_complete=on_task_complete,
)
return f"⏳ Task #{task_id} started (timeout: {int(cc_timeout)}s). I'll notify you when it's done."
@@ -202,7 +183,6 @@ class SessionManager:
def _save(self) -> None:
try:
data = {cid: s.to_dict() for cid, s in self._sessions.items()}
PERSISTENCE_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(PERSISTENCE_FILE, "w", encoding="utf-8") as f:
json.dump(data, f, indent=2)
logger.debug("Saved %d sessions to %s", len(data), PERSISTENCE_FILE)
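One side of the hunk above guards the save with `PERSISTENCE_FILE.parent.mkdir(...)`. A generic sketch of that pattern, extended with a write-to-temp-then-rename step so a crash mid-write cannot leave a truncated `sessions.json` (the atomic-rename part is my addition, not the project's code):

```python
import json
import os
import tempfile
from pathlib import Path

def save_json(path: Path, data: dict) -> None:
    """Persist `data` as JSON, creating parent dirs and replacing atomically."""
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2, ensure_ascii=False)
        os.replace(tmp, path)  # atomic replace on POSIX and Windows
    except BaseException:
        try:
            os.remove(tmp)
        except OSError:
            pass
        raise
```

Writing the temp file in the same directory as the target matters: `os.replace` is only atomic within one filesystem.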

View File

@@ -14,7 +14,7 @@ from typing import Any, Callable, Dict, Optional
logger = logging.getLogger(__name__)
PERSISTENCE_FILE = Path(__file__).parent.parent / "data" / "scheduled_jobs.json"
PERSISTENCE_FILE = Path(__file__).parent.parent / "scheduled_jobs.json"
class JobStatus(str, Enum):
@@ -98,7 +98,6 @@ class Scheduler:
"""Save jobs to persistence file."""
try:
data = {jid: job.to_dict() for jid, job in self._jobs.items()}
PERSISTENCE_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(PERSISTENCE_FILE, "w", encoding="utf-8") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
except Exception:

View File

@@ -57,7 +57,6 @@ class TaskRunner:
description: str,
notify_chat_id: Optional[str] = None,
user_id: Optional[str] = None,
on_complete: Optional[Callable[[BackgroundTask], Awaitable[None]]] = None,
) -> str:
"""Submit a coroutine as a background task."""
task_id = str(uuid.uuid4())[:8]
@@ -73,11 +72,11 @@
async with self._lock:
self._tasks[task_id] = task
asyncio.create_task(self._run_task(task_id, coro, on_complete))
asyncio.create_task(self._run_task(task_id, coro))
logger.info("Submitted background task %s: %s", task_id, description)
return task_id
async def _run_task(self, task_id: str, coro: Awaitable[Any], on_complete: Optional[Callable[[BackgroundTask], Awaitable[None]]] = None) -> None:
async def _run_task(self, task_id: str, coro: Awaitable[Any]) -> None:
"""Execute a task and send notification on completion."""
async with self._lock:
task = self._tasks.get(task_id)
@@ -108,12 +107,6 @@
else:
await self._send_notification(task)
if on_complete and task.status == TaskStatus.COMPLETED:
try:
await on_complete(task)
except Exception:
logger.exception("on_complete callback failed for task %s", task_id)
async def _send_notification(self, task: BackgroundTask) -> None:
"""Send Feishu notification about task completion."""
from bot.feishu import send_text

View File

@@ -184,20 +184,22 @@ async def send_file(receive_id: str, receive_id_type: str, file_path: str, file_
loop = asyncio.get_running_loop()
# Step 1: Upload file → get file_key
with open(path, "rb") as f:
file_data = f.read()
def _upload():
with open(path, "rb") as f:
req = (
CreateFileRequest.builder()
.request_body(
CreateFileRequestBody.builder()
.file_type(file_type)
.file_name(file_name)
.file(f)
.build()
)
req = (
CreateFileRequest.builder()
.request_body(
CreateFileRequestBody.builder()
.file_type(file_type)
.file_name(file_name)
.file(file_data)
.build()
)
return _client.im.v1.file.create(req)
.build()
)
return _client.im.v1.file.create(req)
upload_resp = await loop.run_in_executor(None, _upload)
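The hunk above hands the blocking SDK upload to `loop.run_in_executor`. The shape of that pattern, with the Feishu SDK call replaced by a stand-in function (nothing below is the real SDK API):

```python
import asyncio

def _blocking_upload(data: bytes) -> str:
    """Stand-in for a synchronous SDK call (e.g. the Feishu file create)."""
    return f"file_key_{len(data)}"

async def upload_bytes(data: bytes) -> str:
    # Run the blocking call in the default thread pool so the event loop
    # keeps serving other coroutines while the upload runs.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, _blocking_upload, data)
```

Reading the file into `bytes` up front, as one side of this diff does, also keeps the open file handle out of the worker thread.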

View File

@@ -184,14 +184,6 @@ def start_websocket_client(loop: asyncio.AbstractEventLoop) -> None:
backoff = 1.0
max_backoff = 60.0
# lark_oapi.ws.client captures the event loop at module import time.
# In standalone mode uvicorn already owns the main loop, so we create
# a fresh loop for this thread and redirect the lark module to use it.
thread_loop = asyncio.new_event_loop()
asyncio.set_event_loop(thread_loop)
import lark_oapi.ws.client as _lark_ws_client
_lark_ws_client.loop = thread_loop
while True:
try:
_ws_connected = False

View File

@@ -43,7 +43,6 @@ class NodeClient:
self._running = False
self._last_heartbeat = time.time()
self._reconnect_delay = 1.0
self._forward_tasks: set[asyncio.Task] = set()
async def connect(self) -> bool:
"""Connect to the router WebSocket."""
@@ -54,9 +53,9 @@
try:
self.ws = await websockets.connect(
self.config.router_url,
additional_headers=headers,
ping_interval=20,
ping_timeout=60,
extra_headers=headers,
ping_interval=30,
ping_timeout=10,
)
logger.info("Connected to router: %s", self.config.router_url)
self._reconnect_delay = 1.0
@@ -146,9 +145,17 @@
except Exception as e:
logger.error("Failed to send status: %s", e)
async def handle_message_decoded(self, msg: Any) -> None:
"""Handle an already-decoded message from the router."""
if isinstance(msg, Heartbeat):
async def handle_message(self, data: str) -> None:
"""Handle an incoming message from the router."""
try:
msg = decode(data)
except Exception as e:
logger.error("Failed to decode message: %s", e)
return
if isinstance(msg, ForwardRequest):
await self.handle_forward(msg)
elif isinstance(msg, Heartbeat):
if msg.type == "ping":
if self.ws:
try:
@@ -158,7 +165,7 @@
elif msg.type == "pong":
self._last_heartbeat = time.time()
else:
logger.debug("Received message type: %s", type(msg).__name__)
logger.debug("Received message type: %s", msg.type)
async def receive_loop(self) -> None:
"""Main receive loop for incoming messages."""
@@ -167,20 +174,7 @@
try:
async for data in self.ws:
try:
msg = decode(data)
except Exception as e:
logger.error("Failed to decode message: %s", e)
continue
if isinstance(msg, ForwardRequest):
# Dispatch as a task so pings are handled without waiting
# for the full agent run to complete.
task = asyncio.create_task(self.handle_forward(msg))
self._forward_tasks.add(task)
task.add_done_callback(self._forward_tasks.discard)
else:
await self.handle_message_decoded(msg)
await self.handle_message(data)
except websockets.ConnectionClosed as e:
logger.warning("Connection closed: %s", e)
except Exception as e:
@@ -190,14 +184,14 @@
"""Periodic heartbeat loop."""
while self._running:
await asyncio.sleep(30)
if self.ws:
if self.ws and self.ws.open:
await self.send_heartbeat()
async def status_loop(self) -> None:
"""Periodic status update loop."""
while self._running:
await asyncio.sleep(60)
if self.ws:
if self.ws and self.ws.open:
await self.send_status()
async def run(self) -> None:
@@ -249,10 +243,6 @@
async def stop(self) -> None:
"""Stop the client."""
self._running = False
for task in list(self._forward_tasks):
task.cancel()
if self._forward_tasks:
await asyncio.gather(*self._forward_tasks, return_exceptions=True)
if self.ws:
await self.ws.close()
await manager.stop()

View File

@@ -111,6 +111,4 @@ if __name__ == "__main__":
port=8000,
reload=False,
log_level="info",
ws_ping_interval=20,
ws_ping_timeout=60,
)

View File

@@ -30,8 +30,6 @@ logger = logging.getLogger(__name__)
SYSTEM_PROMPT_TEMPLATE = """You are PhoneWork, an AI assistant that helps users control Claude Code \
from their phone via Feishu (飞书).
Today's date: {today}
You manage Claude Code sessions. Each session has a conv_id and runs in a project directory.
Base working directory: {working_dir}
@@ -48,20 +46,12 @@ Your responsibilities:
4. Close session: call `close_conversation`.
5. GENERAL QUESTIONS: If the user asks a general question (not about a specific project or file), \
answer directly using your own knowledge. Do NOT create a session for simple Q&A.
6. WEB / SEARCH: Use the `web` tool when the user needs current information. \
Call it ONCE (or at most twice with a refined query). Then synthesize and reply; \
do NOT keep searching in a loop. If the first search returns results, use them.
7. BACKGROUND TASKS: When `create_conversation` or `send_to_conversation` returns a \
"Task #... started" message, the task is running in the background. \
Immediately reply to the user that the task has started and they will be notified. \
Do NOT call `task_status` in a loop waiting for it; the system sends a notification when done.
Guidelines:
- Relay Claude Code's output verbatim.
- If no active session and the user sends a task without naming a directory, ask them which project.
- For general knowledge questions (e.g., "what is a Python generator?", "explain async/await"), \
answer directly without creating a session.
- After using any tool, always produce a final text reply to the user. Never end a turn on a tool call.
- Keep your own words brief; let Claude Code's output speak.
- Reply in the same language the user uses (Chinese or English).
"""
@@ -121,8 +111,6 @@ class OrchestrationAgent:
self._passthrough: dict[str, bool] = defaultdict(lambda: False)
def _build_system_prompt(self, user_id: str) -> str:
from datetime import date
today = date.today().strftime("%Y-%m-%d")
conv_id = self._active_conv[user_id]
if conv_id:
active_line = f"ACTIVE SESSION: conv_id={conv_id!r} ← use this for all follow-up messages"
@@ -131,7 +119,6 @@
return SYSTEM_PROMPT_TEMPLATE.format(
working_dir=WORKING_DIR,
active_session_line=active_line,
today=today,
)
def get_active_conv(self, user_id: str) -> Optional[str]:
@@ -194,8 +181,6 @@
reply = ""
try:
web_calls = 0
task_status_calls = 0
for iteration in range(MAX_ITERATIONS):
logger.debug(" LLM call #%d", iteration)
ai_msg: AIMessage = await self._llm_with_tools.ainvoke(messages)
@@ -216,26 +201,6 @@
)
logger.info("%s(%s)", tool_name, args_summary)
if tool_name == "web":
web_calls += 1
if web_calls > 2:
result = "Web search limit reached. Synthesize from results already obtained."
logger.warning(" web call limit exceeded, blocking")
messages.append(
ToolMessage(content=str(result), tool_call_id=tool_id)
)
continue
if tool_name == "task_status":
task_status_calls += 1
if task_status_calls > 1:
result = "Task is still running in the background. Stop polling and tell the user they will be notified when it completes."
logger.warning(" task_status poll limit exceeded, blocking")
messages.append(
ToolMessage(content=str(result), tool_call_id=tool_id)
)
continue
tool_obj = _TOOL_MAP.get(tool_name)
if tool_obj is None:
result = f"Unknown tool: {tool_name}"

View File

@@ -553,31 +553,18 @@ class WebTool(BaseTool):
payload = {
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "metaso_web_search",
"arguments": {"q": query, "scope": scope or "webpage", "size": 5, "includeSummary": True},
},
"method": "metaso_web_search",
"params": {"query": query, "scope": scope or "webpage"},
}
resp = await client.post(base_url, json=payload, headers=headers)
data = resp.json()
if "error" in data:
return json.dumps({"error": data["error"]}, ensure_ascii=False)
content_text = data.get("result", {}).get("content", [{}])[0].get("text", "")
result_data = json.loads(content_text) if content_text else {}
webpages = result_data.get("webpages", [])[:5]
results = data.get("result", {}).get("results", [])[:5]
output = []
for r in webpages:
date = r.get("date", "")
title = r.get("title", "No title")
snippet = r.get("snippet", "")[:300]
link = r.get("link", "")
output.append(f"[{date}] **{title}**\n{snippet}\n{link}")
total = result_data.get("total", 0)
return json.dumps({
"total": total,
"results": "\n\n".join(output)[:max_chars],
}, ensure_ascii=False)
for r in results:
output.append(f"**{r.get('title', 'No title')}**\n{r.get('snippet', '')}\n{r.get('url', '')}")
return json.dumps({"results": "\n\n".join(output)[:max_chars]}, ensure_ascii=False)
elif action == "fetch":
if not url:
@@ -585,18 +572,15 @@
payload = {
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "metaso_web_reader",
"arguments": {"url": url, "format": "markdown"},
},
"method": "metaso_web_reader",
"params": {"url": url, "format": "markdown"},
}
resp = await client.post(base_url, json=payload, headers=headers)
data = resp.json()
if "error" in data:
return json.dumps({"error": data["error"]}, ensure_ascii=False)
content_text = data.get("result", {}).get("content", [{}])[0].get("text", "")
return json.dumps({"content": content_text[:max_chars]}, ensure_ascii=False)
content = data.get("result", {}).get("content", "")
return json.dumps({"content": content[:max_chars]}, ensure_ascii=False)
elif action == "ask":
if not query:
@@ -604,18 +588,15 @@
payload = {
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "metaso_chat",
"arguments": {"message": query},
},
"method": "metaso_chat",
"params": {"query": query},
}
resp = await client.post(base_url, json=payload, headers=headers)
data = resp.json()
if "error" in data:
return json.dumps({"error": data["error"]}, ensure_ascii=False)
content_text = data.get("result", {}).get("content", [{}])[0].get("text", "")
return json.dumps({"answer": content_text[:max_chars]}, ensure_ascii=False)
answer = data.get("result", {}).get("answer", "")
return json.dumps({"answer": answer[:max_chars]}, ensure_ascii=False)
else:
return json.dumps({"error": f"Unknown action: {action}"}, ensure_ascii=False)
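One side of this hunk wraps every call in the MCP-style `tools/call` envelope, with the tool name in `params.name` and its arguments in `params.arguments`; the other side uses the tool name directly as the JSON-RPC method. A small helper that builds the `tools/call` shape (a sketch of the envelope convention, not metaso's documented API):

```python
import json

def mcp_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body in the MCP `tools/call` shape."""
    return json.dumps(
        {
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "tools/call",
            "params": {"name": tool_name, "arguments": arguments},
        },
        ensure_ascii=False,
    )
```

Centralizing the envelope would keep the three branches (`search`, `fetch`, `ask`) from drifting in shape, whichever convention the endpoint expects.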

View File

@@ -53,11 +53,11 @@ async def ws_node_endpoint(websocket: WebSocket) -> None:
"""Send periodic pings to the host client."""
try:
while True:
await asyncio.sleep(30)
try:
await websocket.send_text(encode(Heartbeat(type="ping")))
except Exception:
break
await asyncio.sleep(30)
except asyncio.CancelledError:
pass

View File

@@ -41,8 +41,6 @@ async def run_standalone() -> None:
host="0.0.0.0",
port=8000,
log_level="info",
ws_ping_interval=20,
ws_ping_timeout=60,
)
server = uvicorn.Server(config_obj)