feat: professionalize control plane and standalone delivery

This commit is contained in:
theshy
2026-04-07 10:46:30 +08:00
parent d0cf1fd0df
commit 862db502b0
100 changed files with 8313 additions and 1483 deletions

1
.gitignore vendored
View File

@ -12,6 +12,7 @@ systemd/rendered/
runtime/cookies.json
runtime/upload_config.json
runtime/biliup
runtime/logs/
frontend/node_modules/
frontend/dist/

View File

@ -13,6 +13,7 @@
- `worker` / `api` run scripts
- `systemd` install scripts
- Web console
- In-project log persistence
- Main pipeline:
- `stage`
- `ingest`
@ -30,10 +31,9 @@
- `ffprobe`
- `codex`
- `biliup`
- The parent project must still provide:
- `../cookies.json`
- `../upload_config.json`
- runtime path configuration in `../.env`
- `biliup-next/runtime/cookies.json`
- `biliup-next/runtime/upload_config.json`
- `biliup-next/runtime/biliup`
## Install
@ -42,7 +42,7 @@ cd /home/theshy/biliup/biliup-next
bash setup.sh
```
To copy runtime assets from the parent project into the local project:
To copy runtime assets already present on this machine into the local project:
```bash
cd /home/theshy/biliup/biliup-next
@ -75,6 +75,16 @@ bash run-worker.sh
bash run-api.sh
```
By default this writes to:
- `runtime/logs/worker.log`
- `runtime/logs/api.log`
Size-based rotation by default:
- `20 MiB` per file
- `5` rotated history files kept
Via systemd:
```bash
@ -99,6 +109,5 @@ bash install-systemd.sh
## Known Limits
- Still reuses `cookies.json` / `upload_config.json` / `biliup` from the parent project
- The current providers still go through a legacy adapter
- Console auth is currently a single token: usable locally, but not a full permission system
- `sync-legacy-assets` remains a one-shot import tool for copying existing assets into `runtime/`

View File

@ -1,12 +1,12 @@
# biliup-next
`biliup-next` is a parallel rewrite of the current project
`biliup-next` is the new pipeline implementation that runs standalone inside this repository
Goals:
- Do not break the legacy project
- Finish the control plane and core model first
- Then migrate the transcribe, song-detect, split, upload, comment, and collection modules step by step
- Replace the old watcher flow with a single worker + state machine
- Provide a standalone control plane, config system, and isolated workspace
- Run the complete main pipeline independently inside `biliup-next`
## Current Scope
@ -43,24 +43,30 @@ bash setup.sh
- Creates `biliup-next/.venv`
- `pip install -e .`
- Generates a standalone `settings.json` when missing
- Initializes the isolated workspace
- Tries to sync `cookies.json` / `upload_config.json` / `biliup` from the parent project into `biliup-next/runtime/`
- Generates runtime template files when missing
- Validates the local runtime assets under `runtime/`
- Runs `doctor` once
- Optionally installs the `systemd` services
Cold-start steps for a new machine:
- `docs/cold-start-checklist.md`
Open in a browser:
```text
http://127.0.0.1:8787/
```
Future entry point for the migrated React console:
Preserved console entry point:
```text
http://127.0.0.1:8787/ui/
http://127.0.0.1:8787/classic
```
When `frontend/dist/` exists, the Python API serves this frontend automatically; the classic console at `/` is still kept
When `frontend/dist/` exists, the Python API serves the React console as the default home page `/`; the classic console is kept at `/classic`
The console currently supports:
@ -120,6 +126,7 @@ cd /home/theshy/biliup/biliup-next
```bash
cd /home/theshy/biliup/biliup-next
bash smoke-test.sh
bash cold-start-smoke.sh
```
## Runtime
@ -131,14 +138,25 @@ bash smoke-test.sh
- `session/`
- `biliup_next.db`
External dependencies are still reused from the legacy project:
Runtime assets now all live under `biliup-next/runtime/` by default:
- `../cookies.json`
- `../upload_config.json`
- `../biliup`
- `CODEX_CMD` / `FFMPEG_BIN` / `FFPROBE_BIN` from `../.env`
- `runtime/cookies.json`
- `runtime/upload_config.json`
- `runtime/biliup`
- `runtime/logs/api.log`
- `runtime/logs/worker.log`
If you want to decouple further from the parent project, run:
`run-api.sh` and `run-worker.sh` now append stdout/stderr to the matching log files while keeping terminal output; the console `Logs` page reads these log files directly.
Default log rotation policy:
- `20 MiB` cap per file
- keep the latest `5` history files
- overridable via environment variables:
- `BILIUP_NEXT_LOG_MAX_BYTES`
- `BILIUP_NEXT_LOG_BACKUPS`
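A minimal sketch of this rotation policy using the stdlib `RotatingFileHandler`; the function name and log format here are illustrative, and the real wiring inside `biliup-next` may differ:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

def build_rotating_logger(log_path: str) -> logging.Logger:
    """Size-based rotation matching the documented defaults: 20 MiB per file, 5 backups."""
    max_bytes = int(os.environ.get("BILIUP_NEXT_LOG_MAX_BYTES", 20 * 1024 * 1024))
    backups = int(os.environ.get("BILIUP_NEXT_LOG_BACKUPS", 5))
    handler = RotatingFileHandler(log_path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("biliup_next")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```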
To copy assets already present on this machine into the local runtime, run:
```bash
cd /home/theshy/biliup/biliup-next
@ -171,6 +189,28 @@ cd /home/theshy/biliup/biliup-next
Cleanup runs per configuration only after a task enters `collection_synced`.
## Full Video BV Input
The full-video `BV` currently supports 3 sources:
- `full_video_bvid` in `stage/*.meta.json`
- manual binding via the frontend / API
- webhook: `POST /webhooks/full-video-uploaded`
Recommended webhook payload:
```json
{
"session_key": "王海颖:20260402T2203",
"source_title": "王海颖唱歌录播 04月02日 22时03分",
"streamer": "王海颖",
"room_id": "581192190066",
"full_video_bvid": "BV1uH9wBsELC"
}
```
If the webhook arrives before the segment ingest, `biliup-next` persists it first; tasks with the same `session_key` or `source_title` that arrive later automatically inherit that `BV`.
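For illustration, the payload above could be posted with nothing but the stdlib; the helper name is hypothetical and the host/port are assumed to be the default API address:

```python
import json
import urllib.request

def build_webhook_request(base_url: str, payload: dict) -> urllib.request.Request:
    """Build the POST for /webhooks/full-video-uploaded; the caller decides when to send it."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/webhooks/full-video-uploaded",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request(
    "http://127.0.0.1:8787",
    {"session_key": "王海颖:20260402T2203", "full_video_bvid": "BV1uH9wBsELC"},
)
# urllib.request.urlopen(req) would send it; the API contract expects HTTP 202
```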
## Security
The console supports optional token protection:

112
cold-start-smoke.sh Normal file
View File

@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LOCAL_DEFAULT_PYTHON="$PROJECT_DIR/.venv/bin/python"
LEGACY_DEFAULT_PYTHON="$PROJECT_DIR/../.venv/bin/python"
PYTHON_BIN="${BILIUP_NEXT_PYTHON:-$LOCAL_DEFAULT_PYTHON}"
HOST="${BILIUP_NEXT_SMOKE_HOST:-127.0.0.1}"
PORT="${BILIUP_NEXT_SMOKE_PORT:-18787}"
if [[ ! -x "$PYTHON_BIN" ]]; then
if [[ -x "$LEGACY_DEFAULT_PYTHON" ]]; then
PYTHON_BIN="$LEGACY_DEFAULT_PYTHON"
else
PYTHON_BIN="${BILIUP_NEXT_PYTHON:-python3}"
fi
fi
if [[ ! -x "$PYTHON_BIN" ]]; then
echo "python not found: $PYTHON_BIN" >&2
exit 1
fi
API_PID=""
cleanup() {
  if [[ -n "${API_PID:-}" ]]; then
    kill "$API_PID" >/dev/null 2>&1 || true
    wait "$API_PID" >/dev/null 2>&1 || true
  fi
}
trap cleanup EXIT
cd "$PROJECT_DIR"
echo "==> check generated files"
for REQUIRED_FILE in \
  "$PROJECT_DIR/config/settings.json" \
  "$PROJECT_DIR/config/settings.staged.json" \
  "$PROJECT_DIR/runtime/cookies.json" \
  "$PROJECT_DIR/runtime/upload_config.json"
do
  if [[ ! -f "$REQUIRED_FILE" ]]; then
    echo "missing file: $REQUIRED_FILE" >&2
    exit 1
  fi
done
echo "==> doctor"
PYTHONPATH="$PROJECT_DIR/src" "$PYTHON_BIN" -m biliup_next.app.cli doctor >/dev/null
echo "==> init-workspace"
PYTHONPATH="$PROJECT_DIR/src" "$PYTHON_BIN" -m biliup_next.app.cli init-workspace >/dev/null
echo "==> start api"
PYTHONPATH="$PROJECT_DIR/src" "$PYTHON_BIN" -m biliup_next.app.cli serve --host "$HOST" --port "$PORT" >/tmp/biliup-next-cold-start-smoke.log 2>&1 &
API_PID="$!"
echo "==> wait for health"
for _ in $(seq 1 40); do
  if "$PYTHON_BIN" - "$HOST" "$PORT" <<'PY'
import json
import sys
import urllib.request

host = sys.argv[1]
port = sys.argv[2]
try:
    with urllib.request.urlopen(f"http://{host}:{port}/health", timeout=0.5) as resp:
        payload = json.load(resp)
    if payload.get("ok") is True:
        raise SystemExit(0)
except Exception:
    pass
raise SystemExit(1)
PY
  then
    break
  fi
  sleep 0.5
done
echo "==> api settings schema"
"$PYTHON_BIN" - "$HOST" "$PORT" <<'PY'
import json
import sys
import urllib.request
host = sys.argv[1]
port = sys.argv[2]
with urllib.request.urlopen(f"http://{host}:{port}/settings/schema", timeout=2) as resp:
payload = json.load(resp)
assert isinstance(payload, dict)
assert payload.get("title")
PY
echo "==> api tasks"
"$PYTHON_BIN" - "$HOST" "$PORT" <<'PY'
import json
import sys
import urllib.request
host = sys.argv[1]
port = sys.argv[2]
with urllib.request.urlopen(f"http://{host}:{port}/tasks?limit=5", timeout=2) as resp:
payload = json.load(resp)
assert isinstance(payload, dict)
assert "items" in payload
PY
echo "==> cold start smoke ok"

View File

@ -1,15 +1,15 @@
{
"runtime": {
"database_path": "/home/theshy/biliup/biliup-next/data/workspace/biliup_next.db",
"database_path": "data/workspace/biliup_next.db",
"control_token": "",
"log_level": "INFO"
},
"paths": {
"stage_dir": "/home/theshy/biliup/biliup-next/data/workspace/stage",
"backup_dir": "/home/theshy/biliup/biliup-next/data/workspace/backup",
"session_dir": "/home/theshy/biliup/biliup-next/data/workspace/session",
"cookies_file": "/home/theshy/biliup/biliup-next/runtime/cookies.json",
"upload_config_file": "/home/theshy/biliup/biliup-next/runtime/upload_config.json"
"stage_dir": "data/workspace/stage",
"backup_dir": "data/workspace/backup",
"session_dir": "data/workspace/session",
"cookies_file": "runtime/cookies.json",
"upload_config_file": "runtime/upload_config.json"
},
"scheduler": {
"candidate_scan_limit": 500,
@ -37,18 +37,21 @@
".mkv",
".mov"
],
"stage_min_free_space_mb": 2048,
"stability_wait_seconds": 30
"stage_min_free_space_mb": 1024,
"stability_wait_seconds": 30,
"session_gap_minutes": 60,
"meta_sidecar_enabled": true,
"meta_sidecar_suffix": ".meta.json"
},
"transcribe": {
"provider": "groq",
"groq_api_key": "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",
"groq_api_key": "",
"ffmpeg_bin": "ffmpeg",
"max_file_size_mb": 23
},
"song_detect": {
"provider": "codex",
"codex_cmd": "/home/theshy/.nvm/versions/node/v22.13.0/bin/codex",
"codex_cmd": "codex",
"poll_interval_seconds": 2
},
"split": {
@ -59,8 +62,8 @@
},
"publish": {
"provider": "biliup_cli",
"biliup_path": "/home/theshy/biliup/biliup-next/runtime/biliup",
"cookie_file": "/home/theshy/biliup/biliup-next/runtime/cookies.json",
"biliup_path": "runtime/biliup",
"cookie_file": "runtime/cookies.json",
"retry_count": 5,
"retry_schedule_minutes": [
15,
@ -83,14 +86,14 @@
"collection": {
"provider": "bilibili_collection",
"enabled": true,
"season_id_a": 7196643,
"season_id_b": 7196624,
"season_id_a": 0,
"season_id_b": 0,
"allow_fuzzy_full_video_match": false,
"append_collection_a_new_to_end": true,
"append_collection_b_new_to_end": true
},
"cleanup": {
"delete_source_video_after_collection_synced": true,
"delete_split_videos_after_collection_synced": true
"delete_source_video_after_collection_synced": false,
"delete_split_videos_after_collection_synced": false
}
}

View File

@ -46,35 +46,35 @@
"paths": {
"stage_dir": {
"type": "string",
"default": "../stage",
"default": "data/workspace/stage",
"title": "Stage Directory",
"ui_order": 10,
"ui_widget": "path"
},
"backup_dir": {
"type": "string",
"default": "../backup",
"default": "data/workspace/backup",
"title": "Backup Directory",
"ui_order": 20,
"ui_widget": "path"
},
"session_dir": {
"type": "string",
"default": "../session",
"default": "data/workspace/session",
"title": "Session Directory",
"ui_order": 30,
"ui_widget": "path"
},
"cookies_file": {
"type": "string",
"default": "../cookies.json",
"default": "runtime/cookies.json",
"title": "Cookies File",
"ui_order": 40,
"ui_widget": "path"
},
"upload_config_file": {
"type": "string",
"default": "../upload_config.json",
"default": "runtime/upload_config.json",
"title": "Upload Config File",
"ui_order": 50,
"ui_widget": "path"
@ -170,6 +170,30 @@
"ui_widget": "duration_seconds",
"description": "扫描 stage 时,文件最后修改后至少静默这么久才会开始处理。用于避免手动 copy 半截文件被提前接走。",
"minimum": 0
},
"session_gap_minutes": {
"type": "integer",
"default": 60,
"title": "Session Gap Minutes",
"ui_order": 70,
"ui_featured": true,
"ui_widget": "duration_minutes",
"description": "当没有显式 session_key 时,同一主播前后片段的最大归并间隔。系统会用上一段结束时间和下一段开始时间做连续性判断。",
"minimum": 0
},
"meta_sidecar_enabled": {
"type": "boolean",
"default": true,
"title": "Meta Sidecar Enabled",
"ui_order": 80,
"description": "是否读取 stage 中与视频同名的 sidecar 元数据文件,例如 .meta.json。"
},
"meta_sidecar_suffix": {
"type": "string",
"default": ".meta.json",
"title": "Meta Sidecar Suffix",
"ui_order": 90,
"description": "stage sidecar 元数据文件后缀。默认会读取 video.mp4 对应的 video.meta.json。"
}
},
"transcribe": {
@ -270,14 +294,14 @@
},
"biliup_path": {
"type": "string",
"default": "../biliup",
"default": "runtime/biliup",
"title": "Biliup Path",
"ui_order": 20,
"ui_widget": "path"
},
"cookie_file": {
"type": "string",
"default": "../cookies.json",
"default": "runtime/cookies.json",
"title": "Cookie File",
"ui_order": 40,
"ui_widget": "path"

View File

@ -15,7 +15,12 @@
"provider": "local_file",
"min_duration_seconds": 900,
"ffprobe_bin": "ffprobe",
"allowed_extensions": [".mp4", ".flv", ".mkv", ".mov"]
"allowed_extensions": [".mp4", ".flv", ".mkv", ".mov"],
"stage_min_free_space_mb": 2048,
"stability_wait_seconds": 30,
"session_gap_minutes": 60,
"meta_sidecar_enabled": true,
"meta_sidecar_suffix": ".meta.json"
},
"transcribe": {
"provider": "groq",

View File

@ -129,6 +129,12 @@ paths:
responses:
"201":
description: task created
/webhooks/full-video-uploaded:
post:
summary: receive the full-video BV webhook fired after the original video upload succeeds
responses:
"202":
description: accepted
/tasks/{taskId}:
get:
summary: query task detail

View File

@ -166,14 +166,18 @@ biliup-next/
```text
created
-> ingested
-> running
-> transcribed
-> running
-> songs_detected
-> running
-> split_done
-> running
-> published
-> running
-> commented
-> running
-> collection_synced
-> completed
```
Failure statuses do not terminate the task; it transitions to:
@ -194,5 +198,6 @@ created
- External dependencies must not be invoked via shell or HTTP directly from business modules
- Configuration is read exclusively through `core.config`
- Data shown in the admin UI comes from the database first and is never inferred from logs
- Workspace flags only express delivery side effects and artifact markers; they are not the source of truth for the main task state
- The config system must be schema-first
- The plugin system must be manifest-first

View File

@ -0,0 +1,79 @@
# biliup-next Cold Start Checklist
Goal: on a fresh machine with no legacy environment, bring `biliup-next` to a state that is configurable, passes `doctor`, and serves the control plane.
## 1. Base Environment
- Install `python3`
- Install `ffmpeg` and `ffprobe`
- Install `codex` if full song detection is needed
- Provide the `biliup` executable if the full upload pipeline is needed
## 2. Get The Project
```bash
git clone <your-repo> biliup
cd biliup/biliup-next
```
## 3. One-Shot Initialization
```bash
bash setup.sh
```
After initialization the project generates:
- `config/settings.json`
- `config/settings.staged.json`
- `runtime/cookies.json`
- `runtime/upload_config.json`
- `data/workspace/*`
Notes:
- These files are templates or placeholders by default
- At this point `doctor` should already run, but that does not mean the upload pipeline is usable
## 4. Fill In Real Runtime Assets
- Edit `runtime/cookies.json`
- Edit `runtime/upload_config.json`
- Put `biliup` at `runtime/biliup`, or point `settings.json` at a system path
- Fill in `transcribe.groq_api_key`
- Adjust `song_detect.codex_cmd` to match the machine
- Fill in `collection.season_id_a` / `collection.season_id_b` as needed
## 5. Acceptance
```bash
./.venv/bin/biliup-next doctor
./.venv/bin/biliup-next init-workspace
./.venv/bin/biliup-next serve --host 127.0.0.1 --port 8787
bash cold-start-smoke.sh
```
Open in a browser:
```text
http://127.0.0.1:8787/
```
Acceptance criteria:
- `doctor` output is readable, and the only missing items are external dependencies you have not filled in yet
- The control plane opens
- The `Settings` page saves correctly
- Files can be imported into or uploaded to the `stage` directory
- `cold-start-smoke.sh` passes end to end
## 6. Pre-Pipeline Check
Before starting real processing, confirm these are actually usable:
- `runtime/cookies.json`
- `runtime/upload_config.json`
- `publish.biliup_path`
- `song_detect.codex_cmd`
- `transcribe.groq_api_key`
- `collection.season_id_a` / `collection.season_id_b`

View File

@ -169,6 +169,11 @@ manifest is responsible for describing:
The three have separate responsibilities and do not substitute for one another.
Additional notes:
- Workspace flags may remain to indicate that certain external actions have happened, e.g. completed comment, collection, or upload side effects.
- But these flags must not be promoted into the main task status itself.
## Principle 9: Replaceability With Stable Core
What can be replaced are the providers; what must not drift is the core model.

View File

@ -0,0 +1,335 @@
# Frontend Implementation Checklist
## Goal
Turn the backend capabilities `biliup-next` already has into a task list the frontend can develop against directly.
This checklist targets frontend development. It does not discuss backend architecture and only answers 3 questions:
1. Which pages are worth building first
2. Which components each page breaks into
3. Which APIs and fields each component depends on
## Priority
Suggested order:
1. Task list page status upgrade
2. Task detail page
3. Manual full-video BV binding
4. Session merge / rebind
5. Settings page: surface common settings
## Milestone 1: Task List Status Upgrade
Goal:
- Users can tell at a glance whether a task is running, waiting, failed, or done
- Users never need to understand internal state-machine fields
### Page Tasks
- Replace internal statuses in the task list with user-facing statuses
- Add a "current step" column
- Add a "next retry time" column
- Add a "split BV / full-video BV" column
- Add a "comment / collection / cleanup" status column
### Component Tasks
- `TaskStatusBadge`
- Input: `task.status`, `task.retry_state`, `steps`
- Output: `已接收 / 上传中 / 等待B站可见 / 需人工处理 / 已完成`
- `TaskStepBadge`
- Input: `steps`
- Output: the current step label
- `TaskDeliverySummary`
- Input: `delivery_state`, `session_context`
- Output:
- split BV
- full-video BV
- comment status
- collection status
- cleanup status
### API Dependencies
- `GET /tasks`
### Suggested Backend Fields
- Already usable as-is:
- `status`
- `retry_state`
- `delivery_state`
- `session_context`
- Derived locally by the frontend for now:
- `display_status`
- `current_step`
## Milestone 2: Task Detail Page
Goal:
- Users can tell what happened to a task without reading logs
- Users can perform the most common repair actions from the single-task page
### Page Tasks
- Build the task detail Hero section
- Build the step timeline
- Build the delivery result card
- Build the session info card
- Build the artifact list card
- Build the action history card
- Build the error explanation card
### Component Tasks
- `TaskHero`
- title
- user-facing status
- current step
- next retry time
- `TaskTimeline`
- all steps from ingest -> collection_b
- `TaskDeliveryPanel`
- split `BV`
- full-video `BV`
- split link
- full-video link
- collection status
- `TaskSessionPanel`
- `session_key`
- `streamer`
- `room_id`
- `segment_started_at`
- `segment_duration_seconds`
- `context_source`
- `TaskArtifactsPanel`
- source_video
- subtitle_srt
- songs.json
- songs.txt
- clip_video
- `TaskActionsPanel`
- run
- retry
- reset
- bind full-video BV
### API Dependencies
- `GET /tasks/<id>`
- `GET /tasks/<id>/steps`
- `GET /tasks/<id>/artifacts`
- `GET /tasks/<id>/history`
- `GET /tasks/<id>/timeline`
- `GET /tasks/<id>/context`
### Action API Dependencies
- `POST /tasks/<id>/actions/run`
- `POST /tasks/<id>/actions/retry-step`
- `POST /tasks/<id>/actions/reset-to-step`
## Milestone 3: Manual Full-Video BV Binding
Goal:
- Users fill in `full_video_bvid` directly from the frontend
- No more hand-writing `full_video_bvid.txt`
### Page Tasks
- Add a "bind full-video BV" form on the task detail page
- Show the currently bound BV
- Show the binding source:
- fallback
- task_context
- meta_sidecar
- webhook
### Component Tasks
- `BindFullVideoForm`
- input field: `BV...`
- submit button
- success feedback
- error feedback
### API Dependencies
- `POST /tasks/<id>/bind-full-video`
### Interaction Requirements
- Validate `BV[0-9A-Za-z]+` locally before submitting
- On success, refresh:
- `GET /tasks/<id>`
- `GET /tasks/<id>/context`
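The pre-submit check is the documented `BV[0-9A-Za-z]+` rule; sketched here in Python for illustration (the frontend would implement the same regex in TypeScript):

```python
import re

# Documented rule: "BV" followed by one or more alphanumeric characters.
BV_PATTERN = re.compile(r"BV[0-9A-Za-z]+")

def is_valid_bvid(value: str) -> bool:
    """Local validation mirroring the BV[0-9A-Za-z]+ rule; trims surrounding whitespace."""
    return bool(BV_PATTERN.fullmatch(value.strip()))
```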
## Milestone 4: Session Merge / Rebind
Goal:
- Users can handle "multiple disconnect segments from the same stream"
- Users can rebind the full-video BV for an entire session at once
### Page Tasks
- Show the current task's session on the task detail page
- Add a "view tasks in the same session" entry
- Add a "merge into existing session" dialog
- Add a "rebind the whole session's full-video BV" form
### Component Tasks
- `SessionSummaryCard`
- `session_key`
- task count
- current `full_video_bvid`
- `SessionTaskList`
- lists all tasks in the session
- `MergeSessionDialog`
- target `session_key` input
- task selection
- `RebindSessionForm`
- new full-video `BV` input
### API Dependencies
- `GET /sessions/<session_key>`
- `POST /sessions/<session_key>/merge`
- `POST /sessions/<session_key>/rebind`
### Interaction Requirements
- After a successful merge, refresh:
- the current task detail
- the session detail
- the task list
- If the target session already has a `full_video_bvid`:
- warn that the merge will inherit that full-video BV
## Milestone 5: Surface Common Settings
Goal:
- Users can tune common behavior without editing JSON directly
### Page Tasks
- Highlight common ingest/session settings on the settings page
- Highlight the comment retry settings
- Highlight the cleanup settings
### Settings To Expose First
- `ingest.session_gap_minutes`
- `ingest.meta_sidecar_enabled`
- `ingest.meta_sidecar_suffix`
- `comment.max_retries`
- `comment.base_delay_seconds`
- `cleanup.delete_source_video_after_collection_synced`
- `cleanup.delete_split_videos_after_collection_synced`
### API Dependencies
- `GET /settings`
- `GET /settings/schema`
- `PUT /settings`
## Common UX Rules
### Status Copy
- `failed_retryable` is never displayed as "failed"
- Prefer showing:
- `等待自动重试`
- `等待B站可见`
- `正在处理中`
- `需人工处理`
### Error Messages
Every error message has a uniform 2-line structure:
- cause
- suggested action
Example:
- Cause: the video was just uploaded and is not yet visible on Bilibili
- Suggestion: the system retries automatically; no manual action is needed
### Action Feedback
Every write action needs:
- a loading state
- a success toast
- an error toast
### Refresh Strategy
Detail data must refresh automatically after these actions succeed:
- `retry-step`
- `reset-to-step`
- `bind-full-video`
- `session merge`
- `session rebind`
## Suggested Frontend Types
The frontend should define these shared types:
```ts
type TaskDisplayStatus =
| "accepted"
| "processing"
| "waiting_retry"
| "waiting_visibility"
| "manual_action"
| "done";
type TaskSessionContext = {
task_id: string;
session_key: string | null;
streamer: string | null;
room_id: string | null;
source_title: string | null;
segment_started_at: string | null;
segment_duration_seconds: number | null;
full_video_bvid: string | null;
split_bvid: string | null;
context_source: string;
video_links: {
split_video_url: string | null;
full_video_url: string | null;
};
};
```
## Suggested Build Order Inside Frontend Repo
Suggested order for splitting PRs:
1. Status-mapping utility functions
2. Task list page copy upgrade
3. Task detail page Session/Delivery panels
4. Bind full-video BV form
5. Session merge / rebind dialogs
6. Settings page common-config highlighting
## Definition Of Done
Suggested completion criteria for this round:
- Users can understand every task's current status from the task list page
- Users can see the split / full-video BV and links on the task detail page
- Users can bind the full-video BV manually
- Users can merge multiple tasks into one session
- Users can rebind the full-video BV for a whole session
- Users no longer need to ssh into the machine to edit txt files

View File

@ -0,0 +1,383 @@
# Frontend Product Integration
## Goal
From the user's perspective, wrap the current `biliup-next` task state machine into an operable, understandable control plane.
This document targets frontend-backend integration. The goal is not to describe internals, but to make explicit:
- which pages the frontend should have
- which fields each page needs
- which APIs the backend already provides
- which fields/APIs still need to be added
## User Goals
Users do not care about database states; they care about these 6 things:
1. Was the video received
2. Which step is it stuck on
3. Is this an automatic wait or does it need manual handling
4. What are the split BV and full-video BV after upload
5. Are the comment and collections done
6. Where to click to recover after a failure
The frontend should therefore never expose internal statuses like `created/transcribed/failed_retryable` directly; it should provide a derived, user-understandable display layer.
## Information Architecture
The frontend should settle on 4 top-level pages:
1. Overview
2. Task list
3. Task detail
4. Settings
Optional extension pages:
5. Logs
6. Webhook / Sidecar debugging
## Page Spec
### 1. Overview
Goal: within 10 seconds the user knows whether the system is healthy and whether the queue is stuck.
Core modules:
- task summary cards
- running
- waiting for automatic retry
- needs manual handling
- done
- last 10 tasks
- title
- user-facing status
- current step
- next retry time
- runtime summary
- API service status
- worker service status
- file count in the stage directory
- last scheduling result
- risk warnings
- missing cookies
- low disk space
- Groq/Codex/Biliup unavailable
Reusable existing APIs:
- `GET /health`
- `GET /doctor`
- `GET /tasks?limit=100`
- `GET /runtime/services`
- `GET /scheduler`
### 2. Task List
Goal: review tasks in bulk and quickly locate failed or waiting ones.
Suggested table columns:
- task title
- user-facing status
- current step
- completion progress
- next retry time
- split BV
- full-video BV
- comment status
- collection status
- cleanup status
- last updated time
Suggested filters:
- all
- running
- waiting for automatic retry
- needs manual handling
- done
- only tasks with incomplete comments
- only tasks with incomplete collections
- only tasks with uncleaned files
Reusable existing APIs:
- `GET /tasks`
Suggested new derived fields:
- `display_status`
- `current_step`
- `progress_percent`
- `split_bvid`
- `full_video_bvid`
- `session_key`
- `session_binding_state`
### 3. Task Detail
Goal: let users handle a single task without reading logs.
Suggested layout:
- Hero section
- title
- user-facing status
- current step
- next retry time
- primary action buttons
- step timeline
- ingest
- transcribe
- song_detect
- split
- publish
- comment
- collection_a
- collection_b
- delivery results
- split BV
- full-video BV
- split link
- full-video link
- collection A / B links
- session info
- session_key
- streamer
- room_id
- segment_started_at
- segment_duration_seconds
- whether provided by a sidecar
- whether auto-merged by time continuity
- files and artifacts
- source_video
- subtitle_srt
- songs.json
- songs.txt
- clip_video
- action history
- run
- retry-step
- reset-to-step
- errors and suggestions
- error code
- error summary
- suggested system action
Reusable existing APIs:
- `GET /tasks/<id>`
- `GET /tasks/<id>/steps`
- `GET /tasks/<id>/artifacts`
- `GET /tasks/<id>/history`
- `GET /tasks/<id>/timeline`
- `POST /tasks/<id>/actions/run`
- `POST /tasks/<id>/actions/retry-step`
- `POST /tasks/<id>/actions/reset-to-step`
Suggested new APIs:
- `GET /tasks/<id>/context`
### 4. Settings
Goal: turn common configuration into understandable, searchable, editable product settings instead of raw JSON.
User-level settings to show first:
- `ingest.session_gap_minutes`
- `ingest.meta_sidecar_enabled`
- `ingest.meta_sidecar_suffix`
- `comment.max_retries`
- `comment.base_delay_seconds`
- `cleanup.delete_source_video_after_collection_synced`
- `cleanup.delete_split_videos_after_collection_synced`
- `collection.season_id_a`
- `collection.season_id_b`
Reusable existing APIs:
- `GET /settings`
- `GET /settings/schema`
- `PUT /settings`
## User-Facing Status Mapping
The frontend must provide a user-facing status layer and never display internal statuses directly.
Suggested mapping:
- `created` -> `已接收`
- `transcribed` -> `已转录`
- `songs_detected` -> `已识歌`
- `split_done` -> `已切片`
- `published` -> `已上传`
- `commented` -> `评论完成`
- `collection_synced` -> `已完成`
- `failed_retryable` + `step=comment` -> `等待B站可见`
- `failed_retryable` otherwise -> `等待自动重试`
- `failed_manual` -> `需人工处理`
- any step `running` -> `<步骤名>处理中`
Suggested step-name display:
- `ingest` -> `接收视频`
- `transcribe` -> `转录字幕`
- `song_detect` -> `识别歌曲`
- `split` -> `切分分P`
- `publish` -> `上传分P`
- `comment` -> `发布评论`
- `collection_a` -> `加入完整版合集`
- `collection_b` -> `加入分P合集`
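The two mapping tables above can be sketched as one small derivation function (Python for illustration; the frontend would implement the same logic in TypeScript):

```python
STEP_LABELS = {
    "ingest": "接收视频",
    "transcribe": "转录字幕",
    "song_detect": "识别歌曲",
    "split": "切分分P",
    "publish": "上传分P",
    "comment": "发布评论",
    "collection_a": "加入完整版合集",
    "collection_b": "加入分P合集",
}

STATUS_LABELS = {
    "created": "已接收",
    "transcribed": "已转录",
    "songs_detected": "已识歌",
    "split_done": "已切片",
    "published": "已上传",
    "commented": "评论完成",
    "collection_synced": "已完成",
}

def display_status(status, step=None):
    """Derive the user-facing status from the internal task status per the mapping above."""
    if status == "running":
        return f"{STEP_LABELS.get(step, step or '')}处理中"
    if status == "failed_retryable":
        return "等待B站可见" if step == "comment" else "等待自动重试"
    if status == "failed_manual":
        return "需人工处理"
    return STATUS_LABELS.get(status, status)
```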
## API Integration
### Existing APIs That Frontend Should Reuse
- `GET /tasks`
- `GET /tasks/<id>`
- `GET /tasks/<id>/steps`
- `GET /tasks/<id>/artifacts`
- `GET /tasks/<id>/history`
- `GET /tasks/<id>/timeline`
- `POST /tasks/<id>/actions/run`
- `POST /tasks/<id>/actions/retry-step`
- `POST /tasks/<id>/actions/reset-to-step`
- `GET /settings`
- `GET /settings/schema`
- `PUT /settings`
- `GET /runtime/services`
- `POST /runtime/services/<service>/<action>`
- `POST /worker/run-once`
### Recommended New APIs
#### `GET /tasks/<id>/context`
Purpose: provide context for the task detail page and the session-merge UI.
Suggested response:
```json
{
"task_id": "xxx",
"session_key": "王海颖:20260402T2203",
"streamer": "王海颖",
"room_id": "581192190066",
"source_title": "王海颖唱歌录播 04月02日 22时03分",
"segment_started_at": "2026-04-02T22:03:00+08:00",
"segment_duration_seconds": 4076.443,
"full_video_bvid": "BV1uH9wBsELC",
"binding_source": "meta_sidecar"
}
```
#### `POST /tasks/<id>/bind-full-video`
Purpose: let users manually bind the full-video BV from the frontend.
Request:
```json
{
"full_video_bvid": "BV1uH9wBsELC"
}
```
#### `POST /sessions/<session_key>/merge`
Purpose: manually merge multiple tasks into one session.
Request:
```json
{
"task_ids": ["why-2205", "why-2306"]
}
```
#### `POST /sessions/<session_key>/rebind`
Purpose: change the session-level full-video BV.
Request:
```json
{
"full_video_bvid": "BV1uH9wBsELC"
}
```
## Derived Fields For UI
Ideally the backend returns these derived fields directly, so the frontend does not assemble status itself:
- `display_status`
- `display_step`
- `progress_percent`
- `split_bvid`
- `full_video_bvid`
- `video_links`
- `delivery_state`
- `retry_state`
- `session_context`
- `actions_available`
Suggested shape of `actions_available`:
```json
{
"run": true,
"retry_step": true,
"reset_to_step": true,
"bind_full_video": true,
"merge_session": true
}
```
## Delivery State Contract
The task list and detail pages both rely on a unified delivery-state model.
Suggested structure:
```json
{
"split_bvid": "BV1GoDPBtEUg",
"full_video_bvid": "BV1uH9wBsELC",
"split_video_url": "https://www.bilibili.com/video/BV1GoDPBtEUg",
"full_video_url": "https://www.bilibili.com/video/BV1uH9wBsELC",
"comment_split_done": false,
"comment_full_done": false,
"collection_a_done": false,
"collection_b_done": false,
"source_video_present": true,
"split_videos_present": true
}
```
## Suggested Frontend Build Order
Ordered by practical value:
1. Upgrade the task list status copy
2. Add delivery results and retry explanations to the task detail page
3. Add the session/context block to the detail page
4. Add session-merge settings to the settings page
5. Add the "bind full-video BV manually" action
6. Add the "merge session" action
## MVP Scope
If only one minimal round is delivered, finish these first:
- user-facing status mapping
- the single-task detail page
- `GET /tasks/<id>/context`
- manual `full_video_bvid` binding
- unified frontend retry/reset buttons
Even if the webhook and automatic session merging land later, users can then already resolve problems fully from the frontend.

View File

@ -0,0 +1,178 @@
# biliup-next Professionalization Roadmap - 2026-04-06
## Goal
Move `biliup-next` from "a rewrite headed in the right direction" to "a professional-grade local control-plane system with clear boundaries, stable contracts, and sustainable evolution".
This roadmap is benchmarked against the OpenClaw design philosophy the repository has already explicitly absorbed:
- modular monolith
- control-plane first
- schema-first
- manifest-first
- registry over direct coupling
- single source of truth
The point is not to repeat these slogans, but to keep grounding them in real code and engineering practice.
## Dimension 1: Platform Boundaries
### Current Gaps
- Providers still call `subprocess` and `requests` directly in many places
- The adapter / provider / module-service boundary is not hard enough yet
- There is no uniform regime for timeouts, retries, error translation, and observability of external dependencies
### Target State
- External commands and external HTTP enter the system only through a stable adapter layer
- Providers consume only standardized adapter capabilities and unified error semantics
- Timeouts, retries, rate limiting, logging, and diagnostics are uniformly constrained at the adapter layer
### Improvements
- Define a unified adapter interface for `ffmpeg`, `codex`, `biliup`, the Bili API, and Groq
- Gradually push direct `subprocess.run()` and `requests` calls out of providers and down into adapters
- Unify the adapter error model so providers stop inventing ad-hoc error codes
- Add observability context to adapters, e.g. command name, target, duration, attempt
### Done When
- Business modules no longer assemble shell/http calls directly
- Adapters are the only entry point for external dependencies
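A minimal sketch of what such a command adapter boundary could look like; all names here (`AdapterError`, `CommandResult`, `run_command`) are assumptions for illustration, not existing code:

```python
import subprocess
import time
from dataclasses import dataclass

class AdapterError(Exception):
    """Unified error model: providers see one exception type with stable context."""
    def __init__(self, command: str, detail: str):
        super().__init__(f"{command}: {detail}")
        self.command = command
        self.detail = detail

@dataclass
class CommandResult:
    command: str
    stdout: str
    duration_seconds: float

def run_command(command: str, args: list, timeout: float = 60.0) -> CommandResult:
    """Single entry point for external commands: timeout, error translation, timing context."""
    started = time.monotonic()
    try:
        proc = subprocess.run([command, *args], capture_output=True, text=True, timeout=timeout)
    except FileNotFoundError:
        raise AdapterError(command, "executable not found")
    except subprocess.TimeoutExpired:
        raise AdapterError(command, f"timed out after {timeout}s")
    if proc.returncode != 0:
        raise AdapterError(command, proc.stderr.strip() or f"exit code {proc.returncode}")
    return CommandResult(command, proc.stdout, time.monotonic() - started)
```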
## Dimension 2: Domain Model
### Current Gaps
- Core rules are scattered across `task_engine`, `task_policies`, `task_actions`, providers, and some workspace files
- The docs have a domain model, but no stable application-service/domain-service boundary has formed yet
- Cross-module relations such as `task`, `session`, and `full_video_bvid` still carry implicit rules
### Target State
- Task lifecycle, retry policy, session binding, and delivery side effects each have a clear home
- Domain rules live mainly in a few stable modules rather than being scattered across controllers and providers
- "Who is responsible for writing which state" is explicitly governed
### Improvements
- Pin down the boundaries and ownership of `Task`, `TaskContext`, and `SessionBinding`
- Consolidate `full_video_bvid`, session merging, and comment/collection side effects into dedicated domain services
- Evaluate introducing explicit domain events or a minimal event-log layer
- Build a more explicit transition table or policy object for state transitions
### Done When
- Key rules are no longer re-implemented across multiple entry functions
- The sources of truth and write responsibilities for task/session/delivery are stable
## Dimension 3: API Contracts
### Current Gaps
- API handlers still carry much of the payload assembly and view composition
- The OpenAPI spec is not yet in sync with the real control-plane details
- Internal domain models and external API views are not properly layered
### Target State
- The API exposes stable DTOs instead of internal models stitched together directly
- Handlers are thinner; assembly logic is concentrated in service / presenter / serializer layers
- Contract changes are traceable and verifiable
### Improvements
- Build stable serializers for task detail, task list, session detail, and timeline
- Remove duplicated assembly logic from API handlers
- Update `docs/api/openapi.yaml` to cover the real control-plane endpoints
- Decide which fields are internal implementation details that must not be exposed to the frontend
### Done When
- Handlers only do routing, auth, input parsing, and response returns
- API docs stay in sync with real response structures
## Dimension 4: Testing
### Current Gaps
- Minimal regression tests exist, but they lean toward pure logic
- Repository, API, provider-contract, and end-to-end coverage is lacking
### Target State
- Core orchestration, storage, API, and adapters each have layered tests
- Key refactors no longer depend on manual regression
### Improvements
- Add SQLite integration tests for the repositories
- Add minimal behavior tests for API handlers
- Add contract tests and failure-scenario tests for adapters/providers
- Keep the existing pure-logic unittests and keep growing the smoke regression scripts
### Done When
- At minimum there are:
- logic unit tests
- SQLite integration tests
- API behavior tests
- a smoke / regression flow
## Dimension 5: Operational Maturity
### Current Gaps
- doctor, logs, systemd control, and workspace isolation exist
- But health, metrics, auditing, and recovery mechanisms are not yet systematic
### Target State
- The control plane not only "shows state" but helps assess risk and recover from problems
- Operational issues are located through structured signals instead of manually grepping logs
### Improvements
- Distinguish health / readiness / degraded
- Standardize structured log fields
- Add a minimal metrics view for tasks/steps
- Flesh out audit event classification
- Define migration and rollback flows for the database, config changes, and runtime assets
### Done When
- Common operational issues can be located via the control plane and standard logs
- Key operations have audit trails and rollback instructions
## Recommended Priority Order
1. Platform boundaries
2. Domain model
3. API contracts
4. Testing
5. Operational maturity
## Next Batch
### Priority A
- Establish a unified adapter boundary for `biliup`, the Bili API, and `codex`
- Keep extracting the session/delivery rules in `task_actions` into stable services
- Provide a serializer layer for task list / task detail / session detail
### Priority B
- Add repository SQLite integration tests
- Add API behavior tests
- Update the OpenAPI contract
### Priority C
- Design the health/readiness/degraded model
- Standardize log and audit fields
## Notes
- This roadmap describes the structural work remaining before "professional grade"; it does not mean the current system is unusable.
- The project is already headed in the right direction; the next focus is hardening the design philosophy into code boundaries, test discipline, and operational constraints.

View File

@ -0,0 +1,134 @@
# biliup-next Refactor Plan - 2026-04-06
## Goal
Address the state-consistency, data-consistency, runtime-stability, and control-plane performance issues the rewrite has already exposed, in stages: fix problems that affect real runtime results first, then converge the model and pay down technical debt.
## Principles
- Fix the single source of truth first, then optimize the presentation layer.
- Fix the state machine's real behavior first, then the docs and UI mapping.
- Handle runtime stability before performance and structural cleanup.
- Every phase must produce verifiable acceptance results; "the structure merely looks better" is not enough.
## Phases
### Phase 1: State And Source-Of-Truth Convergence
Goal:
- Give tasks real, usable `running` semantics.
- Give `full_video_bvid` exactly one authoritative write path.
- Eliminate "database state" and "workspace file state" overwriting each other.
Tasks:
- Update the task's running state synchronously when a step starts executing.
- Define how the task status returns from `running` to a business status after the step finishes.
- Unify the read/write entry points for `full_video_bvid` across `bind/rebind/webhook/ingest`.
- Clarify the responsibilities of `task_contexts`, `session_bindings`, and `full_video_bvid.txt`.
Acceptance:
- The console can correctly filter and display running tasks.
- After manual binding, session rebinding, or webhook injection, old and new tasks read the same BV.
- New tasks no longer inherit a stale BV at ingest.
### Phase 2: Runtime Stability Hardening
Goal:
- Make SQLite behavior controllable when the API and worker run in parallel.
- Reduce the risk of lock contention, dirty state, and half-successful writes.
Tasks:
- Add `busy_timeout`, `WAL`, and `foreign_keys=ON` to SQLite connections.
- Review high-frequency repository write points and remove unnecessary small transactions.
- Decide which critical write paths need to be merged into atomic operations.
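A minimal sketch of the hardened connection factory using the stdlib `sqlite3` module; the function name is illustrative, and the PRAGMA set mirrors the values recorded elsewhere in this commit (`busy_timeout`, `WAL`, `foreign_keys`, `synchronous=NORMAL`):

```python
import sqlite3

def connect(db_path: str) -> sqlite3.Connection:
    """Open SQLite hardened for concurrent API + worker access."""
    conn = sqlite3.connect(db_path, timeout=30.0)
    conn.execute("PRAGMA busy_timeout = 30000")  # wait instead of failing with 'database is locked'
    conn.execute("PRAGMA journal_mode = WAL")    # readers no longer block the single writer
    conn.execute("PRAGMA foreign_keys = ON")     # enforce declared relations
    conn.execute("PRAGMA synchronous = NORMAL")  # common WAL pairing: fewer fsyncs, still durable enough
    return conn
```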
Acceptance:
- The API and worker running in parallel no longer easily trigger database lock errors.
- Critical task-state writes are basically atomic, with no "step updated but task not updated" half-states.
### Phase 3: Control-Plane Assembly And Query Optimization
Goal:
- Remove duplicated initialization on the API request path.
- Fix the full scan and N+1 queries behind the `/tasks` list.
Tasks:
- Change `ensure_initialized()` from "assemble on every request" to a more stable application-level initialization.
- Consolidate the provider/registry lifecycle so manifests are not rescanned and providers not re-instantiated per request.
- Optimize the task list endpoint by pushing filter logic down into the repository or persistence layer.
- Reduce per-item workspace file reads during list queries.
Acceptance:
- Regular API requests no longer perform full assembly repeatedly.
- The list and filter pages respond noticeably better under large task counts.
### Phase 4: Aligning The State Machine With The Docs
Goal:
- Make the documented state machine, the coded state machine, and the console display consistent.
Tasks:
- Decide whether to keep `ingested`, `completed`, and `cancelled`.
- Pin down the role of flag files in the system.
- If the database is the sole source of task state, demote delivery flags to artifact or external-side-effect markers.
- Update the state-machine docs, console copy, and development constraints.
Acceptance:
- The status set in the docs matches the status set in the code.
- The UI no longer depends on task statuses that do not exist or whose meaning is unstable.
### Phase 5: Regression Tests And Maintenance Wrap-Up
Goal:
- Backfill regression protection for the core orchestration logic.
- Reduce the chance that later refactors reintroduce state drift.
Tasks:
- Add `tests/`
- Cover first:
- `task_engine`
- `task_policies`
- `task_actions`
- `retry_meta`
- `task_reset`
- Decide the retention policy for the classic console.
Acceptance:
- Core state transitions have minimal automated regression coverage.
- The console maintenance policy is explicit, ending the long-term two-track drift.
## Recommended Execution Order
1. Phase 1
2. Phase 2
3. Phase 3
4. Phase 4
5. Phase 5
## Initial Scope For This Round
Start with these sub-items:
- Phase 1.1: land the task-level `running` status
- Phase 1.2: unify the `full_video_bvid` write path
- Phase 2.1: harden the SQLite connection configuration
## Log
- 2026-04-06: finished the code review and confirmed the current priority issues: missing task running state, inconsistent multi-source `full_video_bvid`, insufficient SQLite concurrency configuration, duplicated initialization, N+1 list queries, drift between state-machine docs and implementation, and missing tests.
- 2026-04-06: organized the issues into this staged plan and decided to start with state consistency and runtime stability.

View File

@ -2,7 +2,7 @@
## Goal
Defines the `biliup-next` task state machine, replacing the legacy system's practice of inferring state from flag files, logs, and directory structure
Defines the task state machine used by the current `biliup-next` implementation, and clarifies the boundary between database state and workspace flags
State machine goals:
@ -23,14 +23,13 @@
### Core Statuses
- `created`
- `ingested`
- `running`
- `transcribed`
- `songs_detected`
- `split_done`
- `published`
- `commented`
- `collection_synced`
- `completed`
### Failure Statuses
@ -39,8 +38,7 @@
### Terminal Statuses
- `completed`
- `cancelled`
- `collection_synced`
- `failed_manual`
## Step Status
@ -117,16 +115,26 @@
```text
created
-> ingested
-> running
-> transcribed
-> running
-> songs_detected
-> running
-> split_done
-> running
-> published
-> running
-> commented
-> running
-> collection_synced
-> completed
```
Notes:
- `running` is a transient task-level status meaning some step has been claimed and is executing.
- When that step finishes successfully, the task returns to the matching business status, e.g. `transcribed`, `split_done`, `published`.
- The current implementation does not use `ingested`, `completed`, or `cancelled` as task statuses.
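The claim/success behavior described in these notes can be sketched as an explicit transition table; the names below are illustrative, not the actual implementation:

```python
# On success a step maps the task back to its business status; steps without an
# entry (e.g. ingest, collection_a) leave the task status unchanged.
STEP_SUCCESS_STATUS = {
    "transcribe": "transcribed",
    "song_detect": "songs_detected",
    "split": "split_done",
    "publish": "published",
    "comment": "commented",
    "collection_b": "collection_synced",
}

def on_step_claimed(task_status):
    # Claiming any step moves the task into the transient running state.
    return "running"

def on_step_succeeded(step, task_status):
    return STEP_SUCCESS_STATUS.get(step, task_status)
```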
### Failure Transition
After any step fails:
@ -158,10 +166,10 @@ created
- `collection_a` may exist as an independent step
- Overall task completion need not hard-depend on `collection_a` succeeding
Recommendation:
Current implementation:
- `completed` marks main-pipeline completion
- `collection_synced` marks all collection syncs complete
- `collection_synced` means the task has finished its defined wrap-up flow.
- `collection_a` / `collection_b` remain independent steps, but the system has not additionally introduced a `completed` status.
## Retry Strategy
@ -196,6 +204,27 @@ created
- error message
- retry count
## Flags And Files
Flag files still exist in the workspace, but they are not the authoritative source of the main task status.
Current division of responsibilities:
- database:
- task status
- step status
- retry info
- structured context
- workspace files and flags:
- whether external side effects have run
- whether artifacts have landed
- delivery markers such as comments/collections
In other words:
- "What state is the task in now" is answered by the database.
- "Has a given external action already happened" may be expressed with workspace flags.
## UI Expectations
The UI must directly show at least:
@ -209,4 +238,4 @@ UI 至少需要直接展示:
## Non-Goals
- Fully concurrent execution of multiple steps within one task
- Allowing continued reliance on flag files as the authoritative state source
- Treating workspace flag files as the main task status source

196
docs/todo-2026-04-06.md Normal file
View File

@ -0,0 +1,196 @@
# biliup-next Todo - 2026-04-06
## Today's Todo
### P0
- Fix the missing task-level `running` status.
- Steps currently enter `running`, but tasks do not, which distorts the console's "processing" filter, priority judgment, and attention states.
- Locations:
- `src/biliup_next/app/task_engine.py`
- `src/biliup_next/app/api_server.py`
- `src/biliup_next/modules/*/service.py`
- Converge `full_video_bvid` onto a single source of truth.
- `task_contexts`, `session_bindings`, and `session/full_video_bvid.txt` can currently disagree.
- `rebind_session_full_video_action()` does not update `session_bindings`, so later new tasks may still inherit a stale BV at ingest.
- Locations:
- `src/biliup_next/app/task_actions.py`
- `src/biliup_next/modules/ingest/service.py`
- `src/biliup_next/infra/task_repository.py`
- Harden the SQLite concurrency configuration.
- The API and worker can run in parallel, but the database connection is still the most basic configuration, lacking protections like `busy_timeout`, `WAL`, and `foreign_keys=ON`.
- As task volume or concurrent operations grow, `database is locked` style errors become likely.
- Locations:
- `src/biliup_next/infra/db.py`
### P1
- Remove duplicated initialization on the API path.
- `ensure_initialized()` currently re-runs config loading, DB init, plugin scanning, and provider instantiation.
- Every API request may re-trigger the whole assembly, which will slow the control plane and raise maintenance cost.
- Locations:
- `src/biliup_next/app/bootstrap.py`
- `src/biliup_next/app/api_server.py`
- Optimize the `/tasks` full scan and N+1 queries.
- The `attention/delivery` filters currently pull up to 5000 tasks, then backfill task payload, steps, context, and filesystem state one by one.
- This will visibly slow the list and filter pages as task counts grow.
- Locations:
- `src/biliup_next/app/api_server.py`
- `src/biliup_next/infra/task_repository.py`
- Converge the documented state machine with the code.
- The docs contain `ingested`, `completed`, and `cancelled`, and claim flag files are no longer the authoritative state.
- In the actual implementation these statuses never fully landed, and comment/collection completion still depends on several flag files.
- The "documented model" and the "real coded state machine" must be unified to stop further drift.
- Locations:
- `docs/state-machine.md`
- `src/biliup_next/app/api_server.py`
- `src/biliup_next/modules/comment/providers/bilibili_top_comment.py`
- `src/biliup_next/modules/collection/providers/bilibili_collection.py`
### P2
- Add tests for the state machine, retries, and manual-intervention flows.
- The repository currently has no visible `tests/` or automated regression coverage.
- Cover first:
- `task_engine`
- `task_policies`
- `task_actions`
- `retry_meta`
- `task_reset`
- Decide the maintenance policy for the two consoles.
- The React console and the classic console currently coexist.
- Decide whether classic is kept long-term, frozen, or gradually retired.
## Notes
- The issues above come from the 2026-04-06 code review of the current `biliup-next` refactor.
- Priorities follow: state consistency / data consistency / runtime stability / control-plane performance / maintainability.
## Progress Log
- 2026-04-06: completed the first code-review pass and confirmed the current priority issues.
- 2026-04-06: split the issue list into a staged refactor plan: `docs/refactor-plan-2026-04-06.md`.
- 2026-04-06: fixed the first batch of work as landing the task-level `running` status, unifying the `full_video_bvid` write path, and hardening the SQLite connection.
- 2026-04-06: completed the first refactor round.
  - A task now enters `running` once one of its steps is claimed.
  - `bind/rebind/webhook` now reuse the unified `full_video_bvid` persistence path.
  - The SQLite connection now sets `foreign_keys`, `busy_timeout`, `WAL`, and `synchronous=NORMAL`.
  - Ran `python -m compileall biliup-next/src/biliup_next`; the syntax check passed.
- 2026-04-06: completed the second control-plane round.
  - `ensure_initialized()` now reuses in-process state, so API requests no longer reassemble the full application state.
  - `PUT /settings` now actively invalidates and rebuilds the cached state, preventing old and new config from mixing.
  - `/tasks` now batch-prefetches task contexts and steps, reducing the list page's N+1 queries.
  - Re-ran `python -m compileall biliup-next/src/biliup_next`; the syntax check passed.
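The in-process reuse plus settings invalidation described above reduces to a tiny cached bootstrap. A sketch with illustrative names (the real assembly lives in `app/bootstrap.py`):

```python
import threading

_lock = threading.Lock()
_app_state = None

def build_app_state() -> dict:
    # Stand-in for config loading, DB init, plugin scan, provider wiring.
    return {"settings": {"workers": 1}}

def ensure_initialized() -> dict:
    """Assemble application state once per process and reuse it afterwards."""
    global _app_state
    with _lock:
        if _app_state is None:
            _app_state = build_app_state()
        return _app_state

def invalidate_app_state() -> None:
    """Called after PUT /settings so the next request rebuilds the state."""
    global _app_state
    with _lock:
        _app_state = None
```

The lock matters because the API handler is multi-threaded; without it two concurrent first requests could each build their own state.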
- 2026-04-06: aligned the state-machine docs with the code.
  - `state-machine.md` and `architecture.md` now list the actual state set in code: `created/running/transcribed/songs_detected/split_done/published/commented/collection_synced/failed_*`.
  - `ingested/completed/cancelled` are explicitly marked as not yet implemented and dropped from the current spec.
  - Workspace flags are explicitly scoped to delivery side effects and artifact markers; they are not a source of truth for task state.
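The documented state set can be pinned down as an enum with a legality check. The linear transition rule below is a simplified illustration of the happy path only; the `failed_*` branches and the project's real policy logic are omitted:

```python
from enum import Enum

class TaskStatus(str, Enum):
    CREATED = "created"
    RUNNING = "running"
    TRANSCRIBED = "transcribed"
    SONGS_DETECTED = "songs_detected"
    SPLIT_DONE = "split_done"
    PUBLISHED = "published"
    COMMENTED = "commented"
    COLLECTION_SYNCED = "collection_synced"

# Happy-path order; failed_* states branch off each stage and are not modeled here.
_ORDER = list(TaskStatus)

def can_advance(current: TaskStatus, nxt: TaskStatus) -> bool:
    """Allow only single forward steps along the happy path."""
    return _ORDER.index(nxt) == _ORDER.index(current) + 1
```

Keeping the state set in one enum gives the docs and the code a single artifact to drift against, instead of two prose lists.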
- 2026-04-06: added a minimal regression test set.
  - Added `tests/test_task_engine.py`.
  - Added `tests/test_retry_meta.py`.
  - Added `tests/test_task_actions.py`.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 7 tests pass.
- 2026-04-06: continued tightening the `task_actions` write path.
  - `rebind_session_full_video_action()` no longer double-upserts the session binding.
  - `merge_session_action()` now reuses the unified persistence path when inheriting `full_video_bvid`.
  - Added the matching test; the suite is now 8 tests, all passing.
- 2026-04-06: added the second layer of state-transition tests.
  - Added `tests/test_task_policies.py`.
  - Added `tests/test_task_runner.py`.
  - Covered disabled-step fallback, publish retry scheduling, post-reset fallback status, and the task entering `running` once a step is claimed.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 12 tests pass.
- 2026-04-06: finished one round of API code cleanup.
  - `api_server.py` gained a batch task-payload assembly helper.
  - `/tasks` and `/sessions/:session_key` now share the same task-payload prefetch and assembly logic.
  - Re-ran the suite; all 12 tests pass.
- 2026-04-06: drafted the professionalization roadmap.
  - Added `docs/professionalization-roadmap-2026-04-06.md`.
  - Broke the follow-up work into five dimensions: platform boundaries, domain model, interface contracts, test suite, and operational maturity.
  - Next priorities: adapter boundaries, consolidating the session/delivery domain services, a serializer layer, SQLite/API tests, and OpenAPI alignment.
- 2026-04-06: started landing the minimal adapter boundary.
  - Added `infra/adapters/codex_cli.py`.
  - Added `infra/adapters/biliup_cli.py`.
  - Added `infra/adapters/bilibili_api.py`.
  - The `codex`, `biliup_cli`, `bilibili_top_comment`, and `bilibili_collection` providers now depend on adapters.
  - Ran unittest and `python -m compileall biliup-next/src/biliup_next`; both pass.
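An adapter boundary like the one above typically reduces to a thin subprocess wrapper that providers depend on instead of shelling out directly. A sketch — the class name and error shape are illustrative, not the project's actual adapter API:

```python
import subprocess

class CliAdapter:
    """Wrap one external binary so providers never call subprocess directly."""

    def __init__(self, binary: str):
        self.binary = binary

    def run(self, *args: str, timeout: float = 300.0) -> str:
        result = subprocess.run(
            [self.binary, *args],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        if result.returncode != 0:
            # Surface stderr in one place instead of in every provider.
            raise RuntimeError(f"{self.binary} failed: {result.stderr.strip()}")
        return result.stdout
```

Centralizing the call also makes providers testable: tests can substitute a fake adapter instead of patching `subprocess` everywhere.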
- 2026-04-06: started landing the serializer layer.
  - Added `app/serializers.py`.
  - Task list / task detail / session detail payload assembly moved from `api_server.py` into `ControlPlaneSerializer`.
  - `api_server.py` is further reduced to routing, auth, and response control.
  - Ran unittest and `python -m compileall biliup-next/src/biliup_next`; both pass.
- 2026-04-06: continued tightening the serializer layer.
  - Task timeline assembly moved from `api_server.py` into `ControlPlaneSerializer.timeline_payload()`.
  - Task-detail presentation logic in `api_server.py` keeps getting thinner.
  - Re-ran unittest and `python -m compileall biliup-next/src/biliup_next`; both pass.
- 2026-04-06: added serializer-layer tests.
  - Added `tests/test_serializers.py`.
  - Covered the control-plane display contracts for task payload, session payload, and timeline payload.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 15 tests pass.
- 2026-04-06: added SQLite integration tests for the repository.
  - Added `tests/test_task_repository_sqlite.py`.
  - Covered `query_tasks`, batched context/steps queries, and `session_bindings` upsert plus fallback reads.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 18 tests pass.
- 2026-04-06: added API behavior tests.
  - Extended `tests/test_api_server.py`.
  - Covered `GET /tasks`, `GET /tasks/:id/timeline`, `GET /sessions/:session_key`, and `PUT /settings`.
  - Covered the control-token auth branches.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
- 2026-04-06: continued with execution-plane API behavior tests.
  - `tests/test_api_server.py` now covers `POST /tasks`, `POST /tasks/:id/actions/run`, `POST /tasks/:id/actions/retry-step`, and `POST /tasks/:id/actions/reset-to-step`.
  - Covered the write-path success branches and the `missing step_name` parameter validation.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 28 tests pass.
- 2026-04-06: added API behavior tests for manual interventions.
  - `tests/test_api_server.py` now covers `POST /tasks/:id/bind-full-video`, `POST /sessions/:session_key/rebind`, `POST /sessions/:session_key/merge`, and `POST /webhooks/full-video-uploaded`.
  - Covered the success branches, parameter validation, and the `TASK_NOT_FOUND/SESSION_NOT_FOUND` status-code mapping.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 37 tests pass.
- 2026-04-06: added runtime-plane API behavior tests.
  - `tests/test_api_server.py` now covers `POST /worker/run-once`, `POST /scheduler/run-once`, `POST /runtime/services/:name/:action`, and `POST /stage/import`.
  - Covered action-record persistence, side-effect return values, and the `invalid action` / `missing source_path` error branches.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 43 tests pass.
- 2026-04-06: added the remaining control-plane GET and upload-endpoint tests.
  - `tests/test_api_server.py` now covers `GET /history`, `GET /modules`, `GET /scheduler/preview`, `GET /settings/schema`, and `POST /stage/upload`.
  - The `stage/upload` success branch patches `cgi.FieldStorage` to pin the minimal handler contract, so multipart parsing details cannot make the test brittle.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 49 tests pass.
- 2026-04-06: started consolidating the session/delivery domain service.
  - Added `app/session_delivery_service.py`, which owns the core rules and persistence paths for `bind/rebind/merge/webhook`.
  - `app/task_actions.py` is now a thin wrapper keeping only `ensure_initialized()`, audit records, and service calls.
  - Added `tests/test_session_delivery_service.py`.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 51 tests pass.
- 2026-04-06: continued consolidating the task-control domain service.
  - Added `app/task_control_service.py`, which owns `run/retry/reset` orchestration.
  - `app/task_actions.py` got thinner again; `run_task_action/retry_step_action/reset_to_step_action` are now pure service wrappers plus audit.
  - Added `tests/test_task_control_service.py`.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 54 tests pass.
- 2026-04-06: pushed POST routing out of the API handler.
  - Added `app/control_plane_post_dispatcher.py`, which owns POST use-case dispatch, status-code mapping, and runtime-plane action records.
  - `do_POST()` in `app/api_server.py` is reduced to request parsing, the dispatcher call, and response writing.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 54 tests pass.
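A dispatcher of this shape is essentially a path-to-use-case table plus a domain-error-to-status mapping kept in one place. A sketch with illustrative route names and error codes (the real codes live in the project's dispatcher module):

```python
from typing import Callable

class ApiError(Exception):
    def __init__(self, code: str):
        super().__init__(code)
        self.code = code

# Domain error code -> HTTP status, mapped once instead of per handler.
_STATUS = {"TASK_NOT_FOUND": 404, "SESSION_NOT_FOUND": 404, "INVALID_PARAMS": 400}

class PostDispatcher:
    def __init__(self):
        self.routes: dict[str, Callable[[dict], dict]] = {}

    def register(self, path: str, handler: Callable[[dict], dict]) -> None:
        self.routes[path] = handler

    def dispatch(self, path: str, body: dict) -> tuple[int, dict]:
        handler = self.routes.get(path)
        if handler is None:
            return 404, {"error": "NOT_FOUND"}
        try:
            return 200, handler(body)
        except ApiError as exc:
            return _STATUS.get(exc.code, 500), {"error": exc.code}
```

With this boundary, `do_POST()` only parses the request, calls `dispatch()`, and writes the tuple back out, which is exactly the thinning described above.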
- 2026-04-06: added direct dispatcher tests.
  - Added `tests/test_control_plane_get_dispatcher.py`.
  - Added `tests/test_control_plane_post_dispatcher.py`.
  - Covered the dispatcher layer's status-code mapping, filtering logic, runtime-plane action records, and the task-creation conflict mapping.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 62 tests pass.
- 2026-04-06: started the portable-delivery cleanup.
  - `config/settings.json` and `config/settings.staged.json` are now standalone default templates carrying no machine-specific absolute paths or real secrets.
  - `runtime/cookies.json` and `runtime/upload_config.json` are now distributable templates.
  - Added `docs/cold-start-checklist.md`.
  - `README.md` now documents the cold-start entry point.
  - Ran `PYTHONPATH=biliup-next/src python -m unittest discover -s biliup-next/tests -v`.
  - All 63 tests pass.

View File

@ -54,12 +54,18 @@ http://127.0.0.1:5173/ui/
After a production build, place the output in `frontend/dist/`; the current Python API will automatically serve it at:
```text
http://127.0.0.1:8787/ui/
http://127.0.0.1:8787/
```
## Next Steps
Fallback entry for the legacy console:
- Migrate the `Settings` page
- Move the task table to genuinely server-driven pagination/sorting/filtering
- Add React routing and query caching
- Eventually replace the current `src/biliup_next/app/static/` entry point
```text
http://127.0.0.1:8787/classic
```
## Current Status
- The React console now serves the default home page
- The task page supports `session context / bind full video / session merge / session rebind`
- High-frequency task actions now use partial refresh
- The legacy native console remains as a fallback path

View File

@ -1,122 +1,246 @@
import { useEffect, useState, useDeferredValue, startTransition } from "react";
import { useRef } from "react";
import { fetchJson, uploadFile } from "./api/client.js";
import { fetchJson, fetchJsonCached, invalidateJsonCache, primeJsonCache, uploadFile } from "./api/client.js";
import LogsPanel from "./components/LogsPanel.jsx";
import OverviewPanel from "./components/OverviewPanel.jsx";
import SettingsPanel from "./components/SettingsPanel.jsx";
import TaskTable from "./components/TaskTable.jsx";
import TaskDetailCard from "./components/TaskDetailCard.jsx";
import { summarizeAttention, summarizeDelivery } from "./lib/format.js";
import {
attentionLabel,
currentStepLabel,
summarizeAttention,
summarizeDelivery,
taskDisplayStatus,
taskPrimaryActionLabel,
} from "./lib/format.js";
const NAV_ITEMS = ["Overview", "Tasks", "Settings", "Logs"];
function PlaceholderView({ title, description }) {
function buildTasksUrl(query) {
const params = new URLSearchParams();
params.set("limit", String(query.limit || 24));
params.set("offset", String(query.offset || 0));
params.set("sort", String(query.sort || "updated_desc"));
if (query.status) params.set("status", query.status);
if (query.search) params.set("search", query.search);
if (query.attention) params.set("attention", query.attention);
if (query.delivery) params.set("delivery", query.delivery);
return `/tasks?${params.toString()}`;
}
function parseHashState() {
const raw = window.location.hash.replace(/^#/, "");
const [viewPart, queryPart = ""] = raw.split("?");
const params = new URLSearchParams(queryPart);
return {
view: NAV_ITEMS.includes(viewPart) ? viewPart : "Tasks",
taskId: params.get("task") || "",
};
}
function syncHashState(view, taskId) {
const params = new URLSearchParams();
if (taskId && view === "Tasks") params.set("task", taskId);
const suffix = params.toString() ? `?${params.toString()}` : "";
window.history.replaceState(null, "", `#${view}${suffix}`);
}
function FocusQueue({ tasks, selectedTaskId, onSelectTask, onRunTask }) {
const focusItems = tasks
.filter((task) => ["manual_now", "retry_now", "waiting_retry"].includes(summarizeAttention(task)))
.sort((a, b) => {
const score = { manual_now: 0, retry_now: 1, waiting_retry: 2 };
const diff = score[summarizeAttention(a)] - score[summarizeAttention(b)];
if (diff !== 0) return diff;
return String(b.updated_at).localeCompare(String(a.updated_at));
})
.slice(0, 6);
if (!focusItems.length) return null;
return (
<section className="placeholder-view">
<h2>{title}</h2>
<p>{description}</p>
</section>
<article className="panel">
<div className="panel-head">
<div>
<p className="eyebrow">Priority Queue</p>
<h2>Tasks that need attention first</h2>
</div>
<div className="panel-meta">{focusItems.length} tasks</div>
</div>
<div className="focus-grid">
{focusItems.map((task) => (
<button
key={task.id}
type="button"
className={selectedTaskId === task.id ? "focus-card active" : "focus-card"}
onClick={() => onSelectTask(task.id)}
onMouseEnter={() => onSelectTask(task.id, { prefetch: true })}
>
<div className="focus-card-head">
<span className="status-badge">{attentionLabel(summarizeAttention(task))}</span>
<span className="status-badge">{taskDisplayStatus(task)}</span>
</div>
<strong>{task.title}</strong>
<p>{currentStepLabel(task)}</p>
<div className="row-actions">
<button
className="nav-btn compact-btn"
onClick={(event) => {
event.stopPropagation();
onSelectTask(task.id);
}}
>
Open details
</button>
<button
className="nav-btn compact-btn strong-btn"
onClick={(event) => {
event.stopPropagation();
onRunTask?.(task.id);
}}
>
{taskPrimaryActionLabel(task)}
</button>
</div>
</button>
))}
</div>
</article>
);
}
function TasksView({
tasks,
taskTotal,
taskQuery,
selectedTaskId,
onSelectTask,
onRunTask,
taskDetail,
session,
loading,
detailLoading,
actionBusy,
selectedStepName,
onSelectStep,
onRetryStep,
onResetStep,
onBindFullVideo,
onOpenSessionTask,
onSessionMerge,
onSessionRebind,
onTaskQueryChange,
}) {
const [search, setSearch] = useState("");
const [statusFilter, setStatusFilter] = useState("");
const [attentionFilter, setAttentionFilter] = useState("");
const [deliveryFilter, setDeliveryFilter] = useState("");
const [sort, setSort] = useState("updated_desc");
const deferredSearch = useDeferredValue(search);
const deferredSearch = useDeferredValue(taskQuery.search);
const filtered = tasks
.filter((task) => {
const haystack = `${task.id} ${task.title}`.toLowerCase();
if (deferredSearch && !haystack.includes(deferredSearch.toLowerCase())) return false;
if (statusFilter && task.status !== statusFilter) return false;
if (attentionFilter && summarizeAttention(task) !== attentionFilter) return false;
if (deliveryFilter && summarizeDelivery(task.delivery_state) !== deliveryFilter) return false;
return true;
})
.sort((a, b) => {
if (sort === "title_asc") return String(a.title).localeCompare(String(b.title), "zh-CN");
if (sort === "title_desc") return String(b.title).localeCompare(String(a.title), "zh-CN");
if (sort === "attention") return summarizeAttention(a).localeCompare(summarizeAttention(b), "zh-CN");
return String(b.updated_at).localeCompare(String(a.updated_at), "zh-CN");
});
const filtered = tasks.filter((task) => {
const haystack = `${task.id} ${task.title}`.toLowerCase();
if (deferredSearch && !haystack.includes(deferredSearch.toLowerCase())) return false;
return true;
});
const pageStart = taskTotal ? taskQuery.offset + 1 : 0;
const pageEnd = taskQuery.offset + tasks.length;
const canPrev = taskQuery.offset > 0;
const canNext = taskQuery.offset + taskQuery.limit < taskTotal;
return (
<section className="tasks-layout-react">
<article className="panel">
<div className="panel-head">
<div>
<p className="eyebrow">Tasks Workspace</p>
<h2>Task Table</h2>
<div className="tasks-main-stack">
<FocusQueue tasks={tasks} selectedTaskId={selectedTaskId} onSelectTask={onSelectTask} onRunTask={onRunTask} />
<article className="panel">
<div className="panel-head">
<div>
<p className="eyebrow">Tasks Workspace</p>
<h2>Task Table</h2>
</div>
<div className="panel-meta">{loading ? "syncing..." : `${pageStart}-${pageEnd} / ${taskTotal}`}</div>
</div>
<div className="panel-meta">{loading ? "syncing..." : `${filtered.length} visible`}</div>
</div>
<div className="toolbar-grid">
<input
value={search}
onChange={(event) => setSearch(event.target.value)}
placeholder="Search task title or task id"
/>
<select value={statusFilter} onChange={(event) => setStatusFilter(event.target.value)}>
<option value="">All statuses</option>
<option value="running">In progress</option>
<option value="failed_retryable">Retry</option>
<option value="failed_manual">Needs manual</option>
<option value="published">Awaiting wrap-up</option>
<option value="collection_synced">Completed</option>
</select>
<select value={attentionFilter} onChange={(event) => setAttentionFilter(event.target.value)}>
<option value="">All attention states</option>
<option value="manual_now">Needs manual only</option>
<option value="retry_now">Retry due only</option>
<option value="waiting_retry">Waiting for retry only</option>
</select>
<select value={deliveryFilter} onChange={(event) => setDeliveryFilter(event.target.value)}>
<option value="">All delivery states</option>
<option value="legacy_untracked">Main-video comment untracked</option>
<option value="pending_comment">Comment pending</option>
<option value="cleanup_removed">Video cleaned up</option>
</select>
<select value={sort} onChange={(event) => setSort(event.target.value)}>
<option value="updated_desc">Recently updated</option>
<option value="title_asc">Title A-Z</option>
<option value="title_desc">Title Z-A</option>
<option value="attention">By attention state</option>
</select>
</div>
<TaskTable tasks={filtered} selectedTaskId={selectedTaskId} onSelectTask={onSelectTask} onRunTask={onRunTask} />
</article>
<div className="toolbar-grid">
<input
value={taskQuery.search}
onChange={(event) => onTaskQueryChange({ search: event.target.value, offset: 0 })}
placeholder="Search task title or task id"
/>
<select value={taskQuery.status} onChange={(event) => onTaskQueryChange({ status: event.target.value, offset: 0 })}>
<option value="">All statuses</option>
<option value="running">In progress</option>
<option value="failed_retryable">Awaiting retry</option>
<option value="failed_manual">Needs manual</option>
<option value="published">Wrap-up</option>
<option value="collection_synced">Completed</option>
</select>
<select value={taskQuery.attention} onChange={(event) => onTaskQueryChange({ attention: event.target.value, offset: 0 })}>
<option value="">All attention states</option>
<option value="manual_now">Needs manual only</option>
<option value="retry_now">Retry due only</option>
<option value="waiting_retry">Waiting for retry only</option>
</select>
<select value={taskQuery.delivery} onChange={(event) => onTaskQueryChange({ delivery: event.target.value, offset: 0 })}>
<option value="">All delivery states</option>
<option value="pending_comment">Comment pending</option>
<option value="cleanup_removed">Video cleaned up</option>
</select>
<select value={taskQuery.sort} onChange={(event) => onTaskQueryChange({ sort: event.target.value, offset: 0 })}>
<option value="updated_desc">Recently updated</option>
<option value="updated_asc">Oldest updated</option>
<option value="title_asc">Title A-Z</option>
<option value="title_desc">Title Z-A</option>
<option value="status_asc">By status</option>
</select>
<select value={String(taskQuery.limit)} onChange={(event) => onTaskQueryChange({ limit: Number(event.target.value), offset: 0 })}>
<option value="12">12 / page</option>
<option value="24">24 / page</option>
<option value="48">48 / page</option>
</select>
</div>
<div className="row-actions" style={{ marginBottom: 12 }}>
<button className="nav-btn compact-btn" onClick={() => onTaskQueryChange({ offset: Math.max(0, taskQuery.offset - taskQuery.limit) })} disabled={!canPrev || loading}>
Previous
</button>
<button className="nav-btn compact-btn" onClick={() => onTaskQueryChange({ offset: taskQuery.offset + taskQuery.limit })} disabled={!canNext || loading}>
Next
</button>
</div>
<TaskTable tasks={filtered} selectedTaskId={selectedTaskId} onSelectTask={onSelectTask} onRunTask={onRunTask} />
</article>
</div>
<TaskDetailCard
payload={taskDetail}
session={session}
loading={detailLoading}
actionBusy={actionBusy}
selectedStepName={selectedStepName}
onSelectStep={onSelectStep}
onRetryStep={onRetryStep}
onResetStep={onResetStep}
onBindFullVideo={onBindFullVideo}
onOpenSessionTask={onOpenSessionTask}
onSessionMerge={onSessionMerge}
onSessionRebind={onSessionRebind}
/>
</section>
);
}
export default function App() {
const [view, setView] = useState("Tasks");
const initialLocation = parseHashState();
const [view, setView] = useState(initialLocation.view);
const [health, setHealth] = useState(false);
const [doctorOk, setDoctorOk] = useState(false);
const [tasks, setTasks] = useState([]);
const [taskTotal, setTaskTotal] = useState(0);
const [taskQuery, setTaskQuery] = useState({
search: "",
status: "",
attention: "",
delivery: "",
sort: "updated_desc",
limit: 24,
offset: 0,
});
const [services, setServices] = useState({ items: [] });
const [scheduler, setScheduler] = useState(null);
const [history, setHistory] = useState({ items: [] });
@ -127,21 +251,34 @@ export default function App() {
const [autoRefreshLogs, setAutoRefreshLogs] = useState(false);
const [settings, setSettings] = useState({});
const [settingsSchema, setSettingsSchema] = useState(null);
const [selectedTaskId, setSelectedTaskId] = useState("");
const [selectedTaskId, setSelectedTaskId] = useState(initialLocation.taskId);
const [selectedStepName, setSelectedStepName] = useState("");
const [taskDetail, setTaskDetail] = useState(null);
const [currentSession, setCurrentSession] = useState(null);
const [loading, setLoading] = useState(true);
const [detailLoading, setDetailLoading] = useState(false);
const [overviewLoading, setOverviewLoading] = useState(false);
const [logLoading, setLogLoading] = useState(false);
const [settingsLoading, setSettingsLoading] = useState(false);
const [banner, setBanner] = useState(null);
const [actionBusy, setActionBusy] = useState("");
const [panelBusy, setPanelBusy] = useState("");
const [toasts, setToasts] = useState([]);
const detailCacheRef = useRef(new Map());
function pushToast(kind, text) {
const id = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
setToasts((current) => [...current, { id, kind, text }]);
}
function removeToast(id) {
setToasts((current) => current.filter((item) => item.id !== id));
}
async function loadOverviewPanels() {
const [servicesPayload, schedulerPayload, historyPayload] = await Promise.all([
fetchJson("/runtime/services"),
fetchJson("/scheduler/preview"),
fetchJson("/history?limit=20"),
fetchJsonCached("/runtime/services"),
fetchJsonCached("/scheduler/preview"),
fetchJsonCached("/history?limit=20"),
]);
setServices(servicesPayload);
setScheduler(schedulerPayload);
@ -152,13 +289,14 @@ export default function App() {
setLoading(true);
try {
const [healthPayload, doctorPayload, taskPayload] = await Promise.all([
fetchJson("/health"),
fetchJson("/doctor"),
fetchJson("/tasks?limit=100"),
fetchJsonCached("/health"),
fetchJsonCached("/doctor"),
fetchJson(buildTasksUrl(taskQuery)),
]);
setHealth(Boolean(healthPayload.ok));
setDoctorOk(Boolean(doctorPayload.ok));
setTasks(taskPayload.items || []);
setTaskTotal(taskPayload.total || 0);
startTransition(() => {
if (!selectedTaskId && taskPayload.items?.length) {
setSelectedTaskId(taskPayload.items[0].id);
@ -169,17 +307,53 @@ export default function App() {
}
}
async function loadTasksOnly(query = taskQuery) {
const url = buildTasksUrl(query);
const taskPayload = await fetchJson(url);
primeJsonCache(url, taskPayload);
setTasks(taskPayload.items || []);
setTaskTotal(taskPayload.total || 0);
return taskPayload.items || [];
}
async function loadSessionDetail(sessionKey) {
if (!sessionKey) {
setCurrentSession(null);
return null;
}
const payload = await fetchJson(`/sessions/${encodeURIComponent(sessionKey)}`);
primeJsonCache(`/sessions/${encodeURIComponent(sessionKey)}`, payload);
setCurrentSession(payload);
return payload;
}
async function loadTaskDetail(taskId) {
const cached = detailCacheRef.current.get(taskId);
if (cached) {
setTaskDetail(cached);
void loadSessionDetail(cached.context?.session_key);
setDetailLoading(false);
}
setDetailLoading(true);
try {
const [task, steps, artifacts, history, timeline] = await Promise.all([
fetchJson(`/tasks/${encodeURIComponent(taskId)}`),
fetchJson(`/tasks/${encodeURIComponent(taskId)}/steps`),
fetchJson(`/tasks/${encodeURIComponent(taskId)}/artifacts`),
fetchJson(`/tasks/${encodeURIComponent(taskId)}/history`),
fetchJson(`/tasks/${encodeURIComponent(taskId)}/timeline`),
const [task, steps, artifacts, history, timeline, context] = await Promise.all([
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/steps`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/artifacts`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/history`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/timeline`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/context`),
]);
setTaskDetail({ task, steps, artifacts, history, timeline });
const payload = { task, steps, artifacts, history, timeline, context };
detailCacheRef.current.set(taskId, payload);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}`, task);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}/steps`, steps);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}/artifacts`, artifacts);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}/history`, history);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}/timeline`, timeline);
primeJsonCache(`/tasks/${encodeURIComponent(taskId)}/context`, context);
setTaskDetail(payload);
await loadSessionDetail(context?.session_key);
if (!selectedStepName) {
const suggested = steps.items?.find((step) => ["failed_retryable", "failed_manual", "running"].includes(step.status))?.step_name
|| steps.items?.find((step) => step.status !== "succeeded")?.step_name
@ -191,15 +365,87 @@ export default function App() {
}
}
async function refreshSelectedTask(taskId = selectedTaskId, { refreshTasks = true } = {}) {
if (refreshTasks) {
const refreshedTasks = await loadTasksOnly();
if (!taskId && refreshedTasks.length) {
taskId = refreshedTasks[0].id;
}
}
if (!taskId) {
setTaskDetail(null);
setCurrentSession(null);
return;
}
await loadTaskDetail(taskId);
}
function invalidateTaskCaches(taskId) {
invalidateJsonCache("/tasks?");
if (taskId) {
detailCacheRef.current.delete(taskId);
invalidateJsonCache(`/tasks/${encodeURIComponent(taskId)}`);
}
}
function invalidateSessionCaches(sessionKey) {
if (!sessionKey) return;
invalidateJsonCache(`/sessions/${encodeURIComponent(sessionKey)}`);
}
async function prefetchTaskDetail(taskId) {
if (!taskId || detailCacheRef.current.has(taskId)) return;
try {
const [task, steps, artifacts, history, timeline, context] = await Promise.all([
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/steps`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/artifacts`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/history`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/timeline`),
fetchJsonCached(`/tasks/${encodeURIComponent(taskId)}/context`),
]);
detailCacheRef.current.set(taskId, { task, steps, artifacts, history, timeline, context });
} catch {
// Ignore prefetch failures; normal navigation will surface the actual error.
}
}
useEffect(() => {
let cancelled = false;
loadShell().catch((error) => {
if (!cancelled) setBanner({ kind: "hot", text: `Initialization failed: ${error}` });
if (!cancelled) pushToast("hot", `Initialization failed: ${error}`);
});
return () => {
cancelled = true;
};
}, [selectedTaskId]);
}, []);
useEffect(() => {
syncHashState(view, selectedTaskId);
}, [view, selectedTaskId]);
useEffect(() => {
if (view !== "Tasks") return;
loadTasksOnly(taskQuery).catch((error) => pushToast("hot", `Task list failed to load: ${error}`));
}, [taskQuery, view]);
useEffect(() => {
if (!toasts.length) return undefined;
const timer = window.setTimeout(() => setToasts((current) => current.slice(1)), 3200);
return () => window.clearTimeout(timer);
}, [toasts]);
useEffect(() => {
function handleHashChange() {
const next = parseHashState();
setView(next.view);
if (next.taskId) {
setSelectedTaskId(next.taskId);
}
}
window.addEventListener("hashchange", handleHashChange);
return () => window.removeEventListener("hashchange", handleHashChange);
}, []);
useEffect(() => {
if (view !== "Overview") return;
@ -208,9 +454,9 @@ export default function App() {
setOverviewLoading(true);
try {
const [servicesPayload, schedulerPayload, historyPayload] = await Promise.all([
fetchJson("/runtime/services"),
fetchJson("/scheduler/preview"),
fetchJson("/history?limit=20"),
fetchJsonCached("/runtime/services"),
fetchJsonCached("/scheduler/preview"),
fetchJsonCached("/history?limit=20"),
]);
if (cancelled) return;
setServices(servicesPayload);
@ -230,7 +476,7 @@ export default function App() {
if (!selectedTaskId) return;
let cancelled = false;
loadTaskDetail(selectedTaskId).catch((error) => {
if (!cancelled) setBanner({ kind: "hot", text: `Task detail failed to load: ${error}` });
if (!cancelled) pushToast("hot", `Task detail failed to load: ${error}`);
});
return () => {
cancelled = true;
@ -279,7 +525,7 @@ export default function App() {
if (view !== "Logs" || !selectedLogName) return;
let cancelled = false;
loadCurrentLogContent(selectedLogName).catch((error) => {
if (!cancelled) setBanner({ kind: "hot", text: `Log failed to load: ${error}` });
if (!cancelled) pushToast("hot", `Log failed to load: ${error}`);
});
return () => {
cancelled = true;
@ -301,8 +547,8 @@ export default function App() {
setSettingsLoading(true);
try {
const [settingsPayload, schemaPayload] = await Promise.all([
fetchJson("/settings"),
fetchJson("/settings/schema"),
fetchJsonCached("/settings"),
fetchJsonCached("/settings/schema"),
]);
if (cancelled) return;
setSettings(settingsPayload);
@ -329,42 +575,77 @@ export default function App() {
history={history}
loading={overviewLoading}
onRefreshScheduler={async () => {
const payload = await fetchJson("/scheduler/preview");
setScheduler(payload);
setBanner({ kind: "good", text: "Scheduler refreshed" });
setPanelBusy("refresh_scheduler");
try {
const payload = await fetchJson("/scheduler/preview");
setScheduler(payload);
pushToast("good", "Scheduler refreshed");
} finally {
setPanelBusy("");
}
}}
onRefreshHistory={async () => {
const payload = await fetchJson("/history?limit=20");
setHistory(payload);
setBanner({ kind: "good", text: "Recent Actions refreshed" });
setPanelBusy("refresh_history");
try {
const payload = await fetchJson("/history?limit=20");
setHistory(payload);
pushToast("good", "Recent Actions refreshed");
} finally {
setPanelBusy("");
}
}}
onStageImport={async (sourcePath) => {
const result = await fetchJson("/stage/import", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ source_path: sourcePath }),
});
await loadShell();
setBanner({ kind: "good", text: `Imported to stage: ${result.target_path}` });
setPanelBusy("stage_import");
try {
const result = await fetchJson("/stage/import", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ source_path: sourcePath }),
});
await loadTasksOnly();
pushToast("good", `Imported to stage: ${result.target_path}`);
} finally {
setPanelBusy("");
}
}}
onStageUpload={async (file) => {
const result = await uploadFile("/stage/upload", file);
await loadShell();
setBanner({ kind: "good", text: `Uploaded to stage: ${result.target_path}` });
setPanelBusy("stage_upload");
try {
const result = await uploadFile("/stage/upload", file);
await loadTasksOnly();
pushToast("good", `Uploaded to stage: ${result.target_path}`);
} finally {
setPanelBusy("");
}
}}
onRunOnce={async () => {
await fetchJson("/worker/run-once", { method: "POST" });
await loadShell();
setBanner({ kind: "good", text: "Worker ran one pass" });
setPanelBusy("run_once");
try {
await fetchJson("/worker/run-once", { method: "POST" });
invalidateJsonCache("/tasks?");
await loadTasksOnly();
if (selectedTaskId) await refreshSelectedTask(selectedTaskId, { refreshTasks: false });
pushToast("good", "Worker ran one pass");
} finally {
setPanelBusy("");
}
}}
onServiceAction={async (serviceId, action) => {
await fetchJson(`/runtime/services/${serviceId}/${action}`, { method: "POST" });
await loadShell();
if (view === "Overview") {
await loadOverviewPanels();
const busyKey = `service:${serviceId}:${action}`;
setPanelBusy(busyKey);
try {
await fetchJson(`/runtime/services/${serviceId}/${action}`, { method: "POST" });
invalidateJsonCache("/runtime/services");
await loadShell();
if (view === "Overview") {
await loadOverviewPanels();
}
pushToast("good", `${serviceId} ${action} done`);
} finally {
setPanelBusy("");
}
setBanner({ kind: "good", text: `${serviceId} ${action} done` });
}}
busy={panelBusy}
/>
);
}
@ -372,46 +653,139 @@ export default function App() {
return (
<TasksView
tasks={tasks}
taskTotal={taskTotal}
taskQuery={taskQuery}
selectedTaskId={selectedTaskId}
onSelectTask={(taskId) => {
onSelectTask={(taskId, options = {}) => {
if (options.prefetch) {
prefetchTaskDetail(taskId);
return;
}
startTransition(() => {
setSelectedTaskId(taskId);
setSelectedStepName("");
});
}}
onRunTask={async (taskId) => {
const result = await fetchJson(`/tasks/${encodeURIComponent(taskId)}/actions/run`, { method: "POST" });
await loadShell();
await loadTaskDetail(taskId);
setBanner({ kind: "good", text: `Task advanced: ${taskId} / processed=${result.processed.length}` });
setActionBusy("run");
try {
const result = await fetchJson(`/tasks/${encodeURIComponent(taskId)}/actions/run`, { method: "POST" });
invalidateTaskCaches(taskId);
invalidateSessionCaches(taskDetail?.context?.session_key);
await refreshSelectedTask(taskId);
pushToast("good", `Task advanced: ${taskId} / processed=${result.processed.length}`);
} finally {
setActionBusy("");
}
}}
taskDetail={taskDetail}
session={currentSession}
loading={loading}
detailLoading={detailLoading}
actionBusy={actionBusy}
selectedStepName={selectedStepName}
onSelectStep={setSelectedStepName}
onRetryStep={async (stepName) => {
if (!selectedTaskId || !stepName) return;
const result = await fetchJson(`/tasks/${encodeURIComponent(selectedTaskId)}/actions/retry-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: stepName }),
});
await loadShell();
await loadTaskDetail(selectedTaskId);
setBanner({ kind: "good", text: `Retried ${stepName} / processed=${result.processed.length}` });
setActionBusy("retry");
try {
const result = await fetchJson(`/tasks/${encodeURIComponent(selectedTaskId)}/actions/retry-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: stepName }),
});
invalidateTaskCaches(selectedTaskId);
invalidateSessionCaches(taskDetail?.context?.session_key);
await refreshSelectedTask(selectedTaskId);
pushToast("good", `Retried ${stepName} / processed=${result.processed.length}`);
} finally {
setActionBusy("");
}
}}
onResetStep={async (stepName) => {
if (!selectedTaskId || !stepName) return;
if (!window.confirm(`确认重置到 step=${stepName} 并清理其后的产物吗?`)) return;
const result = await fetchJson(`/tasks/${encodeURIComponent(selectedTaskId)}/actions/reset-to-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: stepName }),
setActionBusy("reset");
try {
const result = await fetchJson(`/tasks/${encodeURIComponent(selectedTaskId)}/actions/reset-to-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: stepName }),
});
invalidateTaskCaches(selectedTaskId);
invalidateSessionCaches(taskDetail?.context?.session_key);
await refreshSelectedTask(selectedTaskId);
pushToast("good", `已重置到 ${stepName} / processed=${result.run.processed.length}`);
} finally {
setActionBusy("");
}
}}
onBindFullVideo={async (fullVideoBvid) => {
if (!selectedTaskId || !fullVideoBvid) return;
setActionBusy("bind_full_video");
try {
await fetchJson(`/tasks/${encodeURIComponent(selectedTaskId)}/bind-full-video`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ full_video_bvid: fullVideoBvid }),
});
invalidateTaskCaches(selectedTaskId);
invalidateSessionCaches(taskDetail?.context?.session_key);
await refreshSelectedTask(selectedTaskId);
pushToast("good", `已绑定完整版 BV: ${fullVideoBvid}`);
} finally {
setActionBusy("");
}
}}
onOpenSessionTask={(taskId) => {
startTransition(() => {
setSelectedTaskId(taskId);
setSelectedStepName("");
});
await loadShell();
await loadTaskDetail(selectedTaskId);
setBanner({ kind: "good", text: `已重置到 ${stepName} / processed=${result.run.processed.length}` });
}}
onSessionMerge={async (rawTaskIds) => {
const sessionKey = currentSession?.session_key || taskDetail?.context?.session_key;
const taskIds = String(rawTaskIds)
.split(",")
.map((item) => item.trim())
.filter(Boolean);
if (!sessionKey || !taskIds.length) return;
setActionBusy("session_merge");
try {
await fetchJson(`/sessions/${encodeURIComponent(sessionKey)}/merge`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ task_ids: taskIds }),
});
invalidateJsonCache("/tasks?");
invalidateSessionCaches(sessionKey);
taskIds.forEach((taskId) => invalidateTaskCaches(taskId));
await refreshSelectedTask(selectedTaskId);
pushToast("good", `已合并 ${taskIds.length} 个任务到 session ${sessionKey}`);
} finally {
setActionBusy("");
}
}}
onSessionRebind={async (fullVideoBvid) => {
const sessionKey = currentSession?.session_key || taskDetail?.context?.session_key;
if (!sessionKey || !fullVideoBvid) return;
setActionBusy("session_rebind");
try {
await fetchJson(`/sessions/${encodeURIComponent(sessionKey)}/rebind`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ full_video_bvid: fullVideoBvid }),
});
invalidateSessionCaches(sessionKey);
if (selectedTaskId) invalidateTaskCaches(selectedTaskId);
await refreshSelectedTask(selectedTaskId);
pushToast("good", `已为 session ${sessionKey} 绑定完整版 BV`);
} finally {
setActionBusy("");
}
}}
onTaskQueryChange={(patch) => {
setTaskQuery((current) => ({ ...current, ...patch }));
}}
/>
);
@ -428,9 +802,11 @@ export default function App() {
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
invalidateJsonCache("/settings");
invalidateJsonCache("/settings/schema");
const refreshed = await fetchJson("/settings");
setSettings(refreshed);
setBanner({ kind: "good", text: "Settings 已保存并刷新" });
pushToast("good", "Settings 已保存并刷新");
return refreshed;
}}
/>
@ -450,9 +826,15 @@ export default function App() {
onToggleAutoRefresh={setAutoRefreshLogs}
onRefreshLog={async () => {
if (!selectedLogName) return;
await loadCurrentLogContent(selectedLogName);
setBanner({ kind: "good", text: "日志已刷新" });
setPanelBusy("refresh_log");
try {
await loadCurrentLogContent(selectedLogName);
pushToast("good", "日志已刷新");
} finally {
setPanelBusy("");
}
}}
busy={panelBusy === "refresh_log"}
/>
);
})();
@ -484,10 +866,19 @@ export default function App() {
<div className="status-row">
<span className={`status-badge ${health ? "good" : "hot"}`}>API {health ? "ok" : "down"}</span>
<span className={`status-badge ${doctorOk ? "good" : "warn"}`}>Doctor {doctorOk ? "ready" : "warn"}</span>
<span className="status-badge">{tasks.length} tasks</span>
<span className="status-badge">{taskTotal} tasks</span>
</div>
</header>
{banner ? <div className={`status-banner ${banner.kind}`}>{banner.text}</div> : null}
{toasts.length ? (
<div className="toast-stack">
{toasts.map((toast) => (
<div key={toast.id} className={`status-banner ${toast.kind}`}>
<span>{toast.text}</span>
<button className="toast-close" onClick={() => removeToast(toast.id)}>关闭</button>
</div>
))}
</div>
) : null}
{currentView}
</main>
</div>

View File

@ -1,3 +1,13 @@
const jsonCache = new Map();
function cacheKey(url, options = {}) {
return JSON.stringify({
url,
method: options.method || "GET",
headers: options.headers || {},
});
}
export async function fetchJson(url, options = {}) {
const token = localStorage.getItem("biliup_next_token") || "";
const headers = { ...(options.headers || {}) };
@ -10,6 +20,34 @@ export async function fetchJson(url, options = {}) {
return payload;
}
export async function fetchJsonCached(url, options = {}, ttlMs = 8000) {
const method = options.method || "GET";
if (method !== "GET") {
return fetchJson(url, options);
}
const key = cacheKey(url, options);
const cached = jsonCache.get(key);
if (cached && Date.now() - cached.time < ttlMs) {
return cached.payload;
}
const payload = await fetchJson(url, options);
jsonCache.set(key, { time: Date.now(), payload });
return payload;
}
export function primeJsonCache(url, payload, options = {}) {
const key = cacheKey(url, options);
jsonCache.set(key, { time: Date.now(), payload });
}
export function invalidateJsonCache(match) {
for (const key of jsonCache.keys()) {
if (typeof match === "string" ? key.includes(match) : match.test(key)) {
jsonCache.delete(key);
}
}
}
export async function uploadFile(url, file) {
const token = localStorage.getItem("biliup_next_token") || "";
const form = new FormData();

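The cache in `api.js` keys entries with `JSON.stringify` over `{url, method, headers}`, which means two logically identical requests can land on different cache keys when the headers object is built in a different insertion order. A minimal standalone sketch of that caveat — the `cacheKey` body below mirrors the helper above, everything else is demo input:

```javascript
// Standalone copy of the cacheKey scheme above, to illustrate one caveat:
// JSON.stringify serializes string keys in insertion order, so the same
// headers written in a different order produce a different cache key.
function cacheKey(url, options = {}) {
  return JSON.stringify({
    url,
    method: options.method || "GET",
    headers: options.headers || {},
  });
}

const a = cacheKey("/tasks?status=running", { headers: { "X-A": "1", "X-B": "2" } });
const b = cacheKey("/tasks?status=running", { headers: { "X-B": "2", "X-A": "1" } });
console.log(a === b); // false: same request, different keys (a cache miss)
```

For the GET calls the console actually issues this is harmless, because each call site builds its headers in a fixed shape, but it is worth knowing before reusing the helper elsewhere.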
View File

@ -18,6 +18,7 @@ export default function LogsPanel({
onToggleFilterCurrentTask,
autoRefresh,
onToggleAutoRefresh,
busy,
}) {
const [search, setSearch] = useState("");
const [lineFilter, setLineFilter] = useState("");
@ -67,7 +68,9 @@ export default function LogsPanel({
<p className="eyebrow">Log Detail</p>
<h2>{selectedLogName || "选择一个日志"}</h2>
</div>
<button className="nav-btn" onClick={onRefreshLog}>刷新</button>
<button className="nav-btn" onClick={onRefreshLog} disabled={busy}>
{busy ? "刷新中..." : "刷新"}
</button>
</div>
<div className="toolbar-grid compact-grid">
<input value={lineFilter} onChange={(event) => setLineFilter(event.target.value)} placeholder="过滤日志行内容" />

View File

@ -27,6 +27,7 @@ export default function OverviewPanel({
onServiceAction,
onStageImport,
onStageUpload,
busy,
}) {
const [stageSourcePath, setStageSourcePath] = useState("");
const [stageFile, setStageFile] = useState(null);
@ -65,13 +66,14 @@ export default function OverviewPanel({
/>
<button
className="nav-btn compact-btn"
disabled={busy === "stage_import"}
onClick={async () => {
if (!stageSourcePath.trim()) return;
await onStageImport?.(stageSourcePath.trim());
setStageSourcePath("");
}}
>
复制到隔离 Stage
{busy === "stage_import" ? "导入中..." : "复制到隔离 Stage"}
</button>
</div>
<div className="stage-input-grid upload-grid-react">
@ -81,13 +83,14 @@ export default function OverviewPanel({
/>
<button
className="nav-btn compact-btn strong-btn"
disabled={!stageFile || busy === "stage_upload"}
onClick={async () => {
if (!stageFile) return;
await onStageUpload?.(stageFile);
setStageFile(null);
}}
>
上传到隔离 Stage
{busy === "stage_upload" ? "上传中..." : "上传到隔离 Stage"}
</button>
</div>
<p className="muted">只会导入到 `biliup-next/data/workspace/stage/`,不会移动原文件</p>
@ -96,7 +99,9 @@ export default function OverviewPanel({
<article className="detail-card">
<div className="card-head-inline">
<h3>Runtime Services</h3>
<button className="nav-btn compact-btn strong-btn" onClick={onRunOnce}>执行一轮 Worker</button>
<button className="nav-btn compact-btn strong-btn" onClick={onRunOnce} disabled={busy === "run_once"}>
{busy === "run_once" ? "执行中..." : "执行一轮 Worker"}
</button>
</div>
<div className="list-stack">
{serviceItems.map((service) => (
@ -107,9 +112,9 @@ export default function OverviewPanel({
</div>
<div className="service-actions">
<StatusBadge tone={service.active_state === "active" ? "good" : "hot"}>{service.active_state}</StatusBadge>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "start")}>start</button>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "restart")}>restart</button>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "stop")}>stop</button>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "start")} disabled={busy === `service:${service.id}:start`}>start</button>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "restart")} disabled={busy === `service:${service.id}:restart`}>restart</button>
<button className="nav-btn compact-btn" onClick={() => onServiceAction?.(service.id, "stop")} disabled={busy === `service:${service.id}:stop`}>stop</button>
</div>
</div>
))}
@ -120,7 +125,9 @@ export default function OverviewPanel({
<article className="detail-card">
<div className="card-head-inline">
<h3>Scheduler Queue</h3>
<button className="nav-btn compact-btn" onClick={onRefreshScheduler}>刷新</button>
<button className="nav-btn compact-btn" onClick={onRefreshScheduler} disabled={busy === "refresh_scheduler"}>
{busy === "refresh_scheduler" ? "刷新中..." : "刷新"}
</button>
</div>
<div className="list-stack">
<div className="list-row"><span>scheduled</span><strong>{scheduled.length}</strong></div>
@ -148,7 +155,9 @@ export default function OverviewPanel({
<article className="detail-card">
<div className="card-head-inline">
<h3>Recent Actions</h3>
<button className="nav-btn compact-btn" onClick={onRefreshHistory}>刷新</button>
<button className="nav-btn compact-btn" onClick={onRefreshHistory} disabled={busy === "refresh_history"}>
{busy === "refresh_history" ? "刷新中..." : "刷新"}
</button>
</div>
<div className="list-stack">
{actionItems.slice(0, 8).map((item) => (

View File

@ -1,7 +1,17 @@
import { useMemo } from "react";
import { useEffect, useState } from "react";
import StatusBadge from "./StatusBadge.jsx";
import { attentionLabel, deliveryLabel, formatDate, summarizeAttention, summarizeDelivery } from "../lib/format.js";
import {
actionAdvice,
attentionLabel,
currentStepLabel,
deliveryLabel,
formatDate,
summarizeAttention,
summarizeDelivery,
recommendedAction,
taskDisplayStatus,
} from "../lib/format.js";
function SummaryRow({ label, value }) {
return (
@ -20,12 +30,43 @@ function suggestedStepName(steps) {
export default function TaskDetailCard({
payload,
session,
loading,
actionBusy,
selectedStepName,
onSelectStep,
onRetryStep,
onResetStep,
onBindFullVideo,
onOpenSessionTask,
onSessionMerge,
onSessionRebind,
}) {
const [fullVideoInput, setFullVideoInput] = useState("");
const [sessionRebindInput, setSessionRebindInput] = useState("");
const [sessionMergeInput, setSessionMergeInput] = useState("");
const task = payload?.task;
const steps = payload?.steps;
const artifacts = payload?.artifacts;
const history = payload?.history;
const context = payload?.context;
const delivery = task?.delivery_state || {};
const latestAction = history?.items?.[0];
const sessionContext = task?.session_context || context || {};
const activeStepName = selectedStepName || suggestedStepName(steps);
const splitUrl = sessionContext.video_links?.split_video_url;
const fullUrl = sessionContext.video_links?.full_video_url;
const nextAction = recommendedAction(task);
useEffect(() => {
setFullVideoInput(sessionContext.full_video_bvid || "");
}, [sessionContext.full_video_bvid, task?.id]);
useEffect(() => {
setSessionRebindInput(session?.full_video_bvid || "");
setSessionMergeInput("");
}, [session?.full_video_bvid, session?.session_key]);
if (loading) {
return (
<article className="panel detail-panel">
@ -52,37 +93,45 @@ export default function TaskDetailCard({
);
}
const { task, steps, artifacts, history } = payload;
const delivery = task.delivery_state || {};
const latestAction = history?.items?.[0];
const activeStepName = useMemo(
() => selectedStepName || suggestedStepName(steps),
[selectedStepName, steps],
);
return (
<article className="panel detail-panel">
<div className="panel-head">
<div>
<p className="eyebrow">Task Detail</p>
<h2>{task.title}</h2>
<p className="muted detail-lead">{actionAdvice(task)}</p>
</div>
<div className="status-row">
<StatusBadge>{task.status}</StatusBadge>
<StatusBadge>{taskDisplayStatus(task)}</StatusBadge>
<StatusBadge>{attentionLabel(summarizeAttention(task))}</StatusBadge>
<button className="nav-btn compact-btn" onClick={() => onRetryStep?.(activeStepName)} disabled={!activeStepName}>
Retry Step
<button className="nav-btn compact-btn" onClick={() => onRetryStep?.(activeStepName)} disabled={!activeStepName || actionBusy}>
{actionBusy === "retry" ? "重试中..." : "重试当前步骤"}
</button>
<button className="nav-btn compact-btn strong-btn" onClick={() => onResetStep?.(activeStepName)} disabled={!activeStepName}>
Reset To Step
<button className="nav-btn compact-btn strong-btn" onClick={() => onResetStep?.(activeStepName)} disabled={!activeStepName || actionBusy}>
{actionBusy === "reset" ? "重置中..." : "重置到此步骤"}
</button>
</div>
</div>
<div className="detail-grid">
<section className="detail-card">
<h3>Recommended Next Step</h3>
<SummaryRow label="Action" value={nextAction.label} />
<p className="muted">{nextAction.detail}</p>
<div className="row-actions" style={{ marginTop: 12 }}>
{nextAction.action === "retry" ? (
<button className="nav-btn compact-btn strong-btn" onClick={() => onRetryStep?.(activeStepName)} disabled={!activeStepName || actionBusy}>
{actionBusy === "retry" ? "重试中..." : nextAction.label}
</button>
) : splitUrl ? (
<a className="nav-btn compact-btn strong-btn" href={splitUrl} target="_blank" rel="noreferrer">打开当前结果</a>
) : null}
</div>
</section>
<section className="detail-card">
<h3>Current State</h3>
<SummaryRow label="Task ID" value={task.id} />
<SummaryRow label="Current Step" value={currentStepLabel(task, steps?.items || [])} />
<SummaryRow label="Updated" value={formatDate(task.updated_at)} />
<SummaryRow label="Next Retry" value={formatDate(task.retry_state?.next_retry_at)} />
<SummaryRow label="Split Comment" value={deliveryLabel(delivery.split_comment || "pending")} />
@ -104,6 +153,31 @@ export default function TaskDetailCard({
</div>
<div className="detail-grid">
<section className="detail-card">
<h3>Delivery & Context</h3>
<SummaryRow label="Split BV" value={sessionContext.split_bvid || "-"} />
<SummaryRow label="Full BV" value={sessionContext.full_video_bvid || "-"} />
<SummaryRow label="Session Key" value={sessionContext.session_key || "-"} />
<SummaryRow label="Streamer" value={sessionContext.streamer || "-"} />
<SummaryRow label="Context Source" value={sessionContext.context_source || "-"} />
<div className="row-actions" style={{ marginTop: 12 }}>
{splitUrl ? <a className="nav-btn compact-btn" href={splitUrl} target="_blank" rel="noreferrer">打开分P</a> : null}
{fullUrl ? <a className="nav-btn compact-btn" href={fullUrl} target="_blank" rel="noreferrer">打开完整版</a> : null}
</div>
<div className="bind-block">
<label className="muted">绑定完整版 BV</label>
<input value={fullVideoInput} onChange={(event) => setFullVideoInput(event.target.value)} placeholder="BV1..." />
<div className="row-actions">
<button
className="nav-btn compact-btn strong-btn"
onClick={() => onBindFullVideo?.(fullVideoInput.trim())}
disabled={actionBusy}
>
{actionBusy === "bind_full_video" ? "绑定中..." : "绑定完整版 BV"}
</button>
</div>
</div>
</section>
<section className="detail-card">
<h3>Steps</h3>
<div className="list-stack">
@ -137,6 +211,60 @@ export default function TaskDetailCard({
</div>
</section>
</div>
<div className="detail-grid">
<section className="detail-card session-card-full">
<h3>Session Workspace</h3>
{!session?.session_key ? (
<p className="muted">当前任务如果已绑定 session_key,这里会显示同场片段和完整版绑定信息</p>
) : (
<>
<SummaryRow label="Session Key" value={session.session_key} />
<SummaryRow label="Task Count" value={String(session.task_count || 0)} />
<SummaryRow label="Session Full BV" value={session.full_video_bvid || "-"} />
<div className="bind-block">
<label className="muted">整个 Session 重绑 BV</label>
<input value={sessionRebindInput} onChange={(event) => setSessionRebindInput(event.target.value)} placeholder="BV1..." />
<div className="row-actions">
<button
className="nav-btn compact-btn"
onClick={() => onSessionRebind?.(sessionRebindInput.trim())}
disabled={actionBusy}
>
{actionBusy === "session_rebind" ? "重绑中..." : "Session 重绑 BV"}
</button>
</div>
</div>
<div className="bind-block">
<label className="muted">合并任务到当前 Session</label>
<input value={sessionMergeInput} onChange={(event) => setSessionMergeInput(event.target.value)} placeholder="输入 task id,用逗号分隔" />
<div className="row-actions">
<button
className="nav-btn compact-btn"
onClick={() => onSessionMerge?.(sessionMergeInput)}
disabled={actionBusy}
>
{actionBusy === "session_merge" ? "合并中..." : "合并到当前 Session"}
</button>
</div>
</div>
<div className="list-stack">
{(session.tasks || []).map((item) => (
<button
key={item.id}
type="button"
className="list-row selectable"
onClick={() => onOpenSessionTask?.(item.id)}
>
<span>{item.title}</span>
<StatusBadge>{taskDisplayStatus(item)}</StatusBadge>
</button>
))}
</div>
</>
)}
</section>
</div>
</article>
);
}

View File

@ -1,5 +1,14 @@
import StatusBadge from "./StatusBadge.jsx";
import { attentionLabel, deliveryLabel, formatDate, summarizeAttention, summarizeDelivery } from "../lib/format.js";
import {
attentionLabel,
currentStepLabel,
deliveryLabel,
formatDate,
summarizeAttention,
summarizeDelivery,
taskDisplayStatus,
taskPrimaryActionLabel,
} from "../lib/format.js";
function deliveryStateLabel(task) {
const delivery = task.delivery_state || {};
@ -12,73 +21,69 @@ function deliveryStateLabel(task) {
export default function TaskTable({ tasks, selectedTaskId, onSelectTask, onRunTask }) {
return (
<div className="table-wrap-react">
<table className="task-table-react">
<thead>
<tr>
<th>任务</th>
<th>状态</th>
<th>关注</th>
<th>纯享评论</th>
<th>主视频评论</th>
<th>清理</th>
<th>下次重试</th>
<th>更新时间</th>
<th>操作</th>
</tr>
</thead>
<tbody>
{tasks.map((task) => {
const delivery = deliveryStateLabel(task);
return (
<tr
key={task.id}
className={selectedTaskId === task.id ? "active" : ""}
onClick={() => onSelectTask(task.id)}
>
<td>
<div className="task-title">{task.title}</div>
<div className="task-subtitle">{task.id}</div>
</td>
<td><StatusBadge>{task.status}</StatusBadge></td>
<td><StatusBadge>{attentionLabel(summarizeAttention(task))}</StatusBadge></td>
<td><StatusBadge>{delivery.splitComment}</StatusBadge></td>
<td><StatusBadge>{delivery.fullComment}</StatusBadge></td>
<td><StatusBadge>{delivery.cleanup}</StatusBadge></td>
<td>
<div>{formatDate(task.retry_state?.next_retry_at)}</div>
{task.retry_state?.retry_remaining_seconds != null ? (
<div className="task-subtitle">{task.retry_state.retry_remaining_seconds}s</div>
) : null}
</td>
<td>{formatDate(task.updated_at)}</td>
<td>
<div className="row-actions">
<button
className="nav-btn compact-btn"
onClick={(event) => {
event.stopPropagation();
onSelectTask(task.id);
}}
>
打开
</button>
<button
className="nav-btn compact-btn strong-btn"
onClick={(event) => {
event.stopPropagation();
onRunTask?.(task.id);
}}
>
执行
</button>
</div>
</td>
</tr>
);
})}
</tbody>
</table>
<div className="task-cards-grid">
{tasks.map((task) => {
const delivery = deliveryStateLabel(task);
return (
<button
key={task.id}
type="button"
className={selectedTaskId === task.id ? "task-card active" : "task-card"}
onClick={() => onSelectTask(task.id)}
onMouseEnter={() => onSelectTask?.(task.id, { prefetch: true })}
>
<div className="task-card-head">
<StatusBadge>{taskDisplayStatus(task)}</StatusBadge>
<StatusBadge>{attentionLabel(summarizeAttention(task))}</StatusBadge>
</div>
<div>
<div className="task-title">{task.title}</div>
<div className="task-subtitle">{currentStepLabel(task)}</div>
</div>
<div className="task-card-metrics">
<div className="task-metric">
<span>纯享评论</span>
<strong>{delivery.splitComment}</strong>
</div>
<div className="task-metric">
<span>主视频评论</span>
<strong>{delivery.fullComment}</strong>
</div>
<div className="task-metric">
<span>清理</span>
<strong>{delivery.cleanup}</strong>
</div>
<div className="task-metric">
<span>下次重试</span>
<strong>{formatDate(task.retry_state?.next_retry_at)}</strong>
</div>
</div>
<div className="task-card-foot">
<div className="task-subtitle">更新于 {formatDate(task.updated_at)}</div>
<div className="row-actions">
<button
className="nav-btn compact-btn"
onClick={(event) => {
event.stopPropagation();
onSelectTask(task.id);
}}
>
打开
</button>
<button
className="nav-btn compact-btn strong-btn"
onClick={(event) => {
event.stopPropagation();
onRunTask?.(task.id);
}}
>
{taskPrimaryActionLabel(task)}
</button>
</div>
</div>
</button>
);
})}
</div>
);
}

View File

@ -1,7 +1,7 @@
export function statusClass(status) {
if (["collection_synced", "published", "done", "resolved", "present"].includes(status)) return "good";
if (["failed_manual"].includes(status)) return "hot";
if (["failed_retryable", "pending", "legacy_untracked", "running", "retry_now", "waiting_retry", "manual_now"].includes(status)) return "warn";
if (["failed_retryable", "pending", "running", "retry_now", "waiting_retry", "manual_now"].includes(status)) return "warn";
return "";
}
@ -31,7 +31,6 @@ export function attentionLabel(value) {
}
export function summarizeDelivery(delivery = {}) {
if (delivery.full_video_timeline_comment === "legacy_untracked") return "legacy_untracked";
if (delivery.split_comment === "pending" || delivery.full_video_timeline_comment === "pending") return "pending_comment";
if (delivery.source_video_present === false || delivery.split_videos_present === false) return "cleanup_removed";
return "stable";
@ -41,7 +40,6 @@ export function deliveryLabel(value) {
return {
done: "已发送",
pending: "待处理",
legacy_untracked: "历史未追踪",
present: "保留",
removed: "已清理",
cleanup_removed: "已清理视频",
@ -49,3 +47,96 @@ export function deliveryLabel(value) {
stable: "正常",
}[value] || value;
}
export function taskDisplayStatus(task) {
if (!task) return "-";
if (task.status === "failed_manual") return "需人工处理";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return "等待B站可见";
if (task.status === "failed_retryable") return "等待自动重试";
return {
created: "已接收",
transcribed: "已转录",
songs_detected: "已识歌",
split_done: "已切片",
published: "已上传",
commented: "评论完成",
collection_synced: "已完成",
running: "处理中",
}[task.status] || task.status || "-";
}
export function stepLabel(stepName) {
return {
ingest: "接收视频",
transcribe: "转录字幕",
song_detect: "识别歌曲",
split: "切分分P",
publish: "上传分P",
comment: "发布评论",
collection_a: "加入完整版合集",
collection_b: "加入分P合集",
}[stepName] || stepName || "-";
}
export function currentStepLabel(task, steps = []) {
const running = steps.find((step) => step.status === "running");
if (running) return stepLabel(running.step_name);
if (task?.retry_state?.step_name) return `${stepLabel(task.retry_state.step_name)} · ${taskDisplayStatus(task)}`;
const pending = steps.find((step) => step.status === "pending");
if (pending) return stepLabel(pending.step_name);
return {
created: "转录字幕",
transcribed: "识别歌曲",
songs_detected: "切分分P",
split_done: "上传分P",
published: "评论与合集",
commented: "同步合集",
collection_synced: "链路完成",
}[task?.status] || "-";
}
export function taskPrimaryActionLabel(task) {
if (!task) return "执行";
if (task.status === "failed_manual") return "人工重跑";
if (task.retry_state?.retry_due) return "立即重试";
if (task.status === "failed_retryable") return "继续处理";
if (task.status === "collection_synced") return "查看";
return "执行";
}
export function actionAdvice(task) {
if (!task) return "";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") {
return "B站通常需要一段时间完成转码和审核,系统会自动重试评论。";
}
if (task.status === "failed_retryable") {
return "当前错误可自动恢复,等到重试时间或手工触发即可。";
}
if (task.status === "failed_manual") {
return "先看错误信息,再决定是重试步骤还是绑定完整版 BV。";
}
if (task.status === "collection_synced") {
return "链路已完成,可以直接打开分P或完整版链接检查结果。";
}
return "系统会继续推进后续步骤,必要时可在这里手工干预。";
}
export function recommendedAction(task) {
if (!task) return { label: "查看任务", detail: "先打开详情,确认当前步骤和最近动作。", action: "open" };
if (task.status === "failed_manual") {
return { label: "处理失败步骤", detail: "这是需要人工介入的任务,优先查看错误并决定是否重试。", action: "retry" };
}
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") {
return { label: "等待平台可见", detail: "B站通常需要转码和审核,暂时不需要人工操作。", action: "wait" };
}
if (task.retry_state?.retry_due) {
return { label: "立即重试", detail: "已经到达重试窗口,可以立即推进当前步骤。", action: "retry" };
}
if (task.status === "published") {
return { label: "检查评论与合集", detail: "上传已经完成,下一步是确认评论和合集同步。", action: "open" };
}
if (task.status === "collection_synced") {
return { label: "检查最终结果", detail: "链路已经完成,可直接打开视频或做清理确认。", action: "open" };
}
return { label: "继续观察", detail: "当前任务仍在正常推进,必要时可手工执行一轮。", action: "open" };
}

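As a quick sanity check of the status mapping above, the two key helpers can be traced together for the common "comment step waiting on Bilibili" shape. The bodies below are trimmed copies of `taskDisplayStatus` and `recommendedAction` from this file (only the branches the sample task hits), kept inline so the sketch runs standalone:

```javascript
// Trimmed copies of the helpers above, just enough to trace one task shape.
function taskDisplayStatus(task) {
  if (!task) return "-";
  if (task.status === "failed_manual") return "需人工处理";
  if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return "等待B站可见";
  if (task.status === "failed_retryable") return "等待自动重试";
  return task.status || "-";
}

function recommendedAction(task) {
  if (!task) return { action: "open" };
  if (task.status === "failed_manual") return { action: "retry" };
  if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return { action: "wait" };
  if (task.retry_state?.retry_due) return { action: "retry" };
  return { action: "open" };
}

// A retryable comment step maps to "wait", not "retry": the UI deliberately
// tells the operator this case resolves itself once the platform catches up.
const task = { status: "failed_retryable", retry_state: { step_name: "comment" } };
console.log(taskDisplayStatus(task)); // 等待B站可见
console.log(recommendedAction(task).action); // wait
```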
View File

@ -66,11 +66,20 @@ button {
gap: 16px;
}
.toast-stack {
display: grid;
gap: 10px;
}
.status-banner {
border-radius: 18px;
padding: 12px 16px;
border: 1px solid var(--line);
background: rgba(255,255,255,0.86);
display: flex;
justify-content: space-between;
gap: 12px;
align-items: center;
}
.status-banner.good {
@ -88,6 +97,13 @@ button {
color: var(--accent);
}
.toast-close {
border: 0;
background: transparent;
color: inherit;
font-weight: 600;
}
.react-topbar {
padding: 18px 22px;
display: flex;
@ -225,6 +241,11 @@ button {
gap: 16px;
}
.tasks-main-stack {
display: grid;
gap: 16px;
}
.overview-stack-react {
display: grid;
gap: 16px;
@ -259,49 +280,61 @@ button {
background: rgba(255,255,255,0.92);
}
.table-wrap-react {
max-height: calc(100vh - 280px);
overflow: auto;
.task-cards-grid {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 12px;
}
.task-card {
display: grid;
gap: 12px;
border: 1px solid var(--line);
border-radius: 16px;
border-radius: 18px;
padding: 16px;
background: rgba(255,255,255,0.84);
}
.task-table-react {
width: 100%;
min-width: 980px;
border-collapse: collapse;
}
.task-table-react th,
.task-table-react td {
padding: 12px 14px;
border-bottom: 1px solid var(--line);
color: var(--ink);
text-align: left;
vertical-align: top;
}
.task-table-react th {
position: sticky;
top: 0;
background: rgba(243, 239, 232, 0.96);
.task-card.active {
border-color: rgba(178, 75, 26, 0.28);
background: linear-gradient(135deg, rgba(255, 248, 240, 0.98), rgba(249, 242, 234, 0.95));
}
.task-card-head,
.task-card-foot {
display: flex;
justify-content: space-between;
gap: 10px;
align-items: flex-start;
flex-wrap: wrap;
}
.task-card-metrics {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 10px;
}
.task-metric {
border: 1px solid var(--line);
border-radius: 14px;
padding: 10px 12px;
background: rgba(255,255,255,0.72);
}
.task-metric span {
display: block;
color: var(--muted);
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.08em;
margin-bottom: 6px;
}
.task-table-react tbody tr {
cursor: pointer;
transition: background 140ms ease;
}
.task-table-react tbody tr:hover {
background: rgba(178, 75, 26, 0.06);
}
.task-table-react tbody tr.active {
background: linear-gradient(135deg, rgba(255, 248, 240, 0.98), rgba(249, 242, 234, 0.95));
.task-metric strong {
display: block;
font-size: 14px;
line-height: 1.4;
}
.task-title {
@ -315,6 +348,40 @@ button {
word-break: break-all;
}
.focus-grid {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 12px;
}
.focus-card {
display: grid;
gap: 10px;
border: 1px solid var(--line);
border-radius: 18px;
padding: 16px;
background: rgba(255,255,255,0.84);
text-align: left;
color: var(--ink);
}
.focus-card.active {
border-color: rgba(178, 75, 26, 0.28);
background: linear-gradient(135deg, rgba(255, 248, 240, 0.98), rgba(249, 242, 234, 0.95));
}
.focus-card p {
margin: 0;
color: var(--muted);
}
.focus-card-head {
display: flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
}
.detail-panel .detail-row,
.list-row {
display: flex;
@ -351,6 +418,31 @@ button {
font-size: 13px;
}
.detail-lead {
margin-top: 8px;
max-width: 56ch;
}
.bind-block {
display: grid;
gap: 10px;
margin-top: 14px;
padding-top: 12px;
border-top: 1px solid var(--line);
}
.bind-block input {
width: 100%;
border: 1px solid var(--line);
border-radius: 14px;
padding: 11px 12px;
background: rgba(255,255,255,0.96);
}
.session-card-full {
grid-column: 1 / -1;
}
.row-actions,
.service-actions,
.card-head-inline {
@ -550,4 +642,34 @@ button {
.toolbar-grid {
grid-template-columns: 1fr;
}
.session-card-full {
grid-column: auto;
}
.focus-grid {
grid-template-columns: 1fr;
}
.task-cards-grid,
.task-card-metrics {
grid-template-columns: 1fr;
}
}
@media (max-width: 760px) {
.react-shell {
width: min(100vw - 20px, 100%);
margin: 10px auto 24px;
}
.panel,
.react-topbar,
.react-sidebar {
border-radius: 18px;
}
.task-card {
padding: 14px;
}
}

View File

@ -5,6 +5,7 @@ description = "Next-generation control-plane-first biliup pipeline"
requires-python = ">=3.11"
dependencies = [
"requests>=2.32.0",
"groq>=0.18.0",
]
[project.scripts]

View File

@ -16,7 +16,15 @@ fi
cd "$PROJECT_DIR"
export PYTHONPATH="$PROJECT_DIR/src"
LOG_DIR="$PROJECT_DIR/runtime/logs"
LOG_FILE="$LOG_DIR/api.log"
mkdir -p "$LOG_DIR"
LOG_MAX_BYTES="${BILIUP_NEXT_LOG_MAX_BYTES:-20971520}"
LOG_BACKUPS="${BILIUP_NEXT_LOG_BACKUPS:-5}"
exec "$PYTHON_BIN" -m biliup_next.app.cli serve \
echo "[$(date '+%Y-%m-%d %H:%M:%S %z')] starting biliup-next api" | "$PROJECT_DIR/scripts/log-tee.sh" "$LOG_FILE" "$LOG_MAX_BYTES" "$LOG_BACKUPS"
"$PYTHON_BIN" -u -m biliup_next.app.cli serve \
--host "${BILIUP_NEXT_API_HOST:-0.0.0.0}" \
--port "${BILIUP_NEXT_API_PORT:-8787}"
--port "${BILIUP_NEXT_API_PORT:-8787}" \
2>&1 | "$PROJECT_DIR/scripts/log-tee.sh" "$LOG_FILE" "$LOG_MAX_BYTES" "$LOG_BACKUPS"

View File

@ -16,6 +16,15 @@ fi
cd "$PROJECT_DIR"
export PYTHONPATH="$PROJECT_DIR/src"
LOG_DIR="$PROJECT_DIR/runtime/logs"
LOG_FILE="$LOG_DIR/worker.log"
mkdir -p "$LOG_DIR"
LOG_MAX_BYTES="${BILIUP_NEXT_LOG_MAX_BYTES:-20971520}"
LOG_BACKUPS="${BILIUP_NEXT_LOG_BACKUPS:-5}"
"$PYTHON_BIN" -m biliup_next.app.cli init-workspace
exec "$PYTHON_BIN" -m biliup_next.app.cli worker --interval "${BILIUP_NEXT_WORKER_INTERVAL:-5}"
echo "[$(date '+%Y-%m-%d %H:%M:%S %z')] starting biliup-next worker" | "$PROJECT_DIR/scripts/log-tee.sh" "$LOG_FILE" "$LOG_MAX_BYTES" "$LOG_BACKUPS"
"$PYTHON_BIN" -u -m biliup_next.app.cli init-workspace \
2>&1 | "$PROJECT_DIR/scripts/log-tee.sh" "$LOG_FILE" "$LOG_MAX_BYTES" "$LOG_BACKUPS"
"$PYTHON_BIN" -u -m biliup_next.app.cli worker --interval "${BILIUP_NEXT_WORKER_INTERVAL:-5}" \
2>&1 | "$PROJECT_DIR/scripts/log-tee.sh" "$LOG_FILE" "$LOG_MAX_BYTES" "$LOG_BACKUPS"

View File

@ -7,10 +7,21 @@
- `cookies.json`
- `upload_config.json`
- `biliup`
- `logs/api.log`
- `logs/worker.log`
- `logs/api.log.1` ~ `logs/api.log.5`
- `logs/worker.log.1` ~ `logs/worker.log.5`
Use the command below to import the currently usable versions from the parent project:
Use the command below to copy the versions already present on this machine into this directory:
```bash
cd /home/theshy/biliup/biliup-next
./.venv/bin/biliup-next sync-legacy-assets
```
If you are cold-starting on a new machine, `setup.sh` will auto-generate the following when they are missing:
- `cookies.json` <- `cookies.example.json`
- `upload_config.json` <- `upload_config.example.json`
These generated files are placeholders only: they let the project reach a configurable state that passes `doctor`, but they do not mean the upload pipeline is ready to use.

View File

@ -0,0 +1,9 @@
{
"cookie_info": {
"cookies": []
},
"token_info": {
"access_token": "",
"refresh_token": ""
}
}

View File

@ -0,0 +1,5 @@
{
"line": "AUTO",
"limit": 3,
"threads": 3
}

scripts/log-tee.sh Executable file
View File

@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -euo pipefail
LOG_FILE="${1:?log file required}"
MAX_BYTES="${2:-20971520}"
BACKUPS="${3:-5}"
mkdir -p "$(dirname "$LOG_FILE")"
touch "$LOG_FILE"
# Rotate once the current log reaches MAX_BYTES: shift .N -> .N+1,
# dropping the oldest backup when BACKUPS copies already exist.
rotate_logs() {
local size
size="$(stat -c%s "$LOG_FILE" 2>/dev/null || echo 0)"
if [[ "$size" -lt "$MAX_BYTES" ]]; then
return
fi
local index
for ((index=BACKUPS; index>=1; index--)); do
if [[ -f "${LOG_FILE}.${index}" ]]; then
if [[ "$index" -eq "$BACKUPS" ]]; then
rm -f "${LOG_FILE}.${index}"
else
mv "${LOG_FILE}.${index}" "${LOG_FILE}.$((index + 1))"
fi
fi
done
mv "$LOG_FILE" "${LOG_FILE}.1"
: > "$LOG_FILE"
}
# Mirror each input line to stdout and append it to the log file.
while IFS= read -r line || [[ -n "$line" ]]; do
rotate_logs
printf '%s\n' "$line" | tee -a "$LOG_FILE"
done
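The backup cascade in `rotate_logs` can be exercised in isolation. A minimal sketch over a scratch directory (file names and the POSIX `while` loop in place of the script's bash `for ((...))` are illustrative):

```shell
set -eu
TMP="$(mktemp -d)"
LOG_FILE="$TMP/app.log"
BACKUPS=3

# Seed a current log plus two existing backups.
printf 'gen0\n' > "$LOG_FILE"
printf 'gen1\n' > "$LOG_FILE.1"
printf 'gen2\n' > "$LOG_FILE.2"

# Same shift as rotate_logs: walk from the highest index down,
# removing the oldest backup once BACKUPS copies exist.
index=$BACKUPS
while [ "$index" -ge 1 ]; do
  if [ -f "$LOG_FILE.$index" ]; then
    if [ "$index" -eq "$BACKUPS" ]; then
      rm -f "$LOG_FILE.$index"
    else
      mv "$LOG_FILE.$index" "$LOG_FILE.$((index + 1))"
    fi
  fi
  index=$((index - 1))
done
mv "$LOG_FILE" "$LOG_FILE.1"
: > "$LOG_FILE"
```

After one rotation the freshest content sits in `.1`, older generations shift toward `.3`, and the live log is empty again.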

View File

@ -3,8 +3,6 @@ set -euo pipefail
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LOCAL_VENV="$PROJECT_DIR/.venv"
LEGACY_VENV="$PROJECT_DIR/../.venv"
echo "==> biliup-next setup"
echo "project: $PROJECT_DIR"
@ -29,21 +27,57 @@ echo "==> install package"
if [[ -f "$PROJECT_DIR/config/settings.json" ]]; then
echo "==> settings file exists"
elif [[ -f "$PROJECT_DIR/config/settings.standalone.example.json" ]]; then
echo "==> seed standalone settings.json from template"
cp "$PROJECT_DIR/config/settings.standalone.example.json" "$PROJECT_DIR/config/settings.json"
fi
if [[ ! -f "$PROJECT_DIR/config/settings.staged.json" && -f "$PROJECT_DIR/config/settings.json" ]]; then
echo "==> seed settings.staged.json"
cp "$PROJECT_DIR/config/settings.json" "$PROJECT_DIR/config/settings.staged.json"
fi
echo "==> init workspace"
PYTHONPATH="$PROJECT_DIR/src" "$VENV_PYTHON" -m biliup_next.app.cli init-workspace
mkdir -p "$PROJECT_DIR/runtime/logs"
if [[ ! -f "$PROJECT_DIR/runtime/cookies.json" && -f "$PROJECT_DIR/runtime/cookies.example.json" ]]; then
echo "==> seed runtime/cookies.json from template"
cp "$PROJECT_DIR/runtime/cookies.example.json" "$PROJECT_DIR/runtime/cookies.json"
fi
if [[ ! -f "$PROJECT_DIR/runtime/upload_config.json" && -f "$PROJECT_DIR/runtime/upload_config.example.json" ]]; then
echo "==> seed runtime/upload_config.json from template"
cp "$PROJECT_DIR/runtime/upload_config.example.json" "$PROJECT_DIR/runtime/upload_config.json"
fi
echo "==> sync local runtime assets when available"
PYTHONPATH="$PROJECT_DIR/src" "$VENV_PYTHON" -m biliup_next.app.cli sync-legacy-assets || true
echo "==> verify bundled runtime assets"
for REQUIRED_ASSET in \
"$PROJECT_DIR/runtime/cookies.json" \
"$PROJECT_DIR/runtime/upload_config.json"
do
if [[ ! -e "$REQUIRED_ASSET" ]]; then
echo "missing required runtime asset: $REQUIRED_ASSET" >&2
echo "populate biliup-next/runtime first, or run sync-legacy-assets as a one-time import." >&2
exit 1
fi
done
if [[ ! -e "$PROJECT_DIR/runtime/biliup" ]]; then
echo "warning: runtime/biliup not found; publish provider will remain unavailable until you copy or install it." >&2
fi
echo "==> runtime doctor"
PYTHONPATH="$PROJECT_DIR/src" "$VENV_PYTHON" -m biliup_next.app.cli doctor
echo
echo "Optional external dependencies expected by current legacy-backed providers:"
echo "Optional external dependencies expected by current providers:"
echo " ffmpeg / ffprobe / codex / biliup"
echo " cookies.json / upload_config.json / .env from parent project may still be reused"
echo " runtime assets must live under biliup-next/runtime"
echo
read -r -p "Install systemd services now? [y/N] " INSTALL_SYSTEMD

View File

@ -18,7 +18,6 @@ src/biliup_next/core/models.py
src/biliup_next/core/providers.py
src/biliup_next/core/registry.py
src/biliup_next/infra/db.py
src/biliup_next/infra/legacy_paths.py
src/biliup_next/infra/log_reader.py
src/biliup_next/infra/plugin_loader.py
src/biliup_next/infra/runtime_doctor.py
@ -26,17 +25,18 @@ src/biliup_next/infra/stage_importer.py
src/biliup_next/infra/systemd_runtime.py
src/biliup_next/infra/task_repository.py
src/biliup_next/infra/task_reset.py
src/biliup_next/infra/adapters/bilibili_collection_legacy.py
src/biliup_next/infra/adapters/bilibili_top_comment_legacy.py
src/biliup_next/infra/adapters/biliup_publish_legacy.py
src/biliup_next/infra/adapters/codex_legacy.py
src/biliup_next/infra/adapters/ffmpeg_split_legacy.py
src/biliup_next/infra/adapters/groq_legacy.py
src/biliup_next/infra/adapters/full_video_locator.py
src/biliup_next/modules/collection/service.py
src/biliup_next/modules/collection/providers/bilibili_collection.py
src/biliup_next/modules/comment/service.py
src/biliup_next/modules/comment/providers/bilibili_top_comment.py
src/biliup_next/modules/ingest/service.py
src/biliup_next/modules/ingest/providers/local_file.py
src/biliup_next/modules/publish/service.py
src/biliup_next/modules/publish/providers/biliup_cli.py
src/biliup_next/modules/song_detect/service.py
src/biliup_next/modules/song_detect/providers/codex.py
src/biliup_next/modules/split/service.py
src/biliup_next/modules/split/providers/ffmpeg_copy.py
src/biliup_next/modules/transcribe/service.py
src/biliup_next/modules/transcribe/providers/groq.py

View File

@ -8,13 +8,21 @@ from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path
from urllib.parse import parse_qs, unquote, urlparse
from biliup_next.app.task_actions import bind_full_video_action
from biliup_next.app.task_actions import merge_session_action
from biliup_next.app.task_actions import receive_full_video_webhook
from biliup_next.app.task_actions import rebind_session_full_video_action
from biliup_next.app.task_actions import reset_to_step_action
from biliup_next.app.task_actions import retry_step_action
from biliup_next.app.task_actions import run_task_action
from biliup_next.app.bootstrap import ensure_initialized
from biliup_next.app.bootstrap import reset_initialized_state
from biliup_next.app.control_plane_get_dispatcher import ControlPlaneGetDispatcher
from biliup_next.app.dashboard import render_dashboard_html
from biliup_next.app.control_plane_post_dispatcher import ControlPlanePostDispatcher
from biliup_next.app.retry_meta import retry_meta_for_step
from biliup_next.app.scheduler import build_scheduler_preview
from biliup_next.app.serializers import ControlPlaneSerializer
from biliup_next.app.worker import run_once
from biliup_next.core.config import SettingsService
from biliup_next.core.models import ActionRecord, utc_now_iso
@ -28,61 +36,32 @@ from biliup_next.infra.systemd_runtime import SystemdRuntime
class ApiHandler(BaseHTTPRequestHandler):
server_version = "biliup-next/0.1"
def _task_payload(self, task_id: str, state: dict[str, object]) -> dict[str, object] | None:
task = state["repo"].get_task(task_id)
if task is None:
return None
payload = task.to_dict()
retry_state = self._task_retry_state(task_id, state)
if retry_state:
payload["retry_state"] = retry_state
payload["delivery_state"] = self._task_delivery_state(task_id, state)
return payload
@staticmethod
def _attention_state(task_payload: dict[str, object]) -> str:
if task_payload.get("status") == "failed_manual":
return "manual_now"
retry_state = task_payload.get("retry_state")
if isinstance(retry_state, dict) and retry_state.get("retry_due"):
return "retry_now"
if task_payload.get("status") == "failed_retryable" and isinstance(retry_state, dict) and retry_state.get("next_retry_at"):
return "waiting_retry"
if task_payload.get("status") == "running":
return "running"
return "stable"
@staticmethod
def _delivery_state_label(task_payload: dict[str, object]) -> str:
delivery_state = task_payload.get("delivery_state")
if not isinstance(delivery_state, dict):
return "stable"
if delivery_state.get("split_comment") == "pending" or delivery_state.get("full_video_timeline_comment") == "pending":
return "pending_comment"
if delivery_state.get("source_video_present") is False or delivery_state.get("split_videos_present") is False:
return "cleanup_removed"
return "stable"
def _step_payload(self, step, state: dict[str, object]) -> dict[str, object]: # type: ignore[no-untyped-def]
payload = step.to_dict()
retry_meta = retry_meta_for_step(step, state["settings"])
if retry_meta:
payload.update(retry_meta)
return payload
def _task_retry_state(self, task_id: str, state: dict[str, object]) -> dict[str, object] | None:
for step in state["repo"].list_steps(task_id):
retry_meta = retry_meta_for_step(step, state["settings"])
if retry_meta:
return {"step_name": step.step_name, **retry_meta}
return None
def _task_delivery_state(self, task_id: str, state: dict[str, object]) -> dict[str, object]:
task = state["repo"].get_task(task_id)
if task is None:
return {}
session_dir = Path(str(state["settings"]["paths"]["session_dir"])) / task.title
source_path = Path(task.source_path)
split_dir = session_dir / "split_video"
legacy_comment_done = (session_dir / "comment_done.flag").exists()
def comment_status(flag_name: str, *, enabled: bool) -> str:
if not enabled:
return "disabled"
if flag_name == "comment_full_done.flag" and legacy_comment_done and not (session_dir / flag_name).exists():
return "legacy_untracked"
return "done" if (session_dir / flag_name).exists() else "pending"
return {
"split_comment": comment_status("comment_split_done.flag", enabled=state["settings"]["comment"].get("post_split_comment", True)),
"full_video_timeline_comment": comment_status(
"comment_full_done.flag",
enabled=state["settings"]["comment"].get("post_full_video_timeline_comment", True),
),
"full_video_bvid_resolved": (session_dir / "full_video_bvid.txt").exists(),
"source_video_present": source_path.exists(),
"split_videos_present": split_dir.exists(),
"cleanup_enabled": {
"delete_source_video_after_collection_synced": state["settings"].get("cleanup", {}).get("delete_source_video_after_collection_synced", False),
"delete_split_videos_after_collection_synced": state["settings"].get("cleanup", {}).get("delete_split_videos_after_collection_synced", False),
},
}
return ControlPlaneSerializer(state).step_payload(step)
def _serve_asset(self, asset_name: str) -> None:
root = ensure_initialized()["root"]
@ -116,10 +95,22 @@ class ApiHandler(BaseHTTPRequestHandler):
dist = self._frontend_dist_dir()
if not (dist / "index.html").exists():
return False
if parsed_path in {"/ui", "/ui/"}:
if parsed_path in {"/", "/ui", "/ui/"}:
self._html((dist / "index.html").read_text(encoding="utf-8"))
return True
if parsed_path.startswith("/assets/"):
relative = parsed_path.removeprefix("/")
asset_path = dist / relative
if asset_path.exists() and asset_path.is_file():
body = asset_path.read_bytes()
self.send_response(HTTPStatus.OK)
self.send_header("Content-Type", self._guess_content_type(asset_path))
self.send_header("Content-Length", str(len(body)))
self.end_headers()
self.wfile.write(body)
return True
if not parsed_path.startswith("/ui/"):
return False
@ -143,13 +134,16 @@ class ApiHandler(BaseHTTPRequestHandler):
def do_GET(self) -> None: # noqa: N802
parsed = urlparse(self.path)
if parsed.path.startswith("/ui") and self._serve_frontend_dist(parsed.path):
if (parsed.path == "/" or parsed.path.startswith("/ui") or parsed.path.startswith("/assets/")) and self._serve_frontend_dist(parsed.path):
return
if not self._check_auth(parsed.path):
return
if parsed.path.startswith("/assets/"):
self._serve_asset(parsed.path.removeprefix("/assets/"))
return
if parsed.path == "/classic":
self._html(render_dashboard_html())
return
if parsed.path == "/":
self._html(render_dashboard_html())
return
@ -158,16 +152,23 @@ class ApiHandler(BaseHTTPRequestHandler):
self._json({"ok": True})
return
state = ensure_initialized()
get_dispatcher = ControlPlaneGetDispatcher(
state,
attention_state_fn=self._attention_state,
delivery_state_label_fn=self._delivery_state_label,
build_scheduler_preview_fn=build_scheduler_preview,
settings_service_factory=SettingsService,
)
if parsed.path == "/settings":
state = ensure_initialized()
service = SettingsService(state["root"])
self._json(service.load_redacted().settings)
body, status = get_dispatcher.handle_settings()
self._json(body, status=status)
return
if parsed.path == "/settings/schema":
state = ensure_initialized()
service = SettingsService(state["root"])
self._json(service.load().schema)
body, status = get_dispatcher.handle_settings_schema()
self._json(body, status=status)
return
if parsed.path == "/doctor":
@ -180,8 +181,8 @@ class ApiHandler(BaseHTTPRequestHandler):
return
if parsed.path == "/scheduler/preview":
state = ensure_initialized()
self._json(build_scheduler_preview(state, include_stage_scan=False, limit=200))
body, status = get_dispatcher.handle_scheduler_preview()
self._json(body, status=status)
return
if parsed.path == "/logs":
@ -196,146 +197,78 @@ class ApiHandler(BaseHTTPRequestHandler):
return
if parsed.path == "/history":
state = ensure_initialized()
query = parse_qs(parsed.query)
limit = int(query.get("limit", ["100"])[0])
task_id = query.get("task_id", [None])[0]
action_name = query.get("action_name", [None])[0]
status = query.get("status", [None])[0]
items = [
item.to_dict()
for item in state["repo"].list_action_records(
task_id=task_id,
limit=limit,
action_name=action_name,
status=status,
)
]
self._json({"items": items})
body, http_status = get_dispatcher.handle_history(
limit=limit,
task_id=task_id,
action_name=action_name,
status=status,
)
self._json(body, status=http_status)
return
if parsed.path == "/modules":
state = ensure_initialized()
self._json({"items": state["registry"].list_manifests(), "discovered_manifests": state["manifests"]})
body, status = get_dispatcher.handle_modules()
self._json(body, status=status)
return
if parsed.path == "/tasks":
state = ensure_initialized()
query = parse_qs(parsed.query)
limit = int(query.get("limit", ["100"])[0])
tasks = [self._task_payload(task.id, state) for task in state["repo"].list_tasks(limit=limit)]
self._json({"items": tasks})
offset = int(query.get("offset", ["0"])[0])
status = query.get("status", [None])[0]
search = query.get("search", [None])[0]
sort = query.get("sort", ["updated_desc"])[0]
attention = query.get("attention", [None])[0]
delivery = query.get("delivery", [None])[0]
body, http_status = get_dispatcher.handle_tasks(
limit=limit,
offset=offset,
status=status,
search=search,
sort=sort,
attention=attention,
delivery=delivery,
)
self._json(body, status=http_status)
return
if parsed.path.startswith("/tasks/"):
state = ensure_initialized()
if parsed.path.startswith("/sessions/"):
parts = [unquote(p) for p in parsed.path.split("/") if p]
if len(parts) == 2:
task = self._task_payload(parts[1], state)
if task is None:
self._json({"error": "task not found"}, status=HTTPStatus.NOT_FOUND)
return
self._json(task)
body, status = get_dispatcher.handle_session(parts[1])
self._json(body, status=status)
return
if parsed.path.startswith("/tasks/"):
parts = [unquote(p) for p in parsed.path.split("/") if p]
if len(parts) == 2:
body, status = get_dispatcher.handle_task(parts[1])
self._json(body, status=status)
return
if len(parts) == 3 and parts[2] == "steps":
steps = [self._step_payload(step, state) for step in state["repo"].list_steps(parts[1])]
self._json({"items": steps})
body, status = get_dispatcher.handle_task_steps(parts[1])
self._json(body, status=status)
return
if len(parts) == 3 and parts[2] == "context":
body, status = get_dispatcher.handle_task_context(parts[1])
self._json(body, status=status)
return
if len(parts) == 3 and parts[2] == "artifacts":
artifacts = [artifact.to_dict() for artifact in state["repo"].list_artifacts(parts[1])]
self._json({"items": artifacts})
body, status = get_dispatcher.handle_task_artifacts(parts[1])
self._json(body, status=status)
return
if len(parts) == 3 and parts[2] == "history":
actions = [item.to_dict() for item in state["repo"].list_action_records(parts[1], limit=100)]
self._json({"items": actions})
body, status = get_dispatcher.handle_task_history(parts[1])
self._json(body, status=status)
return
if len(parts) == 3 and parts[2] == "timeline":
task = state["repo"].get_task(parts[1])
if task is None:
self._json({"error": "task not found"}, status=HTTPStatus.NOT_FOUND)
return
steps = state["repo"].list_steps(parts[1])
artifacts = state["repo"].list_artifacts(parts[1])
actions = state["repo"].list_action_records(parts[1], limit=200)
items: list[dict[str, object]] = []
if task.created_at:
items.append({
"kind": "task",
"time": task.created_at,
"title": "Task Created",
"summary": task.title,
"status": task.status,
})
if task.updated_at and task.updated_at != task.created_at:
items.append({
"kind": "task",
"time": task.updated_at,
"title": "Task Updated",
"summary": task.status,
"status": task.status,
})
for step in steps:
if step.started_at:
items.append({
"kind": "step",
"time": step.started_at,
"title": f"{step.step_name} started",
"summary": step.status,
"status": step.status,
})
if step.finished_at:
retry_meta = retry_meta_for_step(step, state["settings"])
retry_note = ""
if retry_meta and retry_meta.get("next_retry_at"):
retry_note = f" | next retry: {retry_meta['next_retry_at']}"
items.append({
"kind": "step",
"time": step.finished_at,
"title": f"{step.step_name} finished",
"summary": f"{step.error_message or step.status}{retry_note}",
"status": step.status,
"retry_state": retry_meta,
})
for artifact in artifacts:
if artifact.created_at:
items.append({
"kind": "artifact",
"time": artifact.created_at,
"title": artifact.artifact_type,
"summary": artifact.path,
"status": "created",
})
for action in actions:
summary = action.summary
try:
details = json.loads(action.details_json or "{}")
except json.JSONDecodeError:
details = {}
if action.action_name == "comment" and isinstance(details, dict):
split_status = details.get("split", {}).get("status")
full_status = details.get("full", {}).get("status")
fragments = []
if split_status:
fragments.append(f"split={split_status}")
if full_status:
fragments.append(f"full={full_status}")
if fragments:
summary = f"{summary} | {' '.join(fragments)}"
if action.action_name in {"collection_a", "collection_b"} and isinstance(details, dict):
cleanup = details.get("result", {}).get("cleanup") or details.get("cleanup")
if isinstance(cleanup, dict):
removed = cleanup.get("removed") or []
if removed:
summary = f"{summary} | cleanup removed={len(removed)}"
items.append({
"kind": "action",
"time": action.created_at,
"title": action.action_name,
"summary": summary,
"status": action.status,
})
items.sort(key=lambda item: str(item["time"]), reverse=True)
self._json({"items": items})
body, status = get_dispatcher.handle_task_timeline(parts[1])
self._json(body, status=status)
return
self._json({"error": "not found"}, status=HTTPStatus.NOT_FOUND)
@ -353,74 +286,86 @@ class ApiHandler(BaseHTTPRequestHandler):
service = SettingsService(root)
service.save_staged_from_redacted(payload)
service.promote_staged()
reset_initialized_state()
ensure_initialized()
self._json({"ok": True})
def do_POST(self) -> None: # noqa: N802
parsed = urlparse(self.path)
if not self._check_auth(parsed.path):
return
state = ensure_initialized()
dispatcher = ControlPlanePostDispatcher(
state,
bind_full_video_action=bind_full_video_action,
merge_session_action=merge_session_action,
receive_full_video_webhook=receive_full_video_webhook,
rebind_session_full_video_action=rebind_session_full_video_action,
reset_to_step_action=reset_to_step_action,
retry_step_action=retry_step_action,
run_task_action=run_task_action,
run_once=run_once,
stage_importer_factory=StageImporter,
systemd_runtime_factory=SystemdRuntime,
)
if parsed.path == "/webhooks/full-video-uploaded":
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
body, status = dispatcher.handle_webhook_full_video(payload)
self._json(body, status=status)
return
if parsed.path != "/tasks":
if parsed.path.startswith("/sessions/"):
parts = [unquote(p) for p in parsed.path.split("/") if p]
if len(parts) == 3 and parts[0] == "sessions" and parts[2] == "merge":
session_key = parts[1]
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
body, status = dispatcher.handle_session_merge(session_key, payload)
self._json(body, status=status)
return
if len(parts) == 3 and parts[0] == "sessions" and parts[2] == "rebind":
session_key = parts[1]
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
body, status = dispatcher.handle_session_rebind(session_key, payload)
self._json(body, status=status)
return
if parsed.path.startswith("/tasks/"):
parts = [unquote(p) for p in parsed.path.split("/") if p]
if len(parts) == 3 and parts[0] == "tasks" and parts[2] == "bind-full-video":
task_id = parts[1]
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
body, status = dispatcher.handle_bind_full_video(task_id, payload)
self._json(body, status=status)
return
if len(parts) == 4 and parts[0] == "tasks" and parts[2] == "actions":
task_id = parts[1]
action = parts[3]
if action == "run":
result = run_task_action(task_id)
self._json(result, status=HTTPStatus.ACCEPTED)
return
if action == "retry-step":
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
step_name = payload.get("step_name")
if not step_name:
self._json({"error": "missing step_name"}, status=HTTPStatus.BAD_REQUEST)
return
result = retry_step_action(task_id, step_name)
self._json(result, status=HTTPStatus.ACCEPTED)
return
if action == "reset-to-step":
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
step_name = payload.get("step_name")
if not step_name:
self._json({"error": "missing step_name"}, status=HTTPStatus.BAD_REQUEST)
return
result = reset_to_step_action(task_id, step_name)
self._json(result, status=HTTPStatus.ACCEPTED)
if action in {"run", "retry-step", "reset-to-step"}:
payload = {}
if action != "run":
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
body, status = dispatcher.handle_task_action(task_id, action, payload)
self._json(body, status=status)
return
if parsed.path == "/worker/run-once":
payload = run_once()
self._record_action(None, "worker_run_once", "ok", "worker run once invoked", payload)
self._json(payload, status=HTTPStatus.ACCEPTED)
body, status = dispatcher.handle_worker_run_once()
self._json(body, status=status)
return
if parsed.path.startswith("/runtime/services/"):
parts = [unquote(p) for p in parsed.path.split("/") if p]
if len(parts) == 4 and parts[0] == "runtime" and parts[1] == "services":
try:
payload = SystemdRuntime().act(parts[2], parts[3])
except ValueError as exc:
self._json({"error": str(exc)}, status=HTTPStatus.BAD_REQUEST)
return
self._record_action(None, "service_action", "ok" if payload.get("command_ok") else "error", f"{parts[3]} {parts[2]}", payload)
self._json(payload, status=HTTPStatus.ACCEPTED)
body, status = dispatcher.handle_runtime_service_action(parts[2], parts[3])
self._json(body, status=status)
return
if parsed.path == "/stage/import":
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
source_path = payload.get("source_path")
if not source_path:
self._json({"error": "missing source_path"}, status=HTTPStatus.BAD_REQUEST)
return
state = ensure_initialized()
stage_dir = Path(state["settings"]["paths"]["stage_dir"])
try:
result = StageImporter().import_file(Path(source_path), stage_dir)
except Exception as exc:
self._json({"error": str(exc)}, status=HTTPStatus.BAD_REQUEST)
return
self._record_action(None, "stage_import", "ok", "imported file into stage", result)
self._json(result, status=HTTPStatus.CREATED)
body, status = dispatcher.handle_stage_import(payload)
self._json(body, status=status)
return
if parsed.path == "/stage/upload":
content_type = self.headers.get("Content-Type", "")
@ -437,44 +382,19 @@ class ApiHandler(BaseHTTPRequestHandler):
},
)
file_item = form["file"] if "file" in form else None
if file_item is None or not getattr(file_item, "filename", None):
self._json({"error": "missing file"}, status=HTTPStatus.BAD_REQUEST)
return
state = ensure_initialized()
stage_dir = Path(state["settings"]["paths"]["stage_dir"])
try:
result = StageImporter().import_upload(file_item.filename, file_item.file, stage_dir)
except Exception as exc:
self._json({"error": str(exc)}, status=HTTPStatus.BAD_REQUEST)
return
self._record_action(None, "stage_upload", "ok", "uploaded file into stage", result)
self._json(result, status=HTTPStatus.CREATED)
body, status = dispatcher.handle_stage_upload(file_item)
self._json(body, status=status)
return
if parsed.path == "/scheduler/run-once":
result = run_once()
self._record_action(None, "scheduler_run_once", "ok", "scheduler run once completed", result.get("scheduler", {}))
self._json(result, status=HTTPStatus.ACCEPTED)
body, status = dispatcher.handle_scheduler_run_once()
self._json(body, status=status)
return
self._json({"error": "not found"}, status=HTTPStatus.NOT_FOUND)
return
length = int(self.headers.get("Content-Length", "0"))
payload = json.loads(self.rfile.read(length) or b"{}")
source_path = payload.get("source_path")
if not source_path:
self._json({"error": "missing source_path"}, status=HTTPStatus.BAD_REQUEST)
return
state = ensure_initialized()
try:
task = state["ingest_service"].create_task_from_file(
Path(source_path),
state["settings"]["ingest"],
)
except Exception as exc: # keep API small for now
status = HTTPStatus.CONFLICT if exc.__class__.__name__ == "ModuleError" else HTTPStatus.INTERNAL_SERVER_ERROR
payload = exc.to_dict() if hasattr(exc, "to_dict") else {"error": str(exc)}
self._json(payload, status=status)
return
self._json(task.to_dict(), status=HTTPStatus.CREATED)
body, status = dispatcher.handle_create_task(payload)
self._json(body, status=status)
def log_message(self, format: str, *args) -> None: # noqa: A003
return
@ -510,7 +430,7 @@ class ApiHandler(BaseHTTPRequestHandler):
)
def _check_auth(self, path: str) -> bool:
if path in {"/", "/health", "/ui", "/ui/"} or path.startswith("/assets/") or path.startswith("/ui/assets/"):
if path in {"/", "/health", "/ui", "/ui/", "/classic"} or path.startswith("/assets/") or path.startswith("/ui/assets/"):
return True
state = ensure_initialized()
expected = str(state["settings"]["runtime"].get("control_token", "")).strip()
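Condensed, the `_check_auth` policy is a public-path allowlist plus a shared-token compare. A sketch of that shape (only the allowlist line is visible in the diff; the constant-time `hmac.compare_digest` and the deny-on-empty-token rule are my assumptions, not confirmed project behavior):

```python
import hmac

PUBLIC_PATHS = {"/", "/health", "/ui", "/ui/", "/classic"}
PUBLIC_PREFIXES = ("/assets/", "/ui/assets/")


def is_authorized(path: str, presented: str, expected: str) -> bool:
    # UI shell and static assets stay public; everything else needs the token.
    if path in PUBLIC_PATHS or path.startswith(PUBLIC_PREFIXES):
        return True
    if not expected:
        return False  # assumed policy: no configured token means deny
    return hmac.compare_digest(presented, expected)
```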

View File

@ -1,11 +1,11 @@
from __future__ import annotations
from pathlib import Path
from dataclasses import asdict
from pathlib import Path
from threading import RLock
from biliup_next.core.config import SettingsService
from biliup_next.core.registry import Registry
from biliup_next.infra.comment_flag_migration import CommentFlagMigrationService
from biliup_next.infra.db import Database
from biliup_next.infra.plugin_loader import PluginLoader
from biliup_next.infra.task_repository import TaskRepository
@ -22,56 +22,67 @@ def project_root() -> Path:
return Path(__file__).resolve().parents[3]
_APP_STATE: dict[str, object] | None = None
_APP_STATE_LOCK = RLock()
def reset_initialized_state() -> None:
global _APP_STATE
with _APP_STATE_LOCK:
_APP_STATE = None
def ensure_initialized() -> dict[str, object]:
root = project_root()
settings_service = SettingsService(root)
bundle = settings_service.load()
db_path = (root / bundle.settings["runtime"]["database_path"]).resolve()
db = Database(db_path)
db.initialize()
repo = TaskRepository(db)
registry = Registry()
plugin_loader = PluginLoader(root)
manifests = plugin_loader.load_manifests()
for manifest in manifests:
if not manifest.enabled_by_default:
continue
provider = plugin_loader.instantiate_provider(manifest)
provider_manifest = getattr(provider, "manifest", None)
if provider_manifest is None:
raise RuntimeError(f"provider missing manifest: {manifest.entrypoint}")
if provider_manifest.id != manifest.id or provider_manifest.provider_type != manifest.provider_type:
raise RuntimeError(f"provider manifest mismatch: {manifest.entrypoint}")
registry.register(
manifest.provider_type,
manifest.id,
provider,
provider_manifest,
)
session_dir = (root / bundle.settings["paths"]["session_dir"]).resolve()
imported = repo.bootstrap_from_legacy_sessions(session_dir)
comment_flag_migration = CommentFlagMigrationService().migrate(session_dir)
ingest_service = IngestService(registry, repo)
transcribe_service = TranscribeService(registry, repo)
song_detect_service = SongDetectService(registry, repo)
split_service = SplitService(registry, repo)
publish_service = PublishService(registry, repo)
comment_service = CommentService(registry, repo)
collection_service = CollectionService(registry, repo)
return {
"root": root,
"settings": bundle.settings,
"db": db,
"repo": repo,
"registry": registry,
"manifests": [asdict(m) for m in manifests],
"ingest_service": ingest_service,
"transcribe_service": transcribe_service,
"song_detect_service": song_detect_service,
"split_service": split_service,
"publish_service": publish_service,
"comment_service": comment_service,
"collection_service": collection_service,
"imported": imported,
"comment_flag_migration": comment_flag_migration,
}
global _APP_STATE
with _APP_STATE_LOCK:
if _APP_STATE is not None:
return _APP_STATE
root = project_root()
settings_service = SettingsService(root)
bundle = settings_service.load()
db_path = (root / bundle.settings["runtime"]["database_path"]).resolve()
db = Database(db_path)
db.initialize()
repo = TaskRepository(db)
registry = Registry()
plugin_loader = PluginLoader(root)
manifests = plugin_loader.load_manifests()
for manifest in manifests:
if not manifest.enabled_by_default:
continue
provider = plugin_loader.instantiate_provider(manifest)
provider_manifest = getattr(provider, "manifest", None)
if provider_manifest is None:
raise RuntimeError(f"provider missing manifest: {manifest.entrypoint}")
if provider_manifest.id != manifest.id or provider_manifest.provider_type != manifest.provider_type:
raise RuntimeError(f"provider manifest mismatch: {manifest.entrypoint}")
registry.register(
manifest.provider_type,
manifest.id,
provider,
provider_manifest,
)
ingest_service = IngestService(registry, repo)
transcribe_service = TranscribeService(registry, repo)
song_detect_service = SongDetectService(registry, repo)
split_service = SplitService(registry, repo)
publish_service = PublishService(registry, repo)
comment_service = CommentService(registry, repo)
collection_service = CollectionService(registry, repo)
_APP_STATE = {
"root": root,
"settings": bundle.settings,
"db": db,
"repo": repo,
"registry": registry,
"manifests": [asdict(m) for m in manifests],
"ingest_service": ingest_service,
"transcribe_service": transcribe_service,
"song_detect_service": song_detect_service,
"split_service": split_service,
"publish_service": publish_service,
"comment_service": comment_service,
"collection_service": collection_service,
}
return _APP_STATE

View File

@ -40,8 +40,8 @@ def main() -> None:
args = parser.parse_args()
if args.command == "init":
state = ensure_initialized()
print(json.dumps({"ok": True, "imported": state["imported"]}, ensure_ascii=False, indent=2))
ensure_initialized()
print(json.dumps({"ok": True}, ensure_ascii=False, indent=2))
return
if args.command == "doctor":
@ -93,9 +93,11 @@ def main() -> None:
if args.command == "create-task":
state = ensure_initialized()
settings = dict(state["settings"]["ingest"])
settings.update(state["settings"]["paths"])
task = state["ingest_service"].create_task_from_file(
Path(args.source_path),
state["settings"]["ingest"],
settings,
)
print(json.dumps(task.to_dict(), ensure_ascii=False, indent=2))
return

View File

@ -0,0 +1,123 @@
from __future__ import annotations
from http import HTTPStatus
from biliup_next.app.serializers import ControlPlaneSerializer
class ControlPlaneGetDispatcher:
def __init__(
self,
state: dict[str, object],
*,
attention_state_fn,
delivery_state_label_fn,
build_scheduler_preview_fn,
settings_service_factory,
) -> None: # type: ignore[no-untyped-def]
self.state = state
self.repo = state["repo"]
self.serializer = ControlPlaneSerializer(state)
self.attention_state_fn = attention_state_fn
self.delivery_state_label_fn = delivery_state_label_fn
self.build_scheduler_preview_fn = build_scheduler_preview_fn
self.settings_service_factory = settings_service_factory
def handle_settings(self) -> tuple[object, HTTPStatus]:
service = self.settings_service_factory(self.state["root"])
return service.load_redacted().settings, HTTPStatus.OK
def handle_settings_schema(self) -> tuple[object, HTTPStatus]:
service = self.settings_service_factory(self.state["root"])
return service.load().schema, HTTPStatus.OK
def handle_scheduler_preview(self) -> tuple[object, HTTPStatus]:
return self.build_scheduler_preview_fn(self.state, include_stage_scan=False, limit=200), HTTPStatus.OK
def handle_history(self, *, limit: int, task_id: str | None, action_name: str | None, status: str | None) -> tuple[object, HTTPStatus]:
items = [
item.to_dict()
for item in self.repo.list_action_records(
task_id=task_id,
limit=limit,
action_name=action_name,
status=status,
)
]
return {"items": items}, HTTPStatus.OK
def handle_modules(self) -> tuple[object, HTTPStatus]:
return {"items": self.state["registry"].list_manifests(), "discovered_manifests": self.state["manifests"]}, HTTPStatus.OK
def handle_tasks(
self,
*,
limit: int,
offset: int,
status: str | None,
search: str | None,
sort: str,
attention: str | None,
delivery: str | None,
) -> tuple[object, HTTPStatus]:
if attention or delivery:
task_items, _ = self.repo.query_tasks(
limit=5000,
offset=0,
status=status,
search=search,
sort=sort,
)
all_tasks = self.serializer.task_payloads_from_tasks(task_items)
filtered_tasks: list[dict[str, object]] = []
for item in all_tasks:
if attention and self.attention_state_fn(item) != attention:
continue
if delivery and self.delivery_state_label_fn(item) != delivery:
continue
filtered_tasks.append(item)
total = len(filtered_tasks)
tasks = filtered_tasks[offset:offset + limit]
else:
task_items, total = self.repo.query_tasks(
limit=limit,
offset=offset,
status=status,
search=search,
sort=sort,
)
tasks = self.serializer.task_payloads_from_tasks(task_items)
return {"items": tasks, "total": total, "limit": limit, "offset": offset}, HTTPStatus.OK
def handle_session(self, session_key: str) -> tuple[object, HTTPStatus]:
payload = self.serializer.session_payload(session_key)
if payload is None:
return {"error": "session not found"}, HTTPStatus.NOT_FOUND
return payload, HTTPStatus.OK
def handle_task(self, task_id: str) -> tuple[object, HTTPStatus]:
payload = self.serializer.task_payload(task_id)
if payload is None:
return {"error": "task not found"}, HTTPStatus.NOT_FOUND
return payload, HTTPStatus.OK
def handle_task_steps(self, task_id: str) -> tuple[object, HTTPStatus]:
return {"items": [self.serializer.step_payload(step) for step in self.repo.list_steps(task_id)]}, HTTPStatus.OK
def handle_task_context(self, task_id: str) -> tuple[object, HTTPStatus]:
payload = self.serializer.task_context_payload(task_id)
if payload is None:
return {"error": "task context not found"}, HTTPStatus.NOT_FOUND
return payload, HTTPStatus.OK
def handle_task_artifacts(self, task_id: str) -> tuple[object, HTTPStatus]:
return {"items": [artifact.to_dict() for artifact in self.repo.list_artifacts(task_id)]}, HTTPStatus.OK
def handle_task_history(self, task_id: str) -> tuple[object, HTTPStatus]:
return {"items": [item.to_dict() for item in self.repo.list_action_records(task_id, limit=100)]}, HTTPStatus.OK
def handle_task_timeline(self, task_id: str) -> tuple[object, HTTPStatus]:
payload = self.serializer.timeline_payload(task_id)
if payload is None:
return {"error": "task not found"}, HTTPStatus.NOT_FOUND
return payload, HTTPStatus.OK

View File

@ -0,0 +1,164 @@
from __future__ import annotations
import json
from http import HTTPStatus
from pathlib import Path
from biliup_next.core.models import ActionRecord, utc_now_iso
from biliup_next.infra.storage_guard import mb_to_bytes
class ControlPlanePostDispatcher:
def __init__(
self,
state: dict[str, object],
*,
bind_full_video_action,
merge_session_action,
receive_full_video_webhook,
rebind_session_full_video_action,
reset_to_step_action,
retry_step_action,
run_task_action,
run_once,
stage_importer_factory,
systemd_runtime_factory,
) -> None: # type: ignore[no-untyped-def]
self.state = state
self.repo = state["repo"]
self.bind_full_video_action = bind_full_video_action
self.merge_session_action = merge_session_action
self.receive_full_video_webhook = receive_full_video_webhook
self.rebind_session_full_video_action = rebind_session_full_video_action
self.reset_to_step_action = reset_to_step_action
self.retry_step_action = retry_step_action
self.run_task_action = run_task_action
self.run_once = run_once
self.stage_importer_factory = stage_importer_factory
self.systemd_runtime_factory = systemd_runtime_factory
def handle_webhook_full_video(self, payload: object) -> tuple[object, HTTPStatus]:
if not isinstance(payload, dict):
return {"error": "invalid payload"}, HTTPStatus.BAD_REQUEST
result = self.receive_full_video_webhook(payload)
if "error" in result:
return result, HTTPStatus.BAD_REQUEST
return result, HTTPStatus.ACCEPTED
def handle_session_merge(self, session_key: str, payload: object) -> tuple[object, HTTPStatus]:
if not isinstance(payload, dict) or not isinstance(payload.get("task_ids"), list):
return {"error": "missing task_ids"}, HTTPStatus.BAD_REQUEST
result = self.merge_session_action(session_key, [str(item) for item in payload["task_ids"]])
if "error" in result:
return result, HTTPStatus.BAD_REQUEST
return result, HTTPStatus.ACCEPTED
def handle_session_rebind(self, session_key: str, payload: object) -> tuple[object, HTTPStatus]:
full_video_bvid = str((payload or {}).get("full_video_bvid", "")).strip() if isinstance(payload, dict) else ""
if not full_video_bvid:
return {"error": "missing full_video_bvid"}, HTTPStatus.BAD_REQUEST
result = self.rebind_session_full_video_action(session_key, full_video_bvid)
if "error" in result:
status = HTTPStatus.NOT_FOUND if result["error"].get("code") == "SESSION_NOT_FOUND" else HTTPStatus.BAD_REQUEST
return result, status
return result, HTTPStatus.ACCEPTED
def handle_bind_full_video(self, task_id: str, payload: object) -> tuple[object, HTTPStatus]:
full_video_bvid = str((payload or {}).get("full_video_bvid", "")).strip() if isinstance(payload, dict) else ""
if not full_video_bvid:
return {"error": "missing full_video_bvid"}, HTTPStatus.BAD_REQUEST
result = self.bind_full_video_action(task_id, full_video_bvid)
if "error" in result:
status = HTTPStatus.NOT_FOUND if result["error"].get("code") == "TASK_NOT_FOUND" else HTTPStatus.BAD_REQUEST
return result, status
return result, HTTPStatus.ACCEPTED
def handle_task_action(self, task_id: str, action: str, payload: object) -> tuple[object, HTTPStatus]:
if action == "run":
return self.run_task_action(task_id), HTTPStatus.ACCEPTED
if action == "retry-step":
step_name = payload.get("step_name") if isinstance(payload, dict) else None
if not step_name:
return {"error": "missing step_name"}, HTTPStatus.BAD_REQUEST
return self.retry_step_action(task_id, step_name), HTTPStatus.ACCEPTED
if action == "reset-to-step":
step_name = payload.get("step_name") if isinstance(payload, dict) else None
if not step_name:
return {"error": "missing step_name"}, HTTPStatus.BAD_REQUEST
return self.reset_to_step_action(task_id, step_name), HTTPStatus.ACCEPTED
return {"error": "not found"}, HTTPStatus.NOT_FOUND
def handle_worker_run_once(self) -> tuple[object, HTTPStatus]:
payload = self.run_once()
self._record_action(None, "worker_run_once", "ok", "worker run once invoked", payload)
return payload, HTTPStatus.ACCEPTED
def handle_scheduler_run_once(self) -> tuple[object, HTTPStatus]:
payload = self.run_once()
self._record_action(None, "scheduler_run_once", "ok", "scheduler run once completed", payload.get("scheduler", {}))
return payload, HTTPStatus.ACCEPTED
def handle_runtime_service_action(self, service_name: str, action: str) -> tuple[object, HTTPStatus]:
try:
payload = self.systemd_runtime_factory().act(service_name, action)
except ValueError as exc:
return {"error": str(exc)}, HTTPStatus.BAD_REQUEST
self._record_action(None, "service_action", "ok" if payload.get("command_ok") else "error", f"{action} {service_name}", payload)
return payload, HTTPStatus.ACCEPTED
def handle_stage_import(self, payload: object) -> tuple[object, HTTPStatus]:
source_path = payload.get("source_path") if isinstance(payload, dict) else None
if not source_path:
return {"error": "missing source_path"}, HTTPStatus.BAD_REQUEST
stage_dir = Path(self.state["settings"]["paths"]["stage_dir"])
min_free_bytes = mb_to_bytes(self.state["settings"]["ingest"].get("stage_min_free_space_mb", 0))
try:
result = self.stage_importer_factory().import_file(Path(source_path), stage_dir, min_free_bytes=min_free_bytes)
except Exception as exc:
return {"error": str(exc)}, HTTPStatus.BAD_REQUEST
self._record_action(None, "stage_import", "ok", "imported file into stage", result)
return result, HTTPStatus.CREATED
def handle_stage_upload(self, file_item) -> tuple[object, HTTPStatus]: # type: ignore[no-untyped-def]
if file_item is None or not getattr(file_item, "filename", None):
return {"error": "missing file"}, HTTPStatus.BAD_REQUEST
stage_dir = Path(self.state["settings"]["paths"]["stage_dir"])
min_free_bytes = mb_to_bytes(self.state["settings"]["ingest"].get("stage_min_free_space_mb", 0))
try:
result = self.stage_importer_factory().import_upload(
file_item.filename,
file_item.file,
stage_dir,
min_free_bytes=min_free_bytes,
)
except Exception as exc:
return {"error": str(exc)}, HTTPStatus.BAD_REQUEST
self._record_action(None, "stage_upload", "ok", "uploaded file into stage", result)
return result, HTTPStatus.CREATED
def handle_create_task(self, payload: object) -> tuple[object, HTTPStatus]:
source_path = payload.get("source_path") if isinstance(payload, dict) else None
if not source_path:
return {"error": "missing source_path"}, HTTPStatus.BAD_REQUEST
try:
settings = dict(self.state["settings"]["ingest"])
settings.update(self.state["settings"]["paths"])
task = self.state["ingest_service"].create_task_from_file(Path(source_path), settings)
except Exception as exc:
status = HTTPStatus.CONFLICT if exc.__class__.__name__ == "ModuleError" else HTTPStatus.INTERNAL_SERVER_ERROR
body = exc.to_dict() if hasattr(exc, "to_dict") else {"error": str(exc)}
return body, status
return task.to_dict(), HTTPStatus.CREATED
def _record_action(self, task_id: str | None, action_name: str, status: str, summary: str, details: dict[str, object]) -> None:
self.repo.add_action_record(
ActionRecord(
id=None,
task_id=task_id,
action_name=action_name,
status=status,
summary=summary,
details_json=json.dumps(details, ensure_ascii=False),
created_at=utc_now_iso(),
)
)

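`handle_create_task` maps domain errors onto HTTP statuses by exception class name rather than by `isinstance`, so the dispatcher does not need to import the pipeline's error type. A standalone sketch of that mapping; `ModuleError` and `status_for_create_task_error` are stand-ins for illustration, not names guaranteed by the codebase:

```python
from http import HTTPStatus


class ModuleError(Exception):
    """Stand-in for the pipeline's domain error type (assumed shape)."""

    def to_dict(self) -> dict:
        return {"error": {"code": "DUPLICATE_SOURCE", "message": str(self)}}


def status_for_create_task_error(exc: Exception) -> HTTPStatus:
    # Mirrors handle_create_task: known domain conflicts become 409,
    # everything else is treated as an internal error.
    if exc.__class__.__name__ == "ModuleError":
        return HTTPStatus.CONFLICT
    return HTTPStatus.INTERNAL_SERVER_ERROR
```

Matching on the class name keeps the HTTP layer decoupled, at the cost of silently missing a renamed error class.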
View File

@ -215,7 +215,6 @@ def render_dashboard_html() -> str:
</select>
<select id="taskDeliveryFilter">
<option value="">全部交付状态</option>
<option value="pending_comment">评论待完成</option>
<option value="cleanup_removed">已清理视频</option>
</select>
@ -249,6 +248,17 @@ def render_dashboard_html() -> str:
</div>
</section>
<section class="panel">
<div class="panel-head">
<h3>Session Workspace</h3>
<div class="button-row">
<button id="refreshSessionBtn" class="secondary compact">刷新 Session</button>
</div>
</div>
<div id="sessionWorkspaceState" class="task-workspace-state show">当前任务如果已绑定 session_key这里会显示同场片段和完整版绑定信息。</div>
<div id="sessionPanel" class="summary-card session-panel"></div>
</section>
<div class="panel-grid two-up">
<section class="panel">
<div class="panel-head"><h3>Steps</h3></div>

View File

@ -2,6 +2,11 @@ from __future__ import annotations
from datetime import datetime, timedelta, timezone
STEP_SETTINGS_GROUP = {
"publish": "publish",
"comment": "comment",
}
def parse_iso(value: str | None) -> datetime | None:
if not value:
@ -12,7 +17,14 @@ def parse_iso(value: str | None) -> datetime | None:
return None
def retry_schedule_seconds(
settings: dict[str, object],
*,
count_key: str,
backoff_key: str,
default_count: int,
default_backoff: int,
) -> list[int]:
raw_schedule = settings.get("retry_schedule_minutes")
if isinstance(raw_schedule, list):
schedule: list[int] = []
@ -21,25 +33,57 @@ def publish_retry_schedule_seconds(settings: dict[str, object]) -> list[int]:
schedule.append(item * 60)
if schedule:
return schedule
retry_count = settings.get(count_key, default_count)
retry_count = retry_count if isinstance(retry_count, int) and not isinstance(retry_count, bool) else default_count
retry_count = max(retry_count, 0)
retry_backoff = settings.get(backoff_key, default_backoff)
retry_backoff = retry_backoff if isinstance(retry_backoff, int) and not isinstance(retry_backoff, bool) else default_backoff
retry_backoff = max(retry_backoff, 0)
return [retry_backoff] * retry_count
def publish_retry_schedule_seconds(settings: dict[str, object]) -> list[int]:
return retry_schedule_seconds(
settings,
count_key="retry_count",
backoff_key="retry_backoff_seconds",
default_count=5,
default_backoff=300,
)
def comment_retry_schedule_seconds(settings: dict[str, object]) -> list[int]:
return retry_schedule_seconds(
settings,
count_key="max_retries",
backoff_key="base_delay_seconds",
default_count=5,
default_backoff=180,
)
def retry_meta_for_step(step, settings_by_group: dict[str, object]) -> dict[str, object] | None: # type: ignore[no-untyped-def]
if getattr(step, "status", None) != "failed_retryable" or getattr(step, "retry_count", 0) <= 0:
return None
step_name = getattr(step, "step_name", None)
settings_group = STEP_SETTINGS_GROUP.get(step_name)
if settings_group is None:
return None
group_settings = settings_by_group.get(settings_group, {})
if not isinstance(group_settings, dict):
group_settings = {}
if step_name == "publish":
schedule = publish_retry_schedule_seconds(group_settings)
elif step_name == "comment":
schedule = comment_retry_schedule_seconds(group_settings)
else:
return None
attempt_index = step.retry_count - 1
if attempt_index >= len(schedule):
return {

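The precedence encoded in `retry_schedule_seconds` above is: an explicit `retry_schedule_minutes` list wins outright; otherwise a flat count × backoff schedule is produced from the group-specific keys. A simplified standalone sketch of that rule, assuming the same key names as the hunk:

```python
def schedule_seconds(settings: dict, *, count_key: str, backoff_key: str,
                     default_count: int, default_backoff: int) -> list[int]:
    # Explicit per-attempt minutes override the count/backoff pair entirely.
    raw = settings.get("retry_schedule_minutes")
    if isinstance(raw, list):
        minutes = [m * 60 for m in raw
                   if isinstance(m, int) and not isinstance(m, bool) and m >= 0]
        if minutes:
            return minutes
    count = settings.get(count_key, default_count)
    if not isinstance(count, int) or isinstance(count, bool):
        count = default_count
    backoff = settings.get(backoff_key, default_backoff)
    if not isinstance(backoff, int) or isinstance(backoff, bool):
        backoff = default_backoff
    return [max(backoff, 0)] * max(count, 0)


print(schedule_seconds({"retry_schedule_minutes": [1, 5, 15]},
                       count_key="retry_count", backoff_key="retry_backoff_seconds",
                       default_count=5, default_backoff=300))
# -> [60, 300, 900]
```

The publish and comment wrappers then differ only in which keys and defaults they pass.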
View File

@ -0,0 +1,254 @@
from __future__ import annotations
import json
from pathlib import Path
from biliup_next.app.retry_meta import retry_meta_for_step
class ControlPlaneSerializer:
def __init__(self, state: dict[str, object]):
self.state = state
@staticmethod
def video_url(bvid: object) -> str | None:
if isinstance(bvid, str) and bvid.startswith("BV"):
return f"https://www.bilibili.com/video/{bvid}"
return None
def task_related_maps(
self,
tasks,
) -> tuple[dict[str, object], dict[str, list[object]]]: # type: ignore[no-untyped-def]
task_ids = [task.id for task in tasks]
contexts_by_task_id = self.state["repo"].list_task_contexts_for_task_ids(task_ids)
steps_by_task_id = self.state["repo"].list_steps_for_task_ids(task_ids)
return contexts_by_task_id, steps_by_task_id
def task_payload(self, task_id: str) -> dict[str, object] | None:
task = self.state["repo"].get_task(task_id)
if task is None:
return None
return self.task_payload_from_task(task)
def task_payloads_from_tasks(self, tasks) -> list[dict[str, object]]: # type: ignore[no-untyped-def]
contexts_by_task_id, steps_by_task_id = self.task_related_maps(tasks)
return [
self.task_payload_from_task(
task,
context=contexts_by_task_id.get(task.id),
steps=steps_by_task_id.get(task.id, []),
)
for task in tasks
]
def task_payload_from_task(
self,
task,
*,
context=None, # type: ignore[no-untyped-def]
steps=None, # type: ignore[no-untyped-def]
) -> dict[str, object]:
payload = task.to_dict()
session_context = self.task_context_payload(task.id, task=task, context=context)
if session_context:
payload["session_context"] = session_context
retry_state = self.task_retry_state(task.id, steps=steps)
if retry_state:
payload["retry_state"] = retry_state
payload["delivery_state"] = self.task_delivery_state(task.id, task=task)
return payload
def step_payload(self, step) -> dict[str, object]: # type: ignore[no-untyped-def]
payload = step.to_dict()
retry_meta = retry_meta_for_step(step, self.state["settings"])
if retry_meta:
payload.update(retry_meta)
return payload
def task_retry_state(self, task_id: str, *, steps=None) -> dict[str, object] | None: # type: ignore[no-untyped-def]
step_items = steps if steps is not None else self.state["repo"].list_steps(task_id)
for step in step_items:
retry_meta = retry_meta_for_step(step, self.state["settings"])
if retry_meta:
return {"step_name": step.step_name, **retry_meta}
return None
def task_delivery_state(self, task_id: str, *, task=None) -> dict[str, object]: # type: ignore[no-untyped-def]
task = task or self.state["repo"].get_task(task_id)
if task is None:
return {}
session_dir = Path(str(self.state["settings"]["paths"]["session_dir"])) / task.title
source_path = Path(task.source_path)
split_dir = session_dir / "split_video"
def comment_status(flag_name: str, *, enabled: bool) -> str:
if not enabled:
return "disabled"
return "done" if (session_dir / flag_name).exists() else "pending"
return {
"split_comment": comment_status("comment_split_done.flag", enabled=self.state["settings"]["comment"].get("post_split_comment", True)),
"full_video_timeline_comment": comment_status(
"comment_full_done.flag",
enabled=self.state["settings"]["comment"].get("post_full_video_timeline_comment", True),
),
"full_video_bvid_resolved": (session_dir / "full_video_bvid.txt").exists(),
"source_video_present": source_path.exists(),
"split_videos_present": split_dir.exists(),
"cleanup_enabled": {
"delete_source_video_after_collection_synced": self.state["settings"].get("cleanup", {}).get("delete_source_video_after_collection_synced", False),
"delete_split_videos_after_collection_synced": self.state["settings"].get("cleanup", {}).get("delete_split_videos_after_collection_synced", False),
},
}
def task_context_payload(self, task_id: str, *, task=None, context=None) -> dict[str, object] | None: # type: ignore[no-untyped-def]
task = task or self.state["repo"].get_task(task_id)
if task is None:
return None
context = context or self.state["repo"].get_task_context(task_id)
if context is None:
payload = {
"task_id": task.id,
"session_key": None,
"streamer": None,
"room_id": None,
"source_title": task.title,
"segment_started_at": None,
"segment_duration_seconds": None,
"full_video_bvid": None,
"created_at": task.created_at,
"updated_at": task.updated_at,
"context_source": "fallback",
}
else:
payload = context.to_dict()
payload["context_source"] = "task_context"
payload["split_bvid"] = self.read_task_text_artifact(task_id, "bvid.txt", task=task)
full_video_bvid = self.read_task_text_artifact(task_id, "full_video_bvid.txt", task=task)
if full_video_bvid:
payload["full_video_bvid"] = full_video_bvid
payload["video_links"] = {
"split_video_url": self.video_url(payload.get("split_bvid")),
"full_video_url": self.video_url(payload.get("full_video_bvid")),
}
return payload
def session_payload(self, session_key: str) -> dict[str, object] | None:
contexts = self.state["repo"].list_task_contexts_by_session_key(session_key)
if not contexts:
return None
tasks = []
full_video_bvid = None
for context in contexts:
task = self.state["repo"].get_task(context.task_id)
if task is None:
continue
tasks.append(task)
if not full_video_bvid and context.full_video_bvid:
full_video_bvid = context.full_video_bvid
return {
"session_key": session_key,
"task_count": len(tasks),
"full_video_bvid": full_video_bvid,
"full_video_url": self.video_url(full_video_bvid),
"tasks": self.task_payloads_from_tasks(tasks),
}
def timeline_payload(self, task_id: str) -> dict[str, object] | None:
task = self.state["repo"].get_task(task_id)
if task is None:
return None
steps = self.state["repo"].list_steps(task_id)
artifacts = self.state["repo"].list_artifacts(task_id)
actions = self.state["repo"].list_action_records(task_id, limit=200)
items: list[dict[str, object]] = []
if task.created_at:
items.append({
"kind": "task",
"time": task.created_at,
"title": "Task Created",
"summary": task.title,
"status": task.status,
})
if task.updated_at and task.updated_at != task.created_at:
items.append({
"kind": "task",
"time": task.updated_at,
"title": "Task Updated",
"summary": task.status,
"status": task.status,
})
for step in steps:
if step.started_at:
items.append({
"kind": "step",
"time": step.started_at,
"title": f"{step.step_name} started",
"summary": step.status,
"status": step.status,
})
if step.finished_at:
retry_meta = retry_meta_for_step(step, self.state["settings"])
retry_note = ""
if retry_meta and retry_meta.get("next_retry_at"):
retry_note = f" | next retry: {retry_meta['next_retry_at']}"
items.append({
"kind": "step",
"time": step.finished_at,
"title": f"{step.step_name} finished",
"summary": f"{step.error_message or step.status}{retry_note}",
"status": step.status,
"retry_state": retry_meta,
})
for artifact in artifacts:
if artifact.created_at:
items.append({
"kind": "artifact",
"time": artifact.created_at,
"title": artifact.artifact_type,
"summary": artifact.path,
"status": "created",
})
for action in actions:
summary = action.summary
try:
details = json.loads(action.details_json or "{}")
except json.JSONDecodeError:
details = {}
if action.action_name == "comment" and isinstance(details, dict):
split_status = details.get("split", {}).get("status")
full_status = details.get("full", {}).get("status")
fragments = []
if split_status:
fragments.append(f"split={split_status}")
if full_status:
fragments.append(f"full={full_status}")
if fragments:
summary = f"{summary} | {' '.join(fragments)}"
if action.action_name in {"collection_a", "collection_b"} and isinstance(details, dict):
cleanup = details.get("result", {}).get("cleanup") or details.get("cleanup")
if isinstance(cleanup, dict):
removed = cleanup.get("removed") or []
if removed:
summary = f"{summary} | cleanup removed={len(removed)}"
items.append({
"kind": "action",
"time": action.created_at,
"title": action.action_name,
"summary": summary,
"status": action.status,
})
items.sort(key=lambda item: str(item["time"]), reverse=True)
return {"items": items}
def read_task_text_artifact(self, task_id: str, filename: str, *, task=None) -> str | None: # type: ignore[no-untyped-def]
task = task or self.state["repo"].get_task(task_id)
if task is None:
return None
session_dir = Path(str(self.state["settings"]["paths"]["session_dir"])) / task.title
path = session_dir / filename
if not path.exists():
return None
value = path.read_text(encoding="utf-8").strip()
return value or None

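`timeline_payload` orders events by sorting the raw timestamp strings. That is only safe because all timestamps come from one uniform UTC ISO-8601 format (as `utc_now_iso` presumably produces), where lexicographic order coincides with chronological order; a minimal illustration with made-up timestamps:

```python
# Assumes every "time" value shares one fixed-width UTC ISO-8601 format.
items = [
    {"kind": "task", "time": "2026-04-07T02:40:00+00:00", "title": "Task Created"},
    {"kind": "step", "time": "2026-04-07T02:46:30+00:00", "title": "publish finished"},
]
items.sort(key=lambda item: str(item["time"]), reverse=True)  # newest first

print(items[0]["title"])  # -> publish finished
```

Mixing timezone offsets or formats would break this ordering, so the serializer depends on the repository storing timestamps consistently.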
View File

@ -0,0 +1,254 @@
from __future__ import annotations
import json
from pathlib import Path
import re
from biliup_next.core.models import ActionRecord, SessionBinding, TaskContext, utc_now_iso
class SessionDeliveryService:
def __init__(self, state: dict[str, object]):
self.state = state
self.repo = state["repo"]
self.settings = state["settings"]
def bind_task_full_video(self, task_id: str, full_video_bvid: str) -> dict[str, object]:
task = self.repo.get_task(task_id)
if task is None:
return {"error": {"code": "TASK_NOT_FOUND", "message": f"task not found: {task_id}"}}
bvid = self._normalize_bvid(full_video_bvid)
if bvid is None:
return {"error": {"code": "INVALID_BVID", "message": f"invalid bvid: {full_video_bvid}"}}
now = utc_now_iso()
context = self.repo.get_task_context(task_id)
if context is None:
context = TaskContext(
id=None,
task_id=task.id,
session_key=f"task:{task.id}",
streamer=None,
room_id=None,
source_title=task.title,
segment_started_at=None,
segment_duration_seconds=None,
full_video_bvid=bvid,
created_at=task.created_at,
updated_at=now,
)
full_video_bvid_path = self._persist_task_full_video_bvid(task, context, bvid, now=now)
return {
"task_id": task.id,
"session_key": context.session_key,
"full_video_bvid": bvid,
"path": str(full_video_bvid_path),
}
def rebind_session_full_video(self, session_key: str, full_video_bvid: str) -> dict[str, object]:
bvid = self._normalize_bvid(full_video_bvid)
if bvid is None:
return {"error": {"code": "INVALID_BVID", "message": f"invalid bvid: {full_video_bvid}"}}
contexts = self.repo.list_task_contexts_by_session_key(session_key)
if not contexts:
return {"error": {"code": "SESSION_NOT_FOUND", "message": f"session not found: {session_key}"}}
now = utc_now_iso()
self.repo.update_session_full_video_bvid(session_key, bvid, now)
updated_tasks: list[dict[str, object]] = []
for context in contexts:
task = self.repo.get_task(context.task_id)
if task is None:
continue
full_video_bvid_path = self._persist_task_full_video_bvid(task, context, bvid, now=now)
updated_tasks.append({"task_id": task.id, "path": str(full_video_bvid_path)})
return {
"session_key": session_key,
"full_video_bvid": bvid,
"updated_count": len(updated_tasks),
"tasks": updated_tasks,
}
def merge_session(self, session_key: str, task_ids: list[str]) -> dict[str, object]:
normalized_task_ids: list[str] = []
for raw in task_ids:
task_id = str(raw).strip()
if task_id and task_id not in normalized_task_ids:
normalized_task_ids.append(task_id)
if not normalized_task_ids:
return {"error": {"code": "TASK_IDS_EMPTY", "message": "task_ids is empty"}}
now = utc_now_iso()
inherited_bvid = None
existing_contexts = self.repo.list_task_contexts_by_session_key(session_key)
for context in existing_contexts:
if context.full_video_bvid:
inherited_bvid = context.full_video_bvid
break
merged_tasks: list[dict[str, object]] = []
missing_tasks: list[str] = []
for task_id in normalized_task_ids:
task = self.repo.get_task(task_id)
if task is None:
missing_tasks.append(task_id)
continue
context = self.repo.get_task_context(task_id)
if context is None:
context = TaskContext(
id=None,
task_id=task.id,
session_key=session_key,
streamer=None,
room_id=None,
source_title=task.title,
segment_started_at=None,
segment_duration_seconds=None,
full_video_bvid=inherited_bvid,
created_at=task.created_at,
updated_at=now,
)
else:
context.session_key = session_key
context.updated_at = now
if inherited_bvid and not context.full_video_bvid:
context.full_video_bvid = inherited_bvid
self.repo.upsert_task_context(context)
if context.full_video_bvid:
full_video_bvid_path = self._persist_task_full_video_bvid(task, context, context.full_video_bvid, now=now)
else:
full_video_bvid_path = None
payload = {
"task_id": task.id,
"session_key": session_key,
"full_video_bvid": context.full_video_bvid,
}
if full_video_bvid_path is not None:
payload["path"] = str(full_video_bvid_path)
merged_tasks.append(payload)
return {
"session_key": session_key,
"merged_count": len(merged_tasks),
"tasks": merged_tasks,
"missing_task_ids": missing_tasks,
}
def receive_full_video_webhook(self, payload: dict[str, object]) -> dict[str, object]:
raw_bvid = str(payload.get("full_video_bvid") or payload.get("bvid") or "").strip()
bvid = self._normalize_bvid(raw_bvid)
if bvid is None:
return {"error": {"code": "INVALID_BVID", "message": f"invalid bvid: {raw_bvid}"}}
session_key = str(payload.get("session_key") or "").strip() or None
source_title = str(payload.get("source_title") or "").strip() or None
streamer = str(payload.get("streamer") or "").strip() or None
room_id = str(payload.get("room_id") or "").strip() or None
if session_key is None and source_title is None:
return {"error": {"code": "SESSION_KEY_OR_SOURCE_TITLE_REQUIRED", "message": "session_key or source_title required"}}
now = utc_now_iso()
self.repo.upsert_session_binding(
SessionBinding(
id=None,
session_key=session_key,
source_title=source_title,
streamer=streamer,
room_id=room_id,
full_video_bvid=bvid,
created_at=now,
updated_at=now,
)
)
contexts = self.repo.list_task_contexts_by_session_key(session_key) if session_key else []
if not contexts and source_title:
contexts = self.repo.list_task_contexts_by_source_title(source_title)
updated_tasks: list[dict[str, object]] = []
for context in contexts:
task = self.repo.get_task(context.task_id)
if task is None:
continue
if session_key and (context.session_key.startswith("task:") or context.session_key != session_key):
context.session_key = session_key
full_video_bvid_path = self._persist_task_full_video_bvid(task, context, bvid, now=now)
updated_tasks.append({"task_id": task.id, "path": str(full_video_bvid_path)})
self.repo.add_action_record(
ActionRecord(
id=None,
task_id=None,
action_name="webhook_full_video_uploaded",
status="ok",
summary=f"full video webhook received: {bvid}",
details_json=json.dumps(
{
"session_key": session_key,
"source_title": source_title,
"streamer": streamer,
"room_id": room_id,
"updated_count": len(updated_tasks),
},
ensure_ascii=False,
),
created_at=now,
)
)
return {
"ok": True,
"session_key": session_key,
"source_title": source_title,
"full_video_bvid": bvid,
"updated_count": len(updated_tasks),
"tasks": updated_tasks,
}
def _normalize_bvid(self, full_video_bvid: str) -> str | None:
bvid = full_video_bvid.strip()
if not re.fullmatch(r"BV[0-9A-Za-z]+", bvid):
return None
return bvid
def _full_video_bvid_path(self, task_title: str) -> Path:
session_dir = Path(str(self.settings["paths"]["session_dir"])) / task_title
session_dir.mkdir(parents=True, exist_ok=True)
return session_dir / "full_video_bvid.txt"
def _upsert_session_binding_for_context(self, context: TaskContext, full_video_bvid: str, now: str) -> None:
self.repo.upsert_session_binding(
SessionBinding(
id=None,
session_key=context.session_key,
source_title=context.source_title,
streamer=context.streamer,
room_id=context.room_id,
full_video_bvid=full_video_bvid,
created_at=now,
updated_at=now,
)
)
def _persist_task_full_video_bvid(
self,
task,
context: TaskContext,
full_video_bvid: str,
*,
now: str,
) -> Path: # type: ignore[no-untyped-def]
context.full_video_bvid = full_video_bvid
context.updated_at = now
self.repo.upsert_task_context(context)
self._upsert_session_binding_for_context(context, full_video_bvid, now)
path = self._full_video_bvid_path(task.title)
path.write_text(full_video_bvid, encoding="utf-8")
return path

View File

@ -9,13 +9,14 @@ import {
setTaskPageSize,
state,
} from "./state.js";
import { showBanner, syncSettingsEditorFromState, withButtonBusy } from "./utils.js";
import { renderSettingsForm } from "./views/settings.js";
import { renderTasks } from "./views/tasks.js";
export function bindActions({
loadOverview,
loadTaskDetail,
refreshSelectedTaskOnly,
refreshLog,
handleSettingsFieldChange,
}) {
@ -170,29 +171,33 @@ export function bindActions({
document.getElementById("runTaskBtn").onclick = async () => {
if (!state.selectedTaskId) return showBanner("当前没有选中的任务", "warn");
await withButtonBusy(document.getElementById("runTaskBtn"), "执行中…", async () => {
try {
const result = await fetchJson(`/tasks/${state.selectedTaskId}/actions/run`, { method: "POST" });
await refreshSelectedTaskOnly(state.selectedTaskId);
showBanner(`任务已推进processed=${result.processed.length}`, "ok");
} catch (err) {
showBanner(`任务执行失败: ${err}`, "err");
}
});
};
document.getElementById("retryStepBtn").onclick = async () => {
if (!state.selectedTaskId) return showBanner("当前没有选中的任务", "warn");
if (!state.selectedStepName) return showBanner("请先在 Steps 区域选中一个 step", "warn");
try {
const result = await fetchJson(`/tasks/${state.selectedTaskId}/actions/retry-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: state.selectedStepName }),
});
await loadOverview();
showBanner(`已重试 step=${state.selectedStepName},processed=${result.processed.length}`, "ok");
} catch (err) {
showBanner(`重试失败: ${err}`, "err");
}
await withButtonBusy(document.getElementById("retryStepBtn"), "重试中…", async () => {
try {
const result = await fetchJson(`/tasks/${state.selectedTaskId}/actions/retry-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: state.selectedStepName }),
});
await refreshSelectedTaskOnly(state.selectedTaskId);
showBanner(`已重试 step=${state.selectedStepName},processed=${result.processed.length}`, "ok");
} catch (err) {
showBanner(`重试失败: ${err}`, "err");
}
});
};
document.getElementById("resetStepBtn").onclick = async () => {
@@ -200,16 +205,18 @@ export function bindActions({
if (!state.selectedStepName) return showBanner("请先在 Steps 区域选中一个 step", "warn");
const ok = window.confirm(`确认重置到 step=${state.selectedStepName} 并清理其后的产物吗?`);
if (!ok) return;
try {
const result = await fetchJson(`/tasks/${state.selectedTaskId}/actions/reset-to-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: state.selectedStepName }),
});
await loadOverview();
showBanner(`已重置并重跑 step=${state.selectedStepName},processed=${result.run.processed.length}`, "ok");
} catch (err) {
showBanner(`重置失败: ${err}`, "err");
}
await withButtonBusy(document.getElementById("resetStepBtn"), "重置中…", async () => {
try {
const result = await fetchJson(`/tasks/${state.selectedTaskId}/actions/reset-to-step`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ step_name: state.selectedStepName }),
});
await refreshSelectedTaskOnly(state.selectedTaskId);
showBanner(`已重置并重跑 step=${state.selectedStepName},processed=${result.run.processed.length}`, "ok");
} catch (err) {
showBanner(`重置失败: ${err}`, "err");
}
});
};
}

View File

@@ -40,13 +40,22 @@ export async function loadOverviewPayload() {
return { health, doctor, tasks, modules, settings, settingsSchema, services, logs, history, scheduler };
}
export async function loadTasksPayload(limit = 100) {
return fetchJson(`/tasks?limit=${limit}`);
}
export async function loadTaskPayload(taskId) {
const [task, steps, artifacts, history, timeline] = await Promise.all([
const [task, steps, artifacts, history, timeline, context] = await Promise.all([
fetchJson(`/tasks/${taskId}`),
fetchJson(`/tasks/${taskId}/steps`),
fetchJson(`/tasks/${taskId}/artifacts`),
fetchJson(`/tasks/${taskId}/history`),
fetchJson(`/tasks/${taskId}/timeline`),
fetchJson(`/tasks/${taskId}/context`).catch(() => null),
]);
return { task, steps, artifacts, history, timeline };
return { task, steps, artifacts, history, timeline, context };
}
export async function loadSessionPayload(sessionKey) {
return fetchJson(`/sessions/${encodeURIComponent(sessionKey)}`);
}
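
The `loadTaskPayload` change above makes the `/context` request optional by attaching `.catch(() => null)` to that single promise, so `Promise.all` still resolves when the endpoint is missing. A standalone sketch of the pattern, with `fetchStub` as a hypothetical stand-in for the real `fetchJson`:

```javascript
// fetchStub is a hypothetical stand-in for fetchJson: the /context
// endpoint fails, every other path succeeds.
async function fetchStub(path) {
  if (path.endsWith("/context")) throw new Error("HTTP 404");
  return { path };
}

// Only the optional request carries .catch(() => null), so a failure
// there degrades to null instead of rejecting the whole Promise.all.
async function loadWithOptionalContext(taskId) {
  const [task, context] = await Promise.all([
    fetchStub(`/tasks/${taskId}`),
    fetchStub(`/tasks/${taskId}/context`).catch(() => null),
  ]);
  return { task, context };
}
```

Without the per-promise `.catch`, a single 404 on `/context` would reject the whole aggregate and the task detail view could never render.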

View File

@@ -0,0 +1,70 @@
import { escapeHtml, taskDisplayStatus } from "../utils.js";
export function renderSessionPanel(session, actions = {}) {
const wrap = document.getElementById("sessionPanel");
const stateEl = document.getElementById("sessionWorkspaceState");
if (!wrap || !stateEl) return;
if (!session) {
stateEl.className = "task-workspace-state show";
stateEl.textContent = "当前任务如果已绑定 session_key,这里会显示同场片段和完整版绑定信息。";
wrap.innerHTML = "";
return;
}
stateEl.className = "task-workspace-state";
const tasks = session.tasks || [];
wrap.innerHTML = `
<div class="session-hero">
<div>
<div class="summary-title">Session Key</div>
<div class="session-key">${escapeHtml(session.session_key || "-")}</div>
</div>
<div class="session-meta-strip">
<span class="pill">${escapeHtml(`tasks ${session.task_count || tasks.length || 0}`)}</span>
<span class="pill">${escapeHtml(`full BV ${session.full_video_bvid || "-"}`)}</span>
</div>
</div>
<div class="session-actions-grid">
<div class="bind-form">
<div class="summary-title">Session Rebind</div>
<input id="sessionRebindInput" value="${escapeHtml(session.full_video_bvid || "")}" placeholder="BV1..." />
<div class="button-row">
<button id="sessionRebindBtn" class="secondary compact">整个 Session 重绑 BV</button>
${session.full_video_url ? `<a class="detail-link session-link-btn" href="${escapeHtml(session.full_video_url)}" target="_blank" rel="noreferrer">打开完整版</a>` : ""}
</div>
</div>
<div class="bind-form">
<div class="summary-title">Merge Tasks</div>
<input id="sessionMergeInput" placeholder="输入 task id,用逗号分隔" />
<div class="button-row">
<button id="sessionMergeBtn" class="secondary compact">合并到当前 Session</button>
</div>
<div class="muted-note">适用于同一场直播断流后产生的多个片段。</div>
</div>
</div>
<div class="summary-title" style="margin-top:14px;">Session Tasks</div>
<div class="stack-list">
${tasks.map((task) => `
<div class="row-card session-task-card" data-session-task-id="${escapeHtml(task.id)}">
<div class="step-card-title">
<strong>${escapeHtml(task.title)}</strong>
<span class="pill">${escapeHtml(taskDisplayStatus(task))}</span>
</div>
<div class="muted-note">${escapeHtml(task.session_context?.split_bvid || "-")} · ${escapeHtml(task.session_context?.full_video_bvid || "-")}</div>
</div>
`).join("")}
</div>
`;
const rebindBtn = document.getElementById("sessionRebindBtn");
if (rebindBtn) {
rebindBtn.onclick = () => actions.onRebind?.(session.session_key, document.getElementById("sessionRebindInput")?.value || "");
}
const mergeBtn = document.getElementById("sessionMergeBtn");
if (mergeBtn) {
mergeBtn.onclick = () => actions.onMerge?.(session.session_key, document.getElementById("sessionMergeInput")?.value || "");
}
wrap.querySelectorAll("[data-session-task-id]").forEach((node) => {
node.onclick = () => actions.onSelectTask?.(node.dataset.sessionTaskId);
});
}

View File

@@ -1,22 +1,41 @@
import { escapeHtml, statusClass } from "../utils.js";
function displayTaskStatus(task) {
if (task.status === "failed_manual") return "需人工处理";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return "等待B站可见";
if (task.status === "failed_retryable") return "等待自动重试";
return {
created: "已接收",
transcribed: "已转录",
songs_detected: "已识歌",
split_done: "已切片",
published: "已上传",
collection_synced: "已完成",
running: "处理中",
}[task.status] || task.status || "-";
}
export function renderTaskHero(task, steps) {
const wrap = document.getElementById("taskHero");
const succeeded = steps.items.filter((step) => step.status === "succeeded").length;
const running = steps.items.filter((step) => step.status === "running").length;
const failed = steps.items.filter((step) => step.status.startsWith("failed")).length;
const delivery = task.delivery_state || {};
const sessionContext = task.session_context || {};
wrap.className = "task-hero";
wrap.innerHTML = `
<div class="task-hero-title">${escapeHtml(task.title)}</div>
<div class="task-hero-subtitle">${escapeHtml(task.id)} · ${escapeHtml(task.source_path)}</div>
<div class="hero-meta-grid">
<div class="mini-stat"><div class="mini-stat-label">Task Status</div><div class="mini-stat-value"><span class="pill ${statusClass(task.status)}">${escapeHtml(task.status)}</span></div></div>
<div class="mini-stat"><div class="mini-stat-label">Task Status</div><div class="mini-stat-value"><span class="pill ${statusClass(task.status)}">${escapeHtml(displayTaskStatus(task))}</span></div></div>
<div class="mini-stat"><div class="mini-stat-label">Succeeded Steps</div><div class="mini-stat-value">${succeeded}/${steps.items.length}</div></div>
<div class="mini-stat"><div class="mini-stat-label">Running / Failed</div><div class="mini-stat-value">${running} / ${failed}</div></div>
</div>
<div class="task-hero-delivery muted-note">
split comment=${escapeHtml(delivery.split_comment || "-")} · full timeline=${escapeHtml(delivery.full_video_timeline_comment || "-")} · source=${delivery.source_video_present ? "present" : "removed"} · split videos=${delivery.split_videos_present ? "present" : "removed"}
</div>
<div class="task-hero-delivery muted-note">
session=${escapeHtml(sessionContext.session_key || "-")} · split_bv=${escapeHtml(sessionContext.split_bvid || "-")} · full_bv=${escapeHtml(sessionContext.full_video_bvid || "-")}
</div>
`;
}

View File

@@ -1,4 +1,4 @@
import { fetchJson, loadOverviewPayload, loadTaskPayload } from "./api.js";
import { fetchJson, loadOverviewPayload, loadSessionPayload, loadTaskPayload, loadTasksPayload } from "./api.js";
import { bindActions } from "./actions.js";
import { currentRoute, initRouter, navigate } from "./router.js";
import {
@@ -11,11 +11,12 @@ import {
setSelectedLog,
setSelectedStep,
setSelectedTask,
setCurrentSession,
setTaskDetailStatus,
setTaskListLoading,
state,
} from "./state.js";
import { settingsFieldKey, showBanner } from "./utils.js";
import { settingsFieldKey, showBanner, withButtonBusy } from "./utils.js";
import {
renderDoctor,
renderModules,
@@ -27,6 +28,7 @@ import {
import { renderLogContent, renderLogsList } from "./views/logs.js";
import { renderSettingsForm } from "./views/settings.js";
import { renderTaskDetail, renderTasks, renderTaskWorkspaceState } from "./views/tasks.js";
import { renderSessionPanel } from "./components/session-panel.js";
async function refreshLog() {
const name = state.selectedLogName;
@@ -56,7 +58,41 @@ async function loadTaskDetail(taskId) {
renderTaskDetail(payload, async (stepName) => {
setSelectedStep(stepName);
await loadTaskDetail(taskId);
}, {
onBindFullVideo: async (currentTaskId, fullVideoBvid) => {
const button = document.getElementById("bindFullVideoBtn");
const bvid = String(fullVideoBvid || "").trim();
if (!/^BV[0-9A-Za-z]+$/.test(bvid)) {
showBanner("请输入合法的 BV 号", "warn");
return;
}
await withButtonBusy(button, "绑定中…", async () => {
try {
await fetchJson(`/tasks/${currentTaskId}/bind-full-video`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ full_video_bvid: bvid }),
});
await refreshSelectedTaskOnly(currentTaskId);
showBanner(`已绑定完整版 BV: ${bvid}`, "ok");
} catch (err) {
showBanner(`绑定完整版失败: ${err}`, "err");
}
});
},
onOpenSession: async (sessionKey) => {
if (!sessionKey) {
showBanner("当前任务没有可用的 session_key", "warn");
return;
}
try {
await loadSessionDetail(sessionKey);
} catch (err) {
showBanner(`读取 Session 失败: ${err}`, "err");
}
},
});
await loadSessionDetail(payload.task.session_context?.session_key || payload.context?.session_key || null);
setTaskDetailStatus("ready");
renderTaskWorkspaceState("ready");
} catch (err) {
@@ -67,6 +103,79 @@ async function loadTaskDetail(taskId) {
}
}
async function loadSessionDetail(sessionKey) {
if (!sessionKey) {
setCurrentSession(null);
renderSessionPanel(null);
return;
}
const session = await loadSessionPayload(sessionKey);
setCurrentSession(session);
renderSessionPanel(session, {
onSelectTask: async (taskId) => {
if (!taskId) return;
taskSelectHandler(taskId);
},
onRebind: async (currentSessionKey, fullVideoBvid) => {
const button = document.getElementById("sessionRebindBtn");
const bvid = String(fullVideoBvid || "").trim();
if (!/^BV[0-9A-Za-z]+$/.test(bvid)) {
showBanner("请输入合法的 BV 号", "warn");
return;
}
await withButtonBusy(button, "重绑中…", async () => {
try {
await fetchJson(`/sessions/${encodeURIComponent(currentSessionKey)}/rebind`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ full_video_bvid: bvid }),
});
await refreshSelectedTaskOnly();
showBanner(`Session 已重绑完整版 BV: ${bvid}`, "ok");
} catch (err) {
showBanner(`Session 重绑失败: ${err}`, "err");
}
});
},
onMerge: async (currentSessionKey, rawTaskIds) => {
const button = document.getElementById("sessionMergeBtn");
const taskIds = String(rawTaskIds || "")
.split(",")
.map((item) => item.trim())
.filter(Boolean);
if (!taskIds.length) {
showBanner("请先输入至少一个 task id", "warn");
return;
}
await withButtonBusy(button, "合并中…", async () => {
try {
await fetchJson(`/sessions/${encodeURIComponent(currentSessionKey)}/merge`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ task_ids: taskIds }),
});
await refreshSelectedTaskOnly();
showBanner(`已合并 ${taskIds.length} 个任务到当前 Session`, "ok");
} catch (err) {
showBanner(`Session 合并失败: ${err}`, "err");
}
});
},
});
}
async function refreshTaskListOnly() {
const payload = await loadTasksPayload(100);
state.currentTasks = payload.items || [];
renderTasks(taskSelectHandler, taskRowActionHandler);
}
async function refreshSelectedTaskOnly(taskId = state.selectedTaskId) {
if (!taskId) return;
await refreshTaskListOnly();
await loadTaskDetail(taskId);
}
function taskSelectHandler(taskId) {
setSelectedTask(taskId);
setSelectedStep(null);
@@ -79,7 +188,7 @@ async function taskRowActionHandler(action, taskId) {
if (action !== "run") return;
try {
const result = await fetchJson(`/tasks/${taskId}/actions/run`, { method: "POST" });
await loadOverview();
await refreshSelectedTaskOnly(taskId);
showBanner(`任务已推进: ${taskId} / processed=${result.processed.length}`, "ok");
} catch (err) {
showBanner(`任务执行失败: ${err}`, "err");
@@ -201,6 +310,7 @@ async function handleRouteChange(route) {
bindActions({
loadOverview,
loadTaskDetail,
refreshSelectedTaskOnly,
refreshLog,
handleSettingsFieldChange,
});
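
Both the bind-full-video handler and the session rebind handler above inline the same `/^BV[0-9A-Za-z]+$/` guard before hitting the API. Factored out, the check might look like this (`normalizeBvid` is a hypothetical helper, not part of this commit):

```javascript
// Trim the raw input and accept it only when it matches the BV-id
// shape used by the inline guards above; otherwise throw.
const BV_PATTERN = /^BV[0-9A-Za-z]+$/;

function normalizeBvid(raw) {
  const bvid = String(raw || "").trim();
  if (!BV_PATTERN.test(bvid)) {
    throw new Error(`invalid bvid: ${raw}`);
  }
  return bvid;
}
```

A handler would then call `normalizeBvid` once and post the returned value, instead of repeating the regex in every callback.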

View File

@@ -13,6 +13,7 @@ export const state = {
taskListLoading: true,
taskDetailStatus: "idle",
taskDetailError: "",
currentSession: null,
currentLogs: [],
selectedLogName: null,
logListLoading: true,
@@ -74,6 +75,10 @@ export function setTaskDetailStatus(status, error = "") {
state.taskDetailError = error;
}
export function setCurrentSession(session) {
state.currentSession = session;
}
export function setLogs(logs) {
state.currentLogs = logs;
}

View File

@@ -1,9 +1,11 @@
import { state } from "./state.js";
let bannerTimer = null;
export function statusClass(status) {
if (["collection_synced", "published", "commented", "succeeded", "active"].includes(status)) return "good";
if (["done", "resolved", "present"].includes(status)) return "good";
if (["legacy_untracked", "pending", "unresolved"].includes(status)) return "warn";
if (["pending", "unresolved"].includes(status)) return "warn";
if (["removed", "disabled"].includes(status)) return "";
if (["failed_manual", "failed_retryable", "inactive"].includes(status)) return "hot";
if (["running", "activating", "songs_detected", "split_done", "transcribed", "created", "pending"].includes(status)) return "warn";
@@ -14,6 +16,11 @@ export function showBanner(message, kind) {
const el = document.getElementById("banner");
el.textContent = message;
el.className = `banner show ${kind}`;
if (bannerTimer) window.clearTimeout(bannerTimer);
bannerTimer = window.setTimeout(() => {
el.className = "banner";
el.textContent = "";
}, kind === "err" ? 6000 : 3200);
}
export function escapeHtml(text) {
@@ -59,3 +66,92 @@ export function compareFieldEntries(a, b) {
export function settingsFieldKey(group, field) {
return `${group}.${field}`;
}
export function taskDisplayStatus(task) {
if (!task) return "-";
if (task.status === "failed_manual") return "需人工处理";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return "等待B站可见";
if (task.status === "failed_retryable") return "等待自动重试";
return {
created: "已接收",
transcribed: "已转录",
songs_detected: "已识歌",
split_done: "已切片",
published: "已上传",
commented: "评论完成",
collection_synced: "已完成",
running: "处理中",
}[task.status] || task.status || "-";
}
export function taskPrimaryActionLabel(task) {
if (!task) return "执行";
if (task.status === "failed_manual") return "人工重跑";
if (task.retry_state?.retry_due) return "立即重试";
if (task.status === "failed_retryable") return "继续等待";
if (task.status === "collection_synced") return "查看结果";
return "执行";
}
export function taskCurrentStep(task, steps = []) {
const running = steps.find((step) => step.status === "running");
if (running) return stepLabel(running.step_name);
if (task?.retry_state?.step_name) return `${stepLabel(task.retry_state.step_name)}: ${taskDisplayStatus(task)}`;
const pending = steps.find((step) => step.status === "pending");
if (pending) return stepLabel(pending.step_name);
return {
created: "转录字幕",
transcribed: "识别歌曲",
songs_detected: "切分分P",
split_done: "上传分P",
published: "评论与合集",
commented: "同步合集",
collection_synced: "链路完成",
}[task?.status] || "-";
}
export function stepLabel(stepName) {
return {
ingest: "接收视频",
transcribe: "转录字幕",
song_detect: "识别歌曲",
split: "切分分P",
publish: "上传分P",
comment: "发布评论",
collection_a: "加入完整版合集",
collection_b: "加入分P合集",
}[stepName] || stepName || "-";
}
export function actionAdvice(task) {
if (!task) return "";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") {
return "B站通常需要一段时间完成转码和审核,系统会自动重试评论。";
}
if (task.status === "failed_retryable") {
return "当前错误可自动恢复,等到重试时间或手工触发即可。";
}
if (task.status === "failed_manual") {
return "这个任务需要人工判断,先看错误信息,再决定是重试当前步骤还是绑定完整版 BV。";
}
if (task.status === "collection_synced") {
return "链路已完成,可以直接打开分P链接检查结果。";
}
return "系统会继续推进后续步骤,必要时可在这里手工干预。";
}
export async function withButtonBusy(button, loadingText, fn) {
if (!button) return fn();
const originalHtml = button.innerHTML;
const originalDisabled = button.disabled;
button.disabled = true;
button.classList.add("is-busy");
if (loadingText) button.textContent = loadingText;
try {
return await fn();
} finally {
button.disabled = originalDisabled;
button.classList.remove("is-busy");
button.innerHTML = originalHtml;
}
}
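
`withButtonBusy` above guarantees in its `finally` block that the button is re-enabled and its label restored even when the wrapped action rejects. A runnable sketch of that behavior (the helper is reproduced from the diff; `fakeButton` is a minimal stand-in for a DOM element):

```javascript
// Reproduced from the diff above so the sketch runs standalone.
async function withButtonBusy(button, loadingText, fn) {
  if (!button) return fn();
  const originalHtml = button.innerHTML;
  const originalDisabled = button.disabled;
  button.disabled = true;
  button.classList.add("is-busy");
  if (loadingText) button.textContent = loadingText;
  try {
    return await fn();
  } finally {
    button.disabled = originalDisabled;
    button.classList.remove("is-busy");
    button.innerHTML = originalHtml;
  }
}

// Minimal stand-in for a DOM button: just the properties the helper touches.
const fakeButton = {
  innerHTML: "执行",
  textContent: "执行",
  disabled: false,
  classList: { add() {}, remove() {} },
};

async function demo() {
  try {
    await withButtonBusy(fakeButton, "执行中…", async () => {
      throw new Error("boom");
    });
  } catch (err) {
    // swallowed: we only care that the button state was restored
  }
  return { disabled: fakeButton.disabled, label: fakeButton.innerHTML };
}
```

The `try/finally` shape is what lets every action handler in `actions.js` share one busy-state implementation without leaking a permanently disabled button on error.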

View File

@@ -1,5 +1,14 @@
import { state, setTaskPage } from "../state.js";
import { escapeHtml, formatDate, formatDuration, statusClass } from "../utils.js";
import {
actionAdvice,
escapeHtml,
formatDate,
formatDuration,
statusClass,
taskCurrentStep,
taskDisplayStatus,
taskPrimaryActionLabel,
} from "../utils.js";
import { renderArtifactList } from "../components/artifact-list.js";
import { renderHistoryList } from "../components/history-list.js";
import { renderRetryPanel } from "../components/retry-banner.js";
@@ -8,13 +17,13 @@ import { renderTaskHero } from "../components/task-hero.js";
import { renderTimelineList } from "../components/timeline-list.js";
const STATUS_LABELS = {
created: "待转录",
transcribed: "待识歌",
songs_detected: "待切歌",
split_done: "待上传",
published: "待收尾",
created: "已接收",
transcribed: "已转录",
songs_detected: "已识歌",
split_done: "已切片",
published: "已上传",
collection_synced: "已完成",
failed_retryable: "待重试",
failed_manual: "待人工",
running: "处理中",
};
@@ -22,15 +31,17 @@ const STATUS_LABELS = {
const DELIVERY_LABELS = {
done: "已发送",
pending: "待处理",
legacy_untracked: "历史未追踪",
resolved: "已定位",
unresolved: "未定位",
present: "保留",
removed: "已清理",
};
function displayStatus(status) {
return STATUS_LABELS[status] || status || "-";
function displayTaskStatus(task) {
if (task.status === "failed_manual") return "需人工处理";
if (task.status === "failed_retryable" && task.retry_state?.step_name === "comment") return "等待B站可见";
if (task.status === "failed_retryable") return "等待自动重试";
return taskDisplayStatus(task);
}
function displayDelivery(status) {
@@ -162,7 +173,6 @@ export function filteredTasks() {
if (search && !haystack.includes(search)) return false;
if (status && task.status !== status) return false;
const deliveryState = task.delivery_state || {};
if (delivery === "legacy_untracked" && deliveryState.full_video_timeline_comment !== "legacy_untracked") return false;
if (delivery === "pending_comment" && deliveryState.split_comment !== "pending" && deliveryState.full_video_timeline_comment !== "pending") return false;
if (delivery === "cleanup_removed" && deliveryState.source_video_present !== false && deliveryState.split_videos_present !== false) return false;
if (attention && attentionState(task) !== attention) return false;
@@ -304,9 +314,9 @@ export function renderTasks(onSelect, onRowAction = null) {
row.innerHTML = `
<td>
<div class="task-cell-title">${escapeHtml(item.title)}</div>
<div class="task-cell-subtitle">${escapeHtml(item.id)}</div>
<div class="task-cell-subtitle">${escapeHtml(taskCurrentStep(item))}</div>
</td>
<td><span class="pill ${statusClass(item.status)}">${escapeHtml(displayStatus(item.status))}</span></td>
<td><span class="pill ${statusClass(item.status)}">${escapeHtml(displayTaskStatus(item))}</span></td>
<td><span class="pill ${attentionClass(attention)}">${escapeHtml(displayAttention(attention))}</span></td>
<td><span class="pill ${statusClass(delivery.split_comment || "")}">${escapeHtml(displayDelivery(delivery.split_comment || "-"))}</span></td>
<td><span class="pill ${statusClass(delivery.full_video_timeline_comment || "")}">${escapeHtml(displayDelivery(delivery.full_video_timeline_comment || "-"))}</span></td>
@@ -321,7 +331,7 @@ export function renderTasks(onSelect, onRowAction = null) {
</td>
<td class="task-table-actions">
<button class="secondary compact inline-action-btn" data-task-action="open">打开</button>
<button class="compact inline-action-btn" data-task-action="run">${attention === "manual_now" || attention === "retry_now" ? "重跑" : "执行"}</button>
<button class="compact inline-action-btn" data-task-action="run">${escapeHtml(taskPrimaryActionLabel(item))}</button>
</td>
`;
row.onclick = () => onSelect(item.id);
@@ -346,7 +356,7 @@ export function renderTasks(onSelect, onRowAction = null) {
wrap.appendChild(table);
}
export function renderTaskDetail(payload, onStepSelect) {
export function renderTaskDetail(payload, onStepSelect, actions = {}) {
const { task, steps, artifacts, history, timeline } = payload;
renderTaskHero(task, steps);
renderRetryPanel(task);
@@ -355,7 +365,8 @@ export function renderTaskDetail(payload, onStepSelect) {
detail.innerHTML = "";
[
["Task ID", task.id],
["Status", task.status],
["Status", displayTaskStatus(task)],
["Current Step", taskCurrentStep(task, steps.items)],
["Created", formatDate(task.created_at)],
["Updated", formatDate(task.updated_at)],
["Source", task.source_path],
@@ -385,10 +396,40 @@ export function renderTaskDetail(payload, onStepSelect) {
}
}
const delivery = task.delivery_state || {};
const sessionContext = task.session_context || {};
const splitVideoUrl = sessionContext.video_links?.split_video_url;
const fullVideoUrl = sessionContext.video_links?.full_video_url;
const summaryEl = document.getElementById("taskSummary");
summaryEl.innerHTML = `
<div class="summary-title">Recent Result</div>
<div class="summary-text">${escapeHtml(summaryText)}</div>
<div class="summary-title" style="margin-top:14px;">Recommended Next Step</div>
<div class="summary-text">${escapeHtml(actionAdvice(task))}</div>
<div class="summary-title" style="margin-top:14px;">Delivery Links</div>
<div class="delivery-grid">
${renderDeliveryState("Split BV", sessionContext.split_bvid || "-", "")}
${renderDeliveryState("Full BV", sessionContext.full_video_bvid || "-", "")}
${renderLinkState("Split Video", splitVideoUrl)}
${renderLinkState("Full Video", fullVideoUrl)}
</div>
<div class="summary-title" style="margin-top:14px;">Session Context</div>
<div class="delivery-grid">
${renderDeliveryState("Session Key", sessionContext.session_key || "-", "")}
${renderDeliveryState("Streamer", sessionContext.streamer || "-", "")}
${renderDeliveryState("Room ID", sessionContext.room_id || "-", "")}
${renderDeliveryState("Context Source", sessionContext.context_source || "-", "")}
${renderDeliveryState("Segment Start", sessionContext.segment_started_at ? formatDate(sessionContext.segment_started_at) : "-", "")}
${renderDeliveryState("Segment Duration", sessionContext.segment_duration_seconds != null ? formatDuration(sessionContext.segment_duration_seconds) : "-", "")}
</div>
<div class="summary-title" style="margin-top:14px;">Bind Full Video BV</div>
<div class="bind-form">
<input id="bindFullVideoInput" value="${escapeHtml(sessionContext.full_video_bvid || "")}" placeholder="BV1..." />
<div class="button-row">
<button id="bindFullVideoBtn" class="secondary compact">绑定完整版 BV</button>
${sessionContext.session_key ? `<button id="openSessionBtn" class="secondary compact">查看 Session</button>` : ""}
</div>
<div class="muted-note">用于修复评论 / 合集查不到完整版视频的问题。</div>
</div>
<div class="summary-title" style="margin-top:14px;">Delivery State</div>
<div class="delivery-grid">
${renderDeliveryState("Split Comment", delivery.split_comment || "-")}
@@ -403,6 +444,14 @@ export function renderTaskDetail(payload, onStepSelect) {
)}
</div>
`;
const bindBtn = document.getElementById("bindFullVideoBtn");
if (bindBtn) {
bindBtn.onclick = () => actions.onBindFullVideo?.(task.id, document.getElementById("bindFullVideoInput")?.value || "");
}
const openSessionBtn = document.getElementById("openSessionBtn");
if (openSessionBtn) {
openSessionBtn.onclick = () => actions.onOpenSession?.(sessionContext.session_key);
}
renderStepList(steps, onStepSelect);
renderArtifactList(artifacts);
@@ -420,8 +469,21 @@ function renderDeliveryState(label, value, forcedClass = null) {
`;
}
function renderLinkState(label, url) {
return `
<div class="delivery-card">
<div class="delivery-label">${escapeHtml(label)}</div>
<div class="delivery-value">
${url ? `<a class="detail-link" href="${escapeHtml(url)}" target="_blank" rel="noreferrer">打开</a>` : `<span class="muted-note">-</span>`}
</div>
</div>
`;
}
export function renderTaskWorkspaceState(mode, message = "") {
const stateEl = document.getElementById("taskWorkspaceState");
const sessionStateEl = document.getElementById("sessionWorkspaceState");
const sessionPanel = document.getElementById("sessionPanel");
const hero = document.getElementById("taskHero");
const retry = document.getElementById("taskRetryPanel");
const detail = document.getElementById("taskDetail");
@@ -459,4 +521,11 @@ export function renderTaskWorkspaceState(mode, message = "") {
artifactList.innerHTML = "";
historyList.innerHTML = "";
timelineList.innerHTML = "";
if (sessionStateEl) {
sessionStateEl.className = "task-workspace-state show";
sessionStateEl.textContent = mode === "error"
? "Session 区域暂不可用。"
: "当前任务如果已绑定 session_key,这里会显示同场片段和完整版绑定信息。";
}
if (sessionPanel) sessionPanel.innerHTML = "";
}

View File

@@ -134,6 +134,11 @@ button.compact {
font-size: 13px;
}
button.is-busy {
opacity: 0.72;
cursor: wait;
}
.content {
display: grid;
gap: 16px;
@@ -258,6 +263,79 @@ button.compact {
line-height: 1.6;
}
.task-cell-subtitle {
margin-top: 4px;
color: var(--muted);
font-size: 12px;
}
.bind-form {
display: grid;
gap: 10px;
margin-top: 10px;
}
.bind-form input {
width: 100%;
}
.detail-link {
color: var(--accent-2);
text-decoration: none;
font-weight: 600;
}
.detail-link:hover {
text-decoration: underline;
}
.session-panel {
display: grid;
gap: 16px;
}
.session-hero {
display: flex;
justify-content: space-between;
gap: 12px;
align-items: flex-start;
}
.session-key {
margin-top: 6px;
font-size: 20px;
font-weight: 700;
letter-spacing: -0.02em;
}
.session-meta-strip,
.session-actions-grid {
display: grid;
gap: 12px;
}
.session-actions-grid {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
.session-task-card {
cursor: pointer;
}
.session-task-card:hover {
border-color: var(--line-strong);
}
.session-link-btn {
display: inline-flex;
align-items: center;
justify-content: center;
border: 1px solid var(--line);
border-radius: 12px;
padding: 8px 12px;
background: rgba(255,255,255,0.78);
}
.delivery-grid {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));

View File

@@ -1,29 +1,98 @@
from __future__ import annotations
from biliup_next.app.bootstrap import ensure_initialized
from biliup_next.app.task_control_service import TaskControlService
from biliup_next.app.session_delivery_service import SessionDeliveryService
from biliup_next.app.task_audit import record_task_action
from biliup_next.app.task_runner import process_task
from biliup_next.infra.task_reset import TaskResetService
def run_task_action(task_id: str) -> dict[str, object]:
result = process_task(task_id)
state = ensure_initialized()
result = TaskControlService(state).run_task(task_id)
record_task_action(state["repo"], task_id, "task_run", "ok", "task run invoked", result)
return result
def retry_step_action(task_id: str, step_name: str) -> dict[str, object]:
result = process_task(task_id, reset_step=step_name)
state = ensure_initialized()
result = TaskControlService(state).retry_step(task_id, step_name)
record_task_action(state["repo"], task_id, "retry_step", "ok", f"retry step invoked: {step_name}", result)
return result
def reset_to_step_action(task_id: str, step_name: str) -> dict[str, object]:
state = ensure_initialized()
reset_result = TaskResetService(state["repo"]).reset_to_step(task_id, step_name)
process_result = process_task(task_id)
payload = {"reset": reset_result, "run": process_result}
payload = TaskControlService(state).reset_to_step(task_id, step_name)
record_task_action(state["repo"], task_id, "reset_to_step", "ok", f"reset to step invoked: {step_name}", payload)
return payload
def bind_full_video_action(task_id: str, full_video_bvid: str) -> dict[str, object]:
state = ensure_initialized()
payload = SessionDeliveryService(state).bind_task_full_video(task_id, full_video_bvid)
if "error" in payload:
return payload
record_task_action(
state["repo"],
task_id,
"bind_full_video",
"ok",
f"full video bvid bound: {payload['full_video_bvid']}",
payload,
)
return payload
def rebind_session_full_video_action(session_key: str, full_video_bvid: str) -> dict[str, object]:
state = ensure_initialized()
payload = SessionDeliveryService(state).rebind_session_full_video(session_key, full_video_bvid)
if "error" in payload:
return payload
for item in payload["tasks"]:
record_task_action(
state["repo"],
item["task_id"],
"rebind_session_full_video",
"ok",
f"session full video bvid rebound: {payload['full_video_bvid']}",
{
"session_key": session_key,
"full_video_bvid": payload["full_video_bvid"],
"path": item["path"],
},
)
return payload
def merge_session_action(session_key: str, task_ids: list[str]) -> dict[str, object]:
state = ensure_initialized()
payload = SessionDeliveryService(state).merge_session(session_key, task_ids)
if "error" in payload:
return payload
for item in payload["tasks"]:
record_task_action(state["repo"], item["task_id"], "merge_session", "ok", f"task merged into session: {session_key}", item)
return payload
def receive_full_video_webhook(payload: dict[str, object]) -> dict[str, object]:
state = ensure_initialized()
result = SessionDeliveryService(state).receive_full_video_webhook(payload)
if "error" in result:
return result
for item in result["tasks"]:
record_task_action(
state["repo"],
item["task_id"],
"webhook_full_video_uploaded",
"ok",
f"full video bvid received via webhook: {result['full_video_bvid']}",
{
"session_key": result["session_key"],
"source_title": result["source_title"],
"full_video_bvid": result["full_video_bvid"],
"path": item["path"],
},
)
return result

View File

@@ -0,0 +1,25 @@
from __future__ import annotations
from pathlib import Path
from biliup_next.app.task_runner import process_task
from biliup_next.infra.task_reset import TaskResetService
class TaskControlService:
def __init__(self, state: dict[str, object]):
self.state = state
def run_task(self, task_id: str) -> dict[str, object]:
return process_task(task_id)
def retry_step(self, task_id: str, step_name: str) -> dict[str, object]:
return process_task(task_id, reset_step=step_name)
def reset_to_step(self, task_id: str, step_name: str) -> dict[str, object]:
reset_result = TaskResetService(
self.state["repo"],
Path(str(self.state["settings"]["paths"]["session_dir"])),
).reset_to_step(task_id, step_name)
process_result = process_task(task_id)
return {"reset": reset_result, "run": process_result}

View File

@@ -22,6 +22,12 @@ def settings_for(state: dict[str, object], group: str) -> dict[str, object]:
def infer_error_step_name(task, steps: dict[str, object]) -> str: # type: ignore[no-untyped-def]
running = next((step for step in steps.values() if step.status == "running"), None)
if running is not None:
return running.step_name
failed = next((step for step in steps.values() if step.status == "failed_retryable"), None)
if failed is not None:
return failed.step_name
if task.status in {"created", "failed_retryable"} and steps.get("transcribe") and steps["transcribe"].status in {"pending", "failed_retryable", "running"}:
return "transcribe"
if task.status == "transcribed":
@ -57,6 +63,9 @@ def retry_wait_payload(task_id: str, step, state: dict[str, object]) -> dict[str
def next_runnable_step(task, steps: dict[str, object], state: dict[str, object]) -> tuple[str | None, dict[str, object] | None]: # type: ignore[no-untyped-def]
if any(step.status == "running" for step in steps.values()):
return None, None
if task.status == "failed_retryable":
failed = next((step for step in steps.values() if step.status == "failed_retryable"), None)
if failed is None:

View File

@ -1,5 +1,6 @@
from __future__ import annotations
from biliup_next.app.retry_meta import comment_retry_schedule_seconds
from biliup_next.app.retry_meta import publish_retry_schedule_seconds
from biliup_next.app.task_engine import infer_error_step_name, settings_for as task_engine_settings_for
from biliup_next.core.models import utc_now_iso
@ -40,6 +41,12 @@ def resolve_failure(task, repo, state: dict[str, object], exc) -> dict[str, obje
next_status = "failed_manual"
else:
next_retry_delay_seconds = schedule[next_retry_count - 1]
if exc.retryable and step_name == "comment":
schedule = comment_retry_schedule_seconds(settings_for(state, "comment"))
if next_retry_count > len(schedule):
next_status = "failed_manual"
else:
next_retry_delay_seconds = schedule[next_retry_count - 1]
failed_at = utc_now_iso()
repo.update_step_status(
task.id,

View File

@ -10,6 +10,7 @@ from biliup_next.app.task_policies import apply_disabled_step_fallbacks
from biliup_next.app.task_policies import resolve_failure
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import utc_now_iso
from biliup_next.infra.task_reset import STATUS_BEFORE_STEP
def process_task(task_id: str, *, reset_step: str | None = None, include_stage_scan: bool = False) -> dict[str, object]:
@ -41,7 +42,8 @@ def process_task(task_id: str, *, reset_step: str | None = None, include_stage_s
started_at=None,
finished_at=None,
)
repo.update_task_status(task_id, task.status, utc_now_iso())
target_status = STATUS_BEFORE_STEP.get(reset_step, "created")
repo.update_task_status(task_id, target_status, utc_now_iso())
processed.append({"task_id": task_id, "step": reset_step, "reset": True})
record_task_action(repo, task_id, "retry_step", "ok", f"step reset to pending: {reset_step}", {"step_name": reset_step})
@ -60,6 +62,19 @@ def process_task(task_id: str, *, reset_step: str | None = None, include_stage_s
if step_name is None:
break
claimed_at = utc_now_iso()
if not repo.claim_step_running(task.id, step_name, started_at=claimed_at):
processed.append(
{
"task_id": task.id,
"step": step_name,
"skipped": True,
"reason": "step_already_claimed",
}
)
return {"processed": processed}
repo.update_task_status(task.id, "running", claimed_at)
payload = execute_step(state, task.id, step_name)
if current_task.status == "failed_retryable":
payload["retry"] = True

View File

@ -25,12 +25,13 @@ class SettingsService:
self.schema_path = self.config_dir / "settings.schema.json"
self.settings_path = self.config_dir / "settings.json"
self.staged_path = self.config_dir / "settings.staged.json"
self.standalone_example_path = self.config_dir / "settings.standalone.example.json"
def load(self) -> SettingsBundle:
self.ensure_local_settings()
schema = self._read_json(self.schema_path)
settings = self._read_json(self.settings_path)
settings = self._apply_schema_defaults(settings, schema)
settings = self._apply_legacy_env_overrides(settings, schema)
settings = self._normalize_paths(settings)
self.validate(settings, schema)
return SettingsBundle(schema=schema, settings=settings)
@ -49,6 +50,7 @@ class SettingsService:
self._validate_field(group_name, field_name, group_value[field_name], field_schema)
def save_staged(self, settings: dict[str, Any]) -> None:
self.ensure_local_settings()
schema = self._read_json(self.schema_path)
settings = self._apply_schema_defaults(settings, schema)
self.validate(settings, schema)
@ -68,12 +70,23 @@ class SettingsService:
self._write_json(self.staged_path, merged)
def promote_staged(self) -> None:
self.ensure_local_settings()
staged = self._read_json(self.staged_path)
schema = self._read_json(self.schema_path)
staged = self._apply_schema_defaults(staged, schema)
self.validate(staged, schema)
self._write_json(self.settings_path, staged)
def ensure_local_settings(self) -> None:
if not self.settings_path.exists():
if not self.standalone_example_path.exists():
raise ConfigError(f"配置文件不存在: {self.settings_path}")
example_settings = self._read_json(self.standalone_example_path)
self._write_json(self.settings_path, example_settings)
if not self.staged_path.exists():
settings = self._read_json(self.settings_path)
self._write_json(self.staged_path, settings)
def _validate_field(self, group: str, name: str, value: Any, field_schema: dict[str, Any]) -> None:
expected = field_schema.get("type")
if expected == "string" and not isinstance(value, str):
@ -130,38 +143,6 @@ class SettingsService:
json.dump(data, f, ensure_ascii=False, indent=2)
f.write("\n")
def _apply_legacy_env_overrides(self, settings: dict[str, Any], schema: dict[str, Any]) -> dict[str, Any]:
env_path = self.root_dir.parent / ".env"
if not env_path.exists():
return settings
env_map: dict[str, str] = {}
with env_path.open("r", encoding="utf-8") as f:
for raw_line in f:
line = raw_line.strip()
if not line or line.startswith("#") or "=" not in line:
continue
key, value = line.split("=", 1)
env_map[key.strip()] = value.strip()
overrides = {
("transcribe", "groq_api_key"): env_map.get("GROQ_API_KEY"),
("song_detect", "codex_cmd"): self._resolve_legacy_path(env_map.get("CODEX_CMD")),
("transcribe", "ffmpeg_bin"): self._resolve_legacy_path(env_map.get("FFMPEG_BIN")),
("split", "ffmpeg_bin"): self._resolve_legacy_path(env_map.get("FFMPEG_BIN")),
("ingest", "ffprobe_bin"): self._resolve_legacy_path(env_map.get("FFPROBE_BIN")),
("publish", "biliup_path"): self._resolve_legacy_path(env_map.get("BILIUP_PATH")),
("publish", "cookie_file"): self._resolve_legacy_path(env_map.get("BILIUP_COOKIE_FILE")),
("paths", "cookies_file"): self._resolve_legacy_path(env_map.get("BILIUP_COOKIE_FILE")),
}
merged = json.loads(json.dumps(settings))
defaults = schema.get("groups", {})
for (group, field), value in overrides.items():
default_value = defaults.get(group, {}).get(field, {}).get("default")
current_value = merged.get(group, {}).get(field)
if value and (current_value in ("", None) or current_value == default_value):
merged[group][field] = value
return merged
def _resolve_legacy_path(self, value: str | None) -> str | None:
if not value:
return value
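The new `ensure_local_settings` bootstrap can be exercised in isolation. A sketch against a temporary directory, using `FileNotFoundError` in place of the project's `ConfigError` (an assumption for self-containment):

```python
import json
import tempfile
from pathlib import Path


def ensure_local_settings(config_dir: Path) -> None:
    # mirrors SettingsService.ensure_local_settings: settings.json is seeded
    # from the standalone example, and the staged file from settings.json
    settings_path = config_dir / "settings.json"
    staged_path = config_dir / "settings.staged.json"
    example_path = config_dir / "settings.standalone.example.json"
    if not settings_path.exists():
        if not example_path.exists():
            raise FileNotFoundError(f"missing config: {settings_path}")
        settings_path.write_text(example_path.read_text(encoding="utf-8"), encoding="utf-8")
    if not staged_path.exists():
        staged_path.write_text(settings_path.read_text(encoding="utf-8"), encoding="utf-8")


config_dir = Path(tempfile.mkdtemp())
(config_dir / "settings.standalone.example.json").write_text(
    json.dumps({"paths": {}}), encoding="utf-8"
)
ensure_local_settings(config_dir)
```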

View File

@ -78,3 +78,36 @@ class ActionRecord:
def to_dict(self) -> dict[str, Any]:
return asdict(self)
@dataclass(slots=True)
class TaskContext:
id: int | None
task_id: str
session_key: str
streamer: str | None
room_id: str | None
source_title: str | None
segment_started_at: str | None
segment_duration_seconds: float | None
full_video_bvid: str | None
created_at: str
updated_at: str
def to_dict(self) -> dict[str, Any]:
return asdict(self)
@dataclass(slots=True)
class SessionBinding:
id: int | None
session_key: str | None
source_title: str | None
streamer: str | None
room_id: str | None
full_video_bvid: str
created_at: str
updated_at: str
def to_dict(self) -> dict[str, Any]:
return asdict(self)

View File

@ -0,0 +1,113 @@
from __future__ import annotations
import json
from pathlib import Path
from typing import Any
import requests
from biliup_next.core.errors import ModuleError
class BilibiliApiAdapter:
def load_cookies(self, path: Path) -> dict[str, str]:
with path.open("r", encoding="utf-8") as file_handle:
data = json.load(file_handle)
if "cookie_info" in data:
return {c["name"]: c["value"] for c in data.get("cookie_info", {}).get("cookies", [])}
return data
def build_session(
self,
*,
cookies: dict[str, str],
referer: str,
origin: str | None = None,
) -> requests.Session:
session = requests.Session()
session.cookies.update(cookies)
headers = {
"User-Agent": "Mozilla/5.0",
"Referer": referer,
}
if origin:
headers["Origin"] = origin
session.headers.update(headers)
return session
def get_video_view(self, session: requests.Session, bvid: str, *, error_code: str, error_message: str) -> dict[str, Any]:
result = session.get("https://api.bilibili.com/x/web-interface/view", params={"bvid": bvid}, timeout=15).json()
if result.get("code") != 0:
raise ModuleError(
code=error_code,
message=f"{error_message}: {result.get('message')}",
retryable=True,
)
return dict(result["data"])
def add_reply(self, session: requests.Session, *, csrf: str, aid: int, content: str, error_message: str) -> dict[str, Any]:
result = session.post(
"https://api.bilibili.com/x/v2/reply/add",
data={"type": 1, "oid": aid, "message": content, "plat": 1, "csrf": csrf},
timeout=15,
).json()
if result.get("code") != 0:
raise ModuleError(
code="COMMENT_POST_FAILED",
message=f"{error_message}: {result.get('message')}",
retryable=True,
)
return dict(result["data"])
def top_reply(self, session: requests.Session, *, csrf: str, aid: int, rpid: int, error_message: str) -> None:
result = session.post(
"https://api.bilibili.com/x/v2/reply/top",
data={"type": 1, "oid": aid, "rpid": rpid, "action": 1, "csrf": csrf},
timeout=15,
).json()
if result.get("code") != 0:
raise ModuleError(
code="COMMENT_TOP_FAILED",
message=f"{error_message}: {result.get('message')}",
retryable=True,
)
def list_seasons(self, session: requests.Session) -> dict[str, Any]:
result = session.get("https://member.bilibili.com/x2/creative/web/seasons", params={"pn": 1, "ps": 50}, timeout=15).json()
return dict(result)
def add_section_episodes(
self,
session: requests.Session,
*,
csrf: str,
section_id: int,
episodes: list[dict[str, object]],
) -> dict[str, Any]:
return dict(
session.post(
"https://member.bilibili.com/x2/creative/web/season/section/episodes/add",
params={"csrf": csrf},
json={"sectionId": section_id, "episodes": episodes},
timeout=20,
).json()
)
def get_section_detail(self, session: requests.Session, *, section_id: int) -> dict[str, Any]:
return dict(
session.get(
"https://member.bilibili.com/x2/creative/web/season/section",
params={"id": section_id},
timeout=20,
).json()
)
def edit_section(self, session: requests.Session, *, csrf: str, payload: dict[str, object]) -> dict[str, Any]:
return dict(
session.post(
"https://member.bilibili.com/x2/creative/web/season/section/edit",
params={"csrf": csrf},
json=payload,
timeout=20,
).json()
)
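`load_cookies` normalizes two cookie file shapes: biliup's login dump nests entries under `cookie_info.cookies`, while an already-flat name-to-value map passes through. A self-contained sketch of that normalization:

```python
def flatten_cookies(data: dict) -> dict[str, str]:
    # mirrors BilibiliApiAdapter.load_cookies: biliup-style cookies.json nests
    # entries under cookie_info.cookies; flat dicts are returned unchanged
    if "cookie_info" in data:
        return {c["name"]: c["value"] for c in data.get("cookie_info", {}).get("cookies", [])}
    return data


raw = {
    "cookie_info": {
        "cookies": [
            {"name": "SESSDATA", "value": "abc"},
            {"name": "bili_jct", "value": "xyz"},
        ]
    }
}
cookies = flatten_cookies(raw)
```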

View File

@ -0,0 +1,27 @@
from __future__ import annotations
import subprocess
from biliup_next.core.errors import ModuleError
class BiliupCliAdapter:
def run(self, cmd: list[str], *, label: str) -> subprocess.CompletedProcess[str]:
try:
return subprocess.run(cmd, capture_output=True, text=True, check=False)
except FileNotFoundError as exc:
raise ModuleError(
code="BILIUP_NOT_FOUND",
message=f"找不到 biliup 命令: {cmd[0]} ({label})",
retryable=False,
) from exc
def run_optional(self, cmd: list[str]) -> None:
try:
subprocess.run(cmd, capture_output=True, text=True, check=False)
except FileNotFoundError as exc:
raise ModuleError(
code="BILIUP_NOT_FOUND",
message=f"找不到 biliup 命令: {cmd[0]}",
retryable=False,
) from exc

View File

@ -1,176 +0,0 @@
from __future__ import annotations
import json
import random
import re
import subprocess
from pathlib import Path
from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import PublishRecord, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.legacy_paths import legacy_project_root
class LegacyBiliupPublishProvider:
manifest = ProviderManifest(
id="biliup_cli",
name="Legacy biliup CLI Publish Provider",
version="0.1.0",
provider_type="publish_provider",
entrypoint="biliup_next.infra.adapters.biliup_publish_legacy:LegacyBiliupPublishProvider",
capabilities=["publish"],
enabled_by_default=True,
)
def __init__(self, next_root: Path):
self.next_root = next_root
self.legacy_root = legacy_project_root(next_root)
def publish(self, task: Task, clip_videos: list, settings: dict[str, Any]) -> PublishRecord:
work_dir = Path(str(settings.get("session_dir", str(self.legacy_root / "session")))) / task.title
bvid_file = work_dir / "bvid.txt"
upload_done = work_dir / "upload_done.flag"
config = self._load_upload_config(Path(str(settings.get("upload_config_file", str(self.legacy_root / "upload_config.json")))))
if bvid_file.exists():
bvid = bvid_file.read_text(encoding="utf-8").strip()
return PublishRecord(
id=None,
task_id=task.id,
platform="bilibili",
aid=None,
bvid=bvid,
title=task.title,
published_at=utc_now_iso(),
)
video_files = [artifact.path for artifact in clip_videos]
if not video_files:
raise ModuleError(
code="PUBLISH_NO_CLIPS",
message=f"没有可上传的切片: {task.id}",
retryable=False,
)
parsed = self._parse_filename(task.title, config)
streamer = parsed.get("streamer", task.title)
date = parsed.get("date", "")
songs_txt = work_dir / "songs.txt"
songs_list = songs_txt.read_text(encoding="utf-8").strip() if songs_txt.exists() else ""
songs_json = work_dir / "songs.json"
song_count = 0
if songs_json.exists():
song_count = len(json.loads(songs_json.read_text(encoding="utf-8")).get("songs", []))
quote = self._get_random_quote(config)
template_vars = {
"streamer": streamer,
"date": date,
"song_count": song_count,
"songs_list": songs_list,
"daily_quote": quote.get("text", ""),
"quote_author": quote.get("author", ""),
}
template = config.get("template", {})
title = template.get("title", "{streamer}_{date}").format(**template_vars)
description = template.get("description", "{songs_list}").format(**template_vars)
dynamic = template.get("dynamic", "").format(**template_vars)
tags = template.get("tag", "翻唱,唱歌,音乐").format(**template_vars)
streamer_cfg = config.get("streamers", {})
if streamer in streamer_cfg:
tags = streamer_cfg[streamer].get("tags", tags)
upload_settings = config.get("upload_settings", {})
tid = upload_settings.get("tid", 31)
biliup_path = str(settings.get("biliup_path", str(self.legacy_root / "biliup")))
cookie_file = str(settings.get("cookie_file", str(self.legacy_root / "cookies.json")))
subprocess.run([biliup_path, "-u", cookie_file, "renew"], capture_output=True, text=True)
first_batch = video_files[:5]
remaining_batches = [video_files[i:i + 5] for i in range(5, len(video_files), 5)]
upload_cmd = [
biliup_path, "-u", cookie_file, "upload",
*first_batch,
"--title", title,
"--tid", str(tid),
"--tag", tags,
"--copyright", str(upload_settings.get("copyright", 2)),
"--source", upload_settings.get("source", "直播回放"),
"--desc", description,
]
if dynamic:
upload_cmd.extend(["--dynamic", dynamic])
bvid = self._run_upload(upload_cmd, "首批上传")
bvid_file.write_text(bvid, encoding="utf-8")
for idx, batch in enumerate(remaining_batches, 2):
append_cmd = [biliup_path, "-u", cookie_file, "append", "--vid", bvid, *batch]
self._run_append(append_cmd, f"追加第 {idx}")
upload_done.touch()
return PublishRecord(
id=None,
task_id=task.id,
platform="bilibili",
aid=None,
bvid=bvid,
title=title,
published_at=utc_now_iso(),
)
def _run_upload(self, cmd: list[str], label: str) -> str:
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode == 0:
match = re.search(r'"bvid":"(BV[A-Za-z0-9]+)"', result.stdout) or re.search(r'(BV[A-Za-z0-9]+)', result.stdout)
if match:
return match.group(1)
raise ModuleError(
code="PUBLISH_UPLOAD_FAILED",
message=f"{label}失败",
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
def _run_append(self, cmd: list[str], label: str) -> None:
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode == 0:
return
raise ModuleError(
code="PUBLISH_APPEND_FAILED",
message=f"{label}失败",
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
def _load_upload_config(self, path: Path) -> dict[str, Any]:
if not path.exists():
return {}
return json.loads(path.read_text(encoding="utf-8"))
def _parse_filename(self, filename: str, config: dict[str, Any] | None = None) -> dict[str, str]:
config = config or {}
patterns = config.get("filename_patterns", {}).get("patterns", [])
for pattern_config in patterns:
regex = pattern_config.get("regex")
if not regex:
continue
match = re.match(regex, filename)
if match:
data = match.groupdict()
date_format = pattern_config.get("date_format", "{date}")
try:
data["date"] = date_format.format(**data)
except KeyError:
pass
return data
return {"streamer": filename, "date": ""}
def _get_random_quote(self, config: dict[str, Any]) -> dict[str, str]:
quotes = config.get("quotes", [])
if not quotes:
return {"text": "", "author": ""}
return random.choice(quotes)

View File

@ -0,0 +1,44 @@
from __future__ import annotations
import subprocess
from pathlib import Path
from biliup_next.core.errors import ModuleError
class CodexCliAdapter:
def run_song_detect(
self,
*,
codex_cmd: str,
work_dir: Path,
prompt: str,
) -> subprocess.CompletedProcess[str]:
cmd = [
codex_cmd,
"exec",
prompt.replace("\n", " "),
"--full-auto",
"--sandbox",
"workspace-write",
"--output-schema",
"./song_schema.json",
"-o",
"songs.json",
"--skip-git-repo-check",
"--json",
]
try:
return subprocess.run(
cmd,
cwd=str(work_dir),
capture_output=True,
text=True,
check=False,
)
except FileNotFoundError as exc:
raise ModuleError(
code="CODEX_NOT_FOUND",
message=f"找不到 codex 命令: {codex_cmd}",
retryable=False,
) from exc
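The adapter's argv construction can be sketched without spawning a process. Note the prompt is flattened to one line before being handed to `codex exec`:

```python
def build_codex_cmd(codex_cmd: str, prompt: str) -> list[str]:
    # same argv shape as CodexCliAdapter.run_song_detect, minus the subprocess
    return [
        codex_cmd,
        "exec",
        prompt.replace("\n", " "),
        "--full-auto",
        "--sandbox", "workspace-write",
        "--output-schema", "./song_schema.json",
        "-o", "songs.json",
        "--skip-git-repo-check",
        "--json",
    ]


cmd = build_codex_cmd("codex", "find songs\nin this srt")
```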

View File

@ -1,79 +0,0 @@
from __future__ import annotations
import json
import os
import subprocess
from pathlib import Path
from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Artifact, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.legacy_paths import legacy_project_root
class LegacyGroqTranscribeProvider:
manifest = ProviderManifest(
id="groq",
name="Legacy Groq Transcribe Provider",
version="0.1.0",
provider_type="transcribe_provider",
entrypoint="biliup_next.infra.adapters.groq_legacy:LegacyGroqTranscribeProvider",
capabilities=["transcribe"],
enabled_by_default=True,
)
def __init__(self, next_root: Path):
self.next_root = next_root
self.legacy_root = legacy_project_root(next_root)
self.python_bin = self._resolve_python_bin()
def transcribe(self, task: Task, source_video: Artifact, settings: dict[str, Any]) -> Artifact:
session_dir = Path(str(settings.get("session_dir", str(self.legacy_root / "session"))))
work_dir = (session_dir / task.title).resolve()
cmd = [
self.python_bin,
"video2srt.py",
source_video.path,
str(work_dir),
]
env = {
**os.environ,
"GROQ_API_KEY": str(settings.get("groq_api_key", "")),
"FFMPEG_BIN": str(settings.get("ffmpeg_bin", "ffmpeg")),
}
result = subprocess.run(
cmd,
cwd=str(self.legacy_root),
capture_output=True,
text=True,
env=env,
)
if result.returncode != 0:
raise ModuleError(
code="TRANSCRIBE_FAILED",
message="legacy video2srt.py 执行失败",
retryable=True,
details={"stderr": result.stderr[-2000:], "stdout": result.stdout[-2000:]},
)
srt_path = work_dir / f"{task.title}.srt"
if not srt_path.exists():
raise ModuleError(
code="TRANSCRIBE_OUTPUT_MISSING",
message=f"未找到字幕文件: {srt_path}",
retryable=False,
)
return Artifact(
id=None,
task_id=task.id,
artifact_type="subtitle_srt",
path=str(srt_path),
metadata_json=json.dumps({"provider": "groq_legacy"}),
created_at=utc_now_iso(),
)
def _resolve_python_bin(self) -> str:
venv_python = self.legacy_root / ".venv" / "bin" / "python"
if venv_python.exists():
return str(venv_python)
return "python"

View File

@ -1,27 +0,0 @@
from __future__ import annotations
from pathlib import Path
class CommentFlagMigrationService:
def migrate(self, session_dir: Path) -> dict[str, int]:
migrated_split_flags = 0
legacy_untracked_full = 0
if not session_dir.exists():
return {"migrated_split_flags": 0, "legacy_untracked_full": 0}
for folder in sorted(p for p in session_dir.iterdir() if p.is_dir()):
comment_done = folder / "comment_done.flag"
split_done = folder / "comment_split_done.flag"
full_done = folder / "comment_full_done.flag"
if not comment_done.exists():
continue
if not split_done.exists():
split_done.touch()
migrated_split_flags += 1
if not full_done.exists():
legacy_untracked_full += 1
return {
"migrated_split_flags": migrated_split_flags,
"legacy_untracked_full": legacy_untracked_full,
}
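The flag migration is pure filesystem bookkeeping, so it is easy to exercise against a throwaway session directory. A sketch with the same semantics: sessions carrying the legacy `comment_done.flag` gain a `comment_split_done.flag`, while missing full-comment flags are only counted, never created:

```python
import tempfile
from pathlib import Path


def migrate_comment_flags(session_dir: Path) -> dict[str, int]:
    # mirrors CommentFlagMigrationService.migrate
    migrated, untracked_full = 0, 0
    if not session_dir.exists():
        return {"migrated_split_flags": 0, "legacy_untracked_full": 0}
    for folder in sorted(p for p in session_dir.iterdir() if p.is_dir()):
        if not (folder / "comment_done.flag").exists():
            continue
        split_done = folder / "comment_split_done.flag"
        if not split_done.exists():
            split_done.touch()
            migrated += 1
        if not (folder / "comment_full_done.flag").exists():
            untracked_full += 1
    return {"migrated_split_flags": migrated, "legacy_untracked_full": untracked_full}


session_dir = Path(tempfile.mkdtemp())
(session_dir / "s1").mkdir()
(session_dir / "s1" / "comment_done.flag").touch()
result = migrate_comment_flags(session_dir)
```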

View File

@ -59,6 +59,37 @@ CREATE TABLE IF NOT EXISTS action_records (
created_at TEXT NOT NULL,
FOREIGN KEY(task_id) REFERENCES tasks(id)
);
CREATE TABLE IF NOT EXISTS task_contexts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_id TEXT NOT NULL UNIQUE,
session_key TEXT NOT NULL,
streamer TEXT,
room_id TEXT,
source_title TEXT,
segment_started_at TEXT,
segment_duration_seconds REAL,
full_video_bvid TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
FOREIGN KEY(task_id) REFERENCES tasks(id)
);
CREATE TABLE IF NOT EXISTS session_bindings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_key TEXT UNIQUE,
source_title TEXT,
streamer TEXT,
room_id TEXT,
full_video_bvid TEXT NOT NULL,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_task_contexts_session_key ON task_contexts(session_key);
CREATE INDEX IF NOT EXISTS idx_task_contexts_streamer_started_at ON task_contexts(streamer, segment_started_at);
CREATE INDEX IF NOT EXISTS idx_session_bindings_source_title ON session_bindings(source_title);
CREATE INDEX IF NOT EXISTS idx_session_bindings_streamer_room_id ON session_bindings(streamer, room_id);
"""
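The `UNIQUE` constraint on `session_bindings.session_key` is what makes the repository's `ON CONFLICT` upserts work. A trimmed in-memory sketch of that table (bvid values below are placeholders):

```python
import sqlite3

# trimmed copy of the new schema: session_bindings plus one of its indexes
SCHEMA = """
CREATE TABLE IF NOT EXISTS session_bindings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    session_key TEXT UNIQUE,
    source_title TEXT,
    streamer TEXT,
    room_id TEXT,
    full_video_bvid TEXT NOT NULL,
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_session_bindings_source_title ON session_bindings(source_title);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute(
    "INSERT INTO session_bindings (session_key, full_video_bvid, created_at, updated_at) "
    "VALUES (?, ?, ?, ?)",
    ("sess-1", "BV1xx411c7mD", "2026-01-01T00:00:00Z", "2026-01-01T00:00:00Z"),
)
# a second insert for the same session_key upgrades in place instead of duplicating
conn.execute(
    "INSERT INTO session_bindings (session_key, full_video_bvid, created_at, updated_at) "
    "VALUES (?, ?, ?, ?) ON CONFLICT(session_key) DO UPDATE SET "
    "full_video_bvid=excluded.full_video_bvid, updated_at=excluded.updated_at",
    ("sess-1", "BV1yy411c7mE", "2026-01-02T00:00:00Z", "2026-01-02T00:00:00Z"),
)
row = conn.execute(
    "SELECT full_video_bvid FROM session_bindings WHERE session_key = ?", ("sess-1",)
).fetchone()
```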
@ -70,6 +101,10 @@ class Database:
self.db_path.parent.mkdir(parents=True, exist_ok=True)
conn = sqlite3.connect(self.db_path)
conn.row_factory = sqlite3.Row
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("PRAGMA busy_timeout = 5000")
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("PRAGMA synchronous = NORMAL")
return conn
def initialize(self) -> None:

View File

@ -1,7 +0,0 @@
from __future__ import annotations
from pathlib import Path
def legacy_project_root(next_root: Path) -> Path:
return next_root.parent

View File

@ -2,18 +2,27 @@ from __future__ import annotations
from pathlib import Path
ALLOWED_LOG_FILES = {
"monitor.log": Path("/home/theshy/biliup/logs/system/monitor.log"),
"monitorSrt.log": Path("/home/theshy/biliup/logs/system/monitorSrt.log"),
"monitorSongs.log": Path("/home/theshy/biliup/logs/system/monitorSongs.log"),
"upload.log": Path("/home/theshy/biliup/logs/system/upload.log"),
"session_top_comment.py.log": Path("/home/theshy/biliup/logs/system/session_top_comment.py.log"),
"add_to_collection.py.log": Path("/home/theshy/biliup/logs/system/add_to_collection.py.log"),
}
class LogReader:
def __init__(self, root_dir: Path | None = None):
self.root_dir = (root_dir or Path(__file__).resolve().parents[3]).resolve()
self.log_dirs = [
self.root_dir / "logs",
self.root_dir / "runtime" / "logs",
self.root_dir / "data" / "workspace" / "logs",
]
def _allowed_log_files(self) -> dict[str, Path]:
items: dict[str, Path] = {}
for log_dir in self.log_dirs:
if not log_dir.exists():
continue
for path in sorted(p for p in log_dir.rglob("*.log") if p.is_file()):
items.setdefault(path.name, path.resolve())
return items
def list_logs(self) -> dict[str, object]:
allowed_log_files = self._allowed_log_files()
return {
"items": [
{
@ -21,14 +30,15 @@ class LogReader:
"path": str(path),
"exists": path.exists(),
}
for name, path in sorted(ALLOWED_LOG_FILES.items())
for name, path in sorted(allowed_log_files.items())
]
}
def tail(self, name: str, lines: int = 200, contains: str | None = None) -> dict[str, object]:
if name not in ALLOWED_LOG_FILES:
allowed_log_files = self._allowed_log_files()
if name not in allowed_log_files:
raise ValueError(f"unsupported log: {name}")
path = ALLOWED_LOG_FILES[name]
path = allowed_log_files[name]
if not path.exists():
return {"name": name, "path": str(path), "exists": False, "content": ""}
content = path.read_text(encoding="utf-8", errors="replace").splitlines()
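The log discovery replaces the old hard-coded allowlist with a directory scan; because directories are visited in order and `setdefault` keeps the first path seen per file name, `logs/` shadows `runtime/logs/` when both contain the same file. A sketch of that first-wins behaviour:

```python
import tempfile
from pathlib import Path


def allowed_log_files(log_dirs: list[Path]) -> dict[str, Path]:
    # mirrors LogReader._allowed_log_files: scan in order, first path wins
    items: dict[str, Path] = {}
    for log_dir in log_dirs:
        if not log_dir.exists():
            continue
        for path in sorted(p for p in log_dir.rglob("*.log") if p.is_file()):
            items.setdefault(path.name, path.resolve())
    return items


root = Path(tempfile.mkdtemp())
(root / "logs").mkdir()
(root / "runtime" / "logs").mkdir(parents=True)
(root / "logs" / "worker.log").write_text("a", encoding="utf-8")
(root / "runtime" / "logs" / "worker.log").write_text("b", encoding="utf-8")
found = allowed_log_files([root / "logs", root / "runtime" / "logs"])
```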

View File

@ -8,7 +8,7 @@ from biliup_next.core.config import SettingsService
class RuntimeDoctor:
def __init__(self, root_dir: Path):
self.root_dir = root_dir
self.root_dir = root_dir.resolve()
self.settings_service = SettingsService(root_dir)
def run(self) -> dict[str, object]:
@ -28,27 +28,47 @@ class RuntimeDoctor:
("paths", "cookies_file"),
("paths", "upload_config_file"),
):
path = (self.root_dir / settings[group][name]).resolve()
detail = str(path)
if path.exists() and not str(path).startswith(str(self.root_dir)):
detail = f"{path} (external)"
checks.append({"name": f"{group}.{name}", "ok": path.exists(), "detail": detail})
path = Path(str(settings[group][name])).resolve()
checks.append(
{
"name": f"{group}.{name}",
"ok": path.exists() and self._is_internal_path(path),
"detail": self._internal_path_detail(path),
}
)
for group, name in (
("ingest", "ffprobe_bin"),
("transcribe", "ffmpeg_bin"),
("song_detect", "codex_cmd"),
("publish", "biliup_path"),
):
value = settings[group][name]
found = shutil.which(value) if "/" not in value else str((self.root_dir / value).resolve())
ok = bool(found) and (Path(found).exists() if "/" in str(found) else True)
detail = str(found or value)
if ok and "/" in detail and not detail.startswith(str(self.root_dir)):
detail = f"{detail} (external)"
checks.append({"name": f"{group}.{name}", "ok": ok, "detail": detail})
checks.append({"name": f"{group}.{name}", "ok": ok, "detail": str(found or value)})
publish_biliup_path = Path(str(settings["publish"]["biliup_path"])).resolve()
checks.append(
{
"name": "publish.biliup_path",
"ok": publish_biliup_path.exists() and self._is_internal_path(publish_biliup_path),
"detail": self._internal_path_detail(publish_biliup_path),
}
)
return {
"ok": all(item["ok"] for item in checks),
"checks": checks,
}
def _is_internal_path(self, path: Path) -> bool:
try:
path.relative_to(self.root_dir)
return True
except ValueError:
return False
def _internal_path_detail(self, path: Path) -> str:
if self._is_internal_path(path):
return str(path)
return f"{path} (must live under {self.root_dir})"
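The containment check leans on `Path.relative_to`, which raises `ValueError` when the path is not under the given root. A standalone sketch with illustrative paths:

```python
from pathlib import Path


def is_internal_path(root_dir: Path, path: Path) -> bool:
    # mirrors RuntimeDoctor._is_internal_path: relative_to raises ValueError
    # when path does not live under root_dir
    try:
        path.relative_to(root_dir)
        return True
    except ValueError:
        return False


root = Path("/opt/biliup-next")  # illustrative root, not the real install path
inside = is_internal_path(root, Path("/opt/biliup-next/runtime/cookies.json"))
outside = is_internal_path(root, Path("/home/theshy/cookies.json"))
```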

View File

@ -5,7 +5,6 @@ import subprocess
ALLOWED_SERVICES = {
"biliup-next-worker.service",
"biliup-next-api.service",
"biliup-python.service",
}
ALLOWED_ACTIONS = {"start", "stop", "restart"}

View File

@ -1,26 +1,9 @@
from __future__ import annotations
import json
from pathlib import Path
from datetime import datetime, timezone
from biliup_next.core.models import ActionRecord, Artifact, PublishRecord, Task, TaskStep
from biliup_next.core.models import ActionRecord, Artifact, PublishRecord, SessionBinding, Task, TaskContext, TaskStep
from biliup_next.infra.db import Database
TASK_STATUS_ORDER = {
"created": 0,
"transcribed": 1,
"songs_detected": 2,
"split_done": 3,
"published": 4,
"commented": 5,
"collection_synced": 6,
"failed_retryable": 7,
"failed_manual": 8,
}
class TaskRepository:
def __init__(self, db: Database):
self.db = db
@ -58,6 +41,24 @@ class TaskRepository:
)
conn.commit()
def _build_task_query(
self,
*,
status: str | None = None,
search: str | None = None,
) -> tuple[str, list[object]]:
conditions: list[str] = []
params: list[object] = []
if status:
conditions.append("status = ?")
params.append(status)
if search:
conditions.append("(id LIKE ? OR title LIKE ?)")
needle = f"%{search}%"
params.extend([needle, needle])
where_clause = f"WHERE {' AND '.join(conditions)}" if conditions else ""
return where_clause, params
def list_tasks(self, limit: int = 100) -> list[Task]:
with self.db.connect() as conn:
rows = conn.execute(
@ -67,6 +68,42 @@ class TaskRepository:
).fetchall()
return [Task(**dict(row)) for row in rows]
def query_tasks(
self,
*,
limit: int = 100,
offset: int = 0,
status: str | None = None,
search: str | None = None,
sort: str = "updated_desc",
) -> tuple[list[Task], int]:
sort_sql = {
"updated_desc": "updated_at DESC",
"updated_asc": "updated_at ASC",
"title_asc": "title COLLATE NOCASE ASC",
"title_desc": "title COLLATE NOCASE DESC",
"created_desc": "created_at DESC",
"created_asc": "created_at ASC",
"status_asc": "status ASC, updated_at DESC",
}.get(sort, "updated_at DESC")
where_clause, params = self._build_task_query(status=status, search=search)
with self.db.connect() as conn:
total = conn.execute(
f"SELECT COUNT(*) AS count FROM tasks {where_clause}",
params,
).fetchone()["count"]
rows = conn.execute(
f"""
SELECT id, source_type, source_path, title, status, created_at, updated_at
FROM tasks
{where_clause}
ORDER BY {sort_sql}
LIMIT ? OFFSET ?
""",
[*params, limit, offset],
).fetchall()
return [Task(**dict(row)) for row in rows], int(total)
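Two details worth noting in `query_tasks`: filter values always travel as bound parameters, and the `ORDER BY` fragment comes from a fixed whitelist, so caller-supplied sort keys never reach the SQL text. A condensed sketch of that query builder:

```python
from __future__ import annotations


def build_task_query(
    status: str | None = None,
    search: str | None = None,
    sort: str = "updated_desc",
) -> tuple[str, list[object]]:
    # condensed from TaskRepository: parameterized filters plus a whitelisted
    # ORDER BY; unknown sort keys fall back to updated_at DESC
    sort_sql = {
        "updated_desc": "updated_at DESC",
        "title_asc": "title COLLATE NOCASE ASC",
    }.get(sort, "updated_at DESC")
    conditions: list[str] = []
    params: list[object] = []
    if status:
        conditions.append("status = ?")
        params.append(status)
    if search:
        conditions.append("(id LIKE ? OR title LIKE ?)")
        params.extend([f"%{search}%", f"%{search}%"])
    where = f"WHERE {' AND '.join(conditions)}" if conditions else ""
    return f"SELECT id FROM tasks {where} ORDER BY {sort_sql}", params


sql, params = build_task_query(status="published", search="theshy", sort="nonsense")
```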
def get_task(self, task_id: str) -> Task | None:
with self.db.connect() as conn:
row = conn.execute(
@ -81,6 +118,7 @@ class TaskRepository:
conn.execute("DELETE FROM action_records WHERE task_id = ?", (task_id,))
conn.execute("DELETE FROM publish_records WHERE task_id = ?", (task_id,))
conn.execute("DELETE FROM artifacts WHERE task_id = ?", (task_id,))
conn.execute("DELETE FROM task_contexts WHERE task_id = ?", (task_id,))
conn.execute("DELETE FROM task_steps WHERE task_id = ?", (task_id,))
conn.execute("DELETE FROM tasks WHERE id = ?", (task_id,))
conn.commit()
@ -172,6 +210,19 @@ class TaskRepository:
)
conn.commit()
def claim_step_running(self, task_id: str, step_name: str, *, started_at: str) -> bool:
with self.db.connect() as conn:
result = conn.execute(
"""
UPDATE task_steps
SET status = ?, started_at = ?, finished_at = NULL, error_code = NULL, error_message = NULL
WHERE task_id = ? AND step_name = ? AND status IN (?, ?)
""",
("running", started_at, task_id, step_name, "pending", "failed_retryable"),
)
conn.commit()
return result.rowcount == 1
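`claim_step_running` is the new concurrency guard: the `UPDATE` only matches while the step is `pending` or `failed_retryable`, so a second claimant sees `rowcount == 0` and backs off. A minimal sketch against an in-memory table with the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_steps (task_id TEXT, step_name TEXT, status TEXT, started_at TEXT)"
)
conn.execute(
    "INSERT INTO task_steps VALUES (?, ?, ?, NULL)", ("t1", "transcribe", "pending")
)


def claim_step_running(task_id: str, step_name: str, started_at: str) -> bool:
    # the guarded UPDATE fires at most once per pending/failed_retryable step;
    # a concurrent second claim matches zero rows and returns False
    cur = conn.execute(
        "UPDATE task_steps SET status = 'running', started_at = ? "
        "WHERE task_id = ? AND step_name = ? AND status IN ('pending', 'failed_retryable')",
        (started_at, task_id, step_name),
    )
    conn.commit()
    return cur.rowcount == 1


first = claim_step_running("t1", "transcribe", "2026-01-01T00:00:00Z")
second = claim_step_running("t1", "transcribe", "2026-01-01T00:00:01Z")
```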
def add_artifact(self, artifact: Artifact) -> None:
with self.db.connect() as conn:
existing = conn.execute(
@ -265,6 +316,250 @@ class TaskRepository:
)
conn.commit()
def upsert_task_context(self, context: TaskContext) -> None:
with self.db.connect() as conn:
conn.execute(
"""
INSERT INTO task_contexts (
task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(task_id) DO UPDATE SET
session_key=excluded.session_key,
streamer=excluded.streamer,
room_id=excluded.room_id,
source_title=excluded.source_title,
segment_started_at=excluded.segment_started_at,
segment_duration_seconds=excluded.segment_duration_seconds,
full_video_bvid=excluded.full_video_bvid,
updated_at=excluded.updated_at
""",
(
context.task_id,
context.session_key,
context.streamer,
context.room_id,
context.source_title,
context.segment_started_at,
context.segment_duration_seconds,
context.full_video_bvid,
context.created_at,
context.updated_at,
),
)
conn.commit()
def get_task_context(self, task_id: str) -> TaskContext | None:
with self.db.connect() as conn:
row = conn.execute(
"""
SELECT id, task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
FROM task_contexts
WHERE task_id = ?
""",
(task_id,),
).fetchone()
return TaskContext(**dict(row)) if row else None
def list_task_contexts_by_session_key(self, session_key: str) -> list[TaskContext]:
with self.db.connect() as conn:
rows = conn.execute(
"""
SELECT id, task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
FROM task_contexts
WHERE session_key = ?
ORDER BY segment_started_at ASC, id ASC
""",
(session_key,),
).fetchall()
return [TaskContext(**dict(row)) for row in rows]
def list_task_contexts_by_source_title(self, source_title: str) -> list[TaskContext]:
with self.db.connect() as conn:
rows = conn.execute(
"""
SELECT id, task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
FROM task_contexts
WHERE source_title = ?
ORDER BY COALESCE(segment_started_at, updated_at) ASC, id ASC
""",
(source_title,),
).fetchall()
return [TaskContext(**dict(row)) for row in rows]
def list_task_contexts_for_task_ids(self, task_ids: list[str]) -> dict[str, TaskContext]:
if not task_ids:
return {}
placeholders = ", ".join("?" for _ in task_ids)
with self.db.connect() as conn:
rows = conn.execute(
f"""
SELECT id, task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
FROM task_contexts
WHERE task_id IN ({placeholders})
""",
task_ids,
).fetchall()
return {row["task_id"]: TaskContext(**dict(row)) for row in rows}
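The `IN ({placeholders})` pattern used by `list_task_contexts_for_task_ids` and `list_steps_for_task_ids` generates one `?` per id and binds the list directly; a minimal standalone sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (task_id TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("c",)])

task_ids = ["a", "c"]
# One "?" per id, joined into the SQL; the ids themselves stay bound parameters.
placeholders = ", ".join("?" for _ in task_ids)
rows = conn.execute(
    f"SELECT task_id FROM t WHERE task_id IN ({placeholders})", task_ids
).fetchall()
found = sorted(r[0] for r in rows)
```

SQLite caps the number of bound parameters per statement, so a very long `task_ids` list would need to be chunked.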
def find_recent_task_contexts(self, streamer: str, limit: int = 20) -> list[TaskContext]:
with self.db.connect() as conn:
rows = conn.execute(
"""
SELECT id, task_id, session_key, streamer, room_id, source_title,
segment_started_at, segment_duration_seconds, full_video_bvid,
created_at, updated_at
FROM task_contexts
WHERE streamer = ?
ORDER BY COALESCE(segment_started_at, updated_at) DESC, id DESC
LIMIT ?
""",
(streamer, limit),
).fetchall()
return [TaskContext(**dict(row)) for row in rows]
def list_steps_for_task_ids(self, task_ids: list[str]) -> dict[str, list[TaskStep]]:
if not task_ids:
return {}
placeholders = ", ".join("?" for _ in task_ids)
with self.db.connect() as conn:
rows = conn.execute(
f"""
SELECT id, task_id, step_name, status, error_code, error_message,
retry_count, started_at, finished_at
FROM task_steps
WHERE task_id IN ({placeholders})
ORDER BY id ASC
""",
task_ids,
).fetchall()
result: dict[str, list[TaskStep]] = {}
for row in rows:
step = TaskStep(**dict(row))
result.setdefault(step.task_id, []).append(step)
return result
def update_session_full_video_bvid(self, session_key: str, full_video_bvid: str, updated_at: str) -> int:
with self.db.connect() as conn:
result = conn.execute(
"""
UPDATE task_contexts
SET full_video_bvid = ?, updated_at = ?
WHERE session_key = ?
""",
(full_video_bvid, updated_at, session_key),
)
conn.commit()
return result.rowcount
def upsert_session_binding(self, binding: SessionBinding) -> None:
with self.db.connect() as conn:
if binding.session_key:
conn.execute(
"""
INSERT INTO session_bindings (
session_key, source_title, streamer, room_id, full_video_bvid, created_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(session_key) DO UPDATE SET
source_title=excluded.source_title,
streamer=excluded.streamer,
room_id=excluded.room_id,
full_video_bvid=excluded.full_video_bvid,
updated_at=excluded.updated_at
""",
(
binding.session_key,
binding.source_title,
binding.streamer,
binding.room_id,
binding.full_video_bvid,
binding.created_at,
binding.updated_at,
),
)
else:
existing = conn.execute(
"""
SELECT id
FROM session_bindings
WHERE source_title = ?
ORDER BY id DESC
LIMIT 1
""",
(binding.source_title,),
).fetchone()
if existing:
conn.execute(
"""
UPDATE session_bindings
SET streamer = ?, room_id = ?, full_video_bvid = ?, updated_at = ?
WHERE id = ?
""",
(
binding.streamer,
binding.room_id,
binding.full_video_bvid,
binding.updated_at,
existing["id"],
),
)
else:
conn.execute(
"""
INSERT INTO session_bindings (
session_key, source_title, streamer, room_id, full_video_bvid, created_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?)
""",
(
binding.session_key,
binding.source_title,
binding.streamer,
binding.room_id,
binding.full_video_bvid,
binding.created_at,
binding.updated_at,
),
)
conn.commit()
def get_session_binding(self, *, session_key: str | None = None, source_title: str | None = None) -> SessionBinding | None:
with self.db.connect() as conn:
row = None
if session_key:
row = conn.execute(
"""
SELECT id, session_key, source_title, streamer, room_id, full_video_bvid, created_at, updated_at
FROM session_bindings
WHERE session_key = ?
LIMIT 1
""",
(session_key,),
).fetchone()
if row is None and source_title:
row = conn.execute(
"""
SELECT id, session_key, source_title, streamer, room_id, full_video_bvid, created_at, updated_at
FROM session_bindings
WHERE source_title = ?
ORDER BY id DESC
LIMIT 1
""",
(source_title,),
).fetchone()
return SessionBinding(**dict(row)) if row else None
def list_action_records(
self,
task_id: str | None = None,
@@ -297,162 +592,3 @@ class TaskRepository:
(*params, limit),
).fetchall()
return [ActionRecord(**dict(row)) for row in rows]
def bootstrap_from_legacy_sessions(self, session_dir: Path) -> int:
synced = 0
if not session_dir.exists():
return synced
for folder in sorted(p for p in session_dir.iterdir() if p.is_dir()):
task_id = folder.name
existing_task = self.get_task(task_id)
derived_status = "created"
if (folder / "transcribe_done.flag").exists():
derived_status = "transcribed"
if (folder / "songs.json").exists():
derived_status = "songs_detected"
if (folder / "split_done.flag").exists():
derived_status = "split_done"
if (folder / "upload_done.flag").exists():
derived_status = "published"
if (folder / "comment_done.flag").exists():
derived_status = "commented"
if (folder / "collection_a_done.flag").exists() or (folder / "collection_b_done.flag").exists():
derived_status = "collection_synced"
effective_status = self._merge_task_status(existing_task.status if existing_task else None, derived_status)
created_at = (
existing_task.created_at
if existing_task and existing_task.created_at
else self._folder_time_iso(folder)
)
updated_at = (
existing_task.updated_at
if existing_task and existing_task.updated_at
else created_at
)
task = Task(
id=task_id,
source_type=existing_task.source_type if existing_task else "legacy_session",
source_path=existing_task.source_path if existing_task else str(folder),
title=folder.name,
status=effective_status,
created_at=created_at,
updated_at=updated_at,
)
self.upsert_task(task)
steps = self._merge_steps(folder, task_id)
self.replace_steps(task_id, steps)
self._bootstrap_artifacts(folder, task_id)
synced += 1
return synced
def _infer_steps(self, folder: Path, task_id: str) -> list[TaskStep]:
flags = {
"ingest": True,
"transcribe": (folder / "transcribe_done.flag").exists(),
"song_detect": (folder / "songs.json").exists(),
"split": (folder / "split_done.flag").exists(),
"publish": (folder / "upload_done.flag").exists(),
"comment": (folder / "comment_done.flag").exists(),
"collection_a": (folder / "collection_a_done.flag").exists(),
"collection_b": (folder / "collection_b_done.flag").exists(),
}
steps: list[TaskStep] = []
for name, done in flags.items():
steps.append(
TaskStep(
id=None,
task_id=task_id,
step_name=name,
status="succeeded" if done else "pending",
error_code=None,
error_message=None,
retry_count=0,
started_at=None,
finished_at=None,
)
)
return steps
def _merge_steps(self, folder: Path, task_id: str) -> list[TaskStep]:
inferred_steps = {step.step_name: step for step in self._infer_steps(folder, task_id)}
current_steps = {step.step_name: step for step in self.list_steps(task_id)}
merged: list[TaskStep] = []
for step_name, inferred in inferred_steps.items():
current = current_steps.get(step_name)
if current is None:
merged.append(inferred)
continue
if inferred.status == "succeeded":
merged.append(
TaskStep(
id=None,
task_id=task_id,
step_name=step_name,
status="succeeded",
error_code=None,
error_message=None,
retry_count=current.retry_count,
started_at=current.started_at,
finished_at=current.finished_at,
)
)
continue
if current.status != "pending":
merged.append(
TaskStep(
id=None,
task_id=task_id,
step_name=step_name,
status=current.status,
error_code=current.error_code,
error_message=current.error_message,
retry_count=current.retry_count,
started_at=current.started_at,
finished_at=current.finished_at,
)
)
continue
merged.append(inferred)
return merged
@staticmethod
def _merge_task_status(existing_status: str | None, derived_status: str) -> str:
if not existing_status:
return derived_status
existing_rank = TASK_STATUS_ORDER.get(existing_status, -1)
derived_rank = TASK_STATUS_ORDER.get(derived_status, -1)
return existing_status if existing_rank >= derived_rank else derived_status
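The rank merge in `_merge_task_status` keeps whichever status is further along the pipeline. A standalone sketch, with an illustrative `TASK_STATUS_ORDER` subset rather than the module's real table:

```python
# Illustrative ordering, mirroring the derivation chain used during bootstrap.
TASK_STATUS_ORDER = {
    "created": 0,
    "transcribed": 1,
    "songs_detected": 2,
    "split_done": 3,
    "published": 4,
    "commented": 5,
    "collection_synced": 6,
}

def merge_task_status(existing_status, derived_status):
    # A derived status only wins if it is strictly further along than the stored one.
    if not existing_status:
        return derived_status
    existing_rank = TASK_STATUS_ORDER.get(existing_status, -1)
    derived_rank = TASK_STATUS_ORDER.get(derived_status, -1)
    return existing_status if existing_rank >= derived_rank else derived_status

a = merge_task_status(None, "created")
b = merge_task_status("published", "split_done")   # stored status is ahead, keep it
c = merge_task_status("created", "transcribed")    # derived status is ahead, take it
```

This makes legacy bootstrap idempotent: re-scanning flag files can never move a task backwards.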
@staticmethod
def _folder_time_iso(folder: Path) -> str:
return datetime.fromtimestamp(folder.stat().st_mtime, tz=timezone.utc).isoformat()
def _bootstrap_artifacts(self, folder: Path, task_id: str) -> None:
artifacts = []
if any(folder.glob("*.srt")):
for srt in folder.glob("*.srt"):
artifacts.append(("subtitle_srt", srt))
for name in ("songs.json", "songs.txt", "bvid.txt"):
path = folder / name
if path.exists():
artifact_type = {
"songs.json": "songs_json",
"songs.txt": "songs_txt",
"bvid.txt": "publish_bvid",
}[name]
artifacts.append((artifact_type, path))
existing = {(a.artifact_type, a.path) for a in self.list_artifacts(task_id)}
for artifact_type, path in artifacts:
key = (artifact_type, str(path))
if key in existing:
continue
self.add_artifact(
Artifact(
id=None,
task_id=task_id,
artifact_type=artifact_type,
path=str(path),
metadata_json=json.dumps({}),
created_at="",
)
)

View File

@@ -29,8 +29,9 @@ STATUS_BEFORE_STEP = {
class TaskResetService:
def __init__(self, repo: TaskRepository):
def __init__(self, repo: TaskRepository, session_dir: Path):
self.repo = repo
self.session_dir = session_dir.resolve()
def reset_to_step(self, task_id: str, step_name: str) -> dict[str, object]:
task = self.repo.get_task(task_id)
@@ -39,7 +40,7 @@ class TaskResetService:
if step_name not in STEP_ORDER:
raise RuntimeError(f"unsupported step: {step_name}")
work_dir = self._resolve_work_dir(task)
work_dir = self._resolve_work_dir(task, self.session_dir)
self._cleanup_files(work_dir, step_name)
self._cleanup_artifacts(task_id, step_name)
self._reset_steps(task_id, step_name)
@@ -48,9 +49,14 @@ class TaskResetService:
return {"task_id": task_id, "reset_to": step_name, "work_dir": str(work_dir)}
@staticmethod
def _resolve_work_dir(task) -> Path: # type: ignore[no-untyped-def]
source = Path(task.source_path)
return source.parent if source.is_file() else source
def _resolve_work_dir(task, session_dir: Path) -> Path: # type: ignore[no-untyped-def]
source = Path(task.source_path).resolve()
work_dir = source.parent if source.is_file() else source
try:
work_dir.relative_to(session_dir)
except ValueError as exc:
raise RuntimeError(f"task work_dir outside managed session_dir: {work_dir}") from exc
return work_dir
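The `relative_to` guard in `_resolve_work_dir` (and the similar check in the cleanup service) confines destructive operations to the managed session directory. A standalone sketch of the pattern:

```python
from pathlib import Path

def is_inside(work_dir: Path, session_dir: Path) -> bool:
    # relative_to raises ValueError when work_dir is not under session_dir.
    # Resolving first matters: the check is lexical, so ".." segments and
    # symlinks must be collapsed before comparing.
    try:
        work_dir.resolve().relative_to(session_dir.resolve())
    except ValueError:
        return False
    return True

inside = is_inside(Path("/srv/sessions/task1"), Path("/srv/sessions"))
outside = is_inside(Path("/etc/passwd"), Path("/srv/sessions"))
```

Raising instead of silently skipping (as `_resolve_work_dir` does) turns a mis-registered `source_path` into a loud error before any files are deleted.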
@staticmethod
def _remove_path(path: Path) -> None:

View File

@@ -20,8 +20,13 @@ class WorkspaceCleanupService:
skipped: list[str] = []
if settings.get("delete_source_video_after_collection_synced", False):
source_path = Path(task.source_path)
if source_path.exists():
source_path = Path(task.source_path).resolve()
try:
source_path.relative_to(session_dir)
source_managed = True
except ValueError:
source_managed = False
if source_path.exists() and source_managed:
source_path.unlink()
self.repo.delete_artifact_by_path(task_id, str(source_path))
removed.append(str(source_path))

View File

@@ -2,47 +2,42 @@ from __future__ import annotations
import json
import random
import re
import subprocess
import time
from pathlib import Path
from typing import Any
import requests
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Task
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.adapters.bilibili_api import BilibiliApiAdapter
from biliup_next.infra.adapters.full_video_locator import resolve_full_video_bvid
class LegacyBilibiliCollectionProvider:
class BilibiliCollectionProvider:
def __init__(self, bilibili_api: BilibiliApiAdapter | None = None) -> None:
self.bilibili_api = bilibili_api or BilibiliApiAdapter()
self._section_cache: dict[int, int | None] = {}
manifest = ProviderManifest(
id="bilibili_collection",
name="Legacy Bilibili Collection Provider",
name="Bilibili Collection Provider",
version="0.1.0",
provider_type="collection_provider",
entrypoint="biliup_next.infra.adapters.bilibili_collection_legacy:LegacyBilibiliCollectionProvider",
entrypoint="biliup_next.modules.collection.providers.bilibili_collection:BilibiliCollectionProvider",
capabilities=["collection"],
enabled_by_default=True,
)
def __init__(self) -> None:
self._section_cache: dict[int, int | None] = {}
def sync(self, task: Task, target: str, settings: dict[str, Any]) -> dict[str, object]:
session_dir = Path(str(settings["session_dir"])) / task.title
cookies = self._load_cookies(Path(str(settings["cookies_file"])))
cookies = self.bilibili_api.load_cookies(Path(str(settings["cookies_file"])))
csrf = cookies.get("bili_jct")
if not csrf:
raise ModuleError(code="COOKIE_CSRF_MISSING", message="Cookie 缺少 bili_jct", retryable=False)
session = requests.Session()
session.cookies.update(cookies)
session.headers.update(
{
"User-Agent": "Mozilla/5.0",
"Referer": "https://member.bilibili.com/platform/upload-manager/distribution",
}
session = self.bilibili_api.build_session(
cookies=cookies,
referer="https://member.bilibili.com/platform/upload-manager/distribution",
)
if target == "a":
@@ -69,12 +64,11 @@ class LegacyBilibiliCollectionProvider:
raise ModuleError(code="COLLECTION_SECTION_NOT_FOUND", message=f"未找到合集 section: {season_id}", retryable=True)
info = self._get_video_info(session, bvid)
episodes = [info]
add_result = self._add_videos_batch(session, csrf, section_id, episodes)
add_result = self._add_videos_batch(session, csrf, section_id, [info])
if add_result["status"] == "failed":
raise ModuleError(
code="COLLECTION_ADD_FAILED",
message=add_result["message"],
message=str(add_result["message"]),
retryable=True,
details=add_result,
)
@@ -83,21 +77,13 @@ class LegacyBilibiliCollectionProvider:
if add_result["status"] == "added":
append_key = "append_collection_a_new_to_end" if target == "a" else "append_collection_b_new_to_end"
if settings.get(append_key, True):
self._move_videos_to_section_end(session, csrf, section_id, [info["aid"]])
self._move_videos_to_section_end(session, csrf, section_id, [int(info["aid"])])
return {"status": add_result["status"], "target": target, "bvid": bvid, "season_id": season_id}
@staticmethod
def _load_cookies(path: Path) -> dict[str, str]:
with path.open("r", encoding="utf-8") as f:
data = json.load(f)
if "cookie_info" in data:
return {c["name"]: c["value"] for c in data.get("cookie_info", {}).get("cookies", [])}
return data
def _resolve_section_id(self, session: requests.Session, season_id: int) -> int | None:
def _resolve_section_id(self, session, season_id: int) -> int | None: # type: ignore[no-untyped-def]
if season_id in self._section_cache:
return self._section_cache[season_id]
result = session.get("https://member.bilibili.com/x2/creative/web/seasons", params={"pn": 1, "ps": 50}, timeout=15).json()
result = self.bilibili_api.list_seasons(session)
if result.get("code") != 0:
return None
for season in result.get("data", {}).get("seasons", []):
@@ -109,40 +95,31 @@ class LegacyBilibiliCollectionProvider:
self._section_cache[season_id] = None
return None
@staticmethod
def _get_video_info(session: requests.Session, bvid: str) -> dict[str, object]:
result = session.get("https://api.bilibili.com/x/web-interface/view", params={"bvid": bvid}, timeout=15).json()
if result.get("code") != 0:
raise ModuleError(
code="COLLECTION_VIDEO_INFO_FAILED",
message=f"获取视频信息失败: {result.get('message')}",
retryable=True,
)
data = result["data"]
def _get_video_info(self, session, bvid: str) -> dict[str, object]: # type: ignore[no-untyped-def]
data = self.bilibili_api.get_video_view(
session,
bvid,
error_code="COLLECTION_VIDEO_INFO_FAILED",
error_message="获取视频信息失败",
)
return {"aid": data["aid"], "cid": data["cid"], "title": data["title"], "charging_pay": 0}
@staticmethod
def _add_videos_batch(session: requests.Session, csrf: str, section_id: int, episodes: list[dict[str, object]]) -> dict[str, object]:
def _add_videos_batch(self, session, csrf: str, section_id: int, episodes: list[dict[str, object]]) -> dict[str, object]: # type: ignore[no-untyped-def]
time.sleep(random.uniform(5.0, 10.0))
result = session.post(
"https://member.bilibili.com/x2/creative/web/season/section/episodes/add",
params={"csrf": csrf},
json={"sectionId": section_id, "episodes": episodes},
timeout=20,
).json()
result = self.bilibili_api.add_section_episodes(
session,
csrf=csrf,
section_id=section_id,
episodes=episodes,
)
if result.get("code") == 0:
return {"status": "added"}
if result.get("code") == 20080:
return {"status": "already_exists", "message": result.get("message", "")}
return {"status": "failed", "message": result.get("message", "unknown error"), "code": result.get("code")}
@staticmethod
def _move_videos_to_section_end(session: requests.Session, csrf: str, section_id: int, added_aids: list[int]) -> bool:
detail = session.get(
"https://member.bilibili.com/x2/creative/web/season/section",
params={"id": section_id},
timeout=20,
).json()
def _move_videos_to_section_end(self, session, csrf: str, section_id: int, added_aids: list[int]) -> bool: # type: ignore[no-untyped-def]
detail = self.bilibili_api.get_section_detail(session, section_id=section_id)
if detail.get("code") != 0:
return False
section = detail.get("data", {}).get("section", {})
@@ -168,12 +145,7 @@ class LegacyBilibiliCollectionProvider:
"title": section["title"],
"type": section["type"],
},
"sorts": [{"id": item["id"], "sort": idx + 1} for idx, item in enumerate(ordered)],
"sorts": [{"id": item["id"], "sort": index + 1} for index, item in enumerate(ordered)],
}
result = session.post(
"https://member.bilibili.com/x2/creative/web/season/section/edit",
params={"csrf": csrf},
json=payload,
timeout=20,
).json()
result = self.bilibili_api.edit_section(session, csrf=csrf, payload=payload)
return result.get("code") == 0
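`_move_videos_to_section_end` ultimately boils down to a reorder-and-renumber over the section's episodes. A pure-Python sketch with illustrative field names (the real payload carries more keys):

```python
def reorder_to_end(episodes, added_aids):
    # Existing episodes keep their relative order; the just-added aids move
    # to the tail, and sort values are renumbered from 1.
    kept = [e for e in episodes if e["aid"] not in added_aids]
    added = [e for e in episodes if e["aid"] in added_aids]
    ordered = kept + added
    return [{"id": e["id"], "sort": index + 1} for index, e in enumerate(ordered)]

episodes = [
    {"id": 10, "aid": 1},
    {"id": 11, "aid": 2},
    {"id": 12, "aid": 3},
]
sorts = reorder_to_end(episodes, added_aids=[2])
```

Renumbering the full list (rather than only the moved items) keeps the `sort` sequence dense, which is what the section edit endpoint expects.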

View File

@@ -5,21 +5,23 @@ import time
from pathlib import Path
from typing import Any
import requests
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Task
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.adapters.bilibili_api import BilibiliApiAdapter
from biliup_next.infra.adapters.full_video_locator import resolve_full_video_bvid
class LegacyBilibiliTopCommentProvider:
class BilibiliTopCommentProvider:
def __init__(self, bilibili_api: BilibiliApiAdapter | None = None) -> None:
self.bilibili_api = bilibili_api or BilibiliApiAdapter()
manifest = ProviderManifest(
id="bilibili_top_comment",
name="Legacy Bilibili Top Comment Provider",
name="Bilibili Top Comment Provider",
version="0.1.0",
provider_type="comment_provider",
entrypoint="biliup_next.infra.adapters.bilibili_top_comment_legacy:LegacyBilibiliTopCommentProvider",
entrypoint="biliup_next.modules.comment.providers.bilibili_top_comment:BilibiliTopCommentProvider",
capabilities=["comment"],
enabled_by_default=True,
)
@@ -42,19 +44,15 @@ class LegacyBilibiliTopCommentProvider:
self._touch_comment_flags(session_dir, split_done=True, full_done=True)
return {"status": "skipped", "reason": "comment_content_empty"}
cookies = self._load_cookies(Path(str(settings["cookies_file"])))
cookies = self.bilibili_api.load_cookies(Path(str(settings["cookies_file"])))
csrf = cookies.get("bili_jct")
if not csrf:
raise ModuleError(code="COOKIE_CSRF_MISSING", message="Cookie 缺少 bili_jct", retryable=False)
session = requests.Session()
session.cookies.update(cookies)
session.headers.update(
{
"User-Agent": "Mozilla/5.0",
"Referer": "https://www.bilibili.com/",
"Origin": "https://www.bilibili.com",
}
session = self.bilibili_api.build_session(
cookies=cookies,
referer="https://www.bilibili.com/",
origin="https://www.bilibili.com",
)
split_result = {"status": "skipped", "reason": "disabled"}
@@ -79,7 +77,8 @@ class LegacyBilibiliTopCommentProvider:
if full_bvid and timeline_content:
full_result = self._post_and_top_comment(session, csrf, full_bvid, timeline_content, "full")
else:
full_result = {"status": "skipped", "reason": "full_video_bvid_not_found" if not full_bvid else "timeline_comment_empty"}
reason = "full_video_bvid_not_found" if not full_bvid else "timeline_comment_empty"
full_result = {"status": "skipped", "reason": reason}
full_done = True
(session_dir / "comment_full_done.flag").touch()
elif not full_done:
@@ -92,44 +91,35 @@ class LegacyBilibiliTopCommentProvider:
def _post_and_top_comment(
self,
session: requests.Session,
session,
csrf: str,
bvid: str,
content: str,
target: str,
) -> dict[str, object]:
view = session.get("https://api.bilibili.com/x/web-interface/view", params={"bvid": bvid}, timeout=15).json()
if view.get("code") != 0:
raise ModuleError(
code="COMMENT_VIEW_FAILED",
message=f"获取{target}视频信息失败: {view.get('message')}",
retryable=True,
)
aid = view["data"]["aid"]
add_res = session.post(
"https://api.bilibili.com/x/v2/reply/add",
data={"type": 1, "oid": aid, "message": content, "plat": 1, "csrf": csrf},
timeout=15,
).json()
if add_res.get("code") != 0:
raise ModuleError(
code="COMMENT_POST_FAILED",
message=f"发布{target}评论失败: {add_res.get('message')}",
retryable=True,
)
rpid = add_res["data"]["rpid"]
view = self.bilibili_api.get_video_view(
session,
bvid,
error_code="COMMENT_VIEW_FAILED",
error_message=f"获取{target}视频信息失败",
)
aid = int(view["aid"])
add_res = self.bilibili_api.add_reply(
session,
csrf=csrf,
aid=aid,
content=content,
error_message=f"发布{target}评论失败",
)
rpid = int(add_res["rpid"])
time.sleep(3)
top_res = session.post(
"https://api.bilibili.com/x/v2/reply/top",
data={"type": 1, "oid": aid, "rpid": rpid, "action": 1, "csrf": csrf},
timeout=15,
).json()
if top_res.get("code") != 0:
raise ModuleError(
code="COMMENT_TOP_FAILED",
message=f"置顶{target}评论失败: {top_res.get('message')}",
retryable=True,
)
self.bilibili_api.top_reply(
session,
csrf=csrf,
aid=aid,
rpid=rpid,
error_message=f"置顶{target}评论失败",
)
return {"status": "ok", "bvid": bvid, "aid": aid, "rpid": rpid}
@staticmethod
@@ -161,14 +151,6 @@ class LegacyBilibiliTopCommentProvider:
return "\n".join(lines)
return ""
@staticmethod
def _load_cookies(path: Path) -> dict[str, str]:
with path.open("r", encoding="utf-8") as f:
data = json.load(f)
if "cookie_info" in data:
return {c["name"]: c["value"] for c in data.get("cookie_info", {}).get("cookies", [])}
return data
@staticmethod
def _touch_comment_flags(session_dir: Path, *, split_done: bool, full_done: bool) -> None:
if split_done:

View File

@@ -1,26 +1,51 @@
from __future__ import annotations
import json
import re
import shutil
import subprocess
import time
from datetime import datetime, timedelta
from pathlib import Path
from zoneinfo import ZoneInfo
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Artifact, Task, TaskStep, utc_now_iso
from biliup_next.core.models import Artifact, Task, TaskContext, TaskStep, utc_now_iso
from biliup_next.core.registry import Registry
from biliup_next.infra.task_repository import TaskRepository
SHANGHAI_TZ = ZoneInfo("Asia/Shanghai")
TITLE_PATTERN = re.compile(
r"^(?P<streamer>.+?)\s+(?P<month>\d{2})月(?P<day>\d{2})日\s+(?P<hour>\d{2})时(?P<minute>\d{2})分"
)
class IngestService:
def __init__(self, registry: Registry, repo: TaskRepository):
self.registry = registry
self.repo = repo
def create_task_from_file(self, source_path: Path, settings: dict[str, object]) -> Task:
def create_task_from_file(
self,
source_path: Path,
settings: dict[str, object],
*,
context_payload: dict[str, object] | None = None,
) -> Task:
provider_id = str(settings.get("provider", "local_file"))
provider = self.registry.get("ingest_provider", provider_id)
provider.validate_source(source_path, settings)
source_path = source_path.resolve()
session_dir = Path(str(settings["session_dir"])).resolve()
try:
source_path.relative_to(session_dir)
except ValueError as exc:
raise ModuleError(
code="SOURCE_OUTSIDE_WORKSPACE",
message=f"源文件不在 session 工作区内: {source_path}",
retryable=False,
details={"session_dir": str(session_dir), "hint": "请先使用 stage/import 或 stage/upload 导入文件"},
) from exc
task_id = source_path.stem
if self.repo.get_task(task_id):
@@ -31,10 +56,11 @@ class IngestService:
)
now = utc_now_iso()
context_payload = context_payload or {}
task = Task(
id=task_id,
source_type="local_file",
source_path=str(source_path.resolve()),
source_path=str(source_path),
title=source_path.stem,
status="created",
created_at=now,
@@ -59,11 +85,22 @@ class IngestService:
id=None,
task_id=task_id,
artifact_type="source_video",
path=str(source_path.resolve()),
path=str(source_path),
metadata_json=json.dumps({"provider": provider_id}),
created_at=now,
)
)
context = self._build_task_context(
task,
context_payload,
created_at=now,
updated_at=now,
session_gap_minutes=int(settings.get("session_gap_minutes", 60)),
)
self.repo.upsert_task_context(context)
full_video_bvid = (context.full_video_bvid or "").strip()
if full_video_bvid.startswith("BV"):
(source_path.parent / "full_video_bvid.txt").write_text(full_video_bvid, encoding="utf-8")
return task
def scan_stage(self, settings: dict[str, object]) -> dict[str, object]:
@@ -123,10 +160,27 @@ class IngestService:
)
continue
sidecar_meta = self._load_sidecar_metadata(
source_path,
enabled=bool(settings.get("meta_sidecar_enabled", True)),
suffix=str(settings.get("meta_sidecar_suffix", ".meta.json")),
)
task_dir = session_dir / task_id
task_dir.mkdir(parents=True, exist_ok=True)
target_source = self._move_to_directory(source_path, task_dir)
task = self.create_task_from_file(target_source, settings)
if sidecar_meta["meta_path"] is not None:
self._move_optional_metadata_file(sidecar_meta["meta_path"], task_dir)
context_payload = {
"source_title": source_path.stem,
"segment_duration_seconds": duration_seconds,
"segment_started_at": sidecar_meta["payload"].get("segment_started_at"),
"streamer": sidecar_meta["payload"].get("streamer"),
"room_id": sidecar_meta["payload"].get("room_id"),
"session_key": sidecar_meta["payload"].get("session_key"),
"full_video_bvid": sidecar_meta["payload"].get("full_video_bvid"),
"reference_timestamp": sidecar_meta["payload"].get("reference_timestamp") or source_path.stat().st_mtime,
}
task = self.create_task_from_file(target_source, settings, context_payload=context_payload)
accepted.append(
{
"task_id": task.id,
@@ -199,3 +253,202 @@ class IngestService:
if not candidate.exists():
return candidate
index += 1
@staticmethod
def _load_sidecar_metadata(source_path: Path, *, enabled: bool, suffix: str) -> dict[str, object]:
if not enabled:
return {"meta_path": None, "payload": {}}
suffix = suffix.strip() or ".meta.json"
meta_path = source_path.with_name(f"{source_path.stem}{suffix}")
payload: dict[str, object] = {}
if meta_path.exists():
try:
payload = json.loads(meta_path.read_text(encoding="utf-8"))
except json.JSONDecodeError as exc:
raise ModuleError(
code="STAGE_META_INVALID",
message=f"元数据文件不是合法 JSON: {meta_path.name}",
retryable=False,
) from exc
if not isinstance(payload, dict):
raise ModuleError(
code="STAGE_META_INVALID",
message=f"元数据文件必须是对象: {meta_path.name}",
retryable=False,
)
return {"meta_path": meta_path if meta_path.exists() else None, "payload": payload}
def _move_optional_metadata_file(self, meta_path: Path, task_dir: Path) -> None:
if not meta_path.exists():
return
self._move_to_directory(meta_path, task_dir)
def _build_task_context(
self,
task: Task,
context_payload: dict[str, object],
*,
created_at: str,
updated_at: str,
session_gap_minutes: int,
) -> TaskContext:
source_title = self._clean_text(context_payload.get("source_title")) or task.title
streamer = self._clean_text(context_payload.get("streamer"))
room_id = self._clean_text(context_payload.get("room_id"))
session_key = self._clean_text(context_payload.get("session_key"))
full_video_bvid = self._clean_bvid(context_payload.get("full_video_bvid"))
segment_duration = self._coerce_float(context_payload.get("segment_duration_seconds"))
segment_started_at = self._coerce_iso_datetime(context_payload.get("segment_started_at"))
if streamer is None or segment_started_at is None:
inferred = self._infer_from_title(
source_title,
reference_timestamp=context_payload.get("reference_timestamp"),
)
if streamer is None:
streamer = inferred.get("streamer")
if segment_started_at is None:
segment_started_at = inferred.get("segment_started_at")
if session_key is None:
session_key, inherited_bvid = self._infer_session_key(
streamer=streamer,
room_id=room_id,
segment_started_at=segment_started_at,
segment_duration_seconds=segment_duration,
fallback_task_id=task.id,
gap_minutes=session_gap_minutes,
)
if full_video_bvid is None:
full_video_bvid = inherited_bvid
elif full_video_bvid is None:
full_video_bvid = self._find_full_video_bvid_by_session_key(session_key)
if full_video_bvid is None:
binding = self.repo.get_session_binding(session_key=session_key, source_title=source_title)
if binding is not None:
if session_key is None and binding.session_key:
session_key = binding.session_key
full_video_bvid = self._clean_bvid(binding.full_video_bvid)
if session_key is None:
session_key = f"task:{task.id}"
return TaskContext(
id=None,
task_id=task.id,
session_key=session_key,
streamer=streamer,
room_id=room_id,
source_title=source_title,
segment_started_at=segment_started_at,
segment_duration_seconds=segment_duration,
full_video_bvid=full_video_bvid,
created_at=created_at,
updated_at=updated_at,
)
@staticmethod
def _clean_text(value: object) -> str | None:
if value is None:
return None
text = str(value).strip()
return text or None
@staticmethod
def _clean_bvid(value: object) -> str | None:
text = IngestService._clean_text(value)
if text and text.startswith("BV"):
return text
return None
@staticmethod
def _coerce_float(value: object) -> float | None:
if value is None or value == "":
return None
try:
return float(value)
except (TypeError, ValueError):
return None
@staticmethod
def _coerce_iso_datetime(value: object) -> str | None:
if value is None:
return None
text = str(value).strip()
if not text:
return None
try:
return datetime.fromisoformat(text).astimezone(SHANGHAI_TZ).isoformat()
except ValueError:
return None
def _infer_from_title(self, title: str, *, reference_timestamp: object) -> dict[str, str | None]:
match = TITLE_PATTERN.match(title)
if not match:
return {"streamer": None, "segment_started_at": None}
reference_dt = self._reference_datetime(reference_timestamp)
month = int(match.group("month"))
day = int(match.group("day"))
hour = int(match.group("hour"))
minute = int(match.group("minute"))
year = reference_dt.year
if (month, day) > (reference_dt.month, reference_dt.day):
year -= 1
started_at = datetime(year, month, day, hour, minute, tzinfo=SHANGHAI_TZ)
return {
"streamer": match.group("streamer").strip(),
"segment_started_at": started_at.isoformat(),
}
@staticmethod
def _reference_datetime(reference_timestamp: object) -> datetime:
if isinstance(reference_timestamp, (int, float)):
return datetime.fromtimestamp(float(reference_timestamp), tz=SHANGHAI_TZ)
return datetime.now(tz=SHANGHAI_TZ)
def _infer_session_key(
self,
*,
streamer: str | None,
room_id: str | None,
segment_started_at: str | None,
segment_duration_seconds: float | None,
fallback_task_id: str,
gap_minutes: int,
) -> tuple[str | None, str | None]:
if not streamer or not segment_started_at:
return None, None
try:
segment_start = datetime.fromisoformat(segment_started_at)
except ValueError:
return None, None
tolerance = timedelta(minutes=max(gap_minutes, 0))
for context in self.repo.find_recent_task_contexts(streamer):
if room_id and context.room_id and room_id != context.room_id:
continue
candidate_end = self._context_end_time(context)
if candidate_end is None:
continue
if segment_start >= candidate_end and segment_start - candidate_end <= tolerance:
return context.session_key, context.full_video_bvid
date_tag = segment_start.astimezone(SHANGHAI_TZ).strftime("%Y%m%dT%H%M")
return f"{streamer}:{date_tag}", None
@staticmethod
def _context_end_time(context: TaskContext) -> datetime | None:
if not context.segment_started_at or context.segment_duration_seconds is None:
return None
try:
started_at = datetime.fromisoformat(context.segment_started_at)
except ValueError:
return None
return started_at + timedelta(seconds=float(context.segment_duration_seconds))
def _find_full_video_bvid_by_session_key(self, session_key: str) -> str | None:
for context in self.repo.list_task_contexts_by_session_key(session_key):
bvid = self._clean_bvid(context.full_video_bvid)
if bvid:
return bvid
return None
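`TITLE_PATTERN` and `_infer_from_title` recover the streamer and start time from titles shaped like `主播名 04月07日 10时46分 …`, with a year-rollover guard for segments recorded late in the previous year. A reduced standalone copy of that logic:

```python
import re
from datetime import datetime
from zoneinfo import ZoneInfo

SHANGHAI_TZ = ZoneInfo("Asia/Shanghai")
TITLE_PATTERN = re.compile(
    r"^(?P<streamer>.+?)\s+(?P<month>\d{2})月(?P<day>\d{2})日\s+(?P<hour>\d{2})时(?P<minute>\d{2})分"
)

def infer_from_title(title, reference_dt):
    match = TITLE_PATTERN.match(title)
    if not match:
        return None
    month, day = int(match.group("month")), int(match.group("day"))
    hour, minute = int(match.group("hour")), int(match.group("minute"))
    year = reference_dt.year
    # A month/day later than the reference date must belong to last year.
    if (month, day) > (reference_dt.month, reference_dt.day):
        year -= 1
    started_at = datetime(year, month, day, hour, minute, tzinfo=SHANGHAI_TZ)
    return {
        "streamer": match.group("streamer").strip(),
        "segment_started_at": started_at.isoformat(),
    }

ref = datetime(2026, 1, 5, tzinfo=SHANGHAI_TZ)
same_year = infer_from_title("theshy 01月04日 20时30分 直播录像", ref)
rollover = infer_from_title("theshy 12月31日 23时00分 直播录像", ref)
```

The reference timestamp (file mtime in the service) anchors the year, since the title itself only carries month and day.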

View File

@@ -0,0 +1,247 @@
from __future__ import annotations
import json
import random
import re
import time
from pathlib import Path
from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import PublishRecord, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.adapters.biliup_cli import BiliupCliAdapter
class BiliupCliPublishProvider:
def __init__(self, adapter: BiliupCliAdapter | None = None) -> None:
self.adapter = adapter or BiliupCliAdapter()
manifest = ProviderManifest(
id="biliup_cli",
name="biliup CLI Publish Provider",
version="0.1.0",
provider_type="publish_provider",
entrypoint="biliup_next.modules.publish.providers.biliup_cli:BiliupCliPublishProvider",
capabilities=["publish"],
enabled_by_default=True,
)
def publish(self, task: Task, clip_videos: list, settings: dict[str, Any]) -> PublishRecord:
work_dir = Path(str(settings["session_dir"])) / task.title
bvid_file = work_dir / "bvid.txt"
upload_done = work_dir / "upload_done.flag"
config = self._load_upload_config(Path(str(settings["upload_config_file"])))
video_files = [artifact.path for artifact in clip_videos]
if not video_files:
raise ModuleError(
code="PUBLISH_NO_CLIPS",
message=f"没有可上传的切片: {task.id}",
retryable=False,
)
parsed = self._parse_filename(task.title, config)
streamer = parsed.get("streamer", task.title)
date = parsed.get("date", "")
songs_txt = work_dir / "songs.txt"
songs_json = work_dir / "songs.json"
songs_list = songs_txt.read_text(encoding="utf-8").strip() if songs_txt.exists() else ""
song_count = 0
if songs_json.exists():
song_count = len(json.loads(songs_json.read_text(encoding="utf-8")).get("songs", []))
quote = self._get_random_quote(config)
template_vars = {
"streamer": streamer,
"date": date,
"song_count": song_count,
"songs_list": songs_list,
"daily_quote": quote.get("text", ""),
"quote_author": quote.get("author", ""),
}
template = config.get("template", {})
title = template.get("title", "{streamer}_{date}").format(**template_vars)
description = template.get("description", "{songs_list}").format(**template_vars)
dynamic = template.get("dynamic", "").format(**template_vars)
tags = template.get("tag", "翻唱,唱歌,音乐").format(**template_vars)
streamer_cfg = config.get("streamers", {})
if streamer in streamer_cfg:
tags = streamer_cfg[streamer].get("tags", tags)
upload_settings = config.get("upload_settings", {})
tid = upload_settings.get("tid", 31)
biliup_path = str(settings["biliup_path"])
cookie_file = str(settings["cookie_file"])
retry_count = max(1, int(settings.get("retry_count", 5)))
self.adapter.run_optional([biliup_path, "-u", cookie_file, "renew"])
first_batch = video_files[:5]
remaining_batches = [video_files[i:i + 5] for i in range(5, len(video_files), 5)]
existing_bvid = bvid_file.read_text(encoding="utf-8").strip() if bvid_file.exists() else ""
if upload_done.exists() and existing_bvid.startswith("BV"):
return PublishRecord(
id=None,
task_id=task.id,
platform="bilibili",
aid=None,
bvid=existing_bvid,
title=title,
published_at=utc_now_iso(),
)
bvid = existing_bvid if existing_bvid.startswith("BV") else self._upload_first_batch(
biliup_path=biliup_path,
cookie_file=cookie_file,
first_batch=first_batch,
title=title,
tid=tid,
tags=tags,
description=description,
dynamic=dynamic,
upload_settings=upload_settings,
retry_count=retry_count,
)
bvid_file.write_text(bvid, encoding="utf-8")
for batch_index, batch in enumerate(remaining_batches, start=2):
self._append_batch(
biliup_path=biliup_path,
cookie_file=cookie_file,
bvid=bvid,
batch=batch,
batch_index=batch_index,
retry_count=retry_count,
)
upload_done.touch()
return PublishRecord(
id=None,
task_id=task.id,
platform="bilibili",
aid=None,
bvid=bvid,
title=title,
published_at=utc_now_iso(),
)
def _upload_first_batch(
self,
*,
biliup_path: str,
cookie_file: str,
first_batch: list[str],
title: str,
tid: int,
tags: str,
description: str,
dynamic: str,
upload_settings: dict[str, Any],
retry_count: int,
) -> str:
upload_cmd = [
biliup_path,
"-u",
cookie_file,
"upload",
*first_batch,
"--title",
title,
"--tid",
str(tid),
"--tag",
tags,
"--copyright",
str(upload_settings.get("copyright", 2)),
"--source",
str(upload_settings.get("source", "直播回放")),
"--desc",
description,
]
if dynamic:
upload_cmd.extend(["--dynamic", dynamic])
cover = str(upload_settings.get("cover", "")).strip()
if cover and Path(cover).exists():
upload_cmd.extend(["--cover", cover])
for attempt in range(1, retry_count + 1):
result = self.adapter.run(upload_cmd, label=f"首批上传[{attempt}/{retry_count}]")
if result.returncode == 0:
match = re.search(r'"bvid":"(BV[A-Za-z0-9]+)"', result.stdout) or re.search(r"(BV[A-Za-z0-9]+)", result.stdout)
if match:
return match.group(1)
if attempt < retry_count:
time.sleep(self._wait_seconds(attempt - 1))
continue
raise ModuleError(
code="PUBLISH_UPLOAD_FAILED",
message="首批上传失败",
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
raise AssertionError("unreachable")
def _append_batch(
self,
*,
biliup_path: str,
cookie_file: str,
bvid: str,
batch: list[str],
batch_index: int,
retry_count: int,
) -> None:
time.sleep(45)
append_cmd = [biliup_path, "-u", cookie_file, "append", "--vid", bvid, *batch]
for attempt in range(1, retry_count + 1):
result = self.adapter.run(append_cmd, label=f"追加第{batch_index}批[{attempt}/{retry_count}]")
if result.returncode == 0:
return
if attempt < retry_count:
time.sleep(self._wait_seconds(attempt - 1))
continue
raise ModuleError(
code="PUBLISH_APPEND_FAILED",
message=f"追加第 {batch_index} 批失败",
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
@staticmethod
def _wait_seconds(retry_index: int) -> int:
return min(300 * (2**retry_index), 3600)
@staticmethod
def _load_upload_config(path: Path) -> dict[str, Any]:
if not path.exists():
return {}
return json.loads(path.read_text(encoding="utf-8"))
@staticmethod
def _parse_filename(filename: str, config: dict[str, Any] | None = None) -> dict[str, str]:
config = config or {}
patterns = config.get("filename_patterns", {}).get("patterns", [])
for pattern_config in patterns:
regex = pattern_config.get("regex")
if not regex:
continue
match = re.match(regex, filename)
if match:
data = match.groupdict()
date_format = pattern_config.get("date_format", "{date}")
try:
data["date"] = date_format.format(**data)
except KeyError:
pass
return data
return {"streamer": filename, "date": ""}
@staticmethod
def _get_random_quote(config: dict[str, Any]) -> dict[str, str]:
quotes = config.get("quotes", [])
if not quotes:
return {"text": "", "author": ""}
return random.choice(quotes)
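`_wait_seconds` above implements capped exponential backoff between upload retries; the resulting schedule can be checked in isolation with the same formula:

```python
def wait_seconds(retry_index: int) -> int:
    # Same formula as BiliupCliPublishProvider._wait_seconds:
    # 5 minutes, doubled per retry, capped at 1 hour.
    return min(300 * (2 ** retry_index), 3600)

print([wait_seconds(i) for i in range(6)])  # [300, 600, 1200, 2400, 3600, 3600]
```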


@@ -1,15 +1,13 @@
from __future__ import annotations
import json
import os
import subprocess
from pathlib import Path
from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Artifact, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.legacy_paths import legacy_project_root
from biliup_next.infra.adapters.codex_cli import CodexCliAdapter
SONG_SCHEMA = {
"type": "object",
@@ -24,15 +22,15 @@ SONG_SCHEMA = {
"title": {"type": "string"},
"artist": {"type": "string"},
"confidence": {"type": "number"},
"evidence": {"type": "string"}
"evidence": {"type": "string"},
},
"required": ["start", "end", "title", "artist", "confidence", "evidence"],
"additionalProperties": False
}
"additionalProperties": False,
},
}
},
"required": ["songs"],
"additionalProperties": False
"additionalProperties": False,
}
TASK_PROMPT = """你是音乐片段识别助手。当前目录下有一个字幕文件。
@@ -57,47 +55,34 @@ TASK_PROMPT = """你是音乐片段识别助手。当前目录下有一个字幕文件。
最后请严格按照 Schema 生成 JSON 数据"""
class LegacyCodexSongDetector:
class CodexSongDetector:
def __init__(self, adapter: CodexCliAdapter | None = None) -> None:
self.adapter = adapter or CodexCliAdapter()
manifest = ProviderManifest(
id="codex",
name="Legacy Codex Song Detector",
name="Codex Song Detector",
version="0.1.0",
provider_type="song_detector",
entrypoint="biliup_next.infra.adapters.codex_legacy:LegacyCodexSongDetector",
entrypoint="biliup_next.modules.song_detect.providers.codex:CodexSongDetector",
capabilities=["song_detect"],
enabled_by_default=True,
)
def __init__(self, next_root: Path):
self.next_root = next_root
self.legacy_root = legacy_project_root(next_root)
def detect(self, task: Task, subtitle_srt: Artifact, settings: dict[str, Any]) -> tuple[Artifact, Artifact]:
work_dir = Path(subtitle_srt.path).parent
work_dir = Path(subtitle_srt.path).resolve().parent
schema_path = work_dir / "song_schema.json"
songs_json_path = work_dir / "songs.json"
songs_txt_path = work_dir / "songs.txt"
schema_path.write_text(json.dumps(SONG_SCHEMA, ensure_ascii=False, indent=2), encoding="utf-8")
env = {
**os.environ,
"CODEX_CMD": str(settings.get("codex_cmd", "codex")),
}
cmd = [
str(settings.get("codex_cmd", "codex")),
"exec",
TASK_PROMPT.replace("\n", " "),
"--full-auto",
"--sandbox", "workspace-write",
"--output-schema", "./song_schema.json",
"-o", "songs.json",
"--skip-git-repo-check",
"--json",
]
result = subprocess.run(
cmd,
cwd=str(work_dir),
capture_output=True,
text=True,
env=env,
codex_cmd = str(settings.get("codex_cmd", "codex"))
result = self.adapter.run_song_detect(
codex_cmd=codex_cmd,
work_dir=work_dir,
prompt=TASK_PROMPT,
)
if result.returncode != 0:
raise ModuleError(
code="SONG_DETECT_FAILED",
@@ -105,36 +90,49 @@ class LegacyCodexSongDetector:
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
songs_json = work_dir / "songs.json"
songs_txt = work_dir / "songs.txt"
if songs_json.exists() and not songs_txt.exists():
data = json.loads(songs_json.read_text(encoding="utf-8"))
with songs_txt.open("w", encoding="utf-8") as f:
for song in data.get("songs", []):
start_time = song["start"].split(",")[0].split(".")[0]
f.write(f"{start_time} {song['title']}{song['artist']}\n")
if not songs_json.exists() or not songs_txt.exists():
if songs_json_path.exists() and not songs_txt_path.exists():
self._generate_txt_fallback(songs_json_path, songs_txt_path)
if not songs_json_path.exists() or not songs_txt_path.exists():
raise ModuleError(
code="SONG_DETECT_OUTPUT_MISSING",
message=f"未生成 songs.json/songs.txt: {work_dir}",
retryable=True,
details={"stdout": result.stdout[-2000:], "stderr": result.stderr[-2000:]},
)
return (
Artifact(
id=None,
task_id=task.id,
artifact_type="songs_json",
path=str(songs_json),
metadata_json=json.dumps({"provider": "codex_legacy"}),
path=str(songs_json_path.resolve()),
metadata_json=json.dumps({"provider": "codex"}),
created_at=utc_now_iso(),
),
Artifact(
id=None,
task_id=task.id,
artifact_type="songs_txt",
path=str(songs_txt),
metadata_json=json.dumps({"provider": "codex_legacy"}),
path=str(songs_txt_path.resolve()),
metadata_json=json.dumps({"provider": "codex"}),
created_at=utc_now_iso(),
),
)
def _generate_txt_fallback(self, songs_json_path: Path, songs_txt_path: Path) -> None:
try:
data = json.loads(songs_json_path.read_text(encoding="utf-8"))
songs = data.get("songs", [])
with songs_txt_path.open("w", encoding="utf-8") as file_handle:
for song in songs:
start_time = str(song["start"]).split(",")[0].split(".")[0]
file_handle.write(f"{start_time} {song['title']}{song['artist']}\n")
except Exception as exc: # noqa: BLE001
raise ModuleError(
code="SONGS_TXT_GENERATE_FAILED",
message=f"生成 songs.txt 失败: {songs_txt_path}",
retryable=False,
details={"error": str(exc)},
) from exc
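`_generate_txt_fallback` derives the clock time shown in `songs.txt` by stripping the SRT millisecond suffix, whether it uses a comma or a dot. The same expression in isolation:

```python
def clock_time(srt_start: str) -> str:
    # "HH:MM:SS,mmm" or "HH:MM:SS.mmm" -> "HH:MM:SS",
    # exactly str(song["start"]).split(",")[0].split(".")[0] from above.
    return str(srt_start).split(",")[0].split(".")[0]

print(clock_time("00:12:34,567"))  # 00:12:34
print(clock_time("00:12:34.567"))  # 00:12:34
```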


@@ -8,33 +8,28 @@ from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Artifact, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
from biliup_next.infra.legacy_paths import legacy_project_root
class LegacyFfmpegSplitProvider:
class FfmpegCopySplitProvider:
manifest = ProviderManifest(
id="ffmpeg_copy",
name="Legacy FFmpeg Split Provider",
name="FFmpeg Copy Split Provider",
version="0.1.0",
provider_type="split_provider",
entrypoint="biliup_next.infra.adapters.ffmpeg_split_legacy:LegacyFfmpegSplitProvider",
entrypoint="biliup_next.modules.split.providers.ffmpeg_copy:FfmpegCopySplitProvider",
capabilities=["split"],
enabled_by_default=True,
)
def __init__(self, next_root: Path):
self.next_root = next_root
self.legacy_root = legacy_project_root(next_root)
def split(self, task: Task, songs_json: Artifact, source_video: Artifact, settings: dict[str, Any]) -> list[Artifact]:
work_dir = Path(songs_json.path).parent
work_dir = Path(songs_json.path).resolve().parent
split_dir = work_dir / "split_video"
split_done = work_dir / "split_done.flag"
if split_done.exists() and split_dir.exists():
return self._collect_existing_clips(task.id, split_dir)
with Path(songs_json.path).open("r", encoding="utf-8") as f:
data = json.load(f)
with Path(songs_json.path).open("r", encoding="utf-8") as file_handle:
data = json.load(file_handle)
songs = data.get("songs", [])
if not songs:
raise ModuleError(
@@ -45,32 +40,45 @@ class LegacyFfmpegSplitProvider:
split_dir.mkdir(parents=True, exist_ok=True)
ffmpeg_bin = str(settings.get("ffmpeg_bin", "ffmpeg"))
video_path = Path(source_video.path)
for idx, song in enumerate(songs, 1):
video_path = Path(source_video.path).resolve()
for index, song in enumerate(songs, 1):
start = str(song.get("start", "00:00:00,000")).replace(",", ".")
end = str(song.get("end", "00:00:00,000")).replace(",", ".")
title = str(song.get("title", "UNKNOWN")).replace("/", "_").replace("\\", "_")
output_path = split_dir / f"{idx:02d}_{title}{video_path.suffix}"
output_path = split_dir / f"{index:02d}_{title}{video_path.suffix}"
if output_path.exists():
continue
cmd = [
ffmpeg_bin,
"-y",
"-ss", start,
"-to", end,
"-i", str(video_path),
"-c", "copy",
"-map_metadata", "0",
"-ss",
start,
"-to",
end,
"-i",
str(video_path),
"-c",
"copy",
"-map_metadata",
"0",
str(output_path),
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
try:
subprocess.run(cmd, capture_output=True, text=True, check=True)
except FileNotFoundError as exc:
raise ModuleError(
code="FFMPEG_NOT_FOUND",
message=f"找不到 ffmpeg: {ffmpeg_bin}",
retryable=False,
) from exc
except subprocess.CalledProcessError as exc:
raise ModuleError(
code="SPLIT_FFMPEG_FAILED",
message=f"ffmpeg 切割失败: {output_path.name}",
retryable=True,
details={"stderr": result.stderr[-2000:]},
)
details={"stderr": exc.stderr[-2000:], "stdout": exc.stdout[-2000:]},
) from exc
split_done.touch()
return self._collect_existing_clips(task.id, split_dir)
@@ -78,15 +86,16 @@
def _collect_existing_clips(self, task_id: str, split_dir: Path) -> list[Artifact]:
artifacts: list[Artifact] = []
for path in sorted(split_dir.iterdir()):
if path.is_file():
artifacts.append(
Artifact(
id=None,
task_id=task_id,
artifact_type="clip_video",
path=str(path),
metadata_json=json.dumps({"provider": "ffmpeg_copy"}),
created_at=utc_now_iso(),
)
if not path.is_file():
continue
artifacts.append(
Artifact(
id=None,
task_id=task_id,
artifact_type="clip_video",
path=str(path.resolve()),
metadata_json=json.dumps({"provider": "ffmpeg_copy"}),
created_at=utc_now_iso(),
)
)
return artifacts
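The clip file names built in `split` combine a zero-padded index, a sanitized title (path separators replaced), and the source container's suffix, so copy-mode splitting keeps the original format. A standalone sketch of that naming rule (helper name is illustrative):

```python
from pathlib import Path

def clip_name(index: int, title: str, source: Path) -> str:
    # Mirrors the naming above; replacing "/" and "\" keeps the
    # song title a single path component.
    safe = title.replace("/", "_").replace("\\", "_")
    return f"{index:02d}_{safe}{source.suffix}"

print(clip_name(3, "A/B\\C", Path("full.mp4")))  # 03_A_B_C.mp4
```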


@@ -0,0 +1,191 @@
from __future__ import annotations
import json
import math
import shutil
import subprocess
import time
from pathlib import Path
from typing import Any
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import Artifact, Task, utc_now_iso
from biliup_next.core.providers import ProviderManifest
LANGUAGE = "zh"
BITRATE_KBPS = 64
MODEL_NAME = "whisper-large-v3-turbo"
class GroqTranscribeProvider:
manifest = ProviderManifest(
id="groq",
name="Groq Transcribe Provider",
version="0.1.0",
provider_type="transcribe_provider",
entrypoint="biliup_next.modules.transcribe.providers.groq:GroqTranscribeProvider",
capabilities=["transcribe"],
enabled_by_default=True,
)
def transcribe(self, task: Task, source_video: Artifact, settings: dict[str, Any]) -> Artifact:
groq_api_key = str(settings.get("groq_api_key", "")).strip()
if not groq_api_key:
raise ModuleError(
code="GROQ_API_KEY_MISSING",
message="未配置 transcribe.groq_api_key",
retryable=False,
)
try:
from groq import Groq
except ModuleNotFoundError as exc:
raise ModuleError(
code="GROQ_DEPENDENCY_MISSING",
message="未安装 groq 依赖,请在 biliup-next 环境中执行 pip install -e .",
retryable=False,
) from exc
source_path = Path(source_video.path).resolve()
if not source_path.exists():
raise ModuleError(
code="TRANSCRIBE_SOURCE_MISSING",
message=f"源视频不存在: {source_path}",
retryable=False,
)
ffmpeg_bin = str(settings.get("ffmpeg_bin", "ffmpeg"))
max_file_size_mb = int(settings.get("max_file_size_mb", 23))
work_dir = source_path.parent
temp_audio_dir = work_dir / "temp_audio"
temp_audio_dir.mkdir(parents=True, exist_ok=True)
segment_duration = max(1, math.floor((max_file_size_mb * 8 * 1024) / BITRATE_KBPS))
output_pattern = temp_audio_dir / "part_%03d.mp3"
self._extract_audio_segments(
ffmpeg_bin=ffmpeg_bin,
source_path=source_path,
output_pattern=output_pattern,
segment_duration=segment_duration,
)
segments = sorted(temp_audio_dir.glob("part_*.mp3"))
if not segments:
raise ModuleError(
code="TRANSCRIBE_AUDIO_SEGMENTS_MISSING",
message=f"未生成音频分片: {source_path.name}",
retryable=False,
)
client = Groq(api_key=groq_api_key)
srt_path = work_dir / f"{task.title}.srt"
global_idx = 1
try:
with srt_path.open("w", encoding="utf-8") as srt_file:
for index, segment in enumerate(segments):
offset_seconds = index * segment_duration
segment_data = self._transcribe_with_retry(client, segment)
for chunk in segment_data:
start = self._format_srt_time(float(chunk["start"]) + offset_seconds)
end = self._format_srt_time(float(chunk["end"]) + offset_seconds)
text = str(chunk["text"]).strip()
srt_file.write(f"{global_idx}\n{start} --> {end}\n{text}\n\n")
global_idx += 1
finally:
shutil.rmtree(temp_audio_dir, ignore_errors=True)
return Artifact(
id=None,
task_id=task.id,
artifact_type="subtitle_srt",
path=str(srt_path.resolve()),
metadata_json=json.dumps(
{
"provider": "groq",
"model": MODEL_NAME,
"segment_duration_seconds": segment_duration,
}
),
created_at=utc_now_iso(),
)
def _extract_audio_segments(
self,
*,
ffmpeg_bin: str,
source_path: Path,
output_pattern: Path,
segment_duration: int,
) -> None:
cmd = [
ffmpeg_bin,
"-y",
"-i",
str(source_path),
"-vn",
"-acodec",
"libmp3lame",
"-b:a",
f"{BITRATE_KBPS}k",
"-ac",
"1",
"-ar",
"22050",
"-f",
"segment",
"-segment_time",
str(segment_duration),
"-reset_timestamps",
"1",
str(output_pattern),
]
try:
subprocess.run(cmd, check=True, capture_output=True, text=True)
except FileNotFoundError as exc:
raise ModuleError(
code="FFMPEG_NOT_FOUND",
message=f"找不到 ffmpeg: {ffmpeg_bin}",
retryable=False,
) from exc
except subprocess.CalledProcessError as exc:
raise ModuleError(
code="FFMPEG_AUDIO_EXTRACT_FAILED",
message=f"音频提取失败: {source_path.name}",
retryable=True,
details={"stderr": exc.stderr[-2000:], "stdout": exc.stdout[-2000:]},
) from exc
def _transcribe_with_retry(self, client: Any, audio_file: Path) -> list[dict[str, Any]]:
retry_count = 0
while True:
try:
with audio_file.open("rb") as file_handle:
response = client.audio.transcriptions.create(
file=(audio_file.name, file_handle.read()),
model=MODEL_NAME,
response_format="verbose_json",
language=LANGUAGE,
temperature=0.0,
)
return [dict(segment) for segment in response.segments]
except Exception as exc: # noqa: BLE001
retry_count += 1
err_str = str(exc)
if "429" in err_str or "rate_limit" in err_str.lower():
time.sleep(25)
continue
raise ModuleError(
code="GROQ_TRANSCRIBE_FAILED",
message=f"Groq 转录失败: {audio_file.name}",
retryable=True,
details={"error": err_str, "retry_count": retry_count},
) from exc
@staticmethod
def _format_srt_time(seconds: float) -> str:
td_hours = int(seconds // 3600)
td_mins = int((seconds % 3600) // 60)
td_secs = int(seconds % 60)
td_millis = int((seconds - int(seconds)) * 1000)
return f"{td_hours:02}:{td_mins:02}:{td_secs:02},{td_millis:03}"
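Two small calculations in this provider are worth checking in isolation: the segment length that keeps each 64 kbps mono MP3 part under `max_file_size_mb`, and the SRT timestamp format. Both sketches use the same formulas as the methods above:

```python
import math

BITRATE_KBPS = 64

def segment_duration(max_file_size_mb: int) -> int:
    # MB -> kilobits (x 8 x 1024), divided by the encode bitrate in kbps,
    # gives seconds of audio per part; floor plus max(1, ...) as above.
    return max(1, math.floor((max_file_size_mb * 8 * 1024) / BITRATE_KBPS))

def format_srt_time(seconds: float) -> str:
    hours = int(seconds // 3600)
    mins = int((seconds % 3600) // 60)
    secs = int(seconds % 60)
    millis = int((seconds - int(seconds)) * 1000)
    return f"{hours:02}:{mins:02}:{secs:02},{millis:03}"

print(segment_duration(23))     # 2944 seconds per audio part
print(format_srt_time(3723.5))  # 01:02:03,500
```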


@@ -1,9 +1,9 @@
{
"id": "bilibili_collection",
"name": "Legacy Bilibili Collection Provider",
"name": "Bilibili Collection Provider",
"version": "0.1.0",
"provider_type": "collection_provider",
"entrypoint": "biliup_next.infra.adapters.bilibili_collection_legacy:LegacyBilibiliCollectionProvider",
"entrypoint": "biliup_next.modules.collection.providers.bilibili_collection:BilibiliCollectionProvider",
"capabilities": ["collection"],
"enabled_by_default": true
}


@@ -1,9 +1,9 @@
{
"id": "bilibili_top_comment",
"name": "Legacy Bilibili Top Comment Provider",
"name": "Bilibili Top Comment Provider",
"version": "0.1.0",
"provider_type": "comment_provider",
"entrypoint": "biliup_next.infra.adapters.bilibili_top_comment_legacy:LegacyBilibiliTopCommentProvider",
"entrypoint": "biliup_next.modules.comment.providers.bilibili_top_comment:BilibiliTopCommentProvider",
"capabilities": ["comment"],
"enabled_by_default": true
}


@@ -3,7 +3,7 @@
"name": "biliup CLI Publish Provider",
"version": "0.1.0",
"provider_type": "publish_provider",
"entrypoint": "biliup_next.infra.adapters.biliup_publish_legacy:LegacyBiliupPublishProvider",
"entrypoint": "biliup_next.modules.publish.providers.biliup_cli:BiliupCliPublishProvider",
"capabilities": ["publish"],
"enabled_by_default": true
}


@@ -3,7 +3,7 @@
"name": "Codex Song Detector",
"version": "0.1.0",
"provider_type": "song_detector",
"entrypoint": "biliup_next.infra.adapters.codex_legacy:LegacyCodexSongDetector",
"entrypoint": "biliup_next.modules.song_detect.providers.codex:CodexSongDetector",
"capabilities": ["song_detect"],
"enabled_by_default": true
}


@@ -3,7 +3,7 @@
"name": "FFmpeg Copy Split Provider",
"version": "0.1.0",
"provider_type": "split_provider",
"entrypoint": "biliup_next.infra.adapters.ffmpeg_split_legacy:LegacyFfmpegSplitProvider",
"entrypoint": "biliup_next.modules.split.providers.ffmpeg_copy:FfmpegCopySplitProvider",
"capabilities": ["split"],
"enabled_by_default": true
}


@@ -3,7 +3,7 @@
"name": "Groq Transcribe Provider",
"version": "0.1.0",
"provider_type": "transcribe_provider",
"entrypoint": "biliup_next.infra.adapters.groq_legacy:LegacyGroqTranscribeProvider",
"entrypoint": "biliup_next.modules.transcribe.providers.groq:GroqTranscribeProvider",
"capabilities": ["transcribe"],
"enabled_by_default": true
}

1031 tests/test_api_server.py Normal file

File diff suppressed because it is too large


@@ -0,0 +1,149 @@
from __future__ import annotations
import tempfile
import unittest
from http import HTTPStatus
from pathlib import Path
from types import SimpleNamespace
from biliup_next.app.control_plane_get_dispatcher import ControlPlaneGetDispatcher
from biliup_next.core.models import ActionRecord, Task, TaskContext
class FakeRepo:
def __init__(self, task: Task, context: TaskContext | None = None, actions: list[ActionRecord] | None = None) -> None:
self.task = task
self.context = context
self.actions = actions or []
def query_tasks(self, **kwargs): # type: ignore[no-untyped-def]
return [self.task], 1
def get_task(self, task_id: str) -> Task | None:
return self.task if task_id == self.task.id else None
def get_task_context(self, task_id: str) -> TaskContext | None:
return self.context if self.context and self.context.task_id == task_id else None
def list_task_contexts_for_task_ids(self, task_ids: list[str]) -> dict[str, TaskContext]:
if self.context and self.context.task_id in task_ids:
return {self.context.task_id: self.context}
return {}
def list_steps_for_task_ids(self, task_ids: list[str]) -> dict[str, list[object]]:
return {self.task.id: []} if self.task.id in task_ids else {}
def list_task_contexts_by_session_key(self, session_key: str) -> list[TaskContext]:
if self.context and self.context.session_key == session_key:
return [self.context]
return []
def list_steps(self, task_id: str) -> list[object]:
return []
def list_artifacts(self, task_id: str) -> list[object]:
return []
def list_action_records(
self,
task_id: str | None = None,
limit: int = 200,
action_name: str | None = None,
status: str | None = None,
) -> list[ActionRecord]:
items = list(self.actions)
if task_id is not None:
items = [item for item in items if item.task_id == task_id]
if action_name is not None:
items = [item for item in items if item.action_name == action_name]
if status is not None:
items = [item for item in items if item.status == status]
return items[:limit]
class FakeSettingsService:
def __init__(self, root) -> None: # type: ignore[no-untyped-def]
self.root = root
def load_redacted(self):
return SimpleNamespace(settings={"runtime": {"control_token": "secret"}})
def load(self):
return SimpleNamespace(schema={"title": "SettingsSchema"})
class ControlPlaneGetDispatcherTests(unittest.TestCase):
def _dispatcher(self, tmpdir: str, repo: FakeRepo) -> ControlPlaneGetDispatcher:
state = {
"root": Path(tmpdir),
"repo": repo,
"settings": {
"paths": {"session_dir": str(Path(tmpdir) / "session")},
"comment": {"post_split_comment": True, "post_full_video_timeline_comment": True},
"cleanup": {},
"publish": {},
},
"registry": SimpleNamespace(list_manifests=lambda: [{"name": "publish.biliup_cli"}]),
"manifests": [{"name": "publish.biliup_cli"}],
}
return ControlPlaneGetDispatcher(
state,
attention_state_fn=lambda payload: "running" if payload.get("status") == "running" else "stable",
delivery_state_label_fn=lambda payload: "pending_comment" if payload.get("delivery_state", {}).get("split_comment") == "pending" else "stable",
build_scheduler_preview_fn=lambda state, include_stage_scan=False, limit=200: {"items": [{"limit": limit}]},
settings_service_factory=FakeSettingsService,
)
def test_handle_settings_schema_returns_schema(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
dispatcher = self._dispatcher(tmpdir, FakeRepo(task))
body, status = dispatcher.handle_settings_schema()
self.assertEqual(status, HTTPStatus.OK)
self.assertEqual(body["title"], "SettingsSchema")
def test_handle_history_filters_records(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
actions = [
ActionRecord(None, "task-1", "comment", "ok", "comment ok", "{}", "2026-01-01T00:01:00+00:00"),
ActionRecord(None, "task-1", "publish", "error", "publish failed", "{}", "2026-01-01T00:02:00+00:00"),
]
dispatcher = self._dispatcher(tmpdir, FakeRepo(task, actions=actions))
body, status = dispatcher.handle_history(limit=100, task_id="task-1", action_name="comment", status="ok")
self.assertEqual(status, HTTPStatus.OK)
self.assertEqual(len(body["items"]), 1)
self.assertEqual(body["items"][0]["action_name"], "comment")
def test_handle_session_returns_not_found_when_missing(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
dispatcher = self._dispatcher(tmpdir, FakeRepo(task))
body, status = dispatcher.handle_session("missing-session")
self.assertEqual(status, HTTPStatus.NOT_FOUND)
self.assertEqual(body["error"], "session not found")
def test_handle_tasks_filters_attention(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "running", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
dispatcher = self._dispatcher(tmpdir, FakeRepo(task))
body, status = dispatcher.handle_tasks(
limit=10,
offset=0,
status=None,
search=None,
sort="updated_desc",
attention="running",
delivery=None,
)
self.assertEqual(status, HTTPStatus.OK)
self.assertEqual(body["total"], 1)
self.assertEqual(body["items"][0]["id"], "task-1")


@@ -0,0 +1,111 @@
from __future__ import annotations
import io
import tempfile
import unittest
from http import HTTPStatus
from pathlib import Path
from types import SimpleNamespace
from biliup_next.app.control_plane_post_dispatcher import ControlPlanePostDispatcher
from biliup_next.core.models import Task
class FakeRepo:
def __init__(self) -> None:
self.actions = []
def add_action_record(self, action) -> None: # type: ignore[no-untyped-def]
self.actions.append(action)
class ModuleError(Exception):
def to_dict(self) -> dict[str, object]:
return {"error": "conflict"}
class ControlPlanePostDispatcherTests(unittest.TestCase):
def _dispatcher(self, tmpdir: str, repo: FakeRepo, *, ingest_service: object | None = None) -> ControlPlanePostDispatcher:
state = {
"repo": repo,
"root": Path(tmpdir),
"settings": {
"paths": {"stage_dir": str(Path(tmpdir) / "stage"), "session_dir": str(Path(tmpdir) / "session")},
"ingest": {"stage_min_free_space_mb": 100},
},
"ingest_service": ingest_service or SimpleNamespace(
create_task_from_file=lambda path, settings: Task(
"task-1",
"local_file",
str(path),
"task-title",
"created",
"2026-01-01T00:00:00+00:00",
"2026-01-01T00:00:00+00:00",
)
),
}
return ControlPlanePostDispatcher(
state,
bind_full_video_action=lambda task_id, bvid: {"task_id": task_id, "full_video_bvid": bvid},
merge_session_action=lambda session_key, task_ids: {"session_key": session_key, "task_ids": task_ids},
receive_full_video_webhook=lambda payload: {"ok": True, **payload},
rebind_session_full_video_action=lambda session_key, bvid: {"session_key": session_key, "full_video_bvid": bvid},
reset_to_step_action=lambda task_id, step_name: {"task_id": task_id, "step_name": step_name},
retry_step_action=lambda task_id, step_name: {"task_id": task_id, "step_name": step_name},
run_task_action=lambda task_id: {"task_id": task_id},
run_once=lambda: {"scheduler": {"scan_count": 1}, "worker": {"picked": 1}},
stage_importer_factory=lambda: SimpleNamespace(
import_file=lambda source, dest, min_free_bytes=0: {"imported_to": str(dest / source.name)},
import_upload=lambda filename, fileobj, dest, min_free_bytes=0: {"filename": filename, "dest": str(dest)},
),
systemd_runtime_factory=lambda: SimpleNamespace(act=lambda service, action: {"service": service, "action": action, "command_ok": True}),
)
def test_handle_bind_full_video_maps_missing_bvid(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
dispatcher = self._dispatcher(tmpdir, FakeRepo())
body, status = dispatcher.handle_bind_full_video("task-1", {})
self.assertEqual(status, HTTPStatus.BAD_REQUEST)
self.assertEqual(body["error"], "missing full_video_bvid")
def test_handle_worker_run_once_records_action(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
repo = FakeRepo()
dispatcher = self._dispatcher(tmpdir, repo)
body, status = dispatcher.handle_worker_run_once()
self.assertEqual(status, HTTPStatus.ACCEPTED)
self.assertEqual(body["worker"]["picked"], 1)
self.assertEqual(repo.actions[-1].action_name, "worker_run_once")
def test_handle_stage_upload_returns_created(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
dispatcher = self._dispatcher(tmpdir, FakeRepo())
file_item = SimpleNamespace(filename="incoming.mp4", file=io.BytesIO(b"video"))
body, status = dispatcher.handle_stage_upload(file_item)
self.assertEqual(status, HTTPStatus.CREATED)
self.assertEqual(body["filename"], "incoming.mp4")
def test_handle_create_task_maps_module_error_to_conflict(self) -> None:
with tempfile.TemporaryDirectory() as tmpdir:
repo = FakeRepo()
def raise_module_error(path, settings): # type: ignore[no-untyped-def]
raise ModuleError()
dispatcher = self._dispatcher(
tmpdir,
repo,
ingest_service=SimpleNamespace(create_task_from_file=raise_module_error),
)
body, status = dispatcher.handle_create_task({"source_path": str(Path(tmpdir) / "source.mp4")})
self.assertEqual(status, HTTPStatus.CONFLICT)
self.assertEqual(body["error"], "conflict")

42 tests/test_retry_meta.py Normal file

@@ -0,0 +1,42 @@
from __future__ import annotations

import unittest
from types import SimpleNamespace

from biliup_next.app.retry_meta import retry_meta_for_step


class RetryMetaTests(unittest.TestCase):
    def test_retry_meta_uses_schedule_minutes(self) -> None:
        step = SimpleNamespace(
            step_name="publish",
            status="failed_retryable",
            retry_count=1,
            started_at=None,
            finished_at="2099-01-01T00:00:00+00:00",
        )
        payload = retry_meta_for_step(step, {"publish": {"retry_schedule_minutes": [15, 5]}})
        self.assertIsNotNone(payload)
        self.assertEqual(payload["retry_wait_seconds"], 900)
        self.assertFalse(payload["retry_due"])

    def test_retry_meta_marks_exhausted_after_schedule_is_consumed(self) -> None:
        step = SimpleNamespace(
            step_name="comment",
            status="failed_retryable",
            retry_count=3,
            started_at=None,
            finished_at="2026-01-01T00:00:00+00:00",
        )
        payload = retry_meta_for_step(step, {"comment": {"retry_schedule_minutes": [1, 2]}})
        self.assertIsNotNone(payload)
        self.assertTrue(payload["retry_exhausted"])
        self.assertIsNone(payload["next_retry_at"])


if __name__ == "__main__":
    unittest.main()

177
tests/test_serializers.py Normal file

@@ -0,0 +1,177 @@
from __future__ import annotations

import json
import tempfile
import unittest
from pathlib import Path

from biliup_next.app.serializers import ControlPlaneSerializer
from biliup_next.core.models import ActionRecord, Artifact, Task, TaskContext, TaskStep


class FakeSerializerRepo:
    def __init__(
        self,
        *,
        task: Task,
        context: TaskContext | None = None,
        steps: list[TaskStep] | None = None,
        artifacts: list[Artifact] | None = None,
        actions: list[ActionRecord] | None = None,
    ) -> None:
        self.task = task
        self.context = context
        self.steps = steps or []
        self.artifacts = artifacts or []
        self.actions = actions or []

    def get_task(self, task_id: str) -> Task | None:
        return self.task if task_id == self.task.id else None

    def get_task_context(self, task_id: str) -> TaskContext | None:
        return self.context if task_id == self.task.id else None

    def list_task_contexts_for_task_ids(self, task_ids: list[str]) -> dict[str, TaskContext]:
        if self.context and self.context.task_id in task_ids:
            return {self.context.task_id: self.context}
        return {}

    def list_steps_for_task_ids(self, task_ids: list[str]) -> dict[str, list[TaskStep]]:
        if self.task.id in task_ids:
            return {self.task.id: list(self.steps)}
        return {}

    def list_steps(self, task_id: str) -> list[TaskStep]:
        return list(self.steps) if task_id == self.task.id else []

    def list_task_contexts_by_session_key(self, session_key: str) -> list[TaskContext]:
        if self.context and self.context.session_key == session_key:
            return [self.context]
        return []

    def list_artifacts(self, task_id: str) -> list[Artifact]:
        return list(self.artifacts) if task_id == self.task.id else []

    def list_action_records(self, task_id: str, limit: int = 200) -> list[ActionRecord]:
        return list(self.actions)[:limit] if task_id == self.task.id else []


class SerializerTests(unittest.TestCase):
    def test_task_payload_includes_context_retry_and_delivery_state(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", str(Path(tmpdir) / "session" / "task-title" / "source.mp4"), "task-title", "running", "2026-01-01T00:00:00+00:00", "2026-01-01T00:01:00+00:00")
            session_dir = Path(tmpdir) / "session" / "task-title"
            session_dir.mkdir(parents=True, exist_ok=True)
            (session_dir / "full_video_bvid.txt").write_text("BVFULL123", encoding="utf-8")
            (session_dir / "bvid.txt").write_text("BVSPLIT123", encoding="utf-8")
            steps = [
                TaskStep(None, "task-1", "publish", "failed_retryable", "ERR", "upload failed", 1, None, "2099-01-01T00:00:00+00:00"),
            ]
            context = TaskContext(
                id=None,
                task_id="task-1",
                session_key="session-1",
                streamer="streamer",
                room_id="room",
                source_title="task-title",
                segment_started_at=None,
                segment_duration_seconds=None,
                full_video_bvid=None,
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
            repo = FakeSerializerRepo(task=task, context=context, steps=steps)
            state = {
                "repo": repo,
                "settings": {
                    "paths": {"session_dir": str(Path(tmpdir) / "session")},
                    "comment": {"post_split_comment": True, "post_full_video_timeline_comment": True},
                    "cleanup": {},
                    "publish": {"retry_schedule_minutes": [10]},
                },
            }
            payload = ControlPlaneSerializer(state).task_payload("task-1")
            self.assertIsNotNone(payload)
            self.assertEqual(payload["session_context"]["session_key"], "session-1")
            self.assertEqual(payload["session_context"]["full_video_bvid"], "BVFULL123")
            self.assertEqual(payload["retry_state"]["step_name"], "publish")
            self.assertEqual(payload["delivery_state"]["split_comment"], "pending")

    def test_session_payload_reuses_task_payload_serialization(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", str(Path(tmpdir) / "session" / "task-title" / "source.mp4"), "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:01:00+00:00")
            context = TaskContext(
                id=None,
                task_id="task-1",
                session_key="session-1",
                streamer="streamer",
                room_id="room",
                source_title="task-title",
                segment_started_at=None,
                segment_duration_seconds=None,
                full_video_bvid="BVFULL123",
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
            repo = FakeSerializerRepo(task=task, context=context)
            state = {
                "repo": repo,
                "settings": {
                    "paths": {"session_dir": str(Path(tmpdir) / "session")},
                    "comment": {"post_split_comment": True, "post_full_video_timeline_comment": True},
                    "cleanup": {},
                    "publish": {},
                },
            }
            payload = ControlPlaneSerializer(state).session_payload("session-1")
            self.assertIsNotNone(payload)
            self.assertEqual(payload["session_key"], "session-1")
            self.assertEqual(payload["task_count"], 1)
            self.assertEqual(payload["full_video_url"], "https://www.bilibili.com/video/BVFULL123")
            self.assertEqual(payload["tasks"][0]["id"], "task-1")

    def test_timeline_payload_includes_task_step_artifact_and_action_entries(self) -> None:
        task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:02:00+00:00")
        steps = [
            TaskStep(None, "task-1", "comment", "succeeded", None, None, 0, "2026-01-01T00:01:00+00:00", "2026-01-01T00:01:30+00:00"),
        ]
        artifacts = [
            Artifact(None, "task-1", "publish_bvid", "/tmp/bvid.txt", "{}", "2026-01-01T00:01:40+00:00"),
        ]
        actions = [
            ActionRecord(
                id=None,
                task_id="task-1",
                action_name="comment",
                status="ok",
                summary="comment succeeded",
                details_json=json.dumps({"split": {"status": "ok"}, "full": {"status": "skipped"}}),
                created_at="2026-01-01T00:01:50+00:00",
            )
        ]
        repo = FakeSerializerRepo(task=task, steps=steps, artifacts=artifacts, actions=actions)
        state = {
            "repo": repo,
            "settings": {
                "paths": {"session_dir": "/tmp/session"},
                "comment": {"post_split_comment": True, "post_full_video_timeline_comment": True},
                "cleanup": {},
                "publish": {},
            },
        }
        payload = ControlPlaneSerializer(state).timeline_payload("task-1")
        self.assertIsNotNone(payload)
        action_item = next(item for item in payload["items"] if item["kind"] == "action")
        self.assertIn("split=ok", action_item["summary"])
        kinds = {item["kind"] for item in payload["items"]}
        self.assertTrue({"task", "step", "artifact", "action"}.issubset(kinds))


if __name__ == "__main__":
    unittest.main()


@@ -0,0 +1,92 @@
from __future__ import annotations

import tempfile
import unittest
from pathlib import Path

from biliup_next.app.session_delivery_service import SessionDeliveryService
from biliup_next.core.models import Task, TaskContext


class FakeRepo:
    def __init__(self, task: Task, context: TaskContext | None = None, contexts: list[TaskContext] | None = None) -> None:
        self.task = task
        self.context = context
        self.contexts = contexts or ([] if context is None else [context])
        self.task_context_upserts: list[TaskContext] = []
        self.session_binding_upserts = []
        self.action_records = []
        self.updated_session_bvid: tuple[str, str, str] | None = None

    def get_task(self, task_id: str) -> Task | None:
        return self.task if task_id == self.task.id else None

    def get_task_context(self, task_id: str) -> TaskContext | None:
        return self.context if task_id == self.task.id else None

    def upsert_task_context(self, context: TaskContext) -> None:
        self.context = context
        self.task_context_upserts.append(context)

    def upsert_session_binding(self, binding) -> None:  # type: ignore[no-untyped-def]
        self.session_binding_upserts.append(binding)

    def add_action_record(self, record) -> None:  # type: ignore[no-untyped-def]
        self.action_records.append(record)

    def list_task_contexts_by_session_key(self, session_key: str) -> list[TaskContext]:
        return [context for context in self.contexts if context.session_key == session_key]

    def update_session_full_video_bvid(self, session_key: str, full_video_bvid: str, updated_at: str) -> int:
        self.updated_session_bvid = (session_key, full_video_bvid, updated_at)
        return len(self.list_task_contexts_by_session_key(session_key))

    def list_task_contexts_by_source_title(self, source_title: str) -> list[TaskContext]:
        return [context for context in self.contexts if context.source_title == source_title]


class SessionDeliveryServiceTests(unittest.TestCase):
    def test_receive_full_video_webhook_updates_binding_context_and_action_record(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
            context = TaskContext(
                id=None,
                task_id="task-1",
                session_key="task:task-1",
                streamer="streamer",
                room_id="room",
                source_title="task-title",
                segment_started_at=None,
                segment_duration_seconds=None,
                full_video_bvid=None,
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
            repo = FakeRepo(task, context=context, contexts=[context])
            state = {"repo": repo, "settings": {"paths": {"session_dir": str(Path(tmpdir) / "session")}}}
            result = SessionDeliveryService(state).receive_full_video_webhook(
                {"session_key": "session-1", "source_title": "task-title", "full_video_bvid": "BVWEBHOOK123"}
            )
            self.assertEqual(result["updated_count"], 1)
            self.assertEqual(repo.context.session_key, "session-1")
            self.assertEqual(repo.context.full_video_bvid, "BVWEBHOOK123")
            self.assertEqual(repo.session_binding_upserts[-1].full_video_bvid, "BVWEBHOOK123")
            self.assertEqual(repo.action_records[-1].action_name, "webhook_full_video_uploaded")
            persisted_path = Path(result["tasks"][0]["path"])
            self.assertTrue(persisted_path.exists())
            self.assertEqual(persisted_path.read_text(encoding="utf-8"), "BVWEBHOOK123")

    def test_merge_session_returns_error_when_task_ids_empty(self) -> None:
        task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
        repo = FakeRepo(task)
        state = {"repo": repo, "settings": {"paths": {"session_dir": "/tmp/session"}}}
        result = SessionDeliveryService(state).merge_session("session-1", ["", " "])
        self.assertEqual(result["error"]["code"], "TASK_IDS_EMPTY")


if __name__ == "__main__":
    unittest.main()


@@ -0,0 +1,80 @@
from __future__ import annotations

import tempfile
import unittest
from pathlib import Path

from biliup_next.core.config import SettingsService


class SettingsServiceTests(unittest.TestCase):
    def test_load_seeds_settings_from_standalone_example_when_missing(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            root = Path(tmpdir)
            config_dir = root / "config"
            config_dir.mkdir(parents=True, exist_ok=True)
            (config_dir / "settings.schema.json").write_text(
                """
                {
                  "groups": {
                    "runtime": {
                      "database_path": {"type": "string", "default": "data/workspace/biliup_next.db"}
                    },
                    "paths": {
                      "stage_dir": {"type": "string", "default": "data/workspace/stage"},
                      "backup_dir": {"type": "string", "default": "data/workspace/backup"},
                      "session_dir": {"type": "string", "default": "data/workspace/session"},
                      "cookies_file": {"type": "string", "default": "runtime/cookies.json"},
                      "upload_config_file": {"type": "string", "default": "runtime/upload_config.json"}
                    },
                    "ingest": {
                      "ffprobe_bin": {"type": "string", "default": "ffprobe"}
                    },
                    "transcribe": {
                      "ffmpeg_bin": {"type": "string", "default": "ffmpeg"}
                    },
                    "split": {
                      "ffmpeg_bin": {"type": "string", "default": "ffmpeg"}
                    },
                    "song_detect": {
                      "codex_cmd": {"type": "string", "default": "codex"}
                    },
                    "publish": {
                      "biliup_path": {"type": "string", "default": "runtime/biliup"},
                      "cookie_file": {"type": "string", "default": "runtime/cookies.json"}
                    }
                  }
                }
                """,
                encoding="utf-8",
            )
            (config_dir / "settings.standalone.example.json").write_text(
                """
                {
                  "runtime": {"database_path": "data/workspace/biliup_next.db"},
                  "paths": {
                    "stage_dir": "data/workspace/stage",
                    "backup_dir": "data/workspace/backup",
                    "session_dir": "data/workspace/session",
                    "cookies_file": "runtime/cookies.json",
                    "upload_config_file": "runtime/upload_config.json"
                  },
                  "ingest": {"ffprobe_bin": "ffprobe"},
                  "transcribe": {"ffmpeg_bin": "ffmpeg"},
                  "split": {"ffmpeg_bin": "ffmpeg"},
                  "song_detect": {"codex_cmd": "codex"},
                  "publish": {"biliup_path": "runtime/biliup", "cookie_file": "runtime/cookies.json"}
                }
                """,
                encoding="utf-8",
            )
            bundle = SettingsService(root).load()
            self.assertTrue((config_dir / "settings.json").exists())
            self.assertTrue((config_dir / "settings.staged.json").exists())
            self.assertEqual(bundle.settings["paths"]["cookies_file"], str((root / "runtime" / "cookies.json").resolve()))


if __name__ == "__main__":
    unittest.main()

143
tests/test_task_actions.py Normal file

@@ -0,0 +1,143 @@
from __future__ import annotations

import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch

from biliup_next.app.task_actions import bind_full_video_action, merge_session_action, rebind_session_full_video_action
from biliup_next.core.models import Task, TaskContext


class FakeRepo:
    def __init__(self, task: Task, context: TaskContext | None = None, contexts: list[TaskContext] | None = None) -> None:
        self.task = task
        self.context = context
        self.contexts = contexts or ([] if context is None else [context])
        self.task_context_upserts: list[TaskContext] = []
        self.session_binding_upserts = []
        self.updated_session_bvid: tuple[str, str, str] | None = None

    def get_task(self, task_id: str) -> Task | None:
        return self.task if task_id == self.task.id else None

    def get_task_context(self, task_id: str) -> TaskContext | None:
        return self.context if task_id == self.task.id else None

    def upsert_task_context(self, context: TaskContext) -> None:
        self.context = context
        self.task_context_upserts.append(context)

    def upsert_session_binding(self, binding) -> None:  # type: ignore[no-untyped-def]
        self.session_binding_upserts.append(binding)

    def add_action_record(self, record) -> None:  # type: ignore[no-untyped-def]
        return None

    def list_task_contexts_by_session_key(self, session_key: str) -> list[TaskContext]:
        return [context for context in self.contexts if context.session_key == session_key]

    def update_session_full_video_bvid(self, session_key: str, full_video_bvid: str, updated_at: str) -> int:
        self.updated_session_bvid = (session_key, full_video_bvid, updated_at)
        return len(self.list_task_contexts_by_session_key(session_key))

    def list_task_contexts_by_source_title(self, source_title: str) -> list[TaskContext]:
        return [context for context in self.contexts if context.source_title == source_title]


class TaskActionsTests(unittest.TestCase):
    def test_bind_full_video_action_persists_context_binding_and_file(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
            repo = FakeRepo(task)
            state = {
                "repo": repo,
                "settings": {"paths": {"session_dir": str(Path(tmpdir) / "session")}},
            }
            with patch("biliup_next.app.task_actions.ensure_initialized", return_value=state), patch(
                "biliup_next.app.task_actions.record_task_action"
            ):
                result = bind_full_video_action("task-1", " BV1234567890 ")
            self.assertEqual(result["full_video_bvid"], "BV1234567890")
            self.assertEqual(repo.context.full_video_bvid, "BV1234567890")
            self.assertEqual(len(repo.session_binding_upserts), 1)
            self.assertTrue(Path(result["path"]).exists())
            self.assertEqual(Path(result["path"]).read_text(encoding="utf-8"), "BV1234567890")

    def test_rebind_session_full_video_action_updates_binding_and_all_task_files(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
            context = TaskContext(
                id=None,
                task_id="task-1",
                session_key="session-1",
                streamer="streamer",
                room_id="room",
                source_title="task-title",
                segment_started_at=None,
                segment_duration_seconds=None,
                full_video_bvid="BVOLD",
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
            repo = FakeRepo(task, context=context, contexts=[context])
            state = {
                "repo": repo,
                "settings": {"paths": {"session_dir": str(Path(tmpdir) / "session")}},
            }
            with patch("biliup_next.app.task_actions.ensure_initialized", return_value=state), patch(
                "biliup_next.app.task_actions.record_task_action"
            ):
                result = rebind_session_full_video_action("session-1", "BVNEW1234567")
            self.assertEqual(result["updated_count"], 1)
            self.assertEqual(repo.context.full_video_bvid, "BVNEW1234567")
            self.assertIsNotNone(repo.updated_session_bvid)
            self.assertEqual(len(repo.session_binding_upserts), 1)
            self.assertEqual(repo.session_binding_upserts[-1].full_video_bvid, "BVNEW1234567")
            persisted_path = Path(result["tasks"][0]["path"])
            self.assertTrue(persisted_path.exists())
            self.assertEqual(persisted_path.read_text(encoding="utf-8"), "BVNEW1234567")

    def test_merge_session_action_reuses_persist_path_for_inherited_bvid(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            task = Task("task-1", "local_file", "/tmp/source.mp4", "task-title", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:00:00+00:00")
            existing_context = TaskContext(
                id=None,
                task_id="existing-task",
                session_key="session-1",
                streamer="streamer",
                room_id="room",
                source_title="existing-title",
                segment_started_at=None,
                segment_duration_seconds=None,
                full_video_bvid="BVINHERITED123",
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
            repo = FakeRepo(task, contexts=[existing_context])
            state = {
                "repo": repo,
                "settings": {"paths": {"session_dir": str(Path(tmpdir) / "session")}},
            }
            with patch("biliup_next.app.task_actions.ensure_initialized", return_value=state), patch(
                "biliup_next.app.task_actions.record_task_action"
            ):
                result = merge_session_action("session-1", ["task-1"])
            self.assertEqual(result["merged_count"], 1)
            self.assertEqual(repo.context.full_video_bvid, "BVINHERITED123")
            self.assertEqual(len(repo.session_binding_upserts), 1)
            self.assertEqual(repo.session_binding_upserts[0].full_video_bvid, "BVINHERITED123")
            self.assertIn("path", result["tasks"][0])
            persisted_path = Path(result["tasks"][0]["path"])
            self.assertTrue(persisted_path.exists())
            self.assertEqual(persisted_path.read_text(encoding="utf-8"), "BVINHERITED123")


if __name__ == "__main__":
    unittest.main()


@@ -0,0 +1,46 @@
from __future__ import annotations

import unittest
from types import SimpleNamespace
from unittest.mock import patch

from biliup_next.app.task_control_service import TaskControlService


class TaskControlServiceTests(unittest.TestCase):
    def test_run_task_delegates_to_process_task(self) -> None:
        state = {"repo": object(), "settings": {"paths": {"session_dir": "/tmp/session"}}}
        with patch("biliup_next.app.task_control_service.process_task", return_value={"processed": [{"task_id": "task-1"}]}) as process_mock:
            result = TaskControlService(state).run_task("task-1")
            self.assertEqual(result["processed"][0]["task_id"], "task-1")
            process_mock.assert_called_once_with("task-1")

    def test_retry_step_delegates_with_reset_step(self) -> None:
        state = {"repo": object(), "settings": {"paths": {"session_dir": "/tmp/session"}}}
        with patch("biliup_next.app.task_control_service.process_task", return_value={"processed": [{"step": "publish"}]}) as process_mock:
            result = TaskControlService(state).retry_step("task-1", "publish")
            self.assertEqual(result["processed"][0]["step"], "publish")
            process_mock.assert_called_once_with("task-1", reset_step="publish")

    def test_reset_to_step_combines_reset_and_run_payloads(self) -> None:
        state = {"repo": object(), "settings": {"paths": {"session_dir": "/tmp/session"}}}
        reset_service = SimpleNamespace(reset_to_step=lambda task_id, step_name: {"task_id": task_id, "reset_to": step_name})
        with patch("biliup_next.app.task_control_service.TaskResetService", return_value=reset_service) as reset_cls:
            with patch.object(reset_service, "reset_to_step", return_value={"task_id": "task-1", "reset_to": "split"}) as reset_mock:
                with patch("biliup_next.app.task_control_service.process_task", return_value={"processed": [{"task_id": "task-1"}]}) as process_mock:
                    result = TaskControlService(state).reset_to_step("task-1", "split")
                    self.assertEqual(result["reset"]["reset_to"], "split")
                    self.assertEqual(result["run"]["processed"][0]["task_id"], "task-1")
                    reset_cls.assert_called_once()
                    reset_mock.assert_called_once_with("task-1", "split")
                    process_mock.assert_called_once_with("task-1")


if __name__ == "__main__":
    unittest.main()

70
tests/test_task_engine.py Normal file

@@ -0,0 +1,70 @@
from __future__ import annotations

import unittest
from types import SimpleNamespace

from biliup_next.app.task_engine import infer_error_step_name, next_runnable_step
from biliup_next.core.models import TaskStep


class TaskEngineTests(unittest.TestCase):
    def test_infer_error_step_name_prefers_running_step(self) -> None:
        task = SimpleNamespace(status="running")
        steps = {
            "transcribe": TaskStep(None, "task-1", "transcribe", "running", None, None, 0, None, None),
            "song_detect": TaskStep(None, "task-1", "song_detect", "pending", None, None, 0, None, None),
        }
        self.assertEqual(infer_error_step_name(task, steps), "transcribe")

    def test_next_runnable_step_returns_none_while_a_step_is_running(self) -> None:
        task = SimpleNamespace(id="task-1", status="running")
        steps = {
            "transcribe": TaskStep(None, "task-1", "transcribe", "running", None, None, 0, None, None),
            "song_detect": TaskStep(None, "task-1", "song_detect", "pending", None, None, 0, None, None),
        }
        state = {
            "settings": {
                "comment": {"enabled": True},
                "collection": {"enabled": True},
                "paths": {},
                "publish": {},
            }
        }
        self.assertEqual(next_runnable_step(task, steps, state), (None, None))

    def test_next_runnable_step_returns_wait_payload_for_retryable_publish(self) -> None:
        task = SimpleNamespace(id="task-1", status="failed_retryable")
        steps = {
            "publish": TaskStep(
                None,
                "task-1",
                "publish",
                "failed_retryable",
                "PUBLISH_UPLOAD_FAILED",
                "upload failed",
                1,
                None,
                "2099-01-01T00:00:00+00:00",
            )
        }
        state = {
            "settings": {
                "comment": {"enabled": True},
                "collection": {"enabled": True},
                "paths": {},
                "publish": {"retry_schedule_minutes": [10]},
            }
        }
        step_name, waiting_payload = next_runnable_step(task, steps, state)
        self.assertIsNone(step_name)
        self.assertIsNotNone(waiting_payload)
        self.assertTrue(waiting_payload["waiting_for_retry"])
        self.assertEqual(waiting_payload["step"], "publish")


if __name__ == "__main__":
    unittest.main()


@@ -0,0 +1,75 @@
from __future__ import annotations

import unittest
from types import SimpleNamespace

from biliup_next.app.task_policies import apply_disabled_step_fallbacks, resolve_failure
from biliup_next.core.errors import ModuleError
from biliup_next.core.models import TaskStep


class FakePolicyRepo:
    def __init__(self, task, steps: list[TaskStep]) -> None:  # type: ignore[no-untyped-def]
        self.task = task
        self.steps = steps
        self.step_updates: list[tuple] = []
        self.task_updates: list[tuple] = []

    def get_task(self, task_id: str):  # type: ignore[no-untyped-def]
        return self.task if task_id == self.task.id else None

    def list_steps(self, task_id: str) -> list[TaskStep]:
        return list(self.steps) if task_id == self.task.id else []

    def update_step_status(self, task_id: str, step_name: str, status: str, **kwargs) -> None:  # type: ignore[no-untyped-def]
        self.step_updates.append((task_id, step_name, status, kwargs))

    def update_task_status(self, task_id: str, status: str, updated_at: str) -> None:
        self.task_updates.append((task_id, status, updated_at))


class TaskPoliciesTests(unittest.TestCase):
    def test_apply_disabled_step_fallbacks_marks_collection_done_when_disabled(self) -> None:
        task = SimpleNamespace(id="task-1", status="commented")
        repo = FakePolicyRepo(task, [])
        state = {
            "settings": {
                "comment": {"enabled": True},
                "collection": {"enabled": False},
                "paths": {},
                "publish": {},
            }
        }
        changed = apply_disabled_step_fallbacks(state, task, repo)
        self.assertTrue(changed)
        self.assertEqual([update[1] for update in repo.step_updates], ["collection_a", "collection_b"])
        self.assertEqual(repo.task_updates[-1][1], "collection_synced")

    def test_resolve_failure_uses_publish_retry_schedule(self) -> None:
        task = SimpleNamespace(id="task-1", status="running")
        steps = [
            TaskStep(None, "task-1", "publish", "running", None, None, 0, "2026-01-01T00:00:00+00:00", None),
        ]
        repo = FakePolicyRepo(task, steps)
        state = {
            "settings": {
                "publish": {"retry_schedule_minutes": [15, 5]},
                "comment": {},
                "paths": {},
            }
        }
        exc = ModuleError(code="PUBLISH_UPLOAD_FAILED", message="upload failed", retryable=True)
        failure = resolve_failure(task, repo, state, exc)
        self.assertEqual(failure["step_name"], "publish")
        self.assertEqual(failure["payload"]["retry_status"], "failed_retryable")
        self.assertEqual(failure["payload"]["next_retry_delay_seconds"], 900)
        self.assertEqual(repo.step_updates[-1][1], "publish")
        self.assertEqual(repo.task_updates[-1][1], "failed_retryable")


if __name__ == "__main__":
    unittest.main()


@@ -0,0 +1,121 @@
from __future__ import annotations

import tempfile
import unittest
from pathlib import Path

from biliup_next.core.models import SessionBinding, Task, TaskContext, TaskStep
from biliup_next.infra.db import Database
from biliup_next.infra.task_repository import TaskRepository


class TaskRepositorySqliteTests(unittest.TestCase):
    def setUp(self) -> None:
        self.tempdir = tempfile.TemporaryDirectory()
        db_path = Path(self.tempdir.name) / "test.db"
        self.db = Database(db_path)
        self.db.initialize()
        self.repo = TaskRepository(self.db)

    def tearDown(self) -> None:
        self.tempdir.cleanup()

    def test_query_tasks_filters_and_sorts_by_updated_desc(self) -> None:
        self.repo.upsert_task(Task("task-1", "local_file", "/tmp/a.mp4", "Alpha", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:01:00+00:00"))
        self.repo.upsert_task(Task("task-2", "local_file", "/tmp/b.mp4", "Beta", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:03:00+00:00"))
        self.repo.upsert_task(Task("task-3", "local_file", "/tmp/c.mp4", "Gamma", "published", "2026-01-01T00:00:00+00:00", "2026-01-01T00:02:00+00:00"))
        items, total = self.repo.query_tasks(status="published", search="a", sort="updated_desc")
        self.assertEqual(total, 2)
        self.assertEqual([item.id for item in items], ["task-2", "task-3"])

    def test_list_task_contexts_and_steps_for_task_ids_returns_batched_maps(self) -> None:
        self.repo.upsert_task(Task("task-1", "local_file", "/tmp/a.mp4", "Alpha", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:01:00+00:00"))
        self.repo.upsert_task(Task("task-2", "local_file", "/tmp/b.mp4", "Beta", "created", "2026-01-01T00:00:00+00:00", "2026-01-01T00:02:00+00:00"))
        self.repo.upsert_task_context(
            TaskContext(
                id=None,
                task_id="task-1",
                session_key="session-1",
                streamer="streamer",
                room_id="room",
                source_title="Alpha",
                segment_started_at="2026-01-01T00:00:00+00:00",
                segment_duration_seconds=60.0,
                full_video_bvid="BV123",
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
        )
        self.repo.replace_steps(
            "task-1",
            [
                TaskStep(None, "task-1", "transcribe", "pending", None, None, 0, None, None),
                TaskStep(None, "task-1", "song_detect", "pending", None, None, 0, None, None),
            ],
        )
        self.repo.replace_steps(
            "task-2",
            [
                TaskStep(None, "task-2", "transcribe", "running", None, None, 0, "2026-01-01T00:03:00+00:00", None),
            ],
        )
        contexts = self.repo.list_task_contexts_for_task_ids(["task-1", "task-2"])
        steps = self.repo.list_steps_for_task_ids(["task-1", "task-2"])
        self.assertEqual(set(contexts.keys()), {"task-1"})
        self.assertEqual(contexts["task-1"].full_video_bvid, "BV123")
        self.assertEqual([step.step_name for step in steps["task-1"]], ["transcribe", "song_detect"])
        self.assertEqual(steps["task-2"][0].status, "running")

    def test_session_binding_supports_upsert_and_source_title_fallback_lookup(self) -> None:
        self.repo.upsert_session_binding(
            SessionBinding(
                id=None,
                session_key="session-1",
                source_title="Alpha",
                streamer="streamer",
                room_id="room",
                full_video_bvid="BVOLD",
                created_at="2026-01-01T00:00:00+00:00",
                updated_at="2026-01-01T00:00:00+00:00",
            )
        )
        self.repo.upsert_session_binding(
            SessionBinding(
                id=None,
                session_key="session-1",
                source_title="Alpha",
                streamer="streamer",
                room_id="room",
                full_video_bvid="BVNEW",
                created_at="2026-01-01T00:01:00+00:00",
                updated_at="2026-01-01T00:01:00+00:00",
            )
        )
        self.repo.upsert_session_binding(
            SessionBinding(
                id=None,
                session_key=None,
                source_title="Beta",
                streamer="streamer-2",
                room_id="room-2",
                full_video_bvid="BVBETA",
                created_at="2026-01-01T00:02:00+00:00",
                updated_at="2026-01-01T00:02:00+00:00",
            )
        )
        binding_by_session = self.repo.get_session_binding(session_key="session-1")
        binding_by_title = self.repo.get_session_binding(source_title="Beta")
        self.assertIsNotNone(binding_by_session)
        self.assertEqual(binding_by_session.full_video_bvid, "BVNEW")
        self.assertIsNotNone(binding_by_title)
        self.assertEqual(binding_by_title.full_video_bvid, "BVBETA")


if __name__ == "__main__":
    unittest.main()
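The batched `list_task_contexts_for_task_ids` / `list_steps_for_task_ids` lookups tested above return per-task maps from a single query, avoiding one query per task. A minimal sketch of that pattern (the `task_steps` table name and columns here are assumptions for illustration, not the repository's actual schema):

```python
import sqlite3
from collections import defaultdict


def steps_for_task_ids(conn: sqlite3.Connection, task_ids: list[str]) -> dict[str, list[tuple]]:
    # One SELECT with an IN clause instead of N per-task queries,
    # then group the rows back into a {task_id: [rows]} map.
    if not task_ids:
        return {}
    placeholders = ",".join("?" for _ in task_ids)
    rows = conn.execute(
        f"SELECT task_id, step_name, status FROM task_steps "
        f"WHERE task_id IN ({placeholders}) ORDER BY id",
        task_ids,
    ).fetchall()
    grouped: dict[str, list[tuple]] = defaultdict(list)
    for row in rows:
        grouped[row[0]].append(row)
    return dict(grouped)
```

Tasks with no steps simply have no key in the result, which matches the assertion that only `task-1` appears in the batched context map.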

102
tests/test_task_runner.py Normal file

@@ -0,0 +1,102 @@
from __future__ import annotations

import unittest
from types import SimpleNamespace
from unittest.mock import patch

from biliup_next.app.task_runner import process_task
from biliup_next.core.models import TaskStep


class FakeRunnerRepo:
    def __init__(self, task, steps: list[TaskStep]) -> None:  # type: ignore[no-untyped-def]
        self.task = task
        self.steps = steps
        self.step_updates: list[tuple] = []
        self.task_updates: list[tuple] = []
        self.claims: list[tuple[str, str, str]] = []

    def get_task(self, task_id: str):  # type: ignore[no-untyped-def]
        return self.task if task_id == self.task.id else None

    def list_steps(self, task_id: str) -> list[TaskStep]:
        return list(self.steps) if task_id == self.task.id else []

    def update_step_status(self, task_id: str, step_name: str, status: str, **kwargs) -> None:  # type: ignore[no-untyped-def]
        self.step_updates.append((task_id, step_name, status, kwargs))
        for index, step in enumerate(self.steps):
            if step.task_id == task_id and step.step_name == step_name:
                self.steps[index] = TaskStep(
                    step.id,
                    step.task_id,
                    step.step_name,
                    status,
                    kwargs.get("error_code", step.error_code),
                    kwargs.get("error_message", step.error_message),
                    kwargs.get("retry_count", step.retry_count),
                    kwargs.get("started_at", step.started_at),
                    kwargs.get("finished_at", step.finished_at),
                )

    def update_task_status(self, task_id: str, status: str, updated_at: str) -> None:
        self.task_updates.append((task_id, status, updated_at))
        if task_id == self.task.id:
            self.task = SimpleNamespace(**{**self.task.__dict__, "status": status, "updated_at": updated_at})

    def claim_step_running(self, task_id: str, step_name: str, *, started_at: str) -> bool:
        self.claims.append((task_id, step_name, started_at))
        for index, step in enumerate(self.steps):
            if step.task_id == task_id and step.step_name == step_name:
                self.steps[index] = TaskStep(step.id, step.task_id, step.step_name, "running", None, None, step.retry_count, started_at, None)
        return True


class TaskRunnerTests(unittest.TestCase):
    def test_process_task_reset_step_marks_task_back_to_pre_step_status(self) -> None:
        task = SimpleNamespace(id="task-1", status="failed_retryable", updated_at="2026-01-01T00:00:00+00:00")
        steps = [
            TaskStep(None, "task-1", "transcribe", "failed_retryable", "ERR", "boom", 1, "2026-01-01T00:00:00+00:00", "2026-01-01T00:01:00+00:00"),
        ]
        repo = FakeRunnerRepo(task, steps)
        state = {
            "repo": repo,
            "settings": {"ingest": {}, "paths": {}, "comment": {"enabled": True}, "collection": {"enabled": True}, "publish": {}},
        }
        with patch("biliup_next.app.task_runner.ensure_initialized", return_value=state), patch(
            "biliup_next.app.task_runner.record_task_action"
        ), patch("biliup_next.app.task_runner.apply_disabled_step_fallbacks", return_value=False), patch(
            "biliup_next.app.task_runner.next_runnable_step", return_value=(None, None)
        ):
            result = process_task("task-1", reset_step="transcribe")
            self.assertTrue(result["processed"][0]["reset"])
            self.assertEqual(repo.step_updates[0][1], "transcribe")
            self.assertEqual(repo.step_updates[0][2], "pending")
            self.assertEqual(repo.task_updates[0][1], "created")

    def test_process_task_sets_task_running_before_execute_step(self) -> None:
        task = SimpleNamespace(id="task-1", status="created", updated_at="2026-01-01T00:00:00+00:00")
        steps = [
            TaskStep(None, "task-1", "transcribe", "pending", None, None, 0, None, None),
        ]
        repo = FakeRunnerRepo(task, steps)
        state = {
            "repo": repo,
            "settings": {"ingest": {}, "paths": {}, "comment": {"enabled": True}, "collection": {"enabled": True}, "publish": {}},
        }
        with patch("biliup_next.app.task_runner.ensure_initialized", return_value=state), patch(
            "biliup_next.app.task_runner.record_task_action"
        ), patch("biliup_next.app.task_runner.apply_disabled_step_fallbacks", return_value=False), patch(
            "biliup_next.app.task_runner.next_runnable_step", side_effect=[("transcribe", None), (None, None)]
        ), patch("biliup_next.app.task_runner.execute_step", return_value={"task_id": "task-1", "step": "transcribe"}):
            result = process_task("task-1")
            self.assertEqual(repo.claims[0][1], "transcribe")
            self.assertEqual(repo.task_updates[0][1], "running")
            self.assertEqual(result["processed"][0]["step"], "transcribe")


if __name__ == "__main__":
    unittest.main()