Two issues with the old progress bar:
1. progress state was never cleared between jobs — when a job finished,
its 100% bar lingered on the next job's card until that job emitted
its first progress event. Clear progress on any job_update where
status != 'running', and on the column side ignore progress unless
progress.id matches the current job.id.
2. labels were misleading: the left/right times were ffmpeg's *input*
timestamp position (how far into the source it had read), not wall-
clock elapsed/remaining. For -c copy jobs ripping a 90-min file in
5 wall-clock seconds, the user saw '0:45 / 90:00' jump straight to
'90:00 / 90:00', which looks broken.
New display: 'elapsed M:SS N% ~M:SS left'. Elapsed is wall-clock
since the job started (re-renders every second), percent comes from
ffmpeg input progress as before, ETA is derived from elapsed × (100-p)/p
once we have at least 1% to avoid wild guesses.
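The label derivation above can be sketched as follows; this is an illustrative sketch, not the actual component code (fmt and progressLabel are hypothetical names; elapsedSec is wall-clock seconds since job start, percent is ffmpeg's input progress):

```typescript
// Format whole seconds as M:SS.
function fmt(totalSec: number): string {
  const m = Math.floor(totalSec / 60);
  const s = Math.floor(totalSec % 60);
  return `${m}:${String(s).padStart(2, "0")}`;
}

function progressLabel(elapsedSec: number, percent: number): string {
  // Only estimate once we have >= 1%, so a near-zero denominator
  // can't produce a wild ETA.
  const eta =
    percent >= 1
      ? ` ~${fmt((elapsedSec * (100 - percent)) / percent)} left`
      : "";
  return `${fmt(elapsedSec)} ${Math.round(percent)}%${eta}`;
}

// e.g. progressLabel(30, 25) → "0:30 25% ~1:30 left"
```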
The server's old /approve-up-to/:id re-ran its own SQL ORDER BY against
ALL pending plans (no LIMIT) to decide which rows fell 'before' the target.
The pipeline UI uses a different ordering — interleaving movies with
series cards, sorting by confidence tier without a name tiebreaker, and
collapsing every episode of a series into one card. Visible position
therefore did not map to the server's iteration position, and clicking
'Approve up to here' could approve far more (or different) items than
the user expected.
- replace POST /approve-up-to/:id with POST /approve-batch { planIds: [...] }
— server only approves the plans the client lists, idempotent: skips
ids that are no longer pending, were already approved, or are noop
- ReviewColumn now builds visiblePlanIds in actual render order
(each movie's id, then every episode id of each series in series order)
and 'approve up to here' on any card sends slice(0, idx+1) of that list
- works the same for both PipelineCard (movie) and SeriesCard (whole series
through its last episode)
Extract a ColumnShell component so all four columns share the same flex-1
basis-0 width (no more 24/16/18/16 rem mix) and the same header layout
(title + count + optional action button on the right).
Per-column actions:
- Review: 'Skip all' → POST /api/review/skip-all (new endpoint, sets all
pending non-noop plans to skipped in one update)
- Queued: 'Clear' → POST /api/execute/clear (existing; cancels pending jobs)
- Processing: 'Stop' → POST /api/execute/stop (new; SIGTERMs the running
ffmpeg via a tracked Bun.spawn handle, runJob's catch path
marks the job error and cleans up)
- Done: 'Clear' → POST /api/execute/clear-completed (existing)
All destructive actions confirm before firing.
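The Stop action's process tracking can be sketched as below. This is a sketch under stated assumptions: ProcHandle, trackProc, and stopCurrentJob are hypothetical names, and the real code holds the handle returned by Bun.spawn rather than an interface.

```typescript
// Minimal interface over whatever spawn handle the runner keeps.
interface ProcHandle {
  kill(signal?: string): void;
}

let currentProc: ProcHandle | null = null;

function trackProc(p: ProcHandle): void {
  currentProc = p;
}

function clearProc(): void {
  currentProc = null;
}

// POST /api/execute/stop calls this. SIGTERM lets ffmpeg exit; runJob's
// catch path then marks the job errored and cleans up. Returns false when
// nothing is running.
function stopCurrentJob(): boolean {
  if (!currentProc) return false;
  currentProc.kill("SIGTERM");
  return true;
}
```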
The pipeline endpoint returned every pending plan (no LIMIT) while the audio
list capped itself at 500; that unbounded query alone was the main source of
lag. SSE compounded it: every job_update (which fires per line of running
ffmpeg output) re-ran the entire endpoint and re-rendered every card.
- review query: LIMIT 500 + a separate COUNT for reviewTotal; column header
shows 'X of Y' and a footer 'Showing first X of Y. Approve some to see
the rest' when truncated
- doneCount: split the OR-form into two indexable counts (is_noop + done&!noop),
added together — uses idx_review_plans_is_noop and idx_review_plans_status
instead of full scan
- pipeline page: 1s debounce on SSE-triggered reload so a burst of
job_update events collapses into one refetch
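The reload debounce can be sketched as a plain helper (names are illustrative; the real page wires it to its refetch):

```typescript
// Trailing-edge debounce: a burst of calls within `ms` collapses into one
// invocation of fn, so a storm of job_update SSE events triggers a single
// refetch.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage (illustrative):
// const reload = debounce(refetchPipeline, 1000);
// eventSource.addEventListener("job_update", reload);
```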
Subtitle extraction lives only in the pipeline now; a file is 'done' when it
matches the desired end state — no embedded subs AND audio matches the
language config. The separate Extract page was redundant.
- delete src/routes/review/subtitles/extract.tsx + SubtitleExtractPage
- delete /api/subtitles/extract-all + /:id/extract endpoints
- delete buildExtractOnlyCommand + unused buildExtractionOutputs from ffmpeg.ts
- detail page: drop Extract button + extractCommand textarea, replace with
'will be extracted via pipeline' note when embedded subs present
- pipeline endpoint: doneCount = is_noop OR status='done' (a file in the
desired state, however it got there); UI label 'N files in desired state'
- nav: drop the now-defunct 'Extract subs' link; default activeOptions.exact
to false so detail subpages (e.g. /review/audio/123) highlight their
parent ('Audio') in the menu (this mismatch was why the menu felt broken)
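The doneCount predicate described above (is_noop OR status='done') can be sketched as a plain function; the Plan shape and status values here are illustrative:

```typescript
// A file is 'in desired state' when its plan is a noop (the file already
// matches the desired end state) OR the plan actually ran to completion,
// however it got there.
type Plan = {
  isNoop: boolean;
  status: "pending" | "approved" | "done" | "error" | "skipped";
};

const inDesiredState = (p: Plan): boolean => p.isNoop || p.status === "done";
```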
The nav exposed only a subset of pages; Dashboard, Audio review, Subtitle
extract, Jobs, and Paths were reachable only by URL. Add links for every
top-level route:
- left: Dashboard, Scan, Pipeline, Audio, Extract subs, Subtitle mgr, Jobs
- right: Paths, Settings
Split the two subtitle pages explicitly (Extract subs = per-item extraction
queue, Subtitle mgr = language summary + title harmonization) so their
distinct purpose is visible from the nav instead of hidden under one label.
The prod minified bundle crashed with "can't access lexical declaration 'o'
before initialization": flush was memoized with stopFlushing in its deps and
stopFlushing with flush in its deps, a circular dependency. In dev this still
worked (the unminified output happened to initialize both callbacks before
either ran), but Vite's minifier emitted the declarations in an order that
tripped the temporal dead zone.
Extract the interval-clearing into a plain inline helper (clearFlushTimer)
that both flush and stopFlushing call. flush no longer depends on
stopFlushing; the cycle is gone.
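The restructure can be shown framework-free (the real code uses useCallback; the names clearFlushTimer/flush/stopFlushing come from the commit, the bodies are illustrative). The point is that both functions call a plain helper instead of each other, so there is no cycle for the minifier to reorder:

```typescript
function makeFlusher(doFlush: () => void) {
  let timer: ReturnType<typeof setInterval> | undefined;

  // Plain helper that both flush and stopFlushing call; it belongs to
  // neither, which breaks the flush <-> stopFlushing dependency cycle.
  const clearFlushTimer = () => {
    if (timer !== undefined) {
      clearInterval(timer);
      timer = undefined;
    }
  };

  const flush = () => {
    clearFlushTimer();
    doFlush();
  };

  const stopFlushing = () => {
    clearFlushTimer();
  };

  const startFlushing = (ms: number) => {
    clearFlushTimer();
    timer = setInterval(doFlush, ms);
  };

  return { flush, stopFlushing, startFlushing };
}
```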
Gitea's act runner re-clones every referenced GitHub action on every build
because it has no action cache. docker/setup-buildx-action alone was taking
~2 minutes to clone before the build even started.
buildx is already bundled in gitea/runner-images:ubuntu-latest, so call
'docker buildx build --push' directly with --cache-from/--cache-to pointing
at a registry buildcache tag. Keeps the layer caching benefit, skips the
action-clone tax entirely.
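A sketch of the direct invocation, as a config fragment with illustrative image and registry names:

```shell
# buildx already ships in gitea/runner-images:ubuntu-latest, so no setup
# action needs to be cloned. The buildcache tag keeps layer caching across
# runs; mode=max also caches intermediate stages.
docker buildx build \
  --push \
  --tag registry.example.com/app:latest \
  --cache-from type=registry,ref=registry.example.com/app:buildcache \
  --cache-to type=registry,ref=registry.example.com/app:buildcache,mode=max \
  .
```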
Root cause of 6+ min builds: Dockerfile stage 1 ran 'npm install' with no
package-lock.json, so every build re-resolved + re-fetched the full npm tree
from scratch on a fresh runner.
- Dockerfile: replace node:22-slim+npm stage with oven/bun:1-slim; both
stages now 'bun install --frozen-lockfile' against the tracked bun.lock;
--mount=type=cache for the bun install cache
- workflow: switch to docker/build-push-action with registry buildcache
(cache-from + cache-to) so layers persist across runs
- dockerignore: add .worktrees, docs, tests, tsbuildinfo so the build context
ships less
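The bun-based stages can be sketched as a config fragment (stage layout, build script, and cache path are illustrative; bun's install cache defaults to ~/.bun/install/cache):

```dockerfile
FROM oven/bun:1-slim AS build
WORKDIR /app
# Copy only the manifests first so the install layer caches on lock changes.
COPY package.json bun.lock ./
RUN --mount=type=cache,target=/root/.bun/install/cache \
    bun install --frozen-lockfile
COPY . .
RUN bun run build

FROM oven/bun:1-slim
WORKDIR /app
COPY --from=build /app ./
CMD ["bun", "run", "start"]
```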
- split tsconfig.json into project references (client + server) so bun-types
and DOM types don't leak into the other side; server now resolves Bun.*
without diagnostics
- client tsconfig adds vite/client types so import.meta.env typechecks
- index.tsx spa fallback: use async/await + c.html(await …) instead of
returning a Promise of a Response, which Hono's Handler type rejects
- subtitles normalize-titles: narrow canonical to string|null (Map.get
widened to include undefined)
- execute: actually call isInScheduleWindow/waitForWindow/sleepBetweenJobs
in runSequential (they were dead code); emit queue_status SSE events
(running/paused/sleeping/idle) so the pipeline's existing QueueStatus
listener lights up
- review: POST /:id/retry resets an errored plan to approved, wipes old
done/error jobs, rebuilds command from current decisions, queues fresh job
- scan: dev-mode DELETE now also wipes jobs + subtitle_files (previously
orphaned after every dev reset)
- biome: migrate config to 2.4 schema, autoformat 68 files (strings +
indentation), relax opinionated a11y/hooks-deps/index-key rules that don't
fit this codebase
- routeTree.gen.ts regenerated after /nodes removal
- analyzer: rewrite checkAudioOrderChanged to compare actual output order,
unify assignTargetOrder with a shared sortKeptStreams util in ffmpeg builder
- review: recompute is_noop via full audio removed/reordered/transcode/subs
check on toggle, preserve custom_title across rescan by matching
(type,lang,stream_index,title), batch pipeline transcode-reasons query to
avoid N+1
- validate: add lib/validate.ts with parseId + isOneOf helpers; replace bare
Number(c.req.param('id')) with 400 on invalid ids across review/subtitles
- scan: atomic CAS on scan_running config to prevent concurrent scans
- subtitles: path-traversal guard — only unlink sidecars within the media
item's directory; log-and-orphan DB entries pointing outside
- schedule: include end minute in window (<= vs <)
- db: add indexes on review_plans(status,is_noop), stream_decisions(plan_id),
media_items(series_jellyfin_id,series_name,type),
media_streams(item_id,type), subtitle_files(item_id), jobs(status,item_id)
- remove nodes table, ssh service, nodes api, NodesPage route
- execute.ts: local-only spawn, atomic CAS job claim via UPDATE status
- wrap job done + subtitle_files insert + review_plans status in db
transaction
- stream ffmpeg output per line with 500ms throttled flush
- bump version to 2026.04.13
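The lib/validate.ts helpers mentioned above can be sketched like this (signatures are illustrative, not the actual module):

```typescript
// parseId replaces bare Number(c.req.param('id')): invalid input yields
// null (which routes turn into a 400) instead of NaN leaking into SQL.
function parseId(raw: string | undefined): number | null {
  if (raw === undefined || !/^\d+$/.test(raw)) return null;
  const n = Number(raw);
  return Number.isSafeInteger(n) && n > 0 ? n : null;
}

// isOneOf narrows a query/body string to a known literal union.
function isOneOf<T extends string>(
  value: string,
  allowed: readonly T[],
): value is T {
  return (allowed as readonly string[]).includes(value);
}

// Route usage (Hono-style, illustrative):
// const id = parseId(c.req.param("id"));
// if (id === null) return c.json({ error: "invalid id" }, 400);
```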
After a scan completes, show a "Review in Pipeline →" link next to the
status label. Nav already included the Pipeline entry from a prior task.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- server-side filter + LIMIT 200 + totalCounts on GET /api/execute
- shared FilterTabs component with status-colored active tabs
- execute page: filter tabs, SSE live count updates, module-level cache
- replace inline tab pills in AudioListPage, SubtitleListPage with FilterTabs
- fix buildExtractOnlyCommand: skip -map 0:a when no audio streams exist
- bump version
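The -map guard in the buildExtractOnlyCommand fix can be sketched as follows (argument shape and function name are illustrative; ffmpeg errors out on a -map selector that matches no streams, while the ? suffix makes a selector optional):

```typescript
// Only emit -map 0:a when the probe actually found audio streams.
function extractOnlyMapArgs(opts: {
  audioStreams: number;
  subtitleStreams: number;
}): string[] {
  const args = ["-map", "0:v?"]; // '?' keeps video only if present
  if (opts.audioStreams > 0) args.push("-map", "0:a");
  if (opts.subtitleStreams > 0) args.push("-map", "0:s");
  return args;
}
```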
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>