PDARR: The Factory Interviews Itself
The Premise
The Stage 1 response went like this:
“dude all I wanted to do was take these big ass video files that sonarr was downloading… I’m going to make my own goddamn pedar and make it open source and it’s going to be freaking easy and have a nice admin panel and it’s not going to be all tarted up with bullshit it’s just going to work”
That’s the complete product brief. No structured requirements. No feature matrix. One frustrated home server owner who’d been burned by Tdarr one too many times.
The app that came out the other end: a production-quality GPU-accelerated media transcoder. Go backend, React admin panel, hardware-agnostic encoding across Intel VAAPI, Apple VideoToolbox, and NVIDIA NVENC. File safety pipeline. Plex integration. A 29-entry decision log. Open source on GitHub.
The app is not the story.
The Real Experiment
Buried in Stage 5 of the interview:
“this is just like a project to see if this voice thing works as an interface for the interview”
That line is what PDARR actually is. The same pipeline being built to turn client problems into deployed software was turned on its own creator. You were the client. You were also building the factory. If the interview was broken, PDARR would expose it.
Real stakes: a misconfigured transcoder targeting home videos is a data loss event. Not a failed demo. Not a staging incident. Gone.
How It Works
The seven-stage interview produced a config manifest, a WORKFLOW.md with six build phases, UI mockups, and a code packet. An agent took those documents and built the application phase by phase.
- Phase 1: Go module scaffold - config, DB, scanner, transcoder, verifier, queue, logger.
- Phase 2: File safety - quarantine folder, disk space guard, restore CLI.
- Phase 3: HTTP API and SSE event stream.
- Phase 4: React admin panel - sandstone palette, five screens, live progress via SSE.
- Phase 5: Plex integration, macOS launchd support, VideoToolbox on the Mac Mini.
- Phase 6: Security review, README, CI/CD, one week of dogfood on the real library.
Every architectural decision that couldn’t be resolved from the transcript went into a decision log. Hardware detection strategy. Quarantine directory naming. Stdlib routing vs. gorilla/mux. When a ZFS permission bug surfaced during dogfood on the Proxmox host, it became entry #22.
On Docker, from Stage 6 of the interview: “no fucking Docker no Docker dependency no Docker anywhere I don’t even want it to fucking work with Docker even if the user wants it to.” Not in the README. Not in the docs. Not mentioned as an option. The constraint was absolute.
The Numbers
- Build phases: 6, from module scaffold to v1.0 tag
- Decision log entries: 29
- Encoders supported: 4 (VAAPI, VideoToolbox, NVENC, software fallback)
- Human application code written: zero
- Docker: never
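Four encoders imply a selection step somewhere between hardware detection and the transcode queue. A hedged sketch - `pickEncoder` and the preference order are invented for illustration; the real detection probes VAAPI device nodes, VideoToolbox, and NVENC at startup - shows the hardware-first, software-fallback shape:

```go
package main

import "fmt"

// preference is an illustrative ordering, not PDARR's actual policy.
var preference = []string{"nvenc", "videotoolbox", "vaapi"}

// pickEncoder returns the first hardware encoder the host reports,
// falling back to software encoding when none is available.
func pickEncoder(available map[string]bool) string {
	for _, enc := range preference {
		if available[enc] {
			return enc
		}
	}
	return "software"
}

func main() {
	// A Proxmox LXC with an Intel iGPU passed through:
	fmt.Println(pickEncoder(map[string]bool{"vaapi": true})) // vaapi
	// A host with no usable GPU at all:
	fmt.Println(pickEncoder(nil)) // software
}
```

The software fallback is what makes the pipeline hardware-agnostic: the same queue logic runs whether the host is the Proxmox LXC, the Mac Mini, or a box with no GPU at all.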
What We Learned
The deficiency report ran after the first build. Five feature gaps. Three process bugs.
The process bugs are the interesting part.
PROC-1 — Stage 4 abandoned. The client said “I’m kind of getting bored with this interview so let’s just move on with it” and the 1.0 scope was never locked. The agent inferred scope conservatively and got most of it right - but shipped quarantine backend without quarantine UI, because no mockup spec existed for it. The fix: Stage 4 is now non-optional. If scope isn’t locked, the output flags it explicitly rather than proceeding silently.
PROC-2 — The summary lost specificity. The interview captured “skip HEVC and AV1” in Stage 1. The closing summary wrote “configurable criteria (age, bitrate, codec).” The agent implemented HEVC skip - the obvious case - and missed AV1. One pass through a summarization prompt erased a direct requirement. The fix: structured requirements lists, not prose, so specific values survive into downstream prompts verbatim.
PROC-3 — Conversational requirements weren’t formalized. “Hey I want to transcode this file right now boom” was logged in summary prose but never converted to an explicit UI spec. No mockup for a manual file trigger meant no button for a manual file trigger. The API endpoint existed. The control didn’t. The fix: a UI feature extraction pass that converts vivid scenarios into a flat list of elements with their screen location.
The Throughline
Tdarr had been failing on the Proxmox server for months: overcomplicated, its useful features gated behind a subscription, and it had already crashed in production once. The standard response: find a better tool, or write a workaround.
Instead: describe the problem to the factory. Let the factory build the tool.
The result is a production application running on real hardware, transcoding a real media library, saving real disk space. The decision log has 29 entries because 29 things needed judgment calls. The deficiency report has three process findings that made every pipeline run after this one smarter.
You were your own first client. That’s a different kind of test than building something for someone else. If it breaks your stuff, that’s on the factory. No distance between the person building the process and the person living with the output.
The tool was almost incidental. What the factory learned from building it was not.
Current Status
PDARR v1.0 is live on the home server - running as a systemd daemon in a Proxmox LXC (Intel VAAPI) and as a launchd daemon on a Mac Mini (Apple VideoToolbox). Open source at github.com/danrichardson/pdarr.