Fork the Vote: Does the Line Work?
The Premise
The client’s Stage 1 response:
“it’s a family voting app for Portland dining month - it does family-based voting which nobody else has, specific for Portland dining month. The user comes in, they can see the venues, they can search and sort and vote, and then we get the combined scoring so we can decide where to go to dinner. Hopefully takes just a couple minutes to sort through a hundred different restaurants and pick which ones you like.”
The real motivation came out in Stage 4: “I need it by this weekend so we can all vote and not get into family arguments.”
Six family members. Scattered across different locations. One month of Portland restaurant events. A 48-hour deadline. Zero budget.
What came out the other end: a deployed web application. Scraped restaurant data from the PDX Dining Month site. Per-user voting with a combined scorecard. Cloudflare Pages, free tier, no backend to maintain. Two builds - one that failed, one that worked.
No human wrote a line of application code. The client’s only input was a 15-minute conversation.
The Real Experiment
The question was never “can we build a voting app.” Voting apps are not interesting.
The question was: can a person describe what they want out loud, with no brief, no spec, no wireframes - just a conversation - and have a deployed web application come out the other end?
That is the experiment. Fork the Vote was the test case.
The Throughline Interview ran: seven stages, voice transcription, structured intake. When the interview completed, a generation pipeline produced a config manifest, architecture document, workflow plan, database schema, UI mockups, code packet, and consistency audit. A build agent took those documents and deployed a working application to Cloudflare Pages.
The experiment asked: does the line work?
How It Works
The interview. Seven stages covering product definition, users, blast radius, 1.0 scope, process contract, technical constraints, and quality policy. The client spoke. The interviewer extracted. When the client said “I need it by this weekend so we can all vote and not get into family arguments,” that deadline and that motivation went directly into the config manifest.
The pipeline. Architecture resolves ambiguities first. Workflow is produced from the finalized architecture. Code packet implements the finalized workflow. One authoritative chain of decisions - no parallel re-interpretation of the same source transcript.
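The chain described above can be sketched in a few lines. This is a minimal illustration, not the production pipeline: the stage functions and `Artifact` type are hypothetical stand-ins for the real model calls. The point it demonstrates is structural - only the first stage ever reads the transcript, so every downstream artifact inherits one authoritative set of decisions instead of re-interpreting the source.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    content: str

def resolve_architecture(transcript: str) -> Artifact:
    # The only stage that reads the raw transcript.
    # All ambiguities are resolved here, exactly once.
    return Artifact("architecture", f"decisions derived from: {transcript}")

def derive_workflow(architecture: Artifact) -> Artifact:
    # Consumes the finalized architecture, not the transcript.
    return Artifact("workflow", f"plan built on {architecture.name}")

def derive_code_packet(workflow: Artifact) -> Artifact:
    # Consumes the finalized workflow, not the transcript.
    return Artifact("code_packet", f"implementation of {workflow.name}")

def run_pipeline(transcript: str) -> list[Artifact]:
    arch = resolve_architecture(transcript)
    flow = derive_workflow(arch)
    code = derive_code_packet(flow)
    return [arch, flow, code]
```

The first build’s failure mode, in these terms, was three functions that each took `transcript` as their argument.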
The first build failed. The frontend didn’t render: a Tailwind CDN script leaked into the visible DOM as raw text. The API was fully functional. Zero users could see it. Five of six user names were placeholders - the interview captured “six people” but never asked who they were. Member2 through Member6 in the database. No real person logs in as Member5.
The deeper structural failure: every artifact in the first pipeline run was produced independently from the raw transcript. Architecture, workflow, and code packet each re-interpreted the same ambiguous source and made different decisions. Same interview. Three irreconcilable outputs.
This is not a writing quality problem. It is a factory design problem. The line was running parallel processes that should have been sequential.
Two fixes. The prompt chain was linearized: architecture first, workflow from architecture, code from workflow. A configuration manifest was added as the first deliverable - it collects every concrete operational value before any design artifact is generated. User names, external URLs, auth decisions, open items. If a value is not collected during the interview, it surfaces as an explicit flag, not a silent placeholder in deployed code.
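The manifest’s blocking behavior is the load-bearing part. A minimal sketch, with hypothetical keys and a sentinel for uncollected values - the real manifest’s fields and format are not specified in this post:

```python
from dataclasses import dataclass, field

MISSING = object()  # sentinel: this value was never collected in the interview

@dataclass
class ConfigManifest:
    items: dict = field(default_factory=dict)

    def set(self, key, value):
        self.items[key] = value

    def open_flags(self) -> list:
        # Any item without a concrete value is an explicit flag,
        # never a silent placeholder.
        return [k for k, v in self.items.items() if v is MISSING]

    def start_build(self):
        flags = self.open_flags()
        if flags:
            # The line cannot proceed past a blocker it knows about.
            raise RuntimeError(f"build blocked, open items: {flags}")
        return "build started"

manifest = ConfigManifest()
manifest.set("member_count", 6)
manifest.set("member_names", MISSING)  # "six people", but no names captured
manifest.set("hosting", "Cloudflare Pages free tier")
```

Under this sketch, the first build’s Member2-through-Member6 outcome becomes impossible: `start_build()` raises on the open `member_names` flag instead of shipping placeholders.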
The second build worked.
The Numbers
- Interview: 7 stages, ~15 minutes, under $0.30 in model costs
- Deliverables: Config Manifest, Draft Summary, Rough Sketch, Full Architecture, WORKFLOW.md, Code Packet, Mockups, Consistency Audit
- Human code written: zero
- Human input after the interview: zero
- Deficiencies diagnosed: 8, with specific root causes documented
- Line changes made: 2 structural, both validated by the second build
What We Learned
The interview missed things. The client said “six people” and never named them. The interviewer accepted that and moved on. The pipeline had no mechanism to surface the gap, so it filled it with placeholders. Member5 went to production.
The fix is not “ask better questions in the moment.” The fix is a config manifest that makes every open item explicit before the build starts. An item without a value becomes a flag, not a guess. The line cannot proceed past a blocker it does not know exists.
The other thing: scope creep, properly channeled, is a feature. The original ask was a working app for the weekend. The second build added everything the first build should have had. None of those additions was out of scope - they were just things the interview hadn’t surfaced yet. The interview continued. The build reflected it.
The Throughline
Ford did not care about any individual car. He cared whether the line worked, and when it did not, he fixed the line. Every deficiency report, every post-mortem, every process change is the work. The products are evidence that the work is progressing.
Fork the Vote answered the experiment’s question: yes, the line works. Here is precisely where it did not, and here is what we changed.
The interview captured “six family members, Portland restaurants, no arguments at dinner.” The factory turned that into a deployed application in 48 hours for $0.30. The gap between “describe the problem” and “here is the solution” is the variable being driven toward zero.
Which raises the next question: if a professional pipeline can take a spoken description and produce a deployed app, what happens when you compress the entire thing to a single prompt box and hand it to anyone? Not a developer. Not a client with a 15-minute interview. Anyone.
That question is Boopadoop.
Current Status
Both builds are live for direct comparison. The first build demonstrates what a pipeline design failure looks like in production. The second demonstrates what the fix produces. The interview that generated them is the foundation for everything the pipeline runs next.