Built with GIGO
How the pipeline builds itself. And what that means for your project.
The problem we caught in our own pipeline
GIGO claims to work for any domain. Software, children’s novels, tabletop board games, research papers. But during a live pipeline run, we realized the review system assumed code. The Challenger asked about “race conditions.” The spec reviewer wanted “file:line references.”
A children’s novel project would get reviewed for deadlocks.
That’s not domain-agnostic. That’s broken. So we used GIGO to fix GIGO.
The pipeline plans its own fix
We ran gigo:blueprint. Same pipeline you use. It entered plan mode, explored the existing review templates, found 40+ code-specific terms across three files, and designed the fix: neutral defaults with a {DOMAIN_CRITERIA} injection point that fills from your project’s expertise.
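To make the injection point concrete, here is a minimal sketch of how a neutral-by-default template with a {DOMAIN_CRITERIA} slot could work. Every name in it (render_review_template, NEUTRAL_DEFAULTS) is invented for illustration; this is not GIGO's actual implementation.

```python
# Illustrative sketch only, not GIGO's real code. Neutral defaults apply
# until a project's domain expertise generates its own criteria.

NEUTRAL_DEFAULTS = [
    "Does every claim in the artifact trace to something in the project?",
    "Are all referenced files, sections, and names real and current?",
]

def render_review_template(template, domain_criteria=None):
    """Fill the {DOMAIN_CRITERIA} slot; fall back to neutral defaults
    when no project-specific criteria exist yet."""
    criteria = domain_criteria or NEUTRAL_DEFAULTS
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return template.replace("{DOMAIN_CRITERIA}", bullet_list)

template = "Review checklist:\n{DOMAIN_CRITERIA}"
novel_checklist = render_review_template(
    template, ["Is every clue planted 2+ chapters before payoff?"]
)
```

The same template renders a goroutine checklist for a Go project and a clue-planting checklist for a novel; only the injected criteria change.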
The design brief went to the fact-checker. Phase 4.25. The same fact-checker that validates your design briefs.
It found two issues before the brief was even approved. One was a hardcoded section name that would break when the templates changed. The other was an insertion point that would append after the final step instead of inserting between steps.
What this means for you: The fact-checker runs on every design brief. Yours included. It catches the assumptions you don’t know you’re making.
The Challenger catches a blocker in its own spec
The spec went to the Challenger. Independent adversarial review. Two passes: blind technical assessment first, then intent alignment. The Challenger doesn’t know why the spec was written. It just looks for what’s wrong.
It rated the spec 4/5 and flagged four issues. One of them was a constraint that would have crashed at runtime: the spec dispatched a subagent type that can’t run in plan mode. If we’d shipped that spec, the first user to trigger it would hit an error.
The Challenger caught it before a single line was written.
What this means for you: Your specs get the same review. Blind-first, honest scoring, real issues with evidence. Not a rubber stamp.
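The blind-first ordering matters enough to sketch. In this hypothetical toy version, the data shapes and the specific checks are invented; only the sequencing (a technical pass that never sees the intent, followed by an intent-alignment pass) mirrors what the article describes.

```python
# Toy sketch of a blind-first, two-pass review. Invented for illustration;
# not GIGO's actual Challenger.

from dataclasses import dataclass

@dataclass
class Spec:
    text: str
    intent: str  # why the spec exists; withheld from the first pass

def blind_technical_pass(spec_text):
    # Pass 1 receives only the text, never the intent, so findings
    # can't be softened by knowing what the author meant to do.
    issues = []
    if "plan mode" in spec_text and "subagent" in spec_text:
        issues.append("dispatches a subagent type that can't run in plan mode")
    return issues

def intent_alignment_pass(spec):
    # Pass 2 checks the spec against its stated purpose.
    if spec.intent.lower() not in spec.text.lower():
        return ["spec has drifted from its stated intent"]
    return []

def challenge(spec):
    # Blind first, then intent: pass-1 findings are locked in before
    # the reviewer learns why the spec was written.
    return blind_technical_pass(spec.text) + intent_alignment_pass(spec)
```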
The Challenger catches dangling references in its own plan
After the spec was approved, the implementation plan went through the same process. The Challenger found a dangling reference to a file that had been renamed in the spec. It also caught two code-specific phrases that would have survived the neutralization: “quality bar checklist” and “code in task steps” in a phase description.
Both were real. The dangling reference would have been a broken link. The surviving phrases would have been the only code-specific text left in the entire pipeline.
What this means for you: Your plans get checked against the actual project. References that don’t exist, names that don’t match, steps that won’t work. The Challenger reads the project before judging.
What shipped
Three review templates now work for any domain. A Go project gets “Does every goroutine have a shutdown path?” A children’s novel gets “Is every clue planted 2+ chapters before payoff?” A board game gets “Does the reward loop respect the session-length target?”
The criteria are generated automatically when your team is assembled. They’re regenerated when the team changes. The Snap audits them for staleness.
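One way to picture the regeneration trigger: key the criteria to a fingerprint of the assembled team, so a staleness audit can tell at a glance whether the team has changed since the criteria were generated. The function names and fingerprint scheme below are assumptions, not GIGO's actual mechanism.

```python
# Illustrative only: criteria keyed to team composition for staleness checks.

import hashlib
import json

def team_fingerprint(team):
    # Order-insensitive hash of the assembled team.
    canonical = json.dumps(sorted(team))
    return hashlib.sha256(canonical.encode()).hexdigest()

def criteria_are_stale(stored_fingerprint, current_team):
    # Criteria were generated for a specific team; if the team has
    # changed, the fingerprints diverge and the criteria regenerate.
    return stored_fingerprint != team_fingerprint(current_team)
```

Swap a concurrency reviewer for a plot editor and the fingerprint changes, flagging every previously generated criterion for regeneration.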
Different criteria, same rigor. That’s what domain-agnostic actually means.
What this tells you
Every improvement to the review system was reviewed by the review system. The fact-checker checked its own design brief. The Challenger challenged its own spec. The pipeline caught real bugs in its own redesign and fixed them before shipping.
We don’t use GIGO because we built it. We use it because it works. And every time we use it, it gets better.
This page is part of the proof. The Voice persona was expanded with three new authorities earlier in the same session. Those authorities are why every section on this page ends with what it means for you, not what it means for us. The team doesn’t just review the pipeline. It writes the words you’re reading.
If the tool catches bugs in its own redesign, what will it catch in yours?