義剛 GIGO
AI projects that get better every session, not worse.
You describe what you’re building. GIGO researches the best practitioners in that field, builds you an expert team, and keeps your project sharp as it grows.
claude install @croftspan/gigo

The problem everyone hits
You set up a Claude Code project. Add some rules. Things work great. Next session you add more. Fix a gotcha. Add a convention.
A week later your CLAUDE.md is 200 lines. Some rules contradict each other. Some are stale. Some just repeat what Claude already knows. Your AI output starts getting worse.
More rules should help, right? Research shows the opposite: bloated context files reduce success rates by 20%+ while increasing cost.
What changes
With GIGO, your project improves every session instead of rotting. Specs come out good enough that workers nail it on the first pass. Reviews find almost nothing because the upstream process already caught it.
Validated in controlled experiments across software and fiction. Battle-tested in production across crypto ops, game design, creative writing, and its own pipeline.
Same skill, different expert teams.
How it works
Your project stays lean
This is the problem GIGO was built to solve. Most AI setups grow out of control. Every session adds rules, nothing gets removed. Within weeks your context files are hundreds of lines of overlapping, outdated guidance that makes output worse, not better.
GIGO keeps it lean. Rules that apply to every conversation stay short (under 60 lines each). Deep knowledge loads only when relevant. Zero cost when unused.
At the end of every session, The Snap audits your project: removes what’s stale, merges what overlaps, enforces line budgets. Your project gets sharper over time, not bigger. See how it works →
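The line-budget part of that audit can be sketched in a few lines. This is a minimal illustration of the idea, not GIGO's implementation: it only flags files over budget, while the real Snap also removes stale rules and merges overlapping ones. The 60-line budget comes from the text above; the `*.md` glob is an assumption about where rule files live.

```python
from pathlib import Path

LINE_BUDGET = 60  # per-file budget for always-loaded rule files

def audit(root: str) -> list[tuple[str, int]]:
    """Return (path, line_count) for context files over the line budget,
    largest offenders first. Illustrative sketch only."""
    over = []
    for path in Path(root).rglob("*.md"):  # assumed location of rule files
        n = sum(1 for _ in path.open(encoding="utf-8"))
        if n > LINE_BUDGET:
            over.append((str(path), n))
    return sorted(over, key=lambda t: -t[1])
```

Run against a project root, this surfaces exactly the files that have started to bloat, which is the signal The Snap acts on.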
Built with GIGO
We use GIGO to build GIGO. Not as a demo. As the way we work.
The review system assumed code. A children’s novel would get reviewed for deadlocks. So we used the pipeline to fix the pipeline. The Challenger found a runtime blocker in its own spec. The fact-checker caught issues in its own design brief. All caught before shipping.
Every improvement to the review system was reviewed by the review system. The tool improves because we depend on it, and you get something we trust enough to bet our own output on.
You can check every claim
100 blinded comparisons across software and fiction. Even the criterion that only measures output quality, ignoring everything else, shows 95%. Every number traces to a specific experiment. Every paper links to the source so you can verify it yourself.
Built on Gloaguen et al. and Hu et al. Read the full research →
Seven skills
| Skill | What it does |
|---|---|
| gigo | Builds your expert team from scratch |
| gigo:blueprint | Turns ideas into specs and implementation plans |
| gigo:execute | Runs plans with agent teams. Workers get the spec, not the rules |
| gigo:verify | Two-stage review: spec compliance + output quality |
| gigo:snap | Session-end audit. Projects get sharper, not bigger |
| gigo:retro | Turns session friction into project improvements. Your next session is smoother |
| gigo:maintain | Adds expertise, audits for bloat, upgrades setups |
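A typical session might chain the skills above in order. The exact invocation syntax is an assumption (it depends on how your Claude Code version exposes plugin skills); the skill names are from the table:

```
gigo              # first run: describe your project, get an expert team
gigo:blueprint    # turn an idea into a spec and an implementation plan
gigo:execute      # run the plan with an agent team
gigo:verify       # two-stage review of what the workers produced
gigo:snap         # end the session with an audit so the project stays lean
```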
Why “GIGO”?
義剛 (Gigo), pronounced ghee-goh (義 gi, righteousness + 剛 gō, strong).
In computer science, GIGO means “Garbage In, Garbage Out.”