// A UX designer orchestrates five AI agents through Claude Code to ship a portfolio site and design system. CLAUDE.md defines the rules. Hooks enforce them. The designer makes every call.
Research layer — upstream of all other agents:
If an edge case isn't mapped, it will break in production. James documents every path through the product — happy path first, then every branch, error, and recovery. His output feeds Sona, Kai, and Miso.
Core team:
Sona spots spacing inconsistencies before anyone else does. Her UX reviews come with exact file:line references, priority tags, and implementation guidance — not vague suggestions.
Miso spots AI-generated text at a glance. If your copy sounds like a committee approved it, she rewrites it. Parallel-structure addiction, filler phrases, generic adjectives — she catches them all.
The best code is the code you don't write. When Sona sends specs, Kai implements them exactly — or explains why not. Every implementation triggers auto-QA. No exceptions.
QA FAIL = FAIL. Hana catches token misuse that a linter would miss — she knows the difference between a DS override leaking through and a deliberately applied style. No "we'll fix it later."
A real session: I got copy feedback from an external AI on a process page. I called Miso to analyze it. Miso used sequential thinking to break down each suggestion, flagged AI-pattern structures in the feedback, and recommended rejecting most of it. I agreed with some points, disagreed with others, and implemented directly. The preview hook caught a type error before I saw it.
No pipeline, no flow chart. I called the agent I needed and moved on. The harness made that possible — a team that moves faster than I can alone.
Not one file — a layered stack that loads before the first prompt.
CLAUDE.md is the root — responsibility boundaries, collaboration flows, shared standards, conflict resolution. .claude/agents/ holds agent definitions — personas, judgment frameworks, output formats. Rules go in .claude/rules/ (diagram conventions, spacing enforcement). Handover files carry session context: what was done, what failed, what to pick up next. Plans and memory round out the stack.
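As a sketch, the stack described above might sit on disk like this (the individual agent and handover file names are illustrative, not the project's actual files):

```
project/
├── CLAUDE.md          # root: boundaries, flows, standards, conflict resolution
└── .claude/
    ├── agents/        # one definition per agent: persona, judgment, output format
    │   ├── sona.md
    │   └── kai.md
    ├── rules/         # enforced conventions (diagrams, spacing)
    ├── memory/        # persists across sessions
    └── handover.md    # what was done, what failed, what to pick up next
```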
Different project, different files — COLLABORATION.md, ARCHITECTURE.md, whatever fits. The structure changes. What doesn't: every .md Claude reads at startup constrains behavior before the conversation starts.
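For instance, one of those startup-loaded files — an agent definition — might look like this. The frontmatter fields follow Claude Code's subagent format (name, description, tools); the persona content below is an illustrative reconstruction, not the project's actual file:

```markdown
---
name: hana
description: QA gate. Runs after every implementation; checks styles against the design system.
tools: Read, Grep, Glob
---

You are Hana. QA FAIL = FAIL.
- Verify every style value resolves to a DS token or a documented override.
- Flag margin-based spacing; the spacing rule is enforced, not advisory.
- Report findings as file:line, severity, and a concrete fix.
```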
This project's files are written in Korean because that's how I think. The agents read them in Korean and respond in the language I use. It's not a translated framework — it's how I actually work.
The original workflow had one gate: QA after implementation. We learned the hard way that catching problems after code is written is expensive. Diagram coordinates were wrong, layouts overlapped, patterns were ignored — all things that a spec review would have caught before a single line was implemented.
Now there are two gates. Sona validates specs before Kai touches code: state coverage, pattern conflicts, edge cases, exact values and file references. If the spec isn't ready, it goes back. After implementation, Hana runs QA as before. The first gate stops bad specs before they become code. The second stops bad code before it ships.
CLAUDE.md says "auto-QA after every implementation." That's a rule. The preview hook that fires after code is edited — that's enforcement. Rules get skipped. Hooks don't.
Preview hooks verify browser changes automatically — no manual trigger. settings.json defines file access and permission boundaries per agent. MCP connections extend reach: Preview for browser checks, Chrome for live interaction, Context7 for docs, Sequential Thinking for structured analysis.
When Kai writes code and a preview server is running, the hook fires. Doesn't ask permission, doesn't wait for a command. It checks for errors, verifies rendering, reports back.
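Assuming Claude Code's hooks schema in settings.json, the post-edit preview check could be wired up roughly like this — a PostToolUse hook matched to edit tools. The `check-preview.sh` script name is hypothetical:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/check-preview.sh" }
        ]
      }
    ]
  }
}
```

Because the hook is bound to the tool event itself, there is no step the agent can skip: the command runs every time a matching edit lands.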
I don't consult a flow chart before deciding which agent to call. I know my team. I call who I need, when I need them.
The harness doesn't orchestrate me — I orchestrate through it. When external feedback says "add stateless/stateful framing," I call Miso to analyze. Miso's analysis gave me data. I made the call. We moved on.
Every mistake becomes a new rule. Margin-based spacing broke a layout — so we added a spacing enforcement rule to CLAUDE.md and Hana's checklist. A DS token leaked into the portfolio theme — separation rule added. The harness gets tighter with every failure. Memory files persist across sessions through .claude/memory/ — so the next conversation starts where this one left off.
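A handover file of the kind described — session context carried through .claude/memory/ — can be as plain as a few lines (contents illustrative, drawn from the failures above):

```markdown
# Handover
Done: spacing enforcement rule added to CLAUDE.md and Hana's checklist.
Failed: DS token leaked into the portfolio theme; separation rule added.
Next: re-run Hana's QA on the affected pages.
```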