cd ../process
devon@portfolio ~/process/product-agent-setup
[completed · Apr 2026]
cat README.md

# Multi-Product × Multi-Agent UX Setup

// 2 brands, 3 web properties, each with a dedicated UX agent. No LLM router agent — deterministic scope resolution maps requests to the right agent, and a requirement planner loads only the skill chain needed for the current design state.

# conditions

$ Figma MCP write-to-canvas (beta)

$ Shared skill pool (30 skills)

$ Skill chains loaded per-task, not at agent init

cat diagram.svg
[diagram: architecture overview with columns Shared / Portfolio, Brand A, Brand B, Figma MCP / Skills, Output; legend distinguishes direct lines, optional lines, and annotations]
cat steps.json
[01]

Token efficiency drives the architecture

The obvious setup: give every UX agent all 30 Figma skills, the full design system spec, brand guidelines, and every convention file. The agent has everything it needs. The context window is half gone before the first prompt.

With 2 brands and 3 web properties, each property having its own Figma file, DS tokens, and audience constraints, pre-loading everything is not viable. A client portal agent doesn't need admin portal component skills. A Brand A agent doesn't need Brand B's token naming rules. Loading irrelevant context wastes tokens and increases the chance of cross-contamination — the agent applies Brand B's spacing scale to Brand A because both are in context.

The solution is on-demand skill chain loading. A shared pool of 30 skills exists. When a design request arrives, two things happen: scope resolution determines which brand and property the request targets, and a requirement planner selects the right skill chain based on the current design state. The chain loads only the 3 to 6 skills relevant to the current task. The rest stays unloaded, so tokens stay available for actual work.
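The budget argument can be sketched in two lines of arithmetic. This assumes a uniform per-skill size derived from the setup's own figure of ~45K tokens for all 30 skills; real skills vary.

```python
# ASSUMPTION: uniform skill size, derived from ~45K tokens / 30 skills.
TOKENS_PER_SKILL = 45_000 // 30  # ~1.5K tokens per skill

def chain_cost(n_skills: int) -> int:
    """Approximate context cost of loading a chain of n skills."""
    return n_skills * TOKENS_PER_SKILL

print(chain_cost(30))  # preload everything: 45000 tokens
print(chain_cost(3))   # smallest chain:      4500 tokens
print(chain_cost(6))   # largest chain:       9000 tokens
```

Loading a chain instead of the pool costs roughly a tenth to a fifth of the preload-everything approach.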

[02]

Scope resolution — brand, property, agent

There's no separate LLM router agent sitting between the request and the UX agent. Instead, a lightweight deterministic resolver handles scope.

A request arrives targeting a specific product. The resolver maps it to a brand (A or B), then to a web property within that brand. Brand A has two properties — a client portal and an admin portal, each with its own UX agent. Brand B has one property — a client portal with one UX agent. Three properties, three agents.

Each UX agent is a .claude/agents/ file scoped to one property. It carries the Figma file URL, the brand's DS contract, and the audience constraints in its system prompt. When it receives a task, it loads only the skills it needs from the shared pool, executes via Figma MCP, and returns.
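A property-scoped agent file might look like the sketch below. The frontmatter fields follow the `.claude/agents/` convention; the name, URL placeholder, and constraint lines are hypothetical, not the production files.

```markdown
---
name: ux-agent-a
description: UX agent for the Brand A client portal. Design tasks on this property only.
---

You are the UX agent for Brand A's client portal. Scope is fixed:

- Figma file: https://www.figma.com/design/<FILE_KEY_A>/client-portal
- DS contract: brand-a/design-system.md (read before any canvas write)
- Audience: external clients; apply Brand A constraints only
- Load skills on demand from the shared pool; never preload all 30
```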

Brand resolution settles ownership: which property, which agent. Requirement resolution settles the execution plan: which skill chain to load. These are two distinct routing decisions, but neither needs a dedicated LLM agent. Adding one for a brand with a single property would spend a full agent context's worth of tokens on a decision with only one possible outcome.
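The resolver can be as small as a lookup table. A minimal sketch, with hypothetical route keys and agent names (the real mapping lives in config):

```python
# Deterministic scope resolver: a dict lookup, not an LLM.
ROUTES = {
    ("brand-a", "client-portal"): "ux-agent-a",
    ("brand-a", "admin-portal"):  "ux-agent-b",
    ("brand-b", "client-portal"): "ux-agent-c",
}

def resolve(brand: str, prop: str) -> str:
    """Map (brand, property) to its dedicated UX agent, failing loudly."""
    try:
        return ROUTES[(brand, prop)]
    except KeyError:
        raise ValueError(f"no agent scoped to {brand}/{prop}") from None
```

Failing loudly on an unknown pair matters: a silent fallback would route a Brand B request to a Brand A agent, which is exactly the cross-contamination the architecture exists to prevent.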

[diagram: product portfolio of 2 brands, 3 properties. Brand A and Brand B each pair a design system Figma file with a design file. A shared pool of 30 skills feeds a requirement planner that selects a chain per incoming task: Chain A (DS Extraction), Chain B (Screen Create), Chain C (Full Build), or Chain D (Maintain). Chains load into UX Agent A (Brand A client portal), UX Agent B (Brand A admin portal), and UX Agent C (Brand B client portal), all executing via LLM + Figma MCP write-to-canvas]
[03]

Shared skill pool — four chains

30 Figma skills sit in a shared pool. No agent owns them. Which skills get loaded depends on a single question: what does the current design state look like?

The answer maps to a 2×2 matrix. One axis: does a design system already exist for this property? The other: do screens already exist?

Chain A — DS Extraction. Screens exist but the DS does not. The agent extracts tokens from existing screens, builds a variable system, and generates DS rules. Five skills: get-variable-defs, search-design-system, generate-library, create-ds-rules, audit-tokens.

Chain B — Screen Creation. DS exists, screens do not. Search the library, assemble screens from existing components, verify token bindings. Three skills: search-design-system, generate-screen, audit-tokens.

Chain C is the heaviest — Full Build. Nothing exists yet. Build the DS first, then generate screens from it. Six skills: figma-use, generate-library, create-ds-rules, generate-screen, audit-tokens, sync-states.

Chain D handles maintenance. Both DS and screens exist. Verify bindings, check for detached instances, confirm nothing drifted. Three skills: search-design-system, figma-use, audit-tokens.

Chain D loads 3 skills (~5K tokens). Chain C loads 6 (~9K tokens). Loading all 30 would cost ~45K tokens — most of the context window gone before the first prompt.
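The 2×2 matrix reduces to a four-entry map. A sketch of the planner's selection step (the chain contents match the four chains described above; the selector function itself is illustrative):

```python
# (ds_exists, screens_exist) -> (chain letter, skills to load)
CHAINS = {
    (False, True):  ("A", ["get-variable-defs", "search-design-system",
                           "generate-library", "create-ds-rules", "audit-tokens"]),
    (True,  False): ("B", ["search-design-system", "generate-screen",
                           "audit-tokens"]),
    (False, False): ("C", ["figma-use", "generate-library", "create-ds-rules",
                           "generate-screen", "audit-tokens", "sync-states"]),
    (True,  True):  ("D", ["search-design-system", "figma-use",
                           "audit-tokens"]),
}

def select_chain(ds_exists: bool, screens_exist: bool):
    """Return (chain letter, skills) for the current design state."""
    return CHAINS[(ds_exists, screens_exist)]
```

Because both inputs are booleans, the map is total: every design state resolves to exactly one chain, with no LLM judgment involved.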

| Case | Design state | Skill chain loads | Agent |
|------|--------------|-------------------|-------|
| 1 | DS missing + screens exist | DS Extraction (5 skills): get-variable-defs → search-design-system → generate-library → create-ds-rules → audit-tokens | UX Agent A |
| 2 | DS exists + screens missing | Screen Creation (3 skills): search-design-system → generate-screen → audit-tokens | UX Agent B |
| 3 | DS missing + screens missing | Full Build (6 skills): figma-use → generate-library → create-ds-rules → generate-screen → audit-tokens → sync-states | UX Agent C |
| 4 | DS exists + screens exist | Maintain (3 skills): search-design-system → figma-use → audit-tokens | UX Agent A |

All 30 skills ≈ 45K tokens; each chain loads 3-6 skills, ~5-9K tokens.
[04]

How agents get their DS context

The canonical source of truth for each brand's design system lives in Figma — variables, components, modes. But LLM agents can't read Figma variables directly as structured context. So each brand maintains a set of .md specification files that serve as agent-readable contracts, synchronized from the Figma source.

The structure per brand: design-system.md at the root. foundations/ with color.md, typography.md, spacing.md. components/ with button.md, input.md, card.md — each defining variants, states, and usage rules.

The agent reads its own brand's specs — never another brand's. If the Figma source changes, the .md contracts get updated. If a state is undefined in the contract, the agent flags it instead of inventing one. The contract prevents drift, but only as long as it stays synchronized with the canonical Figma source.
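A component contract in this style might look like the following sketch. The variant, state, and token names are illustrative placeholders, not the production contract:

```markdown
# components/button.md

Variants: primary | secondary | ghost
States: default | hover | pressed | disabled | focus

Rules:
- Bind tokens only (e.g. color/action/primary); never hard-code hex values.
- If a state is not listed above, flag it; do not invent one.
- Regenerate this file whenever the Figma variables change.
```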

[05]

LLM + Figma MCP and realistic constraints

All UX agents connect through a single Figma MCP server. The server is link-based — each agent targets a different Figma file by URL. No server reconfiguration between properties. Logical access scoping is enforced by agent-to-file mapping in the agent's system prompt.

The critical tools: use_figma writes anything the Plugin API supports, including frames, variables, auto-layout, and component variants. search_design_system helps detect existing reusable components, though duplicate prevention also depends on naming validation and audit checks. The full set provides 16 tools (7 write, 9 read), documented in the Figma MCP docs.

In the current implementation: use_figma returns a maximum of about 20KB per call — large screen generation must be chunked. No image or asset import through MCP — asset-heavy components are templated in Figma first. Custom fonts are not yet supported through MCP. All agents share one OAuth identity — a known operational risk mitigated by file allowlists and write confirmation guards, with per-agent service identity as a future improvement.

These are the boundaries I hit while testing this flow. Every constraint is encoded in the shared skill pool as a guard — a skill that says "chunk operations over 15KB" or "check for existing template before generating" prevents the agent from hitting the wall at runtime.
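The chunking guard, for example, can be sketched as a greedy packer that splits a write payload before it reaches the per-call ceiling. The 15K threshold and all names here are assumptions for illustration, not the Figma MCP API:

```python
import json

# ASSUMPTION: threshold and names are illustrative. The guard stays
# below the observed ~20KB per-call cap with headroom to spare.
MAX_CHUNK_BYTES = 15_000

def chunk_payload(nodes: list[dict]) -> list[list[dict]]:
    """Greedily pack serialized nodes into ordered chunks under the limit."""
    chunks: list[list[dict]] = []
    current: list[dict] = []
    size = 0
    for node in nodes:
        n = len(json.dumps(node).encode("utf-8"))
        if current and size + n > MAX_CHUNK_BYTES:
            chunks.append(current)  # close the chunk before it overflows
            current, size = [], 0
        current.append(node)
        size += n
    if current:
        chunks.append(current)
    return chunks
```

Encoding the guard as a skill means the split happens before the MCP call, instead of the agent discovering the ceiling through a failed write at runtime.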

cat links.md