Noam Almosnino Design Engineer

WPUI Lab

A real-component design editor with AI agents

Principal Designer / Design Engineer

Between a design tool and production, there’s a translation layer where intent gets lost, props get misread, and spacing gets eyeballed. I built WPUI Lab to remove that layer: a Figma-style editor that renders real WordPress React components instead of drawings of them. You drag a Button onto the canvas and it’s an actual @wordpress/components Button. Same React code, same props, same rendering. When a developer looks at the output, there’s nothing to re-interpret. The JSX is already written. The intended user is a designer building for the WordPress ecosystem who’s currently working in Figma and handing off specs a developer has to re-interpret into real components.

The editor

I built the canvas to work like a design tool. Select, drag, resize, nest. But everything on it is a live React component from the WordPress design system. A 12-column grid governs layout. Components snap to width presets (full, two-thirds, half, third, quarter) that map directly to CSS grid spans in the exported code. What you see is what the developer gets.
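The preset-to-span mapping can be sketched in a few lines. The preset names come from the text above; the function name and the exact CSS output are assumptions for illustration, not WPUI Lab's actual implementation:

```typescript
// Width presets from the editor, mapped onto a 12-column CSS grid.
// The span values are the straightforward 12-column arithmetic.
type WidthPreset = "full" | "two-thirds" | "half" | "third" | "quarter";

const GRID_SPANS: Record<WidthPreset, number> = {
  full: 12,
  "two-thirds": 8,
  half: 6,
  third: 4,
  quarter: 3,
};

// Convert a preset into the grid-span declaration the exported code carries.
function presetToGridColumn(preset: WidthPreset): string {
  return `grid-column: span ${GRID_SPANS[preset]};`;
}

console.log(presetToGridColumn("two-thirds")); // grid-column: span 8;
```

Because the snap targets are integer column spans, the exported layout is exact by construction; there is no rounding step between canvas and code.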

WPUI Lab's canvas rendering core WordPress components in a Figma-style editor.

The properties panel exposes real component props, not a design-tool approximation. Changing a Button’s variant from primary to secondary changes the actual prop, and the component re-renders exactly as it would in production. There’s no translation step where “secondary style” in a Figma annotation becomes a different prop name in code.
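A minimal sketch of why no translation step exists: the panel's prop values can be serialized directly into JSX attributes. The `variant` prop name matches the real `@wordpress/components` Button; the serializer itself is illustrative, not WPUI Lab's code:

```typescript
// Serialize real component props straight into JSX attributes, so a
// panel value like variant: "secondary" IS the prop in the output.
type PropValue = string | number | boolean;

function propsToJsx(component: string, props: Record<string, PropValue>): string {
  const attrs = Object.entries(props)
    .map(([key, value]) => {
      if (value === true) return key;              // boolean shorthand, e.g. isDestructive
      if (typeof value === "string") return `${key}="${value}"`;
      return `${key}={${JSON.stringify(value)}}`;  // numbers, false
    })
    .join(" ");
  return `<${component} ${attrs} />`;
}

console.log(propsToJsx("Button", { variant: "secondary" }));
// <Button variant="secondary" />
```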

Play Mode makes components fully interactive, so you can feel how a Button responds before any code is written.

The agent system

A second design problem was how to make AI useful inside a visual tool. You describe what you want in natural language and an agent builds it with real components.

The first version was a multi-agent orchestrator: a classifier routed each request to specialist agents (page creation, component building, updates), and a validator checked the results. It worked, but context fragmented across handoffs. A request like “create a Pricing page with three tier cards for WordPress agencies” would lose domain specifics by the time it reached the builder agent. I tried threading full user intent through every LLM call, adding few-shot examples to preserve keywords, and building a shared memory system. Each fix added complexity without solving the core problem: splitting one conversation across multiple agents means each one only sees a slice.

The fix was simpler than any of those patches. I replaced the entire orchestrator with a single unified agent that loads skills on demand. One agent loop, one context thread, no routing overhead. When a request needs page management, the agent loads that skill. When it needs component creation, it loads that. Skills are loaded progressively, so the context stays lean, but nothing gets lost between steps.
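The single-loop shape can be sketched like this. The skill names, the `Skill` type, and the lazy registry are assumptions standing in for the real system; the point is one context thread shared by every step:

```typescript
// Sketch of a single agent loop that loads skills on demand instead of
// routing across specialist agents. Shapes are illustrative, not WPUI
// Lab's actual API.
type Skill = { name: string; run: (request: string, context: string[]) => string };

const SKILLS: Record<string, () => Skill> = {
  // Lazily constructed, so a skill only enters context when needed.
  "page-management": () => ({ name: "page-management", run: (r) => `created page for: ${r}` }),
  "component-creation": () => ({ name: "component-creation", run: (r) => `built components for: ${r}` }),
};

class UnifiedAgent {
  private loaded = new Map<string, Skill>();
  private context: string[] = []; // one thread, visible to every step

  private skill(name: string): Skill {
    if (!this.loaded.has(name)) this.loaded.set(name, SKILLS[name]());
    return this.loaded.get(name)!;
  }

  handle(request: string, neededSkills: string[]): string[] {
    this.context.push(request); // the full request never leaves scope
    return neededSkills.map((name) => {
      const result = this.skill(name).run(request, this.context);
      this.context.push(result);
      return result;
    });
  }
}

const agent = new UnifiedAgent();
agent.handle("Pricing page with three tier cards for WordPress agencies", [
  "page-management",
  "component-creation",
]);
```

Contrast with the v1 orchestrator: here the "Pricing page for WordPress agencies" domain specifics sit in the same `context` array that every skill reads, so nothing is lost at a handoff because there is no handoff.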

| Approach | Architecture | Tradeoff |
| --- | --- | --- |
| v1: Multi-agent | Classifier, specialist agents, validator | Context fragmented across handoffs |
| v2: Unified agent | Single loop, skills loaded on demand | Full context preserved, fewer LLM calls |

The unified agent building a DataViews table from a natural language request.

The unified agent plans with an explicit task list (visible to the user in real time), loads only the skills it needs, and executes tools directly. A complex request that used to require a classifier call, two specialist calls, and a validator now runs in a single agent loop. Simpler architecture, better results.

The code view

This is the payoff. I built a code panel that generates React JSX mapping one-to-one to the canvas. No wrapper divs, no !important overrides, no styling hacks. Just the component tree as clean, usable code a developer can drop into a project.

Selecting components on the canvas and viewing the generated JSX in real time.

It handles WordPress component imports, converts grid layout props to CSS Grid, and produces interaction handlers for components with click behaviors. My goal was that a developer shouldn’t need to clean up the output before using it.
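A compressed sketch of that generation step: walk the canvas tree, collect imports, and emit JSX with grid spans and handlers inline. The node shape and the emitted style are assumptions for illustration, not WPUI Lab's actual output format:

```typescript
// Sketch of canvas-to-code: one pass over the component tree emits the
// import line plus clean JSX. No wrapper divs per component, no overrides.
type CanvasNode = {
  component: string;              // e.g. "Button", from @wordpress/components
  gridSpan: number;               // columns out of 12
  props?: Record<string, string>;
  onClick?: string;               // handler name, if the node has click behavior
};

function generateCode(nodes: CanvasNode[]): string {
  const components = [...new Set(nodes.map((n) => n.component))].sort();
  const imports = `import { ${components.join(", ")} } from "@wordpress/components";`;

  const jsx = nodes
    .map((n) => {
      const props = Object.entries(n.props ?? {})
        .map(([k, v]) => ` ${k}="${v}"`)
        .join("");
      const handler = n.onClick ? ` onClick={${n.onClick}}` : "";
      return `  <${n.component}${props}${handler} style={{ gridColumn: "span ${n.gridSpan}" }} />`;
    })
    .join("\n");

  return `${imports}\n\n<div style={{ display: "grid", gridTemplateColumns: "repeat(12, 1fr)" }}>\n${jsx}\n</div>`;
}
```

Because the canvas already stores real components and integer grid spans, generation is a serialization pass rather than an interpretation pass, which is what keeps the output clean.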

What it proved

The work led to a working group at Automattic focused on AI in the design process and on owning our own tooling, and it continues to drive further experiments in designing with AI. You can follow progress on GitHub. Stack: Next.js, Supabase, WordPress Core components.