WPUI Lab

A real-component design editor with AI agents

Principal Designer / Design Engineer

Between a design tool and production, there’s a translation layer where intent gets lost, props get misread, and spacing gets eyeballed. I built WPUI Lab to remove that layer: a Figma-style editor that renders real WordPress React components instead of drawings of them. You drag a Button onto the canvas and it’s an actual @wordpress/components Button. Same React code, same props, same rendering. When a developer looks at the output, there’s nothing to re-interpret. The JSX is already written.
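
For a sense of what that output looks like, here's an illustrative sketch. Button and its variant prop are the real @wordpress/components API; the page wrapper and click handler are assumptions for the example:

```tsx
// Illustrative sketch: the component on the canvas and the component in
// the export are the same Button with the same props. The wrapper function
// and the handler here are assumptions, not WPUI Lab's literal output.
import { Button } from '@wordpress/components';

export default function ExportedHero() {
  return (
    <Button variant="primary" onClick={() => console.log('clicked')}>
      Get started
    </Button>
  );
}
```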

The Editor

I built the canvas to work like a design tool. Select, drag, resize, nest. But everything on it is a live React component from the WordPress design system. A 12-column grid governs layout. Components snap to width presets (full, two-thirds, half, third, quarter) that map directly to CSS grid spans in the exported code. What you see is what the developer gets.

WPUI Lab's canvas rendering core WordPress components in a Figma-style editor.
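
Those width presets can be expressed as a direct mapping to grid spans. A minimal sketch with a hypothetical helper; the preset names follow the ones above, and the span numbers are the natural values on a 12-column grid:

```ts
import type { CSSProperties } from 'react';

// Hypothetical helper: width presets snap to spans on the 12-column grid,
// and the same spans appear in the exported CSS Grid code.
type WidthPreset = 'full' | 'two-thirds' | 'half' | 'third' | 'quarter';

const PRESET_SPANS: Record<WidthPreset, number> = {
  full: 12,
  'two-thirds': 8,
  half: 6,
  third: 4,
  quarter: 3,
};

function gridStyle(preset: WidthPreset): CSSProperties {
  return { gridColumn: `span ${PRESET_SPANS[preset]}` };
}
```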

The properties panel exposes real component props, not a design-tool approximation. Changing a Button’s variant from primary to secondary changes the actual prop, and the component re-renders exactly as it would in production. There’s no translation step where “secondary style” in a Figma annotation becomes a different prop name in code.
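
A minimal sketch of that wiring, assuming a panel control bound straight to the live prop. SelectControl and Button are real @wordpress/components exports; the rest is illustrative:

```tsx
import { useState } from 'react';
import { Button, SelectControl } from '@wordpress/components';

type ButtonVariant = 'primary' | 'secondary' | 'tertiary';

// Sketch: the panel edits the component's actual variant prop, so the
// canvas preview is the production rendering, not an approximation.
export function VariantField() {
  const [variant, setVariant] = useState<ButtonVariant>('primary');
  return (
    <>
      <SelectControl
        label="Variant"
        value={variant}
        options={[
          { label: 'Primary', value: 'primary' },
          { label: 'Secondary', value: 'secondary' },
          { label: 'Tertiary', value: 'tertiary' },
        ]}
        onChange={(value) => setVariant(value as ButtonVariant)}
      />
      <Button variant={variant}>Preview</Button>
    </>
  );
}
```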

I added multi-page projects, undo/redo (50 states), keyboard shortcuts, and a Play Mode that makes components fully interactive.
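
Of those, the undo stack has the most interesting shape. A simplified sketch of a bounded history; only the 50-state cap comes from the tool:

```ts
// Bounded undo/redo history. Only the 50-state cap is from WPUI Lab;
// the rest is a simplified, hypothetical shape.
class History<T> {
  private past: T[] = [];
  private future: T[] = [];

  constructor(private readonly limit = 50) {}

  push(state: T): void {
    this.past.push(state);
    if (this.past.length > this.limit) this.past.shift(); // drop the oldest state
    this.future = []; // a fresh edit invalidates any redo states
  }

  undo(current: T): T | undefined {
    const previous = this.past.pop();
    if (previous !== undefined) this.future.push(current);
    return previous;
  }

  redo(current: T): T | undefined {
    const next = this.future.pop();
    if (next !== undefined) this.past.push(current);
    return next;
  }
}
```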

The Agent System

A second design problem was how to make AI useful inside a visual tool. You describe what you want in natural language and an agent builds it with real components.

The first version was a multi-agent orchestrator: a classifier routed each request to specialist agents (page creation, component building, updates), and a validator checked the results. It worked, but context fragmented across handoffs. A request like “create a Pricing page with three tier cards for WordPress agencies” would lose domain specifics by the time it reached the builder agent. I tried threading full user intent through every LLM call, adding few-shot examples to preserve keywords, and building a shared memory system. Each fix added complexity without solving the core problem: splitting one conversation across multiple agents means each one only sees a slice.

The fix was simpler than any of those patches. I replaced the entire orchestrator with a single unified agent that loads skills on demand. One agent loop, one context thread, no routing overhead. When a request needs page management, the agent loads that skill. When it needs component creation, it loads that. Skills are loaded progressively, so the context stays lean, but nothing gets lost between steps.
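
In outline, the loop can be that small. A sketch with hypothetical names; only the behavior, one context thread, skills appended on demand, tools executed directly, comes from the project:

```ts
// Minimal sketch of a unified agent loop. Every name here is assumed.
type ModelStep =
  | { kind: 'load-skill'; skill: string }
  | { kind: 'tool-call'; tool: string; args: unknown }
  | { kind: 'done'; result: string };

async function agentLoop(
  request: string,
  step: (context: string[]) => Promise<ModelStep>, // one LLM call per turn
  skills: Record<string, string>,                  // skill name -> instructions
  runTool: (tool: string, args: unknown) => Promise<string>,
): Promise<string> {
  const context: string[] = [request]; // full user intent rides along in every call
  for (;;) {
    const next = await step(context);
    if (next.kind === 'done') return next.result;
    if (next.kind === 'load-skill') {
      context.push(skills[next.skill]); // progressive loading keeps context lean
    } else {
      context.push(await runTool(next.tool, next.args)); // execute tools directly
    }
  }
}
```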

| Approach | Architecture | Outcome |
| --- | --- | --- |
| v1: Multi-agent | Classifier, specialist agents, validator | Context fragmented across handoffs |
| v2: Unified agent | Single loop, skills loaded on demand | Full context preserved, fewer LLM calls |
The unified agent building a DataViews table from a natural language request.

The unified agent plans with an explicit task list (visible to the user in real time), loads only the skills it needs, and executes tools directly. A complex request that used to require a classifier call, two specialist calls, and a validator now runs in a single agent loop. Simpler architecture, better results.
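
For illustration, the visible plan could be as simple as a task list the UI renders and updates as steps complete. This shape is assumed; the tasks echo the pricing-page example above:

```ts
// Hypothetical shape of the agent's visible plan.
type TaskStatus = 'pending' | 'in-progress' | 'done';

interface PlannedTask {
  id: number;
  description: string;
  status: TaskStatus;
}

const plan: PlannedTask[] = [
  { id: 1, description: 'Create the Pricing page', status: 'done' },
  { id: 2, description: 'Add three tier cards', status: 'in-progress' },
  { id: 3, description: 'Wire up the CTA buttons', status: 'pending' },
];
```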

The Code View

This is the payoff. I built a code panel that generates React JSX mapping one-to-one to the canvas. No wrapper divs, no !important overrides, no styling hacks. Just the component tree as clean, usable code a developer can drop into a project.

Selecting components on the canvas and viewing the generated JSX in real time.
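
The core of such a generator can be small. A sketch with an assumed node shape; the real implementation handles more cases:

```ts
// Sketch: each canvas node becomes exactly one JSX element, so the
// output carries no wrapper divs or styling hacks.
interface CanvasNode {
  component: string; // e.g. 'Button'
  props: Record<string, string | number | boolean>;
  children: CanvasNode[];
}

function toJsx(node: CanvasNode, indent = 0): string {
  const pad = '  '.repeat(indent);
  const props = Object.entries(node.props)
    .map(([key, value]) =>
      typeof value === 'string' ? `${key}="${value}"` : `${key}={${String(value)}}`,
    )
    .join(' ');
  const open = props ? `${node.component} ${props}` : node.component;
  if (node.children.length === 0) return `${pad}<${open} />`;
  const inner = node.children.map((child) => toJsx(child, indent + 1)).join('\n');
  return `${pad}<${open}>\n${inner}\n${pad}</${node.component}>`;
}
```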

It handles WordPress component imports, converts grid layout props to CSS Grid, and produces interaction handlers for components with click behaviors. My goal was that a developer shouldn’t need to clean up the output before using it.
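
Import handling can fall out of the same tree walk. A sketch reusing the hypothetical CanvasNode shape above:

```ts
// Sketch of import generation: walk the tree, collect the component
// names in use, and emit a single @wordpress/components import line.
function collectComponents(node: CanvasNode, names = new Set<string>()): Set<string> {
  names.add(node.component);
  node.children.forEach((child) => collectComponents(child, names));
  return names;
}

function importStatement(root: CanvasNode): string {
  const sorted = [...collectComponents(root)].sort().join(', ');
  return `import { ${sorted} } from '@wordpress/components';`;
}
```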

What It Proved

WPUI Lab is still a work in progress. What’s shown here is about a month and a half of work with Claude Code. So far it’s proved its thesis: real-component editing works, AI agents can scaffold layouts from natural language, and the code output is genuinely usable.

More importantly, it shifted how our design organization thinks about the gap between design and code. It sparked conversations about owning our own tooling, working closer to real components, and where AI fits in the design process, and those conversations led to a working group exploring exactly those questions. The long-term goal is for WPUI Lab to become the design tool for the dashboards I build, like the Automattic for Agencies platform.

You can follow the latest progress on GitHub. The stack is Next.js (front- and back-end), Supabase, and WordPress Core components.