
Dynamic User Interfaces and Rendering

One of the most exciting shifts AI brings isn’t just code generation — it’s UI generation.
Instead of hand-wiring every screen, you can have a model propose a layout and behavior, then render it with a generic, reusable component.

I built exactly that: a renderer component that takes a structured payload (a spec) and turns it into real UI.

Human Intent:

“Show me a user journey workflow for my client using green, blue colors with fun animations. I’m a yoga instructor.”

My component maps that intent to prebuilt, native components, which render the LLM’s data cleanly.
It sounds simple; it took a year to make reliable.

That’s the promise. The reality: making this robust on native stacks is surprisingly hard.


The Generative UI Pattern

The core idea is to split responsibilities cleanly:

  • The model proposes structure, style hints, content, and action references.
  • The app validates, interprets, and renders using a safe, finite palette of components and actions.
  • No model code runs in the app — only data.

This creates a powerful contract:

  • The model is creative within constraints.
  • The app is deterministic within guarantees.

How My Renderer Works (and Why It’s Tricky)

  • Prebuild a small set of safe UI parts (e.g., carousel layered over a data table, plus buttons, forms, lists, media, and a few layout primitives).
  • Model sends a data spec (JSON). I only render allowed parts.
  • No model code runs in the app.
  • Spec has a strict schema and version, plus a custom rendering pipeline.
  • Every spec is validated before rendering; bad specs are rejected. (Lots of code here.)
  • Actions are IDs that map to known callbacks only (explicit allowlist).

In practice, this feels like a UI bytecode: a compact, declarative set of instructions the runtime can execute consistently across screens and sessions.
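
To make the contract concrete, here is a minimal sketch (Python for brevity; field names like `type` and `children` are hypothetical): the spec is pure data, and the renderer rejects anything outside its allowlist by default.

```python
import json

# Hypothetical component allowlist; only these types ever render.
ALLOWED = {"list", "button", "carousel", "form", "media"}

def render_node(node: dict) -> str:
    """Reject-by-default: unknown component types never reach the UI."""
    kind = node.get("type")
    if kind not in ALLOWED:
        raise ValueError(f"rejected component: {kind!r}")
    children = "".join(render_node(c) for c in node.get("children", []))
    return f"<{kind}>{children}</{kind}>"

spec = json.loads("""
{
  "schema_version": "1.0",
  "root": {"type": "list", "children": [{"type": "button"}]}
}
""")
print(render_node(spec["root"]))  # <list><button></button></list>
```

The key design choice: the failure mode for anything unexpected is rejection, not best-effort rendering.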


Why Native Stacks Make It Hard (Especially iOS and Android)

  • LLM output is messy; native data models are strictly typed. Missing or malformed fields crash at runtime.
  • Dynamic view trees fight concurrency and state rules. Each component has unique, often undocumented quirks.
  • Streaming updates create race conditions → need buffers, cancellation, back-pressure, and versioning.
  • Simulator/Preview ≠ TestFlight/Prod. Bugs often appear only in production builds.
  • Testing/maintenance burden grows fast. Even if one developer sprints, the surface area balloons.

The Generative UI Specification: Contract Over Creativity

To make generative UI trustworthy, the spec must be boring in the best way:

  • Versioned: Every payload declares schema_version; renderers are backward compatible or reject unknown versions.
  • Typed and constrained: Enumerations, union types, bounded arrays, explicit defaults.
  • Explicit capabilities: If the platform can’t animate X or stream Y, the spec cannot request it.
  • Deterministic rendering: Same spec + same context = same pixels and behaviors.
  • Platform-safe: The spec can never request output that breaks a native bundle, a browser, or a compiler.
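
A minimal sketch of that "boring" contract, with hypothetical version numbers and rules drawn from the examples above — the validator returns structured rejection reasons rather than crashing:

```python
SUPPORTED_VERSIONS = {"1.0", "1.1"}           # assumption: two shipped versions
ALLOWED_ANIMATIONS = {"spring", "fade", "slide"}
MAX_LIST_ITEMS = 200                          # bound from the business rules

def validate(spec: dict) -> list[str]:
    """Return a list of rejection reasons; empty means the spec is accepted."""
    errors = []
    if spec.get("schema_version") not in SUPPORTED_VERSIONS:
        errors.append("unknown schema_version")
    anim = spec.get("animation")
    if anim is not None and anim not in ALLOWED_ANIMATIONS:
        errors.append(f"unsupported animation: {anim}")
    if len(spec.get("items", [])) > MAX_LIST_ITEMS:
        errors.append("list exceeds 200 items")
    return errors
```

Returning reasons instead of booleans is what later feeds telemetry and prompt tuning.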

Rendering Pipeline: Make Undefined States Impossible

To keep things reliable, the renderer is a pipeline with explicit stages:

  1. Intake
    Receive a spec (possibly via streaming). Associate correlation ID + timestamp; stash in buffer.

  2. Validation
    • Schema validation (JSON Schema or equivalent) with strict mode.
    • Business rules (e.g., “lists cannot exceed 200 items”; “no nested carousels”).
    • Permissions/capabilities (e.g., “animations: reduced” if user has Reduce Motion).

  3. Normalization
    Fill defaults, map deprecated fields, fold hints into supported options, resolve theme tokens to platform colors/typography.

  4. Dependency Resolution
    Download/preload media, resolve action IDs to known callbacks, check feature flags.

  5. Transactional Render
    Build the view tree off-main when possible, then apply atomically.
    If streaming, apply only coherent deltas keyed by a monotonic sequence.

  6. Telemetry
    Log spec hash, schema_version, render time, rejected rules, and fallbacks.
    Crucial for offline debugging.
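
The stages above can be sketched as one short-circuiting function (dependency resolution and streaming deltas are elided; the stage callbacks are illustrative, not my actual API):

```python
import hashlib
import json
import time

def run_pipeline(raw: str, validate, normalize, render):
    """Run a spec through explicit stages; a failure at any stage
    short-circuits into a telemetry record instead of a partial render."""
    record = {"spec_hash": hashlib.sha256(raw.encode()).hexdigest()[:12],
              "received_at": time.time()}                # 1. intake
    spec = json.loads(raw)
    errors = validate(spec)                              # 2. validation
    if errors:
        record["rejected"] = errors                      # 6. telemetry
        return None, record
    view = render(normalize(spec))                       # 3. normalize, 5. render
    record["rendered"] = True
    return view, record
```

Because every path produces a telemetry record, there is no such thing as a silently dropped spec.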


Streaming Without Tears

Streaming is magical for perceived speed, but it’s where race conditions breed. A few tactics that helped:

  • Epochs and sequence numbers: Every stream update carries (epoch, seq). Only apply if epoch matches current and seq > last_applied.
  • Coalescing windows: Buffer micro-updates for 50–100 ms to reduce layout thrash.
  • Intent-level diffs: The model sends semantic changes (“append three items to list A”), not raw UI diffs.
  • Cancel and rollback: If a later update invalidates prior assumptions, cancel animations, revert to last stable snapshot, and re-apply.
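
The epoch/sequence gating for intent-level diffs can be sketched like this (structure is hypothetical; real deltas would carry more than "append items"):

```python
class StreamApplier:
    """Apply streaming deltas only if they belong to the current epoch
    and advance the monotonic sequence; stale updates are dropped."""

    def __init__(self):
        self.epoch = 0
        self.last_seq = -1
        self.state = []

    def new_epoch(self):
        """A new stream invalidates everything in flight from the old one."""
        self.epoch += 1
        self.last_seq = -1

    def apply(self, epoch: int, seq: int, items: list) -> bool:
        if epoch != self.epoch or seq <= self.last_seq:
            return False          # stale or out-of-order: ignore silently
        self.last_seq = seq
        self.state.extend(items)  # intent-level diff: "append these items"
        return True
```

Note that duplicates and late arrivals are rejected by the same two comparisons, which keeps the hot path trivial.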

State Management: Single Source of Truth

Dynamic trees plus native lifecycles are brittle without a clear state model.

  • One store per screen: The spec + derived state live in a single, serializable store.
  • Idempotent reducers: Rendering is a pure function of (store, platform_context).
  • Side effects behind gates: Network, file I/O, and navigation live in effect handlers keyed by action IDs.
  • Lifecycles as contracts: Mount/unmount map to explicit enter/leave events so resources can be released deterministically.
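
A sketch of the store-plus-reducer shape, with rendering as a pure function of (store, platform_context); event and field names here are illustrative:

```python
def reduce_store(store: dict, event: dict) -> dict:
    """Pure reducer: returns a new store, never mutates in place,
    so replaying the same events always yields the same UI state."""
    if event["type"] == "set_spec":
        return {**store, "spec": event["spec"], "dirty": True}
    if event["type"] == "rendered":
        return {**store, "dirty": False}
    return store  # unknown events are ignored, not crashed on

def render_view(store: dict, ctx: dict) -> str:
    """Rendering reads only (store, platform_context) — no hidden inputs."""
    theme = ctx.get("theme", "light")
    title = store.get("spec", {}).get("title", "")
    return f"{title} [{theme}]"
```

Idempotence is the point: the same store and context always produce the same view, which makes snapshot testing and replay debugging possible.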

Safety: Sandboxing Creativity

Generative UI should feel expansive to users and narrow to the runtime.

  • Allowlist components only. No raw HTML/JS injection. No arbitrary view instantiation.
  • Fixed animation primitives (spring, fade, slide) with bounded durations and distances; respect accessibility settings.
  • Resource budgets: max nodes per tree, max media size, timeouts for remote fetch.
  • Action firewall: Only known callbacks; parameters are validated and sanitized.
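
The action firewall can be sketched as an explicit registry; the action ID, parameter names, and validation rule here are hypothetical:

```python
# Only action IDs in this registry can ever be invoked from a spec.
ACTION_REGISTRY = {
    "book_class": lambda params: f"booking {params['class_id']}",
}

def dispatch(action_id: str, params: dict) -> str:
    """Unknown actions and malformed parameters are rejected before
    any callback runs."""
    if action_id not in ACTION_REGISTRY:
        raise PermissionError(f"unknown action: {action_id}")
    if not isinstance(params.get("class_id"), str):
        raise ValueError("invalid params")
    return ACTION_REGISTRY[action_id](params)
```

The model never names a function; it names an ID, and the app decides what that ID is allowed to mean.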

Observability: Debugging the Gaps

Bugs hide in the gaps between spec, render, and device.

  • Spec hash + device snapshot: Log both. If a user reports “screen flickers,” you need the exact spec.
  • Frame-time and layout metrics: Sample at interaction points, not continuously.
  • Rejection reasons: Emit structured codes for every rule violation to tune prompts and schemas.
  • Privacy-first: Hash PII, drop content fields unless explicitly allowed; keep just enough to reproduce.
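
A privacy-first telemetry record might look like this sketch (the allowlisted field names are assumptions): metrics pass through only if explicitly allowed, and the spec itself is reduced to a hash.

```python
import hashlib

# Assumption: only these non-content metrics may leave the device.
SAFE_FIELDS = {"schema_version", "component_count", "render_ms"}

def telemetry_record(spec_json: str, metrics: dict) -> dict:
    """Keep just enough to reproduce: allowlisted metrics plus a spec hash.
    Content fields never make it into the record."""
    record = {k: v for k, v in metrics.items() if k in SAFE_FIELDS}
    record["spec_hash"] = hashlib.sha256(spec_json.encode()).hexdigest()
    return record
```

When a user reports a flicker, the hash lets you look up the exact spec from your own logs without ever shipping their content.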

Prompting: Teach the Model the Renderer’s World

The renderer’s constraints shape the prompt. Make them explicit:

  • Provide a formal schema excerpt and examples.
  • Penalize unsupported constructs by returning a clear NACK + schema diff.
  • Reward reuse of known components and patterns with better responses (few-shot examples).
  • Teach the model to degrade gracefully (“if carousel not allowed, use list with image thumbnails”).
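
Graceful degradation can be as simple as a fallback chain; the component names and chain here are hypothetical, mirroring the "carousel falls back to list" example above:

```python
# Hypothetical fallback chain: each unsupported component degrades
# to a simpler one until we hit something the platform supports.
FALLBACKS = {"carousel": "list", "video": "image", "list": "text"}

def degrade(component: str, supported: set) -> str:
    """Walk the fallback chain; fail loudly if no supported form exists."""
    seen = set()
    while component not in supported:
        if component in seen or component not in FALLBACKS:
            raise ValueError(f"no supported fallback for {component!r}")
        seen.add(component)
        component = FALLBACKS[component]
    return component
```

The same table can be pasted into the prompt, so the model and the renderer agree on what "degrade gracefully" means.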

The fastest reliability gains came from teaching the model the renderer’s world, not from widening the renderer.

Native Reality Check: iOS, Android, Web

  • iOS: SwiftUI previews lie (in helpful ways). Production timing and memory are different; Core Animation and Combine scheduling can reorder updates. Use MainActor wisely; isolate heavy normalization off-main; keep animations declarative and cancelable.
  • Android: Fragment lifecycles, RecyclerView recycling, and Compose recomposition create unique ordering hazards. Favor immutable state and snapshot flows; be explicit about remember/saveable boundaries.
  • Web: Feels easiest until you hit GPU switching, font loading, and hydration. Still, the event model and dev tooling make rapid iteration and diffing simpler.

What I’d Do Again

  • Start small: 6–8 components, not 60.
  • Version early: v0 payloads haunt you forever.
  • Build a renderer CLI: Feed it specs and snapshot outputs without launching the app.
  • Treat rendering as a transaction: All-or-nothing updates keep the UI stable.
  • Log like an SRE: You’re running a tiny interpreter; give yourself runtime visibility.

What I’d Avoid

  • Letting the model invent actions or parameters on the fly.
  • Rendering before validation “just to see something.”
  • Overfitting to simulator or dev environment behavior.

Closing Thought

Generative UI isn’t about replacing design or engineering. It’s about giving products a new operating layer: intent in, trustworthy interfaces out.

The trick is to keep creativity at the edges (specs, patterns, content) and determinism at the core (renderer, actions, lifecycles). With a strict spec, a safe renderer, and disciplined streaming and state management, you can turn model outputs into native, reliable, and accessible experiences — without running model code on-device.

This is how we get fast iteration without chaos, personalization without fragmentation, and power without the maintenance tax.

The future UI isn’t hand-wired. It’s described, validated, and rendered — instantly.
