Eating Our Own Dog Food: An Admin UI That Uses Our AI Service to Design Itself


We needed real app icons. The React Native app was still shipping with Expo’s default placeholder — a generic gradient square that screamed “developer who hasn’t gotten to branding yet.” We had an AI image generation service sitting right there. Why not use it?

So we built an admin UI directly into the ai-service. It generates branding assets using the service’s own /api/v1/image/generate endpoint, lets you compare variations side by side, and exports a ZIP file with every size your app store submission could ask for.

Why Not Just Use Figma?

We could have opened Midjourney, downloaded a PNG, and manually resized it in Figma for each platform target. That’s what most people do.

But we had specific constraints:

  1. 10 platform-specific sizes — iOS icon (1024x1024), Android adaptive icon foreground (432x432), splash screen (1284x2778), favicon (32x32), OG image (1200x630), and five more. Doing this by hand for every iteration is tedious.
  2. Iteration speed — branding is subjective. You want to generate a bunch of options, compare them, tweak the prompt, try again. A GUI beats a CLI script.
  3. We already had the infrastructure — the ai-service supports multiple models, has Langfuse tracing, and handles all the provider abstraction. Why go around it?

The real motivator was philosophical: if we’re building an AI platform, we should be using it to build the platform.

Architecture: Static SPA, No Build Step

The admin UI is three files served by the same Bun.serve() that handles the API:

src/admin/public/
  index.html    — layout and structure
  style.css     — dark theme, responsive grid
  app.js        — vanilla JS, ~350 lines

No React. No Vite. No node_modules. The HTML loads the CSS and JS directly. Bun serves them as static files from the /admin path prefix:

// src/index.ts
if (path === "/admin" || path.startsWith("/admin/")) {
  return serveAdminFile(path);
}

The serveAdminFile function maps /admin to index.html and serves everything else by relative path with the correct MIME type. It ships in the same Docker image as the API — no separate deployment, no CORS configuration, no additional infrastructure.
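The path-to-file mapping can be sketched as a small pure function. This is a hypothetical reconstruction, not the actual implementation: the real serveAdminFile presumably reads files with Bun.file(), and the MIME table and return shape here are assumptions.

```javascript
// Hypothetical sketch of the /admin static-file mapping described above.
// File reading (Bun.file) is omitted so the routing logic stands on its own.
const ADMIN_DIR = 'src/admin/public';

const MIME_TYPES = {
  '.html': 'text/html',
  '.css': 'text/css',
  '.js': 'text/javascript',
};

function resolveAdminFile(path) {
  // "/admin" and "/admin/" both map to the SPA entry point
  const rel = path.replace(/^\/admin\/?/, '') || 'index.html';
  const ext = rel.slice(rel.lastIndexOf('.'));
  return {
    file: `${ADMIN_DIR}/${rel}`,
    contentType: MIME_TYPES[ext] ?? 'application/octet-stream',
  };
}
```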

This was a deliberate decision. The admin UI is an internal tool for one or two developers. It doesn’t need hot module replacement. It doesn’t need a component library. It needs to work, and it needs to not add complexity to the build pipeline.

The Generator

The left panel has the controls: model selector, prompt textarea, style presets, and a variation count picker.

Model Selection

The dropdown offers three models out of the box:

  • FLUX.1-schnell — fast (2-4 seconds), good quality, our default for the puzzle app
  • FLUX.1-dev — higher quality, slower (~10 seconds), better for detailed branding work
  • Stable Diffusion XL — the classic, useful for certain art styles

There’s also a “Custom model…” option that reveals a text input for any Hugging Face model ID. This works because the ai-service’s provider registry supports per-request model overrides:

body: JSON.stringify({
  prompt: fullPrompt,
  options: {
    model,        // override the server's default model
    width: 1024,
    height: 1024,
  },
})

The API call goes to the same /api/v1/image/generate endpoint that the puzzle app uses. Same auth (x-api-key header), same provider pipeline, same Langfuse tracing. The only difference is the x-consumer-id: admin-ui header, so we can distinguish admin usage from app usage in our dashboards.

Style Presets

Each preset appends style-specific suffixes to your prompt:

const STYLE_PRESETS = {
  icon:   { suffix: ', square composition, centered subject, no text, solid or gradient background', width: 1024, height: 1024 },
  splash: { suffix: ', centered hero element, clean background, vertical composition',               width: 1024, height: 1024 },
  store:  { suffix: ', wide horizontal composition, eye-catching, room for text overlay',            width: 1024, height: 512 },
  free:   { suffix: '',                                                                              width: null, height: null },
};

The “App Icon” preset is the one we used most. That “no text” suffix is important — AI models love adding garbled text to images, which looks terrible on a 32px favicon.
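Applying a preset is just string concatenation plus a dimension override. A minimal sketch (the helper name and fallback behavior are assumptions; the preset shape matches the STYLE_PRESETS object above):

```javascript
// Hypothetical helper: append the preset suffix to the user's prompt,
// and fall back to caller-supplied dimensions when the preset (e.g. the
// "free" preset) leaves width/height null.
function applyPreset(prompt, preset, fallback = { width: 1024, height: 1024 }) {
  return {
    prompt: prompt + preset.suffix,
    width: preset.width ?? fallback.width,
    height: preset.height ?? fallback.height,
  };
}
```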

Parallel Variations

You pick how many variations you want (1, 2, 4, or 6) and the UI fires that many requests in parallel:

const promises = Array.from({ length: state.variations }, (_, i) =>
  fetch('/api/v1/image/generate', {
    method: 'POST',
    headers: {
      'content-type': 'application/json', // JSON body needs an explicit type
      'x-api-key': state.apiKey,
      'x-consumer-id': 'admin-ui',
    },
    body: JSON.stringify({ prompt: fullPrompt, options: { model } }),
  })
);
await Promise.allSettled(promises);

Results stream into a grid as they complete. Each card shows the generated image and a timing badge. Click one to select it for export. Keyboard shortcuts (1-9) let you select without clicking.

Four parallel FLUX.1-schnell requests complete in about 4-5 seconds total — the model is fast and the requests are independent. FLUX.1-dev takes closer to 15 seconds for four, but the quality difference is noticeable for icon work.
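The nice property of Promise.allSettled is that one failed request doesn't sink the batch. A sketch of how the settled results could be folded into grid cards (the function name and card shape are assumptions, not the actual app.js code):

```javascript
// Hypothetical result collection for the parallel generation above:
// allSettled never rejects, so successes render into the grid and
// failures surface as error cards instead of aborting the whole batch.
async function collectResults(promises) {
  const settled = await Promise.allSettled(promises);
  return settled.map((result, i) =>
    result.status === 'fulfilled'
      ? { index: i, ok: true, value: result.value }
      : { index: i, ok: false, error: String(result.reason) }
  );
}
```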

The Export Pipeline

This is where the real utility lives. Click “Export Selected” and you get a ZIP containing 10 files:

Asset                     Dimensions     Filename
iOS App Icon              1024 x 1024    ios-icon.png
Android Foreground        432 x 432      android-icon-foreground.png
Android Background        432 x 432      android-icon-background.png
Splash Icon               512 x 512      splash-icon.png
Splash Screen             1284 x 2778    splash-screen.png
Favicon                   32 x 32        favicon.png
Apple Touch Icon          180 x 180      apple-touch-icon.png
OG Image                  1200 x 630     og-image.png
Store Feature Graphic     1024 x 500     store-feature.png
App Store Screenshot BG   1290 x 2796    store-screenshot-bg.png

The ZIP is generated entirely in the browser using JSZip. No server round-trip for the export — the image data is already in memory from the generation step.
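The manifest driving that export can be a plain array that the export loop iterates, resizing the selected image once per target and handing each PNG to JSZip. The dimensions and filenames below come from the table above; the constant name is an assumption:

```javascript
// Hypothetical export manifest mirroring the asset table above.
// The export loop would resize the selected image to each target and
// add the resulting PNG blob to the JSZip archive under `name`.
const EXPORT_TARGETS = [
  { name: 'ios-icon.png',                width: 1024, height: 1024 },
  { name: 'android-icon-foreground.png', width: 432,  height: 432 },
  { name: 'android-icon-background.png', width: 432,  height: 432 },
  { name: 'splash-icon.png',             width: 512,  height: 512 },
  { name: 'splash-screen.png',           width: 1284, height: 2778 },
  { name: 'favicon.png',                 width: 32,   height: 32 },
  { name: 'apple-touch-icon.png',        width: 180,  height: 180 },
  { name: 'og-image.png',                width: 1200, height: 630 },
  { name: 'store-feature.png',           width: 1024, height: 500 },
  { name: 'store-screenshot-bg.png',     width: 1290, height: 2796 },
];
```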

Multi-Step Canvas Downscaling

Here’s the one genuinely interesting technical detail. If you resize a 1024x1024 image directly to 32x32 using a single Canvas drawImage() call, you get a blurry mess. The browser’s bilinear interpolation can’t handle a 32x reduction in one step — too much information is lost.

The fix is progressive downscaling: halve the dimensions repeatedly until you’re within 2x of the target, then do the final resize:

function resizeImage(img, targetWidth, targetHeight) {
  let sw = img.naturalWidth;
  let sh = img.naturalHeight;
  let currentSource = img;

  // Step down by halves until within 2x of target
  while (sw > targetWidth * 2 || sh > targetHeight * 2) {
    const stepW = Math.max(Math.floor(sw / 2), targetWidth);
    const stepH = Math.max(Math.floor(sh / 2), targetHeight);
    const stepCanvas = document.createElement('canvas');
    stepCanvas.width = stepW;
    stepCanvas.height = stepH;
    const stepCtx = stepCanvas.getContext('2d');
    stepCtx.imageSmoothingEnabled = true;
    stepCtx.imageSmoothingQuality = 'high';
    stepCtx.drawImage(currentSource, 0, 0, stepW, stepH);
    currentSource = stepCanvas;
    sw = stepW;
    sh = stepH;
  }

  // Final resize
  const finalCanvas = document.createElement('canvas');
  finalCanvas.width = targetWidth;
  finalCanvas.height = targetHeight;
  const fCtx = finalCanvas.getContext('2d');
  fCtx.imageSmoothingQuality = 'high';
  fCtx.drawImage(currentSource, 0, 0, targetWidth, targetHeight);
  return finalCanvas;
}

For the favicon (1024 to 32), this creates intermediate canvases at 512, 256, 128, and 64 before the final 32. Each step preserves detail that a single jump would destroy. The difference is dramatic — the favicon actually looks like the source image instead of a colored smudge.
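The intermediate sizes fall out of the halving loop in resizeImage. Extracting just the step computation into a pure function (the name is hypothetical) makes the sequence easy to check:

```javascript
// Hypothetical extraction of the size sequence from resizeImage above:
// halve until within 2x of the target, then finish at the target itself.
function downscaleSteps(source, target) {
  const steps = [];
  let size = source;
  while (size > target * 2) {
    size = Math.max(Math.floor(size / 2), target);
    steps.push(size);
  }
  steps.push(target);
  return steps;
}
```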

Auth Without Overengineering

The admin UI sits behind a simple auth gate — enter the same x-api-key that the service uses for API authentication. The key is stored in sessionStorage (cleared when you close the tab). No user accounts, no OAuth, no JWT.

This is fine for two reasons: the admin path isn't exposed through ingress (the service is ClusterIP-only, with no external access), and even if someone reached it, they'd need a valid API key to generate anything. During local development, you use the same key from your .env.

The Dog Food Angle

The workflow ended up being:

  1. Open localhost:3002/admin
  2. Type “Friendly cartoon puzzle piece character, playful, kids game mascot”
  3. Select “App Icon” preset, pick FLUX.1-dev for quality
  4. Generate 4 variations
  5. Pick the best one
  6. Export ZIP
  7. Drop the files into the React Native project’s assets/images/ directory

From prompt to production-ready assets in under a minute. And every generation is traced in Langfuse, so we can see exactly which prompts produced the icons we shipped.

There’s something satisfying about an AI service that designs the branding for the app that uses the AI service. It’s turtles all the way down, except the turtles are generating cartoon puzzle pieces.

What We Didn’t Build

No image editing. No cropping tool. No prompt history or favorites. No batch re-export with different prompts. All of those would be nice, and none of them were needed to solve the actual problem: generate icons, export them, move on.

The whole thing is ~350 lines of vanilla JavaScript. If we need those features later, we’ll add them. But the best internal tools are the ones that do one thing and don’t try to become products.


Ship the tool that ships the app.