From Prompt to Production: Building an AI Asset Pipeline for Theme Images
Our puzzle app has 10+ themes — Frozen, Rainbows & Unicorns, K-Pop Stars, Paw Patrol, and more. Each theme needs two images: a cover card for the carousel and a background that fills the screen behind it. Generating these manually through different AI tools, downloading, resizing, uploading to S3, and updating the code was a 15-minute process per image. With 20+ images to maintain, we needed a pipeline.
So we extended the admin UI we’d already built into the ai-service with a complete asset generation and deployment workflow.
The Category Assets Tab
The admin panel at /admin already had an Image Generator tab for general-purpose branding assets. We added a second tab — Category Assets — purpose-built for theme images.
The workflow is straightforward:
- Pick a theme from a dropdown (all 13 themes pre-loaded, plus “Add new theme…”)
- Choose asset type — Cover Image or Background
- Generate variations — prompt auto-fills based on the theme, hit Generate for 4 variations
- Compare and iterate — each result card has its own prompt editor, so you can tweak individual images without regenerating the whole batch
- Save favorites — star the images you like; they persist in localStorage
- Preview the carousel — see how your chosen cover looks on top of your chosen background, just like the real app
- Deploy — one click uploads to S3 and copies to your local dev folder simultaneously
Multiple AI Providers
The original admin UI only used HuggingFace. We added Replicate as a second image provider, giving us access to better models:
| Model | Provider | Best For |
|---|---|---|
| FLUX 1.1 Pro | Replicate | Highest quality |
| SDXL Lightning | Replicate | Ultra-fast iterations |
| Playground v2.5 | Replicate | Aesthetic/stylized |
| FLUX.1-schnell | HuggingFace | Good free-tier default |
Both providers are registered at startup. The route auto-picks the right one based on the model you select — Replicate short names (like flux-1.1-pro) route to Replicate, HuggingFace model IDs route to HuggingFace. No manual switching needed.
```typescript
// Route auto-picks provider based on model selection
const isReplicateModel = requestModel && (requestModel in REPLICATE_MODELS);
const provider = (isReplicateModel ? getImageProvider("replicate") : null)
  || getActiveImageProvider();
```
One gotcha: Replicate’s free tier limits you to 1 concurrent request. We detect Replicate models in the frontend and generate sequentially instead of in parallel — you see results appear one by one instead of all at once.
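The sequential fallback is simple to express as a helper. This is a minimal sketch, not the actual frontend code: `generateVariations` and `generateOne` are hypothetical names standing in for whatever the UI calls per image.

```typescript
// Sketch: run variations one at a time for Replicate's 1-concurrent-request
// free tier, or all at once for providers that allow parallelism.
async function generateVariations(
  generateOne: (prompt: string, index: number) => Promise<string>,
  prompt: string,
  count: number,
  sequential: boolean,
): Promise<string[]> {
  if (!sequential) {
    // Parallel: fire all requests at once and wait for the batch.
    return Promise.all(
      Array.from({ length: count }, (_, i) => generateOne(prompt, i)),
    );
  }
  // Sequential: await each request before starting the next, so at most
  // one request is ever in flight.
  const results: string[] = [];
  for (let i = 0; i < count; i++) {
    results.push(await generateOne(prompt, i));
  }
  return results;
}
```

Awaiting inside the loop is what makes results appear one by one in the UI instead of all at once.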
LLM-Powered Prompt Enhancement
We added a text generation capability to the ai-service with the same provider registry pattern. Press Cmd+K in any prompt field and the current text gets sent to an LLM that expands it into a detailed image generation prompt.
We integrated two providers:
- Groq (llama-3.3-70b) — near-instant responses, our default
- OpenAI (gpt-4o-mini) — available as fallback
Groq running Llama 3.3 70B is absurdly fast for this use case. Prompt enhancement feels instant — no loading spinner needed.
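The default-plus-fallback selection might look roughly like this. It's a sketch under assumptions: the type shape, registry, and function names are invented for illustration, not the ai-service's actual registry code.

```typescript
// Hypothetical text-provider registry: prefer Groq, fall back to OpenAI.
type TextProvider = {
  name: string;
  complete(prompt: string): Promise<string>;
};

function pickTextProvider(
  registry: Map<string, TextProvider>,
  preferred = "groq",
): TextProvider {
  // Try the preferred provider first, then the fallback.
  const provider = registry.get(preferred) ?? registry.get("openai");
  if (!provider) throw new Error("no text provider registered");
  return provider;
}
```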
The enhancement works contextually. It knows the theme, asset type, and target audience:
```typescript
prompt: `You are a prompt engineer for AI image generation.
Context: Kids puzzle app, theme "${theme.name}", asset type: ${assetLabel}
The user wrote: "${currentText}"
Expand into a detailed image generation prompt...`
```
Carousel Preview
This was the feature that made the whole pipeline feel complete. When you’re generating covers and backgrounds separately, you need to see how they look together before deploying.
The preview panel simulates a phone screen with the background filling it and the cover card centered on top — matching the real carousel layout. It updates live as you:
- Select a generated result
- Click a saved image
- Switch between covers and backgrounds
The logic independently tracks cover and background slots. Clicking a saved background only updates the background in the preview — it doesn’t touch the cover. Missing slots auto-fill from your other saved images for that theme.
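That slot behavior can be sketched as a pure state update. The names here are assumptions (the real state presumably lives in React or similar); the point is that a click writes only its own slot, and empty slots auto-fill from saved assets.

```typescript
// Hypothetical preview state: cover and background tracked independently.
interface PreviewState {
  cover?: string;
  background?: string;
}

function selectImage(
  state: PreviewState,
  slot: "cover" | "background",
  url: string,
  saved: PreviewState = {},
): PreviewState {
  // Update only the clicked slot.
  const next: PreviewState = { ...state };
  next[slot] = url;
  // Auto-fill any still-empty slot from this theme's saved images.
  if (!next.cover && saved.cover) next.cover = saved.cover;
  if (!next.background && saved.background) next.background = saved.background;
  return next;
}
```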
Deploy to S3
The deploy button (🚀) does two things simultaneously:
- Uploads to S3 — `s3://aws-platform-puzzle-images/themes/{theme-id}.png`
- Copies locally — writes to the puzzle app's `client/public/themes/` folder
The S3 upload uses the AWS SDK with IRSA authentication in production (IAM role bound to the Kubernetes service account). Locally, it uses your AWS CLI credentials.
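The dual write reduces to running both destinations concurrently. Here's a minimal sketch with the uploaders injected as functions, so the logic stays testable without AWS credentials; the function names and paths other than the S3 key pattern above are assumptions.

```typescript
// Sketch: deploy one theme asset to S3 and the local dev folder in parallel.
async function deployAsset(
  uploadToS3: (key: string, bytes: Uint8Array) => Promise<void>,
  copyLocal: (path: string, bytes: Uint8Array) => Promise<void>,
  themeId: string,
  bytes: Uint8Array,
): Promise<void> {
  // Both writes run concurrently; the deploy succeeds only if both do.
  await Promise.all([
    uploadToS3(`themes/${themeId}.png`, bytes),
    copyLocal(`client/public/themes/${themeId}.png`, bytes),
  ]);
}
```

In production, `uploadToS3` would wrap the AWS SDK's `PutObjectCommand` and pick up IRSA credentials automatically; locally it falls back to the CLI credential chain.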
For the puzzle app to read from S3, we added a configurable base URL:
```typescript
const ASSETS_BASE_URL = import.meta.env.VITE_ASSETS_BASE_URL || "";

function themeAsset(filename: string): string {
  return ASSETS_BASE_URL ? `${ASSETS_BASE_URL}/themes/${filename}` : `/themes/${filename}`;
}
```
Local dev uses the local files (empty base URL). Production builds bake in the S3 URL via a Docker build arg.
What’s Next
We’ve generated fresh assets for Frozen and Rainbows & Unicorns. Eight more themes to go, plus three inactive themes (Cars, Pokemon, Sonic) that have assets but aren’t in the carousel yet.
The admin panel is also becoming useful beyond just theme images — the Image Generator tab handles app icons, splash screens, and store graphics. Having multiple AI providers means we can pick the best model for each use case instead of being locked to one.
The whole system — from typing a rough idea, to Cmd+K enhancement, to generation, to carousel preview, to S3 deploy — takes about 2 minutes per theme. That’s down from 15+ minutes of manual work across multiple tools.