Run Your Own Private AI Image Generator with Docker and Open WebUI
Learn how to set up a fully private, local AI image generator using Docker Model Runner and Open WebUI—no cloud subscription required.
Why Generate Images Locally? The Case for Local AI
We've all experienced the frustration: you need a quick batch of AI-generated images for a project, so you sign up for an online service. Then come the questions—what happens to your prompts? How many credits are left? And why did a perfectly reasonable request for a dragon in a business suit get flagged by a 'safe content' filter? The solution is surprisingly simple: run everything on your own machine, with a polished chat interface to boot.

That's precisely what Docker Model Runner now makes possible. With a few commands, you can pull an image-generation model, link it to Open WebUI, and start creating images from a chat interface—fully local, fully private, and fully under your control. No cloud subscription required.
What You'll Need
- Docker Desktop (macOS) or Docker Engine (Linux)
- ~8 GB of free RAM for a small model (more is better)
- A GPU (optional but highly recommended): NVIDIA (CUDA) or Apple Silicon (MPS); without one, inference falls back to the CPU and is considerably slower
If you can run docker model version without errors, you're ready to go.
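That check is just the version subcommand, run exactly as shown here; if the Model Runner plugin is installed, it prints version information and exits without complaint:
docker model version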
How Docker Model Runner Works with Open WebUI
Before diving in, here's the big picture: Docker Model Runner acts as the control plane. It downloads the model, manages the inference backend lifecycle, and exposes an OpenAI-compatible API, including the POST /v1/images/generations endpoint that Open WebUI already knows how to talk to. In practice, that gives you a drop-in local replacement for DALL·E with zero data leaving your computer.
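To see what that compatibility means in practice, here is a minimal sketch of calling the endpoint directly with curl. It assumes host-side TCP access to Model Runner is enabled on its default port (12434 at the time of writing) and that the request body follows the standard OpenAI images API; adjust the host, port, and model name to match your setup.

# hypothetical direct request to the local OpenAI-compatible images endpoint
curl http://localhost:12434/engines/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
        "model": "stable-diffusion",
        "prompt": "a dragon in a business suit, studio lighting",
        "n": 1,
        "size": "512x512"
      }'

Open WebUI issues the same kind of request on your behalf whenever you ask it for an image.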
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (Diffusion Unified Format) to distribute image generation models through Docker Hub, just like any other OCI artifact. To get started, pull a model:
docker model pull stable-diffusion
You can verify it's ready with:
docker model inspect stable-diffusion
This reveals details such as the model ID, tags, creation timestamp, and configuration—including the DDUF file name and layout. Under the hood, the model is stored locally as a single DDUF file that bundles the text encoder, VAE, UNet/DiT, scheduler config, and other components into one portable artifact. Docker Model Runner knows how to unpack it at runtime.
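If you want to double-check what is sitting in your local model store before moving on, the CLI's list subcommand shows everything you've pulled:

docker model list
# each pulled model appears here with basic metadata such as its ID and size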

Step 2: Launch Open WebUI
Here's the magic part: Docker Model Runner has a built-in launch command that automatically wires up Open WebUI against your local inference endpoint:
docker model launch openwebui
That's it—a single command spins up the entire environment, connecting the local model to a sleek, familiar chat interface. You can now start generating images by typing prompts directly into the web UI.
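If you like to verify what actually got started, ordinary Docker tooling still applies. Assuming Open WebUI comes up as a regular container (its name and published port can vary between installations), you can spot it with:

docker ps
# look for the Open WebUI container, note the host port it publishes,
# then open http://localhost:<that-port> in your browser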
Going Further: Customize the Experience
Once you have the basic setup running, you can explore additional options:
- Try other models by pulling different DDUF packages from Docker Hub.
- Tweak resource allocation by adjusting Docker container limits (RAM, GPU).
- Integrate into your workflow by calling the OpenAI-compatible API from scripts or other apps, as sketched below.
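To expand on that last bullet, here is one way a script could request an image and write it to disk. Treat it as a sketch rather than a guaranteed recipe: it reuses the hypothetical local endpoint from the earlier example, assumes the backend honors the OpenAI-style b64_json response format, and requires jq on your PATH.

# request a single image and decode the base64 payload into a PNG file
curl -s http://localhost:12434/engines/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
        "model": "stable-diffusion",
        "prompt": "a watercolor fox reading a newspaper",
        "n": 1,
        "size": "512x512",
        "response_format": "b64_json"
      }' \
  | jq -r '.data[0].b64_json' \
  | base64 --decode > fox.png

Because the endpoint mirrors the OpenAI API, any OpenAI client library should work the same way once its base URL points at the local server.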
For more details, see the What You'll Need section above and the official Docker documentation.
Conclusion
Running your own local AI image generator is no longer a pipe dream. With Docker Model Runner and Open WebUI, you get a private, cost-free, and full-featured alternative to cloud services. No credits, no filters, no privacy concerns—just pure creative freedom on your own hardware. Give it a try and start generating dragons in business suits tonight.