Instant Preview Environments Under the Hood: Docker, WebSockets, and a Next.js Control Plane
A pull request sits open for four days because the backend change can't be reviewed without running it. Someone spins up a staging server manually. Someone else opens an ngrok tunnel from their laptop and forgets to close it. The client asks for a link and gets a screenshot instead.
We've been that engineer. We built PreviewDrop to stop being that engineer.
The core promise is simple: push a feature branch, get a live HTTPS URL posted to the pull request as a commit status check. The container runs, serves traffic, self-destructs when your TTL expires. No staging server to maintain. No ngrok tunnel to keep alive.
But the "simple" part took some doing. Here's the architecture that makes it work.
Why Most Preview Tools Don't Work for Backend Stacks
Vercel preview deployments are excellent — for Next.js. Render's PR previews work for web services that fit their build pipeline. But backend stacks are different. Django, Rails, Laravel, FastAPI, Spring Boot — these are long-lived processes in containers. They need a database connection, environment variables injected at runtime, background workers that might need to start, and WebSocket support that doesn't degrade.
A preview environment for a backend stack isn't a static deploy. It's a full container that needs to boot, bind a port, accept traffic, and stay healthy until the review is done. That requires infrastructure decisions most preview tools don't make — and it's where Docker becomes the foundation.
PreviewDrop's rule is: if it runs in Docker, it previews. No framework adapter. No serverless wrapper. No YAML manifest describing how to shape your app into a preview-shaped box. A working Dockerfile in the project root is the only prerequisite.
The Architecture: Three Layers
Layer 1 — The container runtime. Every preview is a Docker container running on a managed worker node. When GitHub Actions triggers a build, the image is pushed to a private registry. PreviewDrop pulls it, injects environment variables from the project's dashboard settings, and starts the container with the port your app exposes. The container gets a health check, a TTL, and an HTTPS reverse proxy route. It lives until the TTL expires or the PR merges — whichever comes first.
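The container start described above can be sketched as a function that assembles a Docker Engine "create container" payload. This is an illustrative shape, not PreviewDrop's actual internals; the PreviewSpec interface and the TTL label name are assumptions.

```typescript
interface PreviewSpec {
  image: string;                // e.g. a registry tag for the branch build
  env: Record<string, string>;  // injected from the project's dashboard settings
  port: number;                 // the port the app's Dockerfile exposes
  ttlHours: number;             // preview lifetime before self-destruct
}

// Builds a payload in the shape of the Docker Engine API's
// "create container" request body.
function buildContainerConfig(spec: PreviewSpec) {
  return {
    Image: spec.image,
    Env: Object.entries(spec.env).map(([k, v]) => `${k}=${v}`),
    ExposedPorts: { [`${spec.port}/tcp`]: {} },
    Labels: {
      // Hypothetical label the reaper could use to find expired previews.
      "previewdrop.expires_at": new Date(
        Date.now() + spec.ttlHours * 3600 * 1000
      ).toISOString(),
    },
    HostConfig: {
      // HostPort "0" asks Docker for an ephemeral host port, which the
      // reverse proxy can then route to.
      PortBindings: { [`${spec.port}/tcp`]: [{ HostPort: "0" }] },
    },
  };
}
```

Keeping the spec-to-payload step pure makes it easy to test without a Docker daemon in the loop.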
Layer 2 — WebSocket coordination. Building a container image and starting it takes time — anywhere from 30 seconds to a few minutes depending on the stack. During that window, the user is staring at a PR status check that says "pending." The dashboard needs to show real-time progress.
We use WebSockets to stream that pipeline of state transitions. The control plane pushes an event stream:
build_registered → image_pulled → container_allocated → starting → healthy → url_assigned
The dashboard subscribes to the relevant project channel and renders each stage as it happens. The PR commit status updates at the final url_assigned transition, with the URL attached. No polling. No spamming the API every 3 seconds waiting for a status field to flip.
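One subtlety in consuming a stream like this: events can arrive out of order after a socket reconnect. A minimal sketch of how a dashboard client might guard against that, using the stage names from the stream above (the helper itself is illustrative):

```typescript
const STAGES = [
  "build_registered",
  "image_pulled",
  "container_allocated",
  "starting",
  "healthy",
  "url_assigned",
] as const;

type Stage = (typeof STAGES)[number];

// Only advance the UI when the incoming stage is actually later than
// the one currently rendered; stale events are ignored.
function nextStage(current: Stage | null, incoming: Stage): Stage {
  const curIdx = current === null ? -1 : STAGES.indexOf(current);
  const inIdx = STAGES.indexOf(incoming);
  return inIdx > curIdx ? incoming : (current as Stage);
}
```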
Layer 3 — The Next.js dashboard. This is the surface area users actually touch. It handles authentication (GitHub OAuth), project management, environment variable configuration, deployment logs, and the live preview list. It consumes the WebSocket feed from layer 2 and renders the deployment pipeline in real time.
Next.js was the right choice here because the app is mostly server-rendered pages with some client-side reactivity for the deployment pipeline and log viewer. The API routes in /api/v1/ expose a programmatic interface for the CLI and CI workflows. The dashboard isn't a separate service — it's the same Next.js app, which keeps the architecture simpler than splitting into a separate SPA.
How the Share Link Works
When a preview reaches the healthy state, the control plane generates a shareable URL with a unique token. This is the link posted to the PR commit status. It's also the link you can send to a client, a PM, or a stakeholder who doesn't have a GitHub account.
The share link is a reverse proxy route through the control plane. A request hits https://{token}.previewdrop.dev, the proxy resolves the token to the correct container on the correct worker node, and the request is forwarded. TLS terminates at the edge. The container never sees the proxy — it just gets HTTP traffic on the port it bound.
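The token-to-container lookup can be sketched as a small resolver over the Host header. The in-memory Map stands in for whatever store the control plane actually uses, and the Route shape is an assumption:

```typescript
interface Route {
  workerHost: string; // the node the container runs on
  port: number;       // the host port the container is bound to
}

const routes = new Map<string, Route>();

// Resolves a request's Host header ({token}.previewdrop.dev, optionally
// with a :port suffix) to the upstream container, or null if the token
// is unknown or already invalidated.
function resolveUpstream(hostHeader: string): Route | null {
  const host = hostHeader.split(":")[0];
  const m = host.match(/^([a-z0-9-]+)\.previewdrop\.dev$/);
  if (!m) return null;
  return routes.get(m[1]) ?? null;
}
```

TTL expiry then reduces to deleting the Map entry: the route disappears and the token stops resolving, which is what invalidation means at the proxy layer.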
The link is password-protectable per preview. The TTL is configurable per project — 1 hour minimum, up to 168 hours on the Team plan. When the TTL expires, the container is stopped, the route is removed, and the token is invalidated. No zombie containers. No URLs that still work six months later and confuse someone looking at a stale Google index entry.
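The per-plan TTL bounds mentioned above (1 hour minimum, 168 hours on Team, 4 hours on Starter) suggest a simple clamp at configuration time. A sketch, with the table and function names as assumptions:

```typescript
const TTL_LIMITS: Record<"starter" | "team", { minHours: number; maxHours: number }> = {
  starter: { minHours: 1, maxHours: 4 },
  team: { minHours: 1, maxHours: 168 },
};

// Normalizes a requested TTL into the plan's allowed range.
function clampTtl(plan: "starter" | "team", requestedHours: number): number {
  const { minHours, maxHours } = TTL_LIMITS[plan];
  return Math.min(maxHours, Math.max(minHours, requestedHours));
}
```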
What Happens When a Container Dies
Containers fail. Out-of-memory kills. Unhandled exceptions. Missing environment variables that cause a crash-loop. The architecture needs to handle this gracefully.
The worker node monitors container health via a periodic HTTP check to the app's exposed port. If the check fails three consecutive times, the container is marked unhealthy. The WebSocket connection pushes an unhealthy event with the exit code or the last error response. The dashboard shows the status change and surfaces the most recent log lines.
If the container is crash-looping, the worker node detects the pattern and stops restarting after the third attempt. The dashboard shows a crashed state with the failure reason extracted from the container exit log. No infinite restart loops burning CPU on the worker node.
Environment variable misconfiguration is the most common cause of failures. The dashboard's "Scan .env.example" feature detects which variables your app expects and flags any that aren't set in the preview environment.
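A simplified version of that scan is just a diff between the keys declared in .env.example and the variables configured for the preview. This sketch ignores quoting and `export`-prefixed lines for brevity:

```typescript
// Parses .env.example text and returns the keys that are expected by the
// app but missing from the preview's configured variables.
function scanEnvExample(
  exampleText: string,
  configured: Record<string, string>
): string[] {
  const expected = exampleText
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith("#")) // skip blanks and comments
    .map((line) => line.split("=")[0].trim());
  return expected.filter((key) => !(key in configured));
}
```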
The Build Pipeline End to End
A developer pushes a feature branch to GitHub. GitHub Actions picks up the previewdrop.yml workflow, builds the Docker image, tags it with the branch name and commit SHA, and pushes it to the PreviewDrop registry. The workflow triggers the PreviewDrop API: "new image available for project X, branch Y, commit Z."
The control plane assigns a worker node, pulls the image, starts the container with the project's environment variables, and begins the health check loop. The WebSocket feed pushes state transitions to the dashboard. When the container passes its health check, the URL is generated, the route is activated, and the PR commit status updates to "success" with the preview URL attached.
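The final step, updating the PR check, maps onto GitHub's "create a commit status" API. A sketch of the payload builder; the field names are GitHub's, while the `context` string and descriptions are assumptions about what PreviewDrop sends:

```typescript
// Builds the request body for GitHub's "create a commit status" endpoint.
function commitStatusPayload(
  previewUrl: string,
  state: "pending" | "success" | "failure"
) {
  return {
    state,
    target_url: previewUrl,                 // the preview link shown on the PR
    description:
      state === "success" ? "Preview is live" : `Preview ${state}`,
    context: "previewdrop/preview",         // hypothetical status context name
  };
}
```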
The whole pipeline from push to live URL typically finishes in 45–90 seconds for most Django, Rails, and Laravel apps. Cold starts add another 20–40 seconds for the image pull. Subsequent builds are faster because the base layers are cached.
The Pricing Decision That Shaped the Architecture
Usage-based billing creates design pressure. If you bill per container-second, every inefficiency becomes revenue: longer build times, longer health checks, no auto-cleanup. You have no incentive to make containers self-destruct quickly, because every second is billable.
Flat pricing removes that incentive. PreviewDrop Starter is $19/month per workspace. That covers 5 concurrent previews, up to 3 team members, and a 4-hour TTL per container. The same $19 whether your team has a quiet week with two PRs or a crunch sprint with ten.
When billing doesn't punish usage, teams use the product more — more reviews, more client shares, fewer coordination bottlenecks. That's the behavior we want to encourage. It's also the behavior per-second pricing accidentally discourages.
Launching May 20th on Product Hunt
PreviewDrop launches on Product Hunt on May 20, 2026. The free tier ships that day — two concurrent previews, three projects, no credit card. Setup is one command: npx previewdrop setup.
If your backend runs in Docker and your team has more than one open PR right now, try the free tier when we launch.
Ready to give every branch a live URL?
Free tier — 2 concurrent previews, no credit card required.
Start free with GitHub