A/B Routing With Nginx: No Third-Party Tools Required


By Amitav Roy Published April 28, 2026 6 Min Read

Nginx can handle A/B routing natively — no LaunchDarkly, no Optimizely, no SDK. Three directives and a cookie. Here's how it works.

TLDR

  • Nginx routes users to different application variants based on traffic source using three built-in directives — no SDK, no library, no service required
  • map + $arg_source + $cookie_ab_variant is all you need for source-based routing with cookie persistence
  • Routing at the infrastructure layer keeps application code completely unaware of variants — no feature flags in PHP, no branching logic, no imports
  • This works in production with known caveats: session-only cookies, CDN cache bleed, and silent default assignment when users bypass the entry path
  • If you need user-level targeting, percentage splits, or experiment analytics, a proper feature flag service is the right tool; for simple traffic-source routing, those services are overkill

The Problem With Reaching for a Tool

A/B routing has become synonymous with third-party services. The moment a team needs to send blog traffic to one variant and LinkedIn traffic to another, the default answer is LaunchDarkly, Optimizely, or Split.io.

These services are genuinely powerful for user-level targeting, gradual rollouts, and experimentation analytics. They're also expensive, and they put routing logic inside the application. Your frontend or backend starts importing SDKs, checking flags, branching — and the routing knowledge leaks everywhere.

For simple traffic-source routing, Nginx already has everything you need. No SDK. No cost. No application changes.


The Architecture

The setup is deliberately minimal:

                +--------------------+
                |    Nginx :8080     |
                |   (traffic cop)    |
                +--------+-----------+
                         |
           +-------------+-------------+
           |                           |
+----------+--------+       +----------+--------+
|   app_a :3000     |       |   app_b :3000     |
|  (blog variant)   |       | (LinkedIn variant)|
+-------------------+       +-------------------+

One Nginx container acts as the reverse proxy. Behind it sit two PHP containers: app_a and app_b. The app containers are never exposed directly — all traffic enters through Nginx on port 8080.

nginx-ab-demo/
├── docker-compose.yml
├── nginx/
│   └── nginx.conf
├── static/
│   └── index.html
├── app_a/
│   ├── Dockerfile
│   └── src/
│       ├── index.php
│       └── product.php
└── app_b/
    ├── Dockerfile
    └── src/
        ├── index.php
        └── product.php

Five moving parts. The entry point is a static index.html with two buttons. Everything interesting happens in nginx.conf.
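
The compose file itself stays small. A sketch of what docker-compose.yml plausibly contains (the image name, build contexts, and volume mounts here are assumptions inferred from the tree above, not the repo's actual file):

```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"            # the only port published to the host
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./static:/usr/share/nginx/html:ro
    depends_on: [app_a, app_b]

  app_a:
    build: ./app_a           # PHP container, blog variant, listens on :3000
    # no ports section: reachable only on the internal compose network

  app_b:
    build: ./app_b           # PHP container, LinkedIn variant, listens on :3000
```

Because app_a and app_b declare no published ports, the only way in is through Nginx.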


How Nginx A/B Routing Works

The flow across two requests:

Visit 1: ?source=blog -> $arg_source -> map -> variant_a -> Set-Cookie: ab_variant=variant_a
Visit 2: /product.php -> $cookie_ab_variant -> map -> upstream: app_a

Step by step:

  1. User hits the home page with ?source=blog or ?source=linkedin
  2. Nginx reads the query param via $arg_source — a built-in variable, zero parsing required
  3. A map block converts the source value into a variant name
  4. Nginx sets the ab_variant cookie on the response using add_header Set-Cookie with always — fires even on non-200 responses
  5. User lands on a variant-specific home page served by the correct PHP container
  6. On subsequent requests, Nginx reads $cookie_ab_variant
  7. A second map block converts the cookie value into an upstream group name
  8. Nginx proxies to the correct container

The PHP containers have no idea any of this is happening. They serve their pages. Nginx owns the routing entirely.
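
The decision Nginx makes is nothing more than two keyed lookups with defaults. A Python sketch of the same semantics (purely illustrative: the function names are made up, and in reality Nginx evaluates its map blocks natively):

```python
# Mirrors the two nginx map blocks: ?source -> variant, cookie -> upstream.
SOURCE_TO_VARIANT = {"blog": "variant_a", "linkedin": "variant_b"}
COOKIE_TO_UPSTREAM = {"variant_a": "app_a", "variant_b": "app_b"}

def assign_variant(arg_source):
    """Visit 1: map ?source=... to a variant; unknown or missing falls back to variant_a."""
    return SOURCE_TO_VARIANT.get(arg_source, "variant_a")

def pick_upstream(cookie_ab_variant):
    """Later visits: map the ab_variant cookie to an upstream group; default is app_a."""
    return COOKIE_TO_UPSTREAM.get(cookie_ab_variant, "app_a")

# Visit 1: ?source=linkedin assigns variant_b (and Nginx would set the cookie).
variant = assign_variant("linkedin")
# Visit 2: the cookie alone decides which container serves the request.
print(variant, pick_upstream(variant))  # variant_b app_b
```

Note that both dictionaries share the same silent fallback discussed later: an unknown source or a missing cookie quietly becomes variant_a / app_a.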


The Nginx Config

map $arg_source $ab_variant {
    blog      "variant_a";
    linkedin  "variant_b";
    default   "variant_a";
}

map $cookie_ab_variant $target_upstream {
    variant_a "app_a";
    variant_b "app_b";
    default   "app_a";
}

upstream app_a {
    server app_a:3000;
}

upstream app_b {
    server app_b:3000;
}

server {
    listen 80;

    location = / {
        add_header Set-Cookie "ab_variant=$ab_variant; Path=/; HttpOnly" always;
        proxy_pass http://$target_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location = /product.php {
        proxy_pass http://$target_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Three directives carry the load: map, $arg_source, and $cookie_ab_variant. The location = / exact match modifier ensures cookie-setting logic fires only on the root path, not on every route.

Docker's internal DNS resolves app_a and app_b to the correct containers automatically. No service discovery configuration required.


Trade-offs and Failure Modes

Where Nginx Routing Holds

  • Traffic-source routing: blog, LinkedIn, email, any source you can pass as a query param
  • Variant persistence across a session via cookie
  • Zero application code changes — you can add or swap variants without touching the app
  • No added latency — Nginx is already in the request path

Where It Breaks Down

  • User-level targeting: Nginx has no concept of user identity. "Show variant B to returning users" is not possible here.
  • Percentage splits: this map-based setup can't send 20% of organic traffic to variant B. Stock Nginx does ship a split_clients directive for hash-based percentage bucketing, but anything dynamic or user-aware still pushes you toward OpenResty/Lua or a different layer.
  • Experiment analytics: Nginx won't tell you which variant converted better. You'd need to instrument the apps separately.
  • CDN cache bleed: a CDN in front of Nginx may cache a variant A response and serve it to a variant B user. Vary: Cookie is mandatory in that scenario.
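
On the percentage point: Nginx's built-in split_clients directive hashes a key into weighted buckets, which covers fixed-ratio splits. A minimal sketch (the key and the percentages are arbitrary choices for illustration, not part of the demo config):

```nginx
# http context: hash client IP + user agent into stable weighted buckets
split_clients "${remote_addr}${http_user_agent}" $split_variant {
    20%     "variant_b";   # roughly 20% of distinct clients
    *       "variant_a";   # everyone else
}
```

The hash is stable for a given key, so the same client keeps landing in the same bucket; what it won't give you is per-user reassignment or any coordination with the cookie-based map above.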

The Silent Failure Worth Knowing

If a user lands directly on /product.php without first visiting the root path, the cookie is never set. They hit default app_a via the map fallback. That may be the right behavior — but it's an assumption baked into the config, not an explicit decision. Make it visible in your runbook.
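
A low-effort way to surface that fallback (a sketch; the log format name and field selection are my own, not from the demo) is to log the routing decision on every request:

```nginx
# http context: record which variant and upstream each request resolved to
log_format ab_routing '$remote_addr "$request" source=$arg_source '
                      'variant=$cookie_ab_variant upstream=$target_upstream';
access_log /var/log/nginx/ab.log ab_routing;
```

Requests that arrived without the cookie show up with a "-" for the variant, so silent default assignments become countable instead of invisible.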


What to Harden Before Production

The POC config works. In production, four additions are non-negotiable:

  • Max-Age on the cookie — without it, the cookie is session-only and a user who closes the browser gets re-assigned on return, possibly to a different variant
  • Secure flag in HTTPS environments — prevents the cookie from transmitting over plain HTTP
  • SameSite=Lax — keeps the cookie off cross-site subrequests (it still accompanies top-level navigations), reducing CSRF exposure
  • Vary: Cookie on responses if a CDN sits upstream — without it, CDN nodes will cache a variant and serve it indiscriminately

The updated Set-Cookie header:

add_header Set-Cookie "ab_variant=$ab_variant; Path=/; HttpOnly; Max-Age=86400; Secure; SameSite=Lax" always;
add_header Vary Cookie always;

The Broader Principle

Routing is an infrastructure concern. The moment you implement it in application code, you've coupled two things that shouldn't be coupled: what the app renders and who it renders it for.

Nginx sits at the edge of every request. It already owns the response. It can read headers, query params, and cookies without any help. For traffic-source routing, it's the right place to make the decision — and keeping that decision there means variant rules can change without a deployment, rollback is a config edit, and the application code stays clean.

Reach for a feature flag service when you need user-level targeting, experimentation analytics, gradual rollouts, or kill switches. Not as the default starting point.


Key Takeaways

  • Nginx's map directive, $arg_source, and $cookie_ab_variant are sufficient for traffic-source routing with session persistence — no third-party dependency required
  • Routing at the infrastructure layer keeps application code variant-agnostic; changes don't require a redeployment
  • The failure modes are specific and knowable: no user-level targeting, no percentage splits, CDN cache bleed without Vary: Cookie, and silent default assignment on direct URL access
  • Harden the cookie with Max-Age, Secure, and SameSite=Lax before calling it production-ready
  • Reach for LaunchDarkly or similar when you need analytics, targeting by user attribute, or percentage-based rollouts — not as the reflexive first choice for simple source-based routing
