Killed One Service and Nothing Should Break


By Amitav Roy · Published May 7, 2026 · 8 min read

Splitting an app into services doesn't make them independent. Learn how two Laravel applications can communicate through Redis as a message broker so that one can go down — and come back up — without the other ever noticing.

Most developers think splitting their app into services automatically makes them independent. But if your services are calling each other directly — or sharing job classes over a queue — you haven't built microservices. You've built a distributed monolith. And a distributed monolith is worse than a regular one, because now you have all the complexity of distribution with none of the benefits.

In this post, we fix that. We're going to build two Laravel applications that communicate through Redis as a message broker — no direct HTTP calls, no shared code — and prove that when one goes down, the other doesn't even notice.


The Problem With "Microservices" That Are Still Coupled

Here's what most people do when they split services. Service A needs to tell Service B something happened, so they add an HTTP call. Or they push a Laravel Job to a shared queue. Seems fine until Service B goes down during a deployment. Now Service A is throwing exceptions. Your users feel it. Your on-call engineer feels it.

The root cause is runtime coupling. Service A can't complete its job without Service B being alive. That's not independence — that's just distance.

The goal of microservices isn't to have small services. It's to have services that can fail, deploy, and scale independently of each other. If you can't take one down without affecting another, you haven't achieved that goal yet.


What Independent Deployability Actually Means

Independent deployability means exactly what it sounds like. You can deploy, restart, or kill Service B at 3am and Service A keeps running without a single error. No failed requests. No lost data. No angry users.

The way you get there is by removing the assumption that the other service is always available. Instead of Service A calling Service B directly, Service A puts a message in a broker and forgets about it. Service B picks that up whenever it's ready. If it's down, the message waits. When it comes back, it processes everything it missed.

The broker becomes the contract. Not the service endpoint.
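The shape of that interaction can be sketched in a few lines. This is a stack-agnostic illustration in Python (the article's apps are Laravel); the in-memory deque is just a stand-in for Redis:

```python
import json
from collections import deque

# Stand-in for the broker (Redis in this article): a simple FIFO queue.
broker = deque()

def publish(event: str, payload: dict) -> None:
    """Service A: push a message and move on. No call to Service B."""
    broker.append(json.dumps({"event": event, "payload": payload}))

def consume_all() -> list:
    """Service B: drain whatever is waiting, whenever it comes back up."""
    handled = []
    while broker:
        handled.append(json.loads(broker.popleft()))
    return handled

# Service B is "down" right now: publishing still succeeds, messages wait.
publish("order.created", {"order_id": 1})
publish("order.created", {"order_id": 2})

# Service B comes back and processes the backlog.
backlog = consume_all()
```

Note that `publish` never checks whether anyone is listening. That is the whole trick: the producer's contract is with the queue, not with the consumer.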


The Architecture We're Building

We have two Laravel applications and Redis sitting between them.

[Architecture diagram: main app → Redis broker → analytics app]

The main app is responsible for raising events — things like user.registered and order.created. Its only job is to say "something happened" and push that to the broker. That's it. Its responsibility ends there.

The broker (Redis in our case) sits in the middle. It holds the events, acknowledges receipt, and waits for someone to come collect them. It doesn't care if analytics is up or down.

The analytics app runs a queue worker that constantly asks the broker: "Do you have anything for me?" If yes, it picks it up, routes it to the right job, and processes it. If no, it waits and asks again.

With this setup, the main app and analytics app never talk to each other directly. They don't even know the other exists. They just share a queue.


Events on the Main App Side

In the main app, events are raised inside actions. The CreateQuoteAction handles the database insert and then immediately dispatches a QuoteCreated event. Same pattern for user registration — CreateUserAction dispatches UserRegistered. The event lives inside the action so the side effect is always bundled with the operation. You can't call the action without the event firing.

The event itself is then picked up by a listener — SendQuoteCreatedToAnalytics — and this is where the actual push to Redis happens.

// Inside SendQuoteCreatedToAnalytics::handle()
Queue::connection('redis')->pushRaw(
    json_encode([
        'event' => 'quote.created',   // type discriminator: tells analytics how to route this
        'payload' => [
            'quote_id'   => $event->quote->id,
            'user_id'    => $event->quote->user_id,
            'product_id' => $event->quote->product_id,
            'qty'        => $event->quote->qty,
            'created_at' => $event->quote->created_at?->toIso8601String(),
        ],
    ]),
    'analytics'   // the named queue the analytics app polls
);

The listener pushes to a named queue called analytics. Main app's job is done. It doesn't wait. It doesn't check if analytics received it. It moves on.


The Event Contract — Why Raw Payload Matters

This is the part most people get wrong when they first set up cross-service queues in Laravel.

If you push a Laravel Job object instead of raw JSON, Laravel serializes the full class name into the payload — something like App\Jobs\QuoteCreated. Now your analytics app needs that exact class in the exact same namespace to deserialize it. You've just created tight coupling at the class level. Change the namespace, rename the job, restructure your app — and things break silently.

pushRaw gives you full control. You push plain JSON. You define the shape. The analytics app reads that shape and does whatever it wants with it. Neither side cares how the other is structured internally.
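The difference is easy to simulate. A hedged Python sketch (the `coupled` payload only roughly mimics the shape of a Laravel-serialized job; `consumer_classes` is a made-up stand-in for the consumer's class map):

```python
import json

# Roughly what a class-coupled payload looks like: the consumer must
# resolve this exact class path before it can do anything with the data.
coupled = {"job": "App\\Jobs\\QuoteCreated", "data": "...serialized object..."}

# The analytics app's class map: it has no App\Jobs\* classes at all.
consumer_classes = {}

can_deserialize = coupled["job"] in consumer_classes  # False: tight coupling

# Raw JSON carries only a documented shape: no class resolution needed,
# so any consumer in any language can read it.
message = json.dumps({"event": "quote.created", "payload": {"quote_id": 42}})
decoded = json.loads(message)
```

Rename `App\Jobs\QuoteCreated` on the producer and the coupled payload becomes unreadable; the raw JSON message doesn't care.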

That JSON shape is your event contract. The rules are simple:

  • The main app can add new fields freely — analytics will ignore what it doesn't need
  • The main app should never remove or rename existing fields without versioning
  • Always include an event key as a type discriminator so the analytics app knows how to route it

This is the only thing the two apps share. A documented payload shape. Everything else is completely independent.
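One way to make the contract concrete is a tiny validation step on the consumer side. A minimal Python sketch; the `REQUIRED` set and `accept` helper are illustrative, not from the article:

```python
import json

# The documented contract: the only fields analytics relies on.
REQUIRED = {"event", "payload"}

def accept(raw: str) -> dict:
    """Decode an incoming message and check it against the contract."""
    msg = json.loads(raw)
    missing = REQUIRED - msg.keys()
    if missing:
        raise ValueError(f"contract violation, missing: {missing}")
    return msg

# The producer added a new top-level field. The consumer doesn't know
# about it, doesn't need it, and nothing breaks.
msg = accept(json.dumps({
    "event": "quote.created",
    "payload": {"quote_id": 7, "qty": 3},
    "schema_version": 2,   # new field, safely ignored
}))
```

Additions pass through silently; only a missing `event` or `payload` key is treated as a broken contract, which matches the versioning rule above.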

Consuming Events in the Analytics App

On the analytics side, there's a console command — analytics:listen — that runs a continuous loop, polling the Redis queue. When it finds a message, it decodes the JSON, reads the event key, and dispatches the appropriate job through an event router.

If event is user.registered, it dispatches ProcessUserRegistration. If it's quote.created, it dispatches ProcessQuoteCreated. Each job reads the payload and saves it to the analytics database.

Conceptually it looks like this:

Redis queue
    ↓
analytics:listen (polling loop)
    ↓
EventRouter (reads event key)
    ↓
ProcessUserRegistration  /  ProcessQuoteCreated
    ↓
Analytics database

The analytics app has its own job classes with its own namespaces. None of them exist in the main app. None of the main app's classes exist here. The two codebases are completely separate — connected only by the agreed payload format over the queue.
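That flow can be sketched as a minimal polling loop. Python again as a stack-agnostic stand-in: the `ROUTES` dict plays the role of the EventRouter, and the lambdas stand in for the ProcessUserRegistration / ProcessQuoteCreated jobs:

```python
import json
from collections import deque

queue = deque()   # stand-in for the Redis "analytics" queue
processed = []    # stand-in for the analytics database

# Event router: maps the `event` discriminator to a handler ("job").
ROUTES = {
    "user.registered": lambda p: processed.append(("user", p["user_id"])),
    "quote.created":   lambda p: processed.append(("quote", p["quote_id"])),
}

def listen_once() -> bool:
    """One iteration of the analytics:listen loop: pop, decode, route."""
    if not queue:
        return False                 # nothing waiting; poll again later
    msg = json.loads(queue.popleft())
    handler = ROUTES.get(msg["event"])
    if handler:
        handler(msg["payload"])      # dispatch to the matching job
    return True

queue.append(json.dumps({"event": "user.registered", "payload": {"user_id": 1}}))
queue.append(json.dumps({"event": "quote.created", "payload": {"quote_id": 9}}))
while listen_once():
    pass
```

Unknown event types simply fall through `ROUTES.get` and are dropped, which is one reasonable policy; logging or dead-lettering them is another.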


The Demo — Kill the Service, Place More Orders

This is where the concept becomes real.

Start both apps running normally. Place a few orders, register a few users. The analytics app picks up every event in near real-time. Everything is working as expected.

Now kill the analytics queue worker. Stop it completely as if it's going through a deployment.

Go back to the main app and place more orders. Register more users. The main app doesn't throw a single error. It doesn't know analytics is down. It just keeps pushing events to Redis and moving on.

Check Redis directly:

redis-cli llen queues:analytics

(Laravel's Redis queue driver prefixes queue keys with queues: by default, so the list to inspect is queues:analytics, not analytics.)

You'll see the events piling up — 3, 5, 10. They're not lost. They're just waiting.

Now start the analytics worker back up. Within seconds it processes the entire backlog. Every missed event lands in the analytics database. The numbers catch up completely.

That's the point. One service went down. The other never noticed. No data was lost. No users were impacted. That's what independent deployability actually looks like.


Throughput Control — An Underrated Advantage

There's another benefit here that doesn't get talked about enough.

When your main app has a traffic spike, you scale it up — more servers, load balancer, the usual. But do you really need to scale your analytics app at the same rate? Analytics isn't user-facing. It doesn't need to be real-time. It just needs to eventually catch up.

With direct service-to-service communication, a spike in the main app means a spike hits analytics immediately. You're forced to scale both together.

With this queue-based approach, the broker absorbs the spike. The main app dumps thousands of events into Redis, Redis holds them, and analytics clears through that backlog at whatever pace it's running at. You can scale analytics independently, on its own schedule, based on how far behind the queue is — not based on what the main app is doing.

This gives you a very predictable, measurable throughput. You can calculate how many events your analytics app processes per minute, set alerts on queue depth, and make informed decisions about when to scale. That's a level of operational control you simply don't get when services talk directly.
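As a worked example with hypothetical numbers (the 6000-event backlog and 300 events/minute rate are assumptions for illustration, not measurements from the article):

```python
# Back-of-envelope catch-up math after a traffic spike.
backlog = 6000          # queue depth reading (e.g. from redis-cli llen)
rate_per_worker = 300   # measured events/minute for one worker

minutes_one_worker = backlog / rate_per_worker            # 20 minutes
minutes_three_workers = backlog / (3 * rate_per_worker)   # under 7 minutes
```

If a 20-minute lag is acceptable for analytics, you do nothing; if not, you add workers until the catch-up window fits your tolerance. Either way the decision is driven by queue depth, not by the main app's traffic.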


What This Pattern Gives You

To summarise what we've built:

Fault tolerance — one service going down doesn't cascade. The main app runs fine. Events accumulate safely. Analytics catches up when it recovers.

Independent deployability — deploy, restart, or update analytics without coordinating with the main app team or scheduling maintenance windows.

No shared code — two completely separate codebases. The only shared thing is a documented JSON payload shape.

Replay capability — since events sit in the queue until consumed, you can replay them, reprocess them, or extend the consumer to fan out to additional services without touching the main app at all.

Controlled throughput — analytics processes at its own pace. Scale it when you need to, not because the main app forced your hand.

This pattern isn't just a Laravel thing. It applies to any stack, any language, any framework. The principle is the same: identify which parts of your system can fail, decide what your system should do when they do fail, and architect accordingly. The queue is just the mechanism. The thinking is what matters.

System design


Need help with system design or architecture?

I work with engineering teams on technical audits, architecture reviews, and scaling strategy. Let's discuss your challenges.

Let's talk