← All posts

No boarding without a ticket

A backlog that grows isn't a reason to ditch the ticketing system. It's a diagnostic.

A company I know threw out their ticketing system last year. The CEO called it bureaucracy. "Just come talk to me," he said. Within a month, tasks were being lost. Within two, people were being blamed for losing them. The people being blamed were, naturally, the ones who'd wanted the system in the first place.

This pattern shows up everywhere. It usually happens because people conflate the ticket with the backlog.

What a ticket actually is

A ticket is a request with a tracking number. "I need X." Someone writes it down. It gets a state: pending, in progress, done, rejected. The person who asked doesn't have to stand over the person doing the work. They go do something else. When it's done, they get told.

But a ticket is also a notebook. Progress updates, notes on problems encountered, screenshots, data, links. The kind of context that's invaluable when a similar task comes up six months later, or when someone needs to understand why a decision was made. You don't need a committee-approved design document for every moderately complex piece of work -- a well-maintained ticket thread is often enough. It's institutional memory at the granularity of individual tasks, written by the people who did the work, while they were doing it.

Nobody complains about this when it works fast. Created at 11:35, in progress by 13:05, completed and deployed by 15:10 -- that's the rhythm in a fast-paced startup. The system is invisible. The request flows.

The complaints start when the backlog grows. Fifteen requests ahead of yours. Three-week wait. The person who picks it up asks clarifying questions because the context went stale in the queue. Another week. I've written about what this looks like in practice -- a data scientist files a deployment request on Tuesday, and by the time her model goes live three weeks later, the data has drifted and the model is stale.

The other complaint is about the form itself. Fifteen mandatory fields. A dropdown for "business justification" with no option that fits. Requests rejected because the description didn't follow the template, even when it's obvious to anyone reading it what needs to be done. The team's dysfunction -- gatekeeping, CYA, process for the sake of process -- has leaked through into the thing the requester has to interact with. The system has become a bad interface.

Both failures get blamed on "tickets." Neither is actually caused by tickets.

What happens when you remove them

The person most opposed to a ticketing system is usually the one assigning the most tasks. The pushback comes from the top. Meanwhile, the people being blamed for losing tasks are the ones most in favour of having a system. They want the visibility. They want to stop being held accountable for things nobody tracked.

Remove the tickets and you get:

  • No queue visibility. You can't see the backlog, so you can't manage it.
  • No state tracking. Did that thing get done? Check Slack. Search for the thread. Hope someone replied.
  • Priority by volume. Whoever asks loudest gets served first. Quiet, important work sinks.
  • No data. A ticket system tells you where the bottlenecks are. Without it, you can feel that things are slow but you can't diagnose why.
  • No accountability. Without tickets, blame flows downhill. When a CEO tells a technician with a soldering iron in hand to do something and that task gets lost, it's not going to be the CEO who takes the blame. Tickets create traceability -- who asked for what, when, and what happened. Remove the tickets and you remove the operational data. What's left is org-chart politics.

You've replaced a bad implementation behind a good interface with no interface at all.

Think about your own team

How do requests flow in your organisation right now? If someone needs something from another team, what happens? Is there a system that tracks the request, or does it live in a Slack thread that will be buried by tomorrow? When something falls through the cracks, can you trace why -- or does it become a he-said-she-said about who forgot?

That engineer who left six months ago had some interesting findings about a sensor that suddenly started misbehaving. It's misbehaving again. He didn't have time to write it up on Confluence after the fact -- who does, when there's a deadline looming? But the notes, the data, the links he assembled during the work itself -- are they on a ticket somewhere, easily recalled? Or were they in his home folder on a laptop that's since been reimaged and issued to the new intern? Or in a Slack thread you can't find without remembering a specific word to search for?

If you removed your ticketing system and things got better, the tickets weren't the problem -- you just had a bad system. If you removed it and things got worse, you removed the only thing making the chaos legible.

The fix that actually worked

Keep the tickets. Fix the implementation.

The backlog is a diagnostic. If 60% of tickets are deployment requests that could be self-service, the answer isn't more people processing them -- it's automating them. The interface stays the same: request in, state tracked, result out. The implementation changes from "a person runs a script" to "a pipeline runs automatically." If tickets stall because of handoffs between teams, the underlying problem is coordination cost. If one person is the bottleneck for a class of request, the underlying problem is knowledge concentration.

Sometimes the fix is backpressure -- a team saying "we're not accepting new requests this sprint." Most teams never do this. They accept everything, deliver nothing on time. At Bolt, the philosophy was: if someone has to work late once to make something happen, that's unfortunate but sometimes stuff needs to get done. If it happens often, it's an organisational problem, and you make it visible by letting it fail. Not by heroically absorbing the overload and hiding the symptom. I remember an urgent request landing in the DE team support channel once -- some team needed something yesterday. The head of DE dropped into our private channel: "We told them a year ago to start working on this. They left it to the last minute and expect you to fix that for them. Let it fail." With a link to the Confluence page on this exact culture value.

The technical parallel

If you're an engineer, you already know this pattern by a different name.

In Node.js, there's a metric called event-loop lag -- the delay between when the runtime schedules a callback and when it actually executes. When it climbs, your async code is still technically non-blocking, but everything feels slow. At Bolt, we tracked this in Grafana and alerted on it in Slack. When it spiked, the on-duty engineer had to acknowledge it.
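The metric itself is simple to approximate: ask the runtime to call you back after a fixed delay, then measure how late the callback actually fires. A minimal sketch (Node's built-in `perf_hooks.monitorEventLoopDelay()` does this properly, with a histogram; this just shows the idea):

```javascript
// Sample event-loop lag once: schedule a timer for `intervalMs` and
// measure how much later than that the callback actually runs.
function sampleLag(intervalMs, onSample) {
  const scheduled = process.hrtime.bigint();
  setTimeout(() => {
    const elapsedMs = Number(process.hrtime.bigint() - scheduled) / 1e6;
    onSample(elapsedMs - intervalMs); // lag = actual delay minus requested
  }, intervalMs);
}

// On an idle loop the lag is near zero; block the loop with heavy
// synchronous work and it climbs.
sampleLag(50, (lag) => console.log(`event-loop lag: ${lag.toFixed(1)} ms`));
```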

We had one case where event-loop lag spiked to several seconds, once per day, like clockwork. A service was parsing a large data structure synchronously -- blocking the entire event loop while it chewed through the input. The fix was batching: split the input into chunks with a dummy await between each batch, yielding back to the event loop so other work could proceed. A manager doing the same thing would call it delegation.

A ticket is a Promise -- it tracks state (pending, in progress, resolved, rejected), the requester doesn't block, and producer and consumer are decoupled.
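The analogy can be made literal with a deferred-Promise sketch. The `Ticket` class and its methods are illustrative, not a real API:

```javascript
// A ticket as a deferred Promise: the requester files it and moves
// on; whoever picks it up resolves or rejects it later.
class Ticket {
  constructor(description) {
    this.description = description;
    this.state = 'pending';
    this.promise = new Promise((resolve, reject) => {
      this.resolve = (result) => { this.state = 'resolved'; resolve(result); };
      this.reject = (reason) => { this.state = 'rejected'; reject(reason); };
    });
  }
}

// Requester: files the ticket and doesn't block on it.
const ticket = new Ticket('deploy model v2');
ticket.promise.then((result) => console.log('done:', result));

// Worker: picks it up later, fully decoupled from the requester.
ticket.resolve('deployed at 15:10');
```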

Removing a ticket system because the backlog is too long is like removing Promises from your codebase because the event loop is lagging. You haven't fixed the throughput problem. You've removed the interface that made it visible.

When event-loop lag spikes, you don't remove the event loop. You profile. You offload heavy synchronous work to worker threads. You add backpressure. The organisational equivalent is the same.

What patterns would you apply?

Software engineers have spent decades building patterns for exactly this: stop accepting work when you're overloaded, spread it evenly, slow down intake rather than dropping things silently, keep functioning even when part of the system is broken. Linus Torvalds runs fix-only merge windows for the Linux kernel -- that's backpressure at project scale. There's a longer conversation here about what distributed systems teach us about management. It starts with a map and some crayons.

It's as if your team were playing games of chess on physical boards. Then someone decides boards are too slow, and switches to passing bits of paper with chess notation back and forth. Now people mistake a move from one game for another. They lose track of state. They lose moves. They lose entire games. Some of them can't even read notation. Lines get crossed, packets get dropped, state gets corrupted. The games haven't changed. You just removed the interface that made them playable.