Concept Note

Impersonal Flow Signaling (IFS)

Make coordination failures visible before they become crises

Tags: Org systems · Workflow · Coordination · SLO thinking · Work as graph


The Problem

Your team is blocked waiting on infrastructure. The infra team knows this, your manager knows this, everyone in the standup knows this. Nothing happens until someone escalates, which feels political and awkward, so everyone waits a bit longer. The delay compounds. The project slips. Nobody is at fault, but the work didn't flow.

This pattern is everywhere: code reviews that sit for days, approvals that disappear into queues, cross-team dependencies that stall silently. The failure isn't in any individual—it's in the coordination system itself, which has no way to signal when work has stopped moving.

Every time an engineer has to ping a colleague with "hey, can you look at this?", they're spending social capital. IFS makes that capital free—the system signals for them.

Impersonal Flow Signaling is a lightweight approach to make stalled work visible automatically, without blame or escalation theater.

What It Is

IFS treats work dependencies like service level objectives: you define what "healthy flow" looks like (code reviews complete within 48 hours, approvals are acknowledged within 24 hours), and the system alerts when reality diverges from expectation.
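Those expectations can be written down as plain data. A minimal sketch in Python, assuming an illustrative `FlowSLO` shape rather than any real tool's schema:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class FlowSLO:
    """Expected dwell time for work sitting in one observable state."""
    state: str            # the state being watched, e.g. "in_review"
    max_dwell: timedelta  # what "healthy flow" looks like
    owner_queue: str      # where a breach routes (a team queue, not a person)

# The two expectations named above, expressed as data.
SLOS = [
    FlowSLO("in_review", timedelta(hours=48), "#repo-owners"),
    FlowSLO("awaiting_approval", timedelta(hours=24), "#approvals-queue"),
]

def breached(slo: FlowSLO, dwell: timedelta) -> bool:
    """Impersonal: a fact about elapsed time, not about a person."""
    return dwell > slo.max_dwell
```

A breach here is just `dwell > max_dwell`: nobody has to raise their hand for the signal to fire.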

The key word is impersonal. Signals come from observable facts—how long work has been in a particular state, whether dependencies are moving—not from someone deciding to raise their hand and say "we're blocked."

Think of it like monitoring for software systems, but applied to workflow: you're not watching for server crashes, you're watching for coordination breakdowns.

Core Principles

1) State over story

Signals are derived from observable state transitions, not intent, effort, or explanations.

2) Dependencies are first-class

Work is a graph. Edges (handoffs, approvals, reviews) matter as much as nodes (tickets).

3) Visibility before escalation

The system surfaces signals automatically; humans decide what action to take.

4) Flow is a system property

Stalls indicate coordination debt and queue health—not individual failure.

How It Works (Practically)

Start with one workflow that routinely causes pain. Let's say code reviews.

Step 1: Define the expectation. Pick an observable threshold, e.g. "PRs in 'review requested' should move within 48 hours."

Step 2: Make violations visible. Surface breaches on a shared dashboard the team owns, not in private messages.

Step 3: Route attention to the right place. Signals go to the owning team or queue, not to individuals and not up the management chain.

Step 4: Treat it as system data, not performance data. Review the patterns weekly to improve the process, never to score people.

What you're not doing: Measuring individual review speed. Paging people when thresholds are breached. Creating new approval processes. Generating reports for management.

Diagram: Work as a Graph, Not a List

In list-based tooling, stalled work hides behind status labels. In a graph, stalls appear as edges that stop moving. The most useful "alert" is often simply: the graph stopped changing where it normally does.
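That alert can be sketched by tracking when each dependency edge's upstream item last changed state. Node names follow the diagram; nothing here is a real ticketing API:

```python
from datetime import datetime, timedelta

# Dependency edges: (upstream, downstream) -> when the upstream item
# last changed state. In list-based tooling this information hides
# behind status labels; here it is the primary object.
last_transition = {
    ("B", "D"): datetime(2025, 12, 1, 9, 0),   # Infra Request -> Approval
    ("C", "E"): datetime(2025, 12, 3, 15, 0),  # Code Review -> Merge
}

def stalled_edges(now, dwell_slo=timedelta(hours=72)):
    """Return edges whose upstream item has not moved within the SLO."""
    return [edge for edge, ts in last_transition.items()
            if now - ts > dwell_slo]

# At Dec 5 09:00, B->D has sat for 96h (a breach); C->E for 42h (healthy).
```

The "alert" is nothing more than the list of edges where the graph stopped changing.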

[Diagram: a work dependency graph of six nodes: A Feature Ticket (App team, In Progress), B Infra Request (Infra team, Pending), C Code Review (Repo owners, In Review), D Finance Approval (Procure queue, Waiting), E Merge + Deploy (Release team, Ready), and F Customer Outcome (value delivered when A, B, C, D converge). The edge B→D is dashed and labeled "IFS: dependency breach", marking that it exceeded its dwell-time SLO; the signal routes to the owning queue, not a person.]

Look at your current sprint board: If you removed the 'Status' columns and only drew lines between dependencies, where would the lines be thickest and least likely to move?

Understanding Different Types of Stalls

Not all stalls are the same, and they don't all require the same response. IFS helps you distinguish between different signal types:

Capacity Signals: The queue is full because you don't have enough people.

Example: Your only DevOps engineer is on vacation, and all infrastructure requests are backed up. The team is overwhelmed, and every new request adds to the backlog.

Another example: A security team of two people handles approvals for all engineering teams. During peak release cycles, the approval queue grows to 20+ items, each taking 3-5 days. The team is working at capacity, but the organization's approval needs exceed available bandwidth.

Response: This is a resourcing problem. You need more capacity, better cross-training, or clearer prioritization mechanisms.

Clarity Signals: Work is stalled because requirements are fuzzy and nobody wants to touch it.

Example: A ticket has been "In Progress" for a week, but the engineer keeps asking questions in comments. The product manager responds with "let me check" and never follows up. The ticket bounces between states without moving forward.

Another example: A feature request sits in "Design Review" for two weeks. The designer asks clarifying questions, but the stakeholder who submitted it is unavailable. The ticket accumulates comments but no decisions are made. Other work waits on this decision.

Response: This is a requirements problem. Pause and clarify scope before resuming work. Consider time-boxing clarification periods.

Handoff Signals: The work is done, but the handoff was missed.

Example: A PR was approved but never merged; an approval was granted but the requester didn't see it. The work completed, but communication broke down at the transition point.

Another example: Infrastructure provisioning completed, but the notification went to a distribution list that nobody monitors. The requesting team doesn't know the work is done and continues waiting. A week passes before someone manually checks the status.

Response: This is a notification/process problem. Fix the handoff mechanism or add automated nudges. Consider explicit "ready for next step" states that trigger notifications.

Priority Signals: Work is stalled because something more important came up.

Example: Code reviews are delayed because the team is handling a production incident. Normal work paused to address an urgent issue.

Another example: A design review is delayed because the designer was pulled into a critical customer-facing feature request. The original ticket waits, but the team's priorities shifted to higher-value work.

Response: This is healthy system behavior, not dysfunction. Document it but don't "fix" it. Consider adjusting SLOs during known high-priority periods or using priority flags to context-switch gracefully.
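A rough triage heuristic can map observable facts onto these four signal types. This is only a sketch with made-up field names; it produces a first guess, and a human still makes the call:

```python
def classify_stall(facts: dict) -> str:
    """First-pass triage of a stalled item into the signal types above.

    `facts` holds observable state only (no intent, no explanations);
    every field name here is illustrative, not a real tool's schema.
    """
    if facts.get("higher_priority_active"):
        return "priority"    # healthy re-prioritization: document, don't "fix"
    if facts.get("upstream_complete"):
        return "handoff"     # work finished but the transition was missed
    if facts.get("unanswered_questions", 0) > 0:
        return "clarity"     # fuzzy scope: pause and clarify, time-boxed
    if facts.get("queue_depth", 0) > facts.get("healthy_queue_depth", 5):
        return "capacity"    # demand exceeds bandwidth: a resourcing problem
    return "unclassified"    # surfaced for human interpretation
```

The ordering matters: an active incident explains a stall regardless of queue depth, and completed-but-unhandled work is a notification problem even if questions are open elsewhere.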

Why This Is Hard (And Worth Doing Anyway)

The tooling problem

Most ticketing systems don't model dependencies cleanly. Jira links are messy, state transitions are inconsistent, and what the ticket says often diverges from what's actually happening. You'll spend time cleaning this up. That's not wasted effort—making dependencies explicit is valuable even without IFS.

The cultural problem

Any measurement can be weaponized. If your organization uses metrics punitively, IFS will become another cudgel. The only defense is leadership commitment to treating signals as system feedback, not individual scorecards. If you don't have that commitment, consider whether IFS is right for your context—it might make things worse.

The interpretation problem

Not all stalls are problems. Sometimes a review sits for 72 hours because the reviewer is deep in another priority. Sometimes a dependency "breach" reflects good judgment—the team decided to fix a production issue instead. Automation surfaces signals; humans still need to interpret them with context and nuance.

The Anti-Gaming Protocol

The biggest risk with IFS is people gaming the metrics: moving tickets to keep dashboards green without actually resolving the underlying coordination problems.

The No-Penalty Clause: IFS data is for system improvement only. Any use of IFS metrics in individual performance reviews, promotion decisions, or compensation discussions is a violation of the framework. Make this explicit. Write it down. Get leadership buy-in. Without this protection, IFS becomes surveillance, and people will optimize for the metric instead of the outcome.

Watch for Ghost States

If tickets move to "Done" but the actual work isn't deployed, the IFS signal has been bypassed. Validate that state changes reflect real progress.
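One hedged way to do that validation is to reconcile tickets marked "Done" against what actually shipped. The ticket IDs and the deploy record here are illustrative:

```python
def ghost_states(done_ticket_ids, deployed_ticket_ids):
    """Tickets marked "Done" whose work never shipped: the signal was
    bypassed by a status change without real progress."""
    return sorted(set(done_ticket_ids) - set(deployed_ticket_ids))

# "Done" on the board vs. ticket IDs referenced in deploy records.
ghosts = ghost_states(["T-1", "T-2", "T-3"], ["T-1", "T-3"])
# ghosts == ["T-2"]: flagged for a human to check, not to blame
```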

Measure System Health, Not Compliance

The goal isn't "zero stalls"—it's "stalls that resolve quickly and don't repeat." If your dashboard shows improvement but your teams are more stressed, the system is being gamed. Watch for: status changes that reset timers without real progress, work migrating into untracked side channels, and thresholds quietly relaxed to keep the board green.

Starting Points

Don't build everything at once. Pick one high-pain coordination pattern and make it visible:

Code reviews taking too long?

Query for PRs in "review requested" state > 48 hours. Display them in a shared dashboard. Review patterns weekly: what's causing the delays?
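Assuming GitHub, that query can run over the JSON returned by the REST endpoint `GET /repos/{owner}/{repo}/pulls?state=open`. A sketch of the filtering step, using each PR's `created_at` as a rough proxy for time-in-review (the exact review-request timestamp would need the timeline API):

```python
from datetime import datetime, timedelta, timezone

def stale_review_urls(pulls, now, max_age=timedelta(hours=48)):
    """Filter GitHub-style PR records down to flow-SLO breaches.

    `pulls` mirrors the JSON of GET /repos/{owner}/{repo}/pulls?state=open;
    `requested_reviewers`, `created_at`, and `html_url` are real fields
    of that response.
    """
    stale = []
    for pr in pulls:
        # GitHub timestamps look like "2025-12-01T12:00:00Z".
        opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        if pr["requested_reviewers"] and now - opened > max_age:
            stale.append(pr["html_url"])
    return stale
```

Display the result on a shared dashboard; the weekly pattern review stays a human activity.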

Cross-team dependencies disappearing?

Create explicit dependency links in your ticketing system. Alert when a dependency hasn't changed state in 72 hours. Route alerts to the team owning the blocker, not upward.
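The routing rule ("to the team owning the blocker, not upward") can be a simple lookup from the blocking item's owning team to that team's channel. A sketch with illustrative link records:

```python
from datetime import datetime, timedelta

def route_dependency_alerts(links, now, threshold=timedelta(hours=72)):
    """Map stalled dependency links to the owning team's channel.

    Each link records observable state only; an alert never names a
    person and never goes up the management chain.
    """
    alerts = {}
    for link in links:
        if now - link["last_state_change"] > threshold:
            channel = f"#{link['blocking_team']}"  # owning team's queue
            alerts.setdefault(channel, []).append(link["blocked_ticket"])
    return alerts

links = [
    {"blocked_ticket": "APP-41", "blocking_team": "infra",
     "last_state_change": datetime(2025, 12, 1, 9, 0)},
    {"blocked_ticket": "APP-52", "blocking_team": "security",
     "last_state_change": datetime(2025, 12, 4, 9, 0)},
]
# At Dec 5 09:00 only APP-41 (96h stalled) has breached the 72h threshold.
```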

Approvals lost in queues?

Track time from "approval requested" to "approval decision." Surface long-running approval requests to queue owners. Measure queue health, not individual approver speed.
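Queue health can be measured as SLO attainment over request-to-decision durations, without ever attributing a duration to an approver. A sketch:

```python
from datetime import timedelta

def queue_attainment(durations, slo=timedelta(hours=24)):
    """Fraction of approvals decided within the SLO.

    `durations` are request-to-decision times for one queue; the
    result describes the queue, not any individual approver.
    """
    if not durations:
        return 1.0  # an empty queue is trivially healthy
    return sum(d <= slo for d in durations) / len(durations)

decisions = [timedelta(hours=h) for h in (4, 12, 20, 30, 72)]
rate = queue_attainment(decisions)
# 3 of 5 decided within 24h -> 0.6 attainment against, say, a 90% target
```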

The key is visibility without judgment. You're not trying to catch people being slow—you're trying to make coordination breakdowns obvious before they become deadline emergencies.

Progressive Implementation

IFS doesn't have to be all-or-nothing. You can adopt it incrementally:

Stage 1 - Visibility

Create a basic dashboard showing what's stuck. No automation, no alerts, just a shared view of stalled work. Review it weekly as a team and discuss patterns.

Stage 2 - Automated Routing

Add lightweight notifications: Slack messages to team channels (not individuals) when thresholds are breached. The system nudges, humans decide what to do.
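Assuming Slack, such a nudge can use a standard incoming webhook, which accepts a `{"text": ...}` JSON payload. The message goes to a team channel and names the work, not a person; the webhook URL and the wording are illustrative:

```python
import json
import urllib.request

def breach_message(item_url, state, hours_stalled, owner_channel):
    """Impersonal nudge: names the stalled work and the owning channel."""
    return (f"{owner_channel}: {item_url} has sat in '{state}' for "
            f"{hours_stalled}h, past our flow SLO. Visibility, not blame.")

def notify(webhook_url, text):
    """POST to a Slack incoming webhook (standard {'text': ...} payload)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the system nudges; humans decide

# notify(TEAM_WEBHOOK_URL, breach_message(
#     "https://example.com/pr/41", "review requested", 72, "#repo-owners"))
```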

Stage 3 - Process Evolution

Use accumulated IFS data to justify structural changes. If reviews consistently breach SLOs, maybe you need rotating review duty or clearer code ownership. If approvals routinely take 2+ weeks, maybe you need to decentralize decision-making or add capacity.

Most teams should stop at Stage 1 or 2. Stage 3 is only valuable if you have months of data showing persistent patterns that warrant organizational change.

The Cultural Introduction

How you introduce this matters more than the technical implementation.

Frame it as a team tool, not management surveillance

"This helps us see where work gets stuck so we can improve our processes."
"You won't have to remember to report blockers—the system will show them automatically."

Start with opt-in

Find one team with chronic coordination pain who wants better visibility. Let them pilot it, learn what works, share results. Expand based on demand, not mandate.

Make the data team-owned

Dashboards belong to the team doing the work, not their manager. Teams decide what to do with the signals—no automatic escalation.

Talk about edges, not people

"This code review has been waiting 3 days" not "Sarah is slow at reviews." "Our approval queue is backed up" not "Finance is blocking us."

Acknowledge the skepticism:
"Yes, this could be misused. Here's how we'll prevent that..."
"No, this won't replace talking to each other—it's a supplement."
"If this starts feeling like surveillance, we stop."

What Success Looks Like

You know IFS is working when: stalls surface and resolve without anyone having to escalate, engineers stop spending social capital on "can you look at this?" pings, and retrospectives discuss handoffs and queues rather than individuals.

You know IFS is failing when: dashboards look healthy while teams feel more stressed, signals show up in performance conversations, or people optimize ticket states instead of outcomes.

The Real Value

IFS won't magically fix broken organizations. If your culture is blame-heavy, your processes are chaotic, or your tooling is a disaster, adding measurement won't help—it might make things worse.

But if you have reasonable organizational health and just need better coordination visibility, IFS offers a useful lens: treat workflow health like you treat service reliability. Define expectations, measure reality, close the gap systematically.

The most valuable outcome isn't the dashboards or the alerts—it's the conversation shift. From "who's responsible for this delay?" to "why does this handoff routinely take 5 days when we expected 2?" From blame to system improvement.

That shift is hard to achieve and easy to lose. But when you get it, work starts flowing again.

Glossary

Dependency breach
A dependency edge exceeds its expected dwell time: upstream work has not transitioned state within an agreed threshold, and downstream work is waiting.
Example: Ticket A can't proceed until Ticket B (owned by another team) moves from "In Review" to "Done," but B hasn't changed for 72 hours.
Flow debt
Accumulated "hidden delay" created by chronic stalls, queues, and approvals that the organization normalizes. Flow debt compounds like interest: each new dependency inherits prior delays.
Example: "Security review always takes 2+ weeks" becomes accepted reality; projects schedule around it; the bottleneck never improves.
Coordination fault
A systemic condition where cross-team alignment fails to resolve within expected time, regardless of intent. Coordination faults are identified by repeated dependency breaches across ownership boundaries.
Example: Infra requests repeatedly stall at the same approval step, causing multiple teams to miss integration windows.
SLO (Service Level Objective)
A measurable target for system behavior (borrowed from reliability engineering). In IFS, SLOs apply to work flow: "90% of reviews complete within 48 hours" or "99% of dependency requests are acknowledged within 24 hours."
Example: Instead of "reviews should be fast," define "In Review state should transition within 2 business days for 90% of tickets."

Published: December 2025 · Author: Sean Wylie · seanwylie.ca
