Three-Pool Architecture
The Three-Pool Architecture is the organizing pattern at the center of the Value-Based Planning Model. It replaces the single-pool / single-staffing-model assumption that has governed contact-center planning since Erlang with three structurally different pools, each requiring a different staffing methodology, each consuming a different cost structure, and each connected to the others through escalation cascades.
Within the WFM Labs Maturity Model™, the architecture is the Level 4 — Advanced (The Ecosystem Emerges) articulation of how AI integrates into workforce planning: not as a deflection layer above the operation, but as a workforce pool inside it.
The architecture is documented in Lango (2026), Value-Based Models for Customer Operations.[1]
The pool concept
A pool is a class of interactions that shares a staffing model. Two interaction types belong to the same pool when (a) they are best handled by the same agent type — fully autonomous AI, AI-supervised human, or specialist human — and (b) the staffing math that sizes that agent type is structurally appropriate to their work.
The three pools are not a typology of automation maturity. They are an architectural decomposition of the workforce that reflects how different work types must be staffed under different mathematical models.
Pool AA — Autonomous AI
When to use: interactions with high AI Capability (>80%) and low Value Score (≤4) on the Value Routing Model composite.
Staffing math: Pool AA is not staffed by headcount; it is sized by cost. The math is a five-layer cost model, not a queuing model:
- Initial investment. Build, integration, content, training data, model selection.
- Transaction cost. Marginal cost per AI-handled interaction. This is the only layer most vendor business cases show.
- Escalation cost. Expected handoff cost, weighted by the cascade probability — when AI hands off to Pool Collab or Pool Spec, the full cost of the hop is attributed back to Pool AA.
- Maintenance cost. Model retraining, content updates, regression testing, drift monitoring. Persistent and underestimated.
- Rebound cost. The capacity required to handle induced demand is part of Pool AA's cost, not Pool Collab's. New volume that AA's existence creates is AA's responsibility.
The full Pool AA cost is materially higher than transaction cost alone — typically 3-10× higher when escalation, maintenance, and rebound are properly priced.
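The five layers above can be sketched as a simple cost function. This is an illustrative sketch only — every figure below is a hypothetical placeholder, not a calibrated value from the white paper:

```python
# Illustrative five-layer Pool AA cost model. All parameter values are
# hypothetical placeholders, not calibrated estimates.

def pool_aa_full_cost(volume,
                      initial_investment,   # build, integration, content, training data
                      transaction_cost,     # marginal cost per AI-handled interaction
                      escalation_prob,      # P(AI hands off to Collab or Spec)
                      handoff_cost,         # full cost of one escalated interaction
                      annual_maintenance,   # retraining, content updates, drift monitoring
                      rebound_rate,         # induced demand as a share of base volume
                      rebound_unit_cost):   # cost per rebound interaction
    transaction = volume * transaction_cost
    escalation = volume * escalation_prob * handoff_cost
    rebound = volume * rebound_rate * rebound_unit_cost
    return (initial_investment + transaction + escalation
            + annual_maintenance + rebound)

# Transaction-layer-only view vs. full five-layer cost (hypothetical numbers).
vol = 5_000_000
naive = vol * 0.40                        # what most vendor business cases show
full = pool_aa_full_cost(vol, 2_000_000, 0.40, 0.15, 12.0,
                         1_500_000, 0.10, 6.0)
multiplier = full / naive                 # falls in the 3-10x range the text describes
```

With these placeholder inputs, the escalation layer alone dwarfs the transaction layer — which is exactly why a transaction-cost-only business case understates Pool AA's true cost.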
Pool Collab — Collaborative
When to use: interactions with mid-range AI Capability and Value Score — the "everything else" bucket from the routing heuristic.
Staffing math: the Cognitive Portfolio Model (N*). A single human monitors and intervenes across N concurrent AI-handled interactions, where N is solved as a fixed point:
N* = ρ_max / ( λ_int · (E[S_int] + γ(N)) + m(N) )
with γ(N) = γ_0 + γ_1·ln(N) (logarithmic switching cost) and m(N) = m_0·N^α (monitoring overhead). Pool Collab headcount is then volume / (N* × throughput).
The work pattern in Pool Collab is novel. There is no Erlang-C analog. The cognitive constraint, not the arrival rate, determines staffing. Calibration of the five parameters is open research; the white paper is explicit that practitioners should use expert-estimated ranges with sensitivity analysis until in-house data accumulates.
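The fixed point above can be solved numerically. A minimal sketch, assuming illustrative parameter values (the white paper's calibration guidance is ranges plus sensitivity analysis, so treat every number here as a stand-in); bisection is used rather than naive iteration because the latter can oscillate:

```python
import math

# Numerical solution of the Cognitive Portfolio fixed point
#   N* = rho_max / ( lam_int * (E[S_int] + gamma(N)) + m(N) )
# with gamma(N) = gamma0 + gamma1*ln(N) and m(N) = m0 * N**alpha.
# All parameter values are illustrative, not calibrated.

def solve_n_star(rho_max, lam_int, es_int, gamma0, gamma1, m0, alpha,
                 lo=1e-6, hi=1000.0, tol=1e-9):
    # Rearranged as root-finding: f(N) is negative below N* and positive
    # above it, so bisection converges where plain fixed-point iteration
    # can oscillate.
    def f(n):
        gamma_n = gamma0 + gamma1 * math.log(n)   # logarithmic switching cost
        m_n = m0 * n ** alpha                     # monitoring overhead
        return n * (lam_int * (es_int + gamma_n) + m_n) - rho_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

n_star = solve_n_star(rho_max=0.8, lam_int=0.01, es_int=1.5,
                      gamma0=0.2, gamma1=0.1, m0=0.002, alpha=1.0)
# Headcount step from the text: volume / (N* x throughput), with an
# assumed annual per-seat interaction throughput.
headcount = 3_500_000 / (n_star * 9_000)
```

The sensitivity analysis the white paper recommends amounts to re-running this solve across the expert-estimated ranges for γ₀, γ₁, m₀, and α and inspecting how N* (and therefore headcount) moves.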
Pool Spec — Specialist
When to use: interactions with low AI Capability (<30%) or high Value Score (≥8). The work that concentrates here is the long, heterogeneous, judgment-intensive remainder.
Staffing math: simulation, not closed-form Erlang. The work is too heterogeneous for Erlang-C's homogeneous-arrivals / homogeneous-service assumptions to hold. Specialist staffing has been simulation-driven in mature operations for years; what changes at Level 4 is that Pool Spec receives a structurally harder workload than the historical specialist tier, because complexity has been concentrated by deflection.
The Complexity Premium applies here. AHT distributions in Pool Spec are wider and right-skewed compared to pre-deflection specialist work. Simulation models must be re-fit on post-deflection data before being used for Pool Spec sizing.
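A minimal Monte Carlo sketch of the re-fit step, assuming a lognormal stand-in for the wider, right-skewed post-deflection AHT distribution (the distribution family, its parameters, and the productive-hours figures are all illustrative assumptions, not values from the white paper):

```python
import random
import statistics

# Monte Carlo sketch of Pool Spec sizing under a right-skewed AHT
# distribution. The lognormal parameters below are illustrative stand-ins
# for a distribution re-fit on post-deflection data.

random.seed(7)
annual_volume = 200_000                                  # Pool Spec interactions/year
aht_minutes = [random.lognormvariate(3.0, 0.6)           # handle times in minutes
               for _ in range(100_000)]

mean_aht = statistics.fmean(aht_minutes)                 # right skew: mean > median
workload_hours = annual_volume * mean_aht / 60
fte = workload_hours / (1_650 * 0.85)                    # productive hours x occupancy cap
```

The point of the sketch is the failure mode it guards against: sizing Pool Spec from the pre-deflection median (or a homogeneous Erlang-C service time) rather than the post-deflection mean systematically understates the workload, because the right tail is where the concentrated complexity lives.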
The default routing heuristic
The white paper proposes a default routing rule based on the two most discriminating taxonomy dimensions:
- Pool AA (Autonomous AI): AI Capability > 80% AND Value Score ≤ 4
- Pool Spec (Specialist): AI Capability < 30% OR Value Score ≥ 8
- Pool Collab (Collaborative): everything else
The thresholds are planning decisions, not laws. They are motivated by the classical c-μ scheduling rule (work goes where its product of cost and service rate is highest) but carry no closed-form optimality guarantee. Practitioners should treat the defaults as starting points and sweep the thresholds against the multi-objective cost / CX / EX surface.
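The default rule is simple enough to encode directly. A minimal sketch, with the thresholds parameterized as module constants precisely because they are the quantities to sweep:

```python
# Direct encoding of the default routing heuristic. The threshold values
# are the paper's defaults and should be swept per operation.

AA_CAPABILITY_MIN = 0.80   # AI Capability floor for Pool AA
AA_VALUE_MAX = 4           # Value Score ceiling for Pool AA
SPEC_CAPABILITY_MAX = 0.30 # AI Capability ceiling that forces Pool Spec
SPEC_VALUE_MIN = 8         # Value Score floor that forces Pool Spec

def route(ai_capability: float, value_score: int) -> str:
    # Low capability OR high value always routes to specialists.
    if ai_capability < SPEC_CAPABILITY_MAX or value_score >= SPEC_VALUE_MIN:
        return "Spec"
    # High capability AND low value goes autonomous.
    if ai_capability > AA_CAPABILITY_MIN and value_score <= AA_VALUE_MAX:
        return "AA"
    # Everything else is collaborative.
    return "Collab"
```

Note that the two conditions cannot both fire for the same interaction (capability above 80% rules out the Spec capability test, and a Value Score of 4 or below rules out the Spec value test), so the check order does not change the partition.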
How the pools connect
Pools are not independent. Three connections matter:
- Escalation cascades. Interactions can move AA → Collab → Spec, or AA → Spec directly, or Collab → Spec. Each cascade hop adds the escalation tax to expected cost and may degrade CX. Cascade probabilities must be measured per interaction type, not assumed.
- Rebound flow. Demand rebound generated by Pool AA's existence appears as new volume. Some lands in Pool AA again (R_d), some in adjacent pools (R_i), some as new-category volume distributed across the architecture (R_s).
- Routing as a coupled decision. The threshold settings for the routing heuristic affect all three pools simultaneously. Lowering the AI Capability threshold for Pool AA from 80% to 70% expands Pool AA, raises the escalation rate, increases load on Pool Collab, and concentrates more complexity into Pool Spec. The decision is a portfolio decision, not three independent ones.
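The first connection — pricing cascade hops into expected cost — can be sketched as follows. The cascade probabilities and per-pool costs below are hypothetical; as the text stresses, they must be measured per interaction type, not assumed:

```python
# Expected cost of an AA-routed interaction once cascade hops are priced.
# All probabilities and costs are hypothetical placeholders.

def expected_aa_cost(c_aa, c_collab, c_spec,
                     p_aa_collab, p_aa_spec, p_collab_spec):
    # Every AA-routed interaction pays the AA transaction cost; escalated
    # interactions additionally pay each downstream pool they touch.
    cost = c_aa
    cost += p_aa_collab * (c_collab + p_collab_spec * c_spec)  # AA -> Collab (-> Spec)
    cost += p_aa_spec * c_spec                                  # AA -> Spec directly
    return cost

# With these placeholders the expected cost is several times the naive
# AA transaction cost alone.
cost = expected_aa_cost(c_aa=0.40, c_collab=6.00, c_spec=14.00,
                        p_aa_collab=0.12, p_aa_spec=0.03, p_collab_spec=0.20)
```

Even modest hop probabilities move the expected cost well above the transaction layer, which is the escalation tax in arithmetic form.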
Worked example
The white paper's reference scenario: 10M annual contacts, baseline staffing under traditional Erlang-C ≈ 266 FTE. Re-architected through the three-pool model:
- Pool AA absorbs ~50% of volume at full-loaded cost equivalent of ~30 FTE (with escalation tax and rebound priced).
- Pool Collab handles ~35% of volume with N* ≈ 25 → ~28 FTE.
- Pool Spec handles the remaining ~15% of volume (highest-complexity, post-rebound) at ~46 FTE under simulation-based staffing.
- Total: ~104 FTE — but this is 104 FTE under proper full-cost accounting, not 84 FTE assuming marginal AI cost only.
The 266 → 104 reduction is real. The 104 → 84 "savings" that some vendor models project is largely fictional once the escalation tax, rebound, and complexity premium are priced in.
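The accounting above reduces to a few lines of arithmetic, using the FTE figures from the reference scenario:

```python
# FTE accounting for the reference scenario. Pool figures are taken from
# the worked example in the text.

baseline_fte = 266                            # traditional Erlang-C staffing
pools = {"AA": 30, "Collab": 28, "Spec": 46}  # full-cost-equivalent FTE per pool
full_cost_fte = sum(pools.values())           # 104 FTE under full-cost accounting
vendor_model_fte = 84                         # marginal-AI-cost-only projection

real_reduction = baseline_fte - full_cost_fte     # the genuine 266 -> 104 saving
hidden_cost = full_cost_fte - vendor_model_fte    # the gap marginal-cost models hide
```

The 162-FTE reduction survives full-cost scrutiny; the further 20 FTE that marginal-cost-only models claim is the portion the escalation tax, rebound, and complexity premium erase.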
Limitations
The architecture is honestly bounded:
- Routing is heuristic. The default thresholds are not provably optimal. Operations should sweep them against their own multi-objective cost surface.
- Pool boundaries are not crisp. Some interaction types straddle Pool Collab and Pool Spec; the routing decision is a soft probability, not a hard partition. Practitioners should measure boundary stability over time.
- Three pools is a current-decade architecture. If AI capability advances such that a fourth pool (e.g., autonomous specialist AI) becomes meaningful, the architecture extends naturally — but no published implementation of that extension exists today.
- No published large-scale implementation. The architecture is mathematically coherent and operationally implementable, but the empirical evidence base is small. Early adopters will be calibrating in real time.
Maturity Model Position
The Three-Pool Architecture is the architectural unit of Level 4 — Advanced (The Ecosystem Emerges).
- Level 1 — Initial (Emerging Operations) — Architecture is unreachable. There is no taxonomy that would allow pool assignment.
- Level 2 — Foundational (Traditional WFM Excellence) — Architecture is invisible. Operations run a single pool against a single service-level metric. AI, when deployed, is conceived as a deflection layer above the workforce, not a pool inside it.
- Level 3 — Progressive (Breaking the Monolith) — Architecture is approachable. Multi-skill routing exists; per-skill staffing exists; but the explicit three-pool partition with three different staffing models is not yet in place. Pool Collab in particular has no analog.
- Level 4 — Advanced (The Ecosystem Emerges) — Architecture is the operating model. All three pools are explicit, each with its own staffing math. Routing is governed by the explicit heuristic. Cascade and rebound are priced.
- Level 5 — Pioneering (Enterprise-Wide Intelligence) — Architecture is closed-loop. Routing thresholds, Cognitive Portfolio parameters, and cascade probabilities are recalibrated automatically from drift signals.
The Level 3 → Level 4 transition is the first decisive structural change in WFM since multi-skill routing in the 1990s.
See Also
- Value-Based Planning Model — the framework the architecture lives inside
- Cognitive Portfolio Model (N*) — Pool Collab's staffing equation
- The Escalation Tax — the cost penalty for cross-pool cascades
- Service Demand Rebound Model — Pool AA's rebound responsibility
- Interior Optimum (containment rate) — sets Pool AA's volume share
- Value Routing Model — the composite Value Score that drives pool assignment
- Discrete-Event vs. Monte Carlo Simulation Models — Pool Spec's staffing basis
- Multi-Objective Optimization in Contact Center — the optimization surface routing thresholds are swept against
References
- ↑ Lango, T. (2026). Value-Based Models for Customer Operations — From Traditional Queuing to Bottom-Up Value Planning. WFM Labs white paper.
