Value-Based Planning Model

From WFM Labs

The Value-Based Planning Model (VPM) is a workforce planning framework for contact centers operating with agentic AI as part of the workforce. It replaces the century-old Erlang-lineage question — "how many agents to handle the volume that arrives?" — with a bottom-up structure: classify interactions by value and AI capability, route each to one of three workforce pools, staff each pool with a methodology fitted to its work, and govern the system across cost, customer experience, and employee experience as a coupled multi-objective optimization.

VPM is the canonical WFM Labs Maturity Model™ Level 4 — Advanced (The Ecosystem Emerges) framework. It assumes Level 3 instrumentation (probabilistic forecasting, variance harvesting, simulation-grade capacity planning) as a prerequisite and reaches into Level 5 — Pioneering (Enterprise-Wide Intelligence) through closed-loop governance and the Cognitive Portfolio Model (N*).

The framework is documented in full in Lango (2026), Value-Based Models for Customer Operations.[1]

The Erlang Inversion

A century of contact-center planning has worked top-down. Forecast volume. Apply Erlang-C (or Erlang-A, or simulation). Solve for headcount. Every contact is treated as an interchangeable unit of work. The arrival rate is the planning input.

VPM inverts this. The planning input is no longer volume; it is the interaction taxonomy — the catalogue of work types the operation actually handles, each classified by value, AI capability, handle time, skill requirements, and emergence probability. Each type flows to one of three pools. Headcount is solved per pool, then summed.

The inversion changes what practitioners build. Top-down asks: how big should the operation be? Bottom-up asks: what work is the operation doing, who (or what) should be doing it, and what does each pool need to run? The first question has an answer in volume. The second has an answer in structure.
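The inversion can be stated in a few lines of code. The sketch below is illustrative (the function and pool names are not from the white paper): classify, route each type to a pool, staff each pool with its own model, then sum.

```python
def plan_headcount(taxonomy, route, staff_models):
    """Bottom-up staffing sketch.

    taxonomy:     iterable of interaction-type records
    route:        function mapping a record to a pool name
    staff_models: dict mapping pool name to a staffing function
                  over that pool's interaction types
    """
    pools = {"AA": [], "Collab": [], "Spec": []}
    for itype in taxonomy:
        pools[route(itype)].append(itype)
    # Headcount is solved per pool with its own model, then summed:
    # the Erlang inversion in one expression.
    return sum(staff_models[name](types) for name, types in pools.items())
```

The point of the structure is that each of the three staffing functions can be a completely different kind of model (cost model, fixed-point equation, simulation) behind a common interface.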

Where it sits in the Maturity Model

VPM is a Level 4 framework. The architecture cannot be operated at Level 1 or Level 2 — there is no interaction taxonomy, no per-type cost modeling, no probabilistic forecasting. Level 3 organizations are the natural population for adoption: they have broken the staffing monolith, instrumented variance, and adopted probabilistic outputs, but they still typically run a single pool against a single service-level metric.

The Level 4 transition is structural, not incremental. Before: one pool, one metric, one staffing methodology. After: three pools, three coupled metrics, three staffing methodologies, and a governance layer that recalibrates the system when outcomes drift.

Two elements reach specifically into Level 5:

  • The Cognitive Portfolio Model (N*) — Pool Collab's staffing equation — is novel. No Level 1-3 analog exists.
  • Closed-loop governance — using drift signals across cost, CX, and EX to trigger model recalibration — is the Level 5 signature.

Both are present in the framework but await empirical calibration in contact-center contexts.

The four building blocks

1. Interaction Taxonomy

Every interaction type is scored on five dimensions: Value Score (composite, 0-10), AI Capability (%), Handle Time, Skill Requirements, and Emergence Probability. The fifth dimension is structurally absent from traditional taxonomies and is the diagnostic for demand rebound — it forces planners to budget for inquiry types that do not exist pre-deployment.

The Value Score is itself a composite of four sub-dimensions: CLV Impact, Customer Effort Score (CES), Revenue Opportunity, and Churn Risk. Each is scored 0-10 for both human and AI handling; the differential drives the routing recommendation. The composite methodology, scoring rubrics, measurement protocols, and source literature are documented on the Value Routing Model page.

Data sources for the taxonomy: contact-center QA samples, IVR/digital logs, web analytics, email/social/chat archives, CRM lifetime-value data, and (for Emergence Probability) post-deployment contact-driver analysis.
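A taxonomy entry can be represented as a small record. The sketch below is illustrative; the equal weighting of the four Value Score sub-dimensions is a simplifying assumption, and the actual rubric is documented on the Value Routing Model page.

```python
from dataclasses import dataclass

@dataclass
class InteractionType:
    name: str
    clv_impact: float           # 0-10 sub-dimension of Value Score
    ces: float                  # 0-10 Customer Effort Score sub-dimension
    revenue_opportunity: float  # 0-10 sub-dimension
    churn_risk: float           # 0-10 sub-dimension
    ai_capability: float        # percent, 0-100
    handle_time_min: float      # average handle time in minutes
    skill_level: int            # required skill tier
    emergence_prob: float       # probability the type appears only post-deployment

    @property
    def value_score(self) -> float:
        # Equal weights are an illustrative assumption, not the documented rubric.
        return (self.clv_impact + self.ces
                + self.revenue_opportunity + self.churn_risk) / 4
```

Carrying emergence_prob as a first-class field is what forces planners to budget for inquiry types that do not yet exist.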

2. Three-Pool Architecture

Each interaction type routes to exactly one of three pools:

  • Pool AA — Autonomous AI. High AI Capability, low Value Score. Cost-modeled across five layers (initial investment, transaction, escalation, maintenance, rebound) — not the marginal-cost-only basis vendors typically present.
  • Pool Collab — Collaborative. Mid-range AI Capability and Value Score. Staffed via the Cognitive Portfolio Model (N*) — humans monitoring N concurrent AI-handled interactions, with N solved as a fixed point.
  • Pool Spec — Specialist. Low AI Capability, high Value Score. Staffed via simulation, not closed-form Erlang, because the workload that concentrates here is too heterogeneous for closed-form approximations.

The default routing heuristic:

  • Autonomous AI — AI Capability > 80% AND Value Score ≤ 4
  • Specialist — AI Capability < 30% OR Value Score ≥ 8
  • Collaborative — everything else

The thresholds are planning decisions, not laws. Full architecture, escalation cascades, and worked example on the Three-Pool Architecture page.
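The default heuristic translates directly into a routing function. This is a minimal sketch; the thresholds below are the defaults stated above and should be tuned per operation.

```python
def route(ai_capability: float, value_score: float) -> str:
    """Default VPM routing heuristic.

    ai_capability: percent (0-100); value_score: 0-10 composite.
    Thresholds are planning decisions, not laws.
    """
    if ai_capability > 80 and value_score <= 4:
        return "AA"       # Autonomous AI
    if ai_capability < 30 or value_score >= 8:
        return "Spec"     # Specialist
    return "Collab"       # Collaborative (everything else)
```

Note the ordering: the Autonomous AI condition is checked first, so a high-capability, low-value type never falls through to the Specialist test.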

3. Pool-specific staffing methodologies

The three pools run on three different staffing models because the work is structurally different. Pool AA is a cost model, not a queuing model — but its full cost expression includes the escalation tax (cascade-adjusted expected cost) and the demand-rebound penalty. Pool Collab uses the Cognitive Portfolio fixed-point equation. Pool Spec uses discrete-event or Monte Carlo simulation over heterogeneous arrival and service distributions.

A consequence: there is no single Erlang-C calculator for VPM. The "calculator" is a pipeline.
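The Pool Collab piece of that pipeline can be sketched as a fixed-point iteration. The functional form below is an assumption for illustration only, not the white paper's equation: it supposes that the human attention each monitored interaction costs grows with portfolio size N through a context-switching penalty, so N appears on both sides and must be solved iteratively.

```python
def solve_n_star(rho_max=0.7, p_intervene=0.15, t_intervene=4.0,
                 t_monitor=0.5, switch_cost=0.1, tol=1e-6):
    """Illustrative fixed-point solve for N* (parameter names and the
    functional form are assumptions, not the documented equation).

    rho_max:      maximum sustainable human utilization
    p_intervene:  probability an AI-handled interaction needs intervention
    t_intervene:  minutes per intervention
    t_monitor:    passive monitoring minutes per interaction
    switch_cost:  extra minutes per interaction per unit of portfolio size
    """
    n = 1.0
    for _ in range(200):
        # Expected attention cost of one interaction at portfolio size n
        per_item = t_monitor + p_intervene * t_intervene + switch_cost * n
        n_next = rho_max * 60.0 / per_item   # interactions per 60-min hour
        if abs(n_next - n) < tol:
            break
        n = n_next
    return n_next
```

With the defaults above, the iteration converges to roughly 15-16 concurrent interactions per human; the sensitivity runs over ρ_max and intervention rate recommended in the playbook amount to re-solving this fixed point across a parameter grid.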

4. Multi-objective governance layer

The governance layer replaces the single service-level metric with three coupled dimensions — Cost, Customer Experience (CX), Employee Experience (EX) — framed as a Pareto frontier optimization. Outputs are distributional, not point estimates: resolution rates as Beta, effort scores as Gamma, value-per-interaction as shifted lognormal, cost as a simulated distribution.

Drift signals trigger recalibration: if CX falls, re-solve N* and re-route borderline types; if cost rises, recalibrate the containment rate; if EX deteriorates, audit Pool Collab cognitive load. This closed loop is the framework's reach into Level 5.
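The drift-to-action mapping is simple enough to express directly. A minimal sketch, assuming drift is measured as fractional decline against baseline and using a hypothetical 5% trigger threshold:

```python
def governance_actions(cx_drift, cost_drift, ex_drift, threshold=0.05):
    """Map drift signals to recalibration actions.

    Drifts are fractional deteriorations vs. baseline (e.g. 0.08 = 8%);
    the threshold is a hypothetical planning choice, not a documented value.
    """
    actions = []
    if cx_drift > threshold:
        actions += ["re-solve N*", "re-route borderline types"]
    if cost_drift > threshold:
        actions.append("recalibrate containment rate")
    if ex_drift > threshold:
        actions.append("audit Pool Collab cognitive load")
    return actions
```

In practice the three signals are coupled (re-routing to protect CX moves cost and EX), so the actions feed a joint re-optimization rather than three independent fixes.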

What the framework replaces

Traditional WFM (Level 2-3) → Value-Based Planning (Level 4):

  • Planning input: Volume forecast → Interaction taxonomy
  • Staffing math: Erlang-C / Erlang-A / multi-skill simulation → Three pool-specific models, summed
  • Output form: Point estimate ("287 FTEs") → Distributional ("P50 / P90 staffing across CX scenarios")
  • Optimization target: Service level (single metric) → Cost / CX / EX (Pareto frontier)
  • AI's role: Deflection layer above the operation → Third workforce pool inside the operation
  • Cost modeling: Marginal AI cost vs. fully-loaded human cost → Five-layer AI cost (incl. escalation tax + rebound)
  • Demand model: Volume + AHT → Volume + AHT + emergent inquiry types + rebound elasticity

Practitioner playbook

For a Level 3 organization moving toward Level 4 adoption:

  1. Start with the taxonomy, not the math. Catalogue interaction types from QA samples and contact-driver analysis. Score each on the five dimensions. This is the work; the rest follows from it.
  2. Use the Value Routing Model to score Value. The composite (CLV / CES / Revenue / Churn) is more defensible than a single 0-10 expert estimate, and the differential between human and AI scores per dimension is what actually drives routing.
  3. Apply the routing heuristic before doing any math. Sort interaction types into the three pools. Discover what the Pool AA volume actually is — it is rarely the vendor-promised number.
  4. Model Pool AA's full cost. Apply the escalation tax formula. Apply the demand-rebound discount to projected savings. Find the interior cost optimum — it is rarely 100% containment.
  5. Solve N* for Pool Collab. Use the Cognitive Portfolio fixed-point equation with calibrated estimates for the five parameters. Run sensitivity over ρ_max and intervention rate.
  6. Simulate Pool Spec. Use existing simulation tooling; the work that concentrates here is the longest, most heterogeneous, most variance-rich portion of the operation.
  7. Stand up the governance layer. Define drift signals for cost, CX, and EX. Schedule recalibration cadences. Treat the three metrics as coupled, not independent.

The first three steps are accessible to any Level 3 organization. Steps 4-7 require simulation-grade infrastructure.
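Step 4 can be sketched numerically. The expression below is an illustration of the five-layer idea, with the escalation tax as a cascade-adjusted expected cost and the rebound as a volume multiplier; the exact formulas, and the treatment of initial investment, are in the white paper, not here.

```python
def pool_aa_unit_cost(c_transaction, p_escalate, c_human_handle,
                      c_maintenance_per_interaction,
                      rebound_elasticity, containment_rate):
    """Illustrative per-interaction Pool AA cost (simplified assumption,
    omitting amortized initial investment).

    Escalation tax: an escalated contact incurs the AI attempt
    plus the human handle cost.
    """
    expected = (c_transaction
                + p_escalate * c_human_handle
                + c_maintenance_per_interaction)
    # Demand rebound: cheaper, easier contact induces extra volume,
    # discounting the projected savings.
    rebound_factor = 1 + rebound_elasticity * containment_rate
    return expected * rebound_factor
```

Sweeping containment_rate through such a model is what surfaces the interior cost optimum: past some point, the escalation tax and rebound penalty grow faster than the marginal savings.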

Limitations and research agenda

The white paper is explicit about its own limitations:

  • Cognitive Portfolio parameters await empirical calibration in contact-center contexts. Cross-domain analogs (air traffic control, ICU monitoring) inform the parameter ranges; direct contact-center calibration is open research.
  • Three-pool implementation evidence is thin. The architecture is mathematically coherent and operationally implementable, but no published large-scale implementation exists yet.
  • Routing heuristic is not optimal. The default thresholds are motivated by Cobham's c-μ scheduling rule but lack a closed-form optimality guarantee.
  • Complexity Premium needs better empirical anchoring. The 5-8% AHT increase per 10pp containment is reasonable as a planning estimate but is not yet empirically pinned for contact-center work.
  • Demand rebound elasticities are short-run. Long-run elasticities (which transportation and energy economics consistently find to be larger) are not yet measured for service operations.
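As a planning sanity check, the Complexity Premium can be applied as a simple multiplier on residual-pool AHT. The linear form and the 6.5% midpoint below are illustrative assumptions drawn from the 5-8% per 10pp range, not pinned empirical values.

```python
def residual_aht_multiplier(containment_pp, premium_per_10pp=0.065):
    """Complexity Premium sketch: AHT on the work remaining after
    containment rises by premium_per_10pp for every 10 percentage
    points contained (linear form is a simplifying assumption)."""
    return 1 + premium_per_10pp * (containment_pp / 10)
```

At 40pp of containment this gives roughly a 26% AHT increase on the residual human workload, which is why containment savings cannot be projected against the pre-deployment AHT.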

Maturity Model Position

VPM is the named Level 4 — Advanced (The Ecosystem Emerges) framework on the WFM Labs Maturity Model.

  • Level 1 — Initial (Emerging Operations) — VPM is unreachable. No interaction taxonomy, no per-type costing, no probabilistic outputs, no instrumentation for variance.
  • Level 2 — Foundational (Traditional WFM Excellence) — VPM is unreachable. Operation runs on point-estimate forecasts and a single Erlang-C-based staffing model against a single service-level metric. The framework's premise (interaction taxonomy as planning input) is invisible from inside this paradigm.
  • Level 3 — Progressive (Breaking the Monolith) — VPM is the natural next destination. Level 3 organizations have already adopted Probabilistic Forecasting and Variance Harvesting; the missing pieces are the interaction taxonomy, three-pool architecture, and pool-specific staffing models.
  • Level 4 — Advanced (The Ecosystem Emerges) — VPM is the canonical operating model. All four building blocks are in place. Outputs are distributional. AI is treated as a workforce pool, not a deflection layer.
  • Level 5 — Pioneering (Enterprise-Wide Intelligence) — VPM is extended via closed-loop governance: drift signals trigger model recalibration without human intervention, and the Cognitive Portfolio Model (N*) is empirically calibrated against in-house data rather than estimated.

The transition that matters for most practitioners is Level 3 → Level 4. The structural change is moving from one pool / one metric / one model to three pools / three metrics / three models, plus a governance layer that couples them.

References

  1. Lango, T. (2026). Value-Based Models for Customer Operations — From Traditional Queuing to Bottom-Up Value Planning. WFM Labs white paper. 207 sources, 600+ evidence claims, 94 calibrated parameters.