B2B SaaS · growth stage

Churn model flagged at-risk accounts six months early and lifted retention by 15% on that cohort.

15% · Retention lift on flagged at-risk accounts vs historical baseline
6 mo · Lead time on the churn signal vs cancellation date
100% · Account coverage with refreshed scores on a regular cadence
Proactive · CS team's renewal motion shifted upstream of cancel intent
0 → 1 · Primary CS prioritization tool the team actually opens daily
Client context

A growth-stage B2B SaaS company in the $10M to $30M ARR range, with a customer success team that was reacting to churn rather than getting ahead of it. They had plenty of product usage data and support ticket history but no systematic way to spot accounts going sideways.

The problem

By the time an account showed cancellation intent, it was usually too late to save. Usage drops, support escalations, and changes in engagement were all visible in the data, but nobody was stitching them together into a coherent early-warning signal. The CS team prioritized outreach based on gut feel and recent conversations, which meant the quiet accounts (often the ones most at risk) slipped through. Renewal conversations turned into save attempts instead of expansion plays.

The pattern repeated across every quarter. Logo retention numbers in the board deck explained themselves only in hindsight. The CS team knew which accounts they had saved and which they had lost, but they couldn't articulate which signals predicted the difference, so the next quarter's prioritization kept running on instinct. Leadership wanted a way to put the CS team's energy where it actually moved retention, with a defensible signal underneath.

What we built
01 · Foundation: feature engineering in dbt

Feature engineering ran in dbt on the client's existing Snowflake warehouse: usage patterns (logins, feature adoption, depth of engagement), support ticket signals (volume, sentiment, escalation type), and product engagement metrics (active users per account, time-since-last-meaningful-action). Every feature was version-controlled and traceable back to source so the model's inputs were defensible.
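The production features ran as version-controlled dbt models on Snowflake. As a rough illustration of the kind of rolling-window features involved, here is a pandas sketch; the table layout and column names (`account_id`, `event_ts`, `event_type`) are invented for this example, not the client's schema:

```python
import pandas as pd

# Hypothetical input: one row per usage event. Column names are
# illustrative only; the real models read from the client's warehouse.
events = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2", "a2"],
    "event_ts": pd.to_datetime(
        ["2024-01-02", "2024-01-20", "2024-01-05", "2024-01-06", "2024-01-25"]
    ),
    "event_type": ["login", "report_run", "login", "login", "login"],
})

as_of = pd.Timestamp("2024-02-01")
# Keep only events inside a trailing 30-day window.
window = events[events["event_ts"] >= as_of - pd.Timedelta(days=30)]

# Per-account features: activity volume, breadth of engagement,
# and recency (time since last meaningful action).
features = (
    window.groupby("account_id")
    .agg(
        events_30d=("event_ts", "size"),
        distinct_event_types_30d=("event_type", "nunique"),
        last_event_ts=("event_ts", "max"),
    )
    .assign(days_since_last_event=lambda f: (as_of - f["last_event_ts"]).dt.days)
    .drop(columns="last_event_ts")
)
```

In dbt, each of these would be its own model with source lineage, which is what made the scores auditable back to raw events.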

02 · Modeling: interpretable churn predictor

A scikit-learn churn model trained on the historical accounts that had churned versus retained, with model selection biased toward interpretability rather than incremental accuracy. CS reps could see why an account was flagged (which signals contributed, in what direction) rather than just getting a score, which is what made them actually use it. Retraining cadence and drift monitoring were built in from the start.
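A minimal sketch of what that interpretability can look like in scikit-learn, using a standardized logistic regression whose per-feature log-odds contributions can be shown to a rep. The data is synthetic and the feature names are invented; the client's actual feature set and model selection came out of the Blueprint:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic training data with invented feature names.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
feature_names = ["logins_30d", "open_tickets", "days_since_last_action"]

# Simulated ground truth: churn is likelier with fewer logins
# and a longer gap since the last meaningful action.
logits = -1.2 * X[:, 0] + 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(x):
    """Per-feature contribution to one account's churn log-odds."""
    z = scaler.transform(x.reshape(1, -1))[0]
    contribs = model.coef_[0] * z
    return dict(zip(feature_names, contribs.round(2)))
```

The point of a linear model here is that `explain` is exact, not an approximation: each flagged account comes with the signals that drove the flag and the direction they pushed.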

03 · Activation: scores in Salesforce

Scores synced into Salesforce inside the client's existing CS workflow, so reps saw at-risk flags in the system they already used every day. No new dashboard to learn, no separate tool to log into. The CS team's prioritization changed quietly: same workflow, better-ranked list of accounts.
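A sketch of the score-sync step: mapping model scores onto Salesforce Account update payloads. The custom field names (`Churn_Risk_Score__c`, `Churn_Risk_Band__c`) and the account IDs are hypothetical, not the client's actual schema:

```python
def to_salesforce_records(scores):
    """Map {account_id: churn_score} to Account update payloads."""
    def band(score):
        # Illustrative thresholds; the real cut-points were set with the CS team.
        if score >= 0.7:
            return "High"
        if score >= 0.4:
            return "Medium"
        return "Low"

    return [
        {
            "Id": account_id,
            "Churn_Risk_Score__c": round(score, 3),
            "Churn_Risk_Band__c": band(score),
        }
        for account_id, score in scores.items()
    ]

records = to_salesforce_records({"001xx000003DGb1": 0.82, "001xx000003DGb2": 0.21})
# In production, a list like this would go to the Salesforce Bulk API,
# e.g. via simple_salesforce: sf.bulk.Account.update(records)
```

Writing scores to fields reps already see in their account views is what kept this a workflow change rather than a new tool.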

How we worked

The Blueprint ran four weeks: defining what 'churn risk' actually meant for the business (cancellation, downgrade, contraction), auditing the historical data to see whether the signals we wanted to use were clean enough to model on, and pressure-testing feature ideas with the CS team's pattern-matching as a sense-check. We came out with a tight feature set, a defensible label for what a churn event was, and explicit decisions on the data quality issues we'd fix versus model around.

Through the eight-week Build, we paired with the CS analytics lead in weekly working sessions. Each iteration of the model went to the CS team for a gut-check before we kept tuning: did the accounts the model flagged feel right, and were the accounts the model didn't flag actually safe? That feedback loop was where the model's interpretability had to land, not just at the technical level but at the workflow level.

Knowledge transfer included Python notebooks the analytics team could rerun, model documentation explaining each feature's contribution, a decision-making framework for when to retrain, and a runbook for the Salesforce sync. The CS analytics lead owned the model after handoff and ran the first retraining cycle himself two months in.

Results
  • 15% improvement in retention for flagged cohort vs historical baseline, within 6 months
  • CS team shifted from reactive to proactive outreach on at-risk accounts
  • At-risk signals identified early enough for CS to actually do something about them
  • Interpretable scores meant reps trusted the model and used it in their workflow
  • Model expanded from initial pilot into the primary CS prioritization tool
"Our CS team used to find out an account was leaving when they handed in their cancellation notice. Now we know six months out, and we can actually do something about it."
VP of Customer Success, B2B SaaS client

Outcomes start with a Blueprint. We plan, build and run from there.

Thirty minutes with an 829 Analytics partner. You leave with a prioritized view of what to build first, what's worth waiting on, and the business metric anchoring each move. Whether or not we end up working together.