The client was a growth-stage B2B SaaS company in the $10M to $30M ARR range, with a customer success team that was reacting to churn rather than getting ahead of it. They had plenty of product usage data and support ticket history, but no systematic way to spot accounts going sideways.
By the time an account showed cancellation intent, it was usually too late to save. Usage drops, support escalations, and changes in engagement were all visible in the data, but nobody was stitching them together into a coherent early-warning signal. The CS team prioritized outreach based on gut feel and recent conversations, which meant the quiet accounts (often the ones most at risk) slipped through. Renewal conversations turned into save attempts instead of expansion plays.
The pattern repeated across every quarter. Logo retention numbers in the board deck explained themselves only in hindsight. The CS team knew which accounts they had saved and which they had lost, but they couldn't articulate which signals predicted the difference, so the next quarter's prioritization kept running on instinct. Leadership wanted a way to put the CS team's energy where it actually moved retention, with a defensible signal underneath.
Feature engineering ran in dbt on the client's existing Snowflake warehouse: usage patterns (logins, feature adoption, depth of engagement), support ticket signals (volume, sentiment, escalation type), and product engagement metrics (active users per account, time-since-last-meaningful-action). Every feature was version-controlled and traceable back to source so the model's inputs were defensible.
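To give a concrete sense of what those account-level features look like, here is a rough pandas sketch. The client's pipeline ran as dbt SQL models in Snowflake rather than Python, and the table and column names below (events, tickets, event_ts, and so on) are illustrative stand-ins, not the client's actual schema.

```python
import pandas as pd

def build_account_features(events: pd.DataFrame, tickets: pd.DataFrame,
                           as_of: pd.Timestamp) -> pd.DataFrame:
    """Roll raw usage events and support tickets up to one row per account.

    Hypothetical schemas:
      events:  account_id, user_id, feature_name, event_ts
      tickets: account_id, opened_ts, was_escalated (bool)
    """
    recent = events[events["event_ts"] >= as_of - pd.Timedelta(days=90)]

    # Usage and engagement signals: login volume, breadth of adoption, recency.
    usage = recent.groupby("account_id").agg(
        logins_90d=("event_ts", "count"),
        active_users_90d=("user_id", "nunique"),
        features_adopted_90d=("feature_name", "nunique"),
        last_event_ts=("event_ts", "max"),
    )
    usage["days_since_last_action"] = (as_of - usage["last_event_ts"]).dt.days

    # Support ticket signals: volume and escalations over the same window.
    recent_tickets = tickets[tickets["opened_ts"] >= as_of - pd.Timedelta(days=90)]
    support = recent_tickets.groupby("account_id").agg(
        tickets_90d=("opened_ts", "count"),
        escalations_90d=("was_escalated", "sum"),
    )

    return usage.drop(columns=["last_event_ts"]).join(support, how="left").fillna(0)
```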
We trained a scikit-learn churn model on historical accounts that had churned versus those that had retained, with model selection biased toward interpretability rather than incremental accuracy. CS reps could see why an account was flagged (which signals contributed, and in what direction) rather than just getting a score, which is what made them actually use it. Retraining cadence and drift monitoring were built in from the start.
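For illustration, here is a minimal sketch of that kind of interpretable setup, using a scikit-learn logistic regression and reading per-account signal contributions off its coefficients. It is one way to get "which signals contributed, and in what direction"; the client's actual model choice and explanation method may have differed.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_churn_model(X: pd.DataFrame, y: pd.Series):
    """Fit a simple, interpretable churn classifier.

    X: one row per account of the engineered features.
    y: 1 = churned, 0 = retained.
    """
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    return model

def explain_account(model, X: pd.DataFrame, account_idx) -> pd.Series:
    """Signed contribution of each feature to one account's risk score.

    For a linear model, coefficient * standardized feature value is that
    feature's contribution to the log-odds, which is what lets a rep see
    *why* an account was flagged rather than just seeing a number.
    """
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    z = scaler.transform(X.loc[[account_idx]])[0]
    contributions = pd.Series(z * clf.coef_[0], index=X.columns)
    return contributions.sort_values(key=np.abs, ascending=False)
```

In this sketch, model.predict_proba(X)[:, 1] gives the risk score that flows downstream, and explain_account gives the ranked signals behind it for the rep-facing view.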
Scores synced into Salesforce as part of the client's existing CS workflow, so reps saw at-risk flags in the system they already used every day. No new dashboard to learn, no separate tool to log into. The CS team's prioritization changed quietly: same workflow, better-ranked list of accounts.
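The case study doesn't spell out the sync mechanism, so treat this as one plausible shape: a scheduled job pushing scores onto Account records through the Salesforce REST API via the simple_salesforce library. The custom field Churn_Risk_Score__c and the environment-variable credentials are assumptions made for the sketch.

```python
import os
from simple_salesforce import Salesforce  # third-party Salesforce REST client

def push_scores(scores: dict[str, float]) -> None:
    """Write model scores onto Account records so reps see them in Salesforce.

    `scores` maps Salesforce Account IDs to churn risk scores in [0, 1].
    Churn_Risk_Score__c is a hypothetical custom field; use whatever field
    your org actually defines for this.
    """
    sf = Salesforce(
        username=os.environ["SF_USERNAME"],
        password=os.environ["SF_PASSWORD"],
        security_token=os.environ["SF_TOKEN"],
    )
    records = [
        {"Id": account_id, "Churn_Risk_Score__c": round(score, 3)}
        for account_id, score in scores.items()
    ]
    # Bulk update keeps the nightly sync to a handful of API requests.
    sf.bulk.Account.update(records, batch_size=200)
```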
The Blueprint ran four weeks: defining what 'churn risk' actually meant for the business (cancellation, downgrade, contraction), auditing the historical data to see whether the signals we wanted to use were clean enough to model on, and pressure-testing feature ideas with the CS team's pattern-matching as a sense-check. We came out with a tight feature set, a defensible label for what a churn event was, and explicit decisions on the data quality issues we'd fix versus model around.
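The churn-event label is the decision that shapes everything downstream, so here is a sketch of what such a definition can look like in code. The client's actual label logic isn't shown here, and the subscription schema and 30% contraction threshold below are hypothetical.

```python
import pandas as pd

def label_churn_events(subs: pd.DataFrame,
                       contraction_threshold: float = 0.30) -> pd.Series:
    """Label each renewal-period row as a churn event or not.

    Hypothetical schema for `subs` (one row per account per renewal period):
      account_id, period_end, arr_before, arr_after, cancelled (bool),
      downgraded (bool)

    A churn event here is a cancellation, a downgrade, or a contraction in
    ARR at or beyond the threshold (30% in this sketch).
    """
    contraction = (subs["arr_before"] - subs["arr_after"]) / subs["arr_before"]
    return subs["cancelled"] | subs["downgraded"] | (contraction >= contraction_threshold)
```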
Through the eight-week Build, we paired with the CS analytics lead in weekly working sessions. Each iteration of the model went to the CS team for a gut-check before we kept tuning: did the accounts the model flagged feel right, and were the accounts the model didn't flag actually safe? That feedback loop was where the model's interpretability had to land, not just at the technical level but at the workflow level.
Knowledge transfer included Python notebooks the analytics team could rerun, model documentation explaining each feature's contribution, a decision-making framework for when to retrain, and a runbook for the Salesforce sync. The CS analytics lead owned the model after handoff and ran the first retraining cycle himself two months in.
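The retraining framework itself isn't spelled out in this case study, but one simple, common trigger is a population stability index (PSI) check on the scoring features. The sketch below uses the conventional 0.2 rule-of-thumb threshold, not a client-specific figure.

```python
import numpy as np
import pandas as pd

def psi(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
    """Population stability index between training-time and current feature values."""
    expected, actual = expected.dropna(), actual.dropna()
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover values outside the training range
    e = np.histogram(expected, bins=cuts)[0] / len(expected)
    a = np.histogram(actual, bins=cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

def drifted_features(train_X: pd.DataFrame, current_X: pd.DataFrame,
                     threshold: float = 0.2) -> list[str]:
    """Names of features whose distribution has shifted past the PSI threshold,
    signalling that a retraining review is worth scheduling."""
    return [col for col in train_X.columns
            if psi(train_X[col], current_X[col]) > threshold]
```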
Our CS team used to find out an account was leaving when they handed in their cancellation. Now we know six months out, and we can actually do something about it.
Thirty minutes with an 829 Analytics partner. You leave with a prioritized view of what to build first, what's worth waiting on, and the business metric anchoring each move. Whether or not we end up working together.