How Sable measures, derives, and interprets community health. Every metric in the portal is tagged with its signal type so you know exactly what you're looking at.
Every data point in Sable is classified by how it was produced. This taxonomy is surfaced throughout the portal via colored badges so you can calibrate your confidence accordingly.
Measured
Derived
Interpretive
Mixed
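As a sketch, the four signal types could be modeled as a simple enum. The names and comments below are illustrative, not the portal's actual code:

```python
from enum import Enum

class SignalType(Enum):
    """How a data point was produced (illustrative names)."""
    MEASURED = "measured"          # counted directly from raw activity
    DERIVED = "derived"            # deterministic arithmetic on measured inputs
    INTERPRETIVE = "interpretive"  # AI-assisted holistic judgment
    MIXED = "mixed"                # derived arithmetic with interpretive framing
```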
Community health is summarized as a letter grade (A through F) by the Cult Grader diagnostic system. Grades are interpretive signals: they represent an AI-assisted holistic assessment, not a mechanical formula.
A: Exceptional
B: Healthy
C: Developing
D: Struggling
F: Critical
Grades consider engagement depth, recurring participation, lateral conversation structure, bot contamination, content performance, and cultural signal health. An INC (incomplete) grade is assigned when insufficient data exists to make a meaningful assessment. Grade history is tracked across diagnostic runs to compute trajectory.
These measured values form the foundation of all derived and interpretive outputs.
Engagement Rate
Recurring Account Share
Unique Mentioners
Bot Reply Rate
Lateral Reply Pairs
Community Graph Density
Raw engagement rate blends genuine recurring participation with transient visitors and bot noise. Decomposition separates these layers to show what's real.
Genuine Floor
genuine_floor = recurring_account_share × engagement_rate
Engagement attributable to returning community members.
Transient Ceiling
transient_ceiling = engagement_rate − genuine_floor
Gap between total engagement and the genuine floor — new, unknown, or one-time participants.
Bot-Adjusted Rate
bot_adjusted = engagement_rate × (1 − bot_reply_rate)
Engagement after factoring out estimated bot activity. Null when bot data is unavailable.
The arithmetic is derived (deterministic from measured inputs), but the framing of “genuine” vs “transient” is interpretive — hence signal type mixed.
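The three decomposition formulas can be sketched as one function. The function name and signature are illustrative; the arithmetic follows the definitions above, including the null result when bot data is missing:

```python
def decompose_engagement(engagement_rate, recurring_account_share, bot_reply_rate=None):
    """Split raw engagement into genuine, transient, and bot-adjusted layers."""
    genuine_floor = recurring_account_share * engagement_rate
    transient_ceiling = engagement_rate - genuine_floor
    # bot_adjusted is None (null) when bot data is unavailable
    bot_adjusted = None if bot_reply_rate is None else engagement_rate * (1 - bot_reply_rate)
    return genuine_floor, transient_ceiling, bot_adjusted
```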
Measures how dependent a community's conversation structure is on a small number of nodes, using a Herfindahl-style concentration index.
Pair Saturation
pair_saturation = lateral_reply_pairs / (mentioners × (mentioners − 1) / 2)
How much of the theoretical maximum peer connectivity is realized.
Concentration Index
concentration = 1 − min(density × 10, 1) × min(pair_saturation × 100, 1)
Clamped to [0, 1]. Lower is more distributed.
Distributed: < 0.3
Concentrated: 0.3–0.7
Fragile: ≥ 0.7
A fragile community depends heavily on a few key connectors. If those accounts go inactive, lateral conversation collapses. Distributed communities sustain themselves through many independent conversation threads.
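A minimal sketch of the concentration calculation and its banding, assuming the formulas above (function names are illustrative):

```python
def concentration_index(density, lateral_reply_pairs, mentioners):
    """Herfindahl-style concentration: closer to 1 means more hub-dependent."""
    max_pairs = mentioners * (mentioners - 1) / 2      # theoretical peer-pair maximum
    pair_saturation = lateral_reply_pairs / max_pairs
    # Both factors are clamped before multiplying, so the result stays in [0, 1]
    return 1 - min(density * 10, 1) * min(pair_saturation * 100, 1)

def classify(concentration):
    """Band the index into the three labels used in the docs."""
    if concentration < 0.3:
        return "distributed"
    return "concentrated" if concentration < 0.7 else "fragile"
```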
Projects what key metrics would look like if Sable activity ceased, using exponential decay toward the pre-engagement baseline.
Decay Model
decay(t) = baseline + (current − baseline) × e^(−λt)
λ = ln(2) / 60, calibrated so 50% of the gap closes in 60 days. The first diagnostic run serves as the baseline proxy.
This is a directional projection, not a causal prediction. Actual outcomes depend on community dynamics, market conditions, and factors outside Sable's control. Requires at least 2 diagnostic runs to compute.
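The decay model translates directly to code. This is a sketch of the stated formula, with λ fixed to the documented 60-day half-life:

```python
import math

HALF_LIFE_DAYS = 60
LAMBDA = math.log(2) / HALF_LIFE_DAYS  # 50% of the gap closes in 60 days

def project_decay(current, baseline, days):
    """Project a metric toward its pre-engagement baseline if activity ceased.

    Directional only: not a causal prediction."""
    return baseline + (current - baseline) * math.exp(-LAMBDA * days)
```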
Trajectory summarizes grade direction across all diagnostic runs. Requires at least 3 runs with non-INC grades.
When the grade trajectory says “improving” but both engagement rate and recurring account share declined (or vice versa), a contradiction flag is raised. This catches cases where the interpretive grade may be misleading relative to measured trends.
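The contradiction check can be sketched as a comparison of the interpretive trend label against the signs of the two measured deltas. Argument names are illustrative:

```python
def contradiction_flag(grade_trend, engagement_delta, recurring_delta):
    """Flag when the interpretive grade direction disagrees with both measured trends."""
    if grade_trend == "improving":
        return engagement_delta < 0 and recurring_delta < 0
    if grade_trend == "declining":
        return engagement_delta > 0 and recurring_delta > 0
    return False
```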
Each report receives a confidence level synthesized from three factors: data freshness, sample size sufficiency, and bot contamination.
Confidence starts at score 3 (high) and degrades when freshness, sample size, or bot contamination checks fail:
High: score 3
Moderate: score 2
Low: score 1
Insufficient: score 0
Emergent community terms (slang, memes, cultural vocabulary) are tracked over time and fitted to a logistic S-curve to estimate adoption stage.
Logistic Model
f(x) = L / (1 + e^(−k × (x − x₀)))
L = max observed usage (×1.05), k = growth rate, x₀ = midpoint. Fitted via grid search minimizing sum of squared residuals. Requires ≥3 data points.
R-squared is reported alongside the fit to indicate how well the sigmoid model matches actual adoption data. Poor fits (low R²) suggest the term may not be following a typical adoption curve.
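A minimal grid-search fit in the spirit described above: L is fixed at 1.05 × the maximum observed usage, while k and x₀ are searched over coarse grids. The grid ranges and resolutions here are assumptions, not the diagnostic system's actual settings:

```python
import math

def fit_logistic(xs, ys):
    """Grid-search fit of f(x) = L / (1 + e^(-k (x - x0))), minimizing SSE.

    Returns (L, k, x0, r_squared). Requires at least 3 data points."""
    assert len(xs) >= 3, "requires at least 3 data points"
    L = max(ys) * 1.05                                  # ceiling fixed, per the docs
    span = max(xs) - min(xs)
    best = None
    for k in (0.05 * i for i in range(1, 41)):          # illustrative k grid
        for j in range(41):                             # illustrative x0 grid
            x0 = min(xs) + span * j / 40
            sse = sum((y - L / (1 + math.exp(-k * (x - x0)))) ** 2
                      for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, k, x0)
    sse, k, x0 = best
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1 - sse / ss_tot if ss_tot else 0.0            # reported alongside the fit
    return L, k, x0, r2
```

A low R² from this fit is the signal that the term is not following a typical S-curve, per the note above.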
Every data section displays a freshness indicator. Staleness in upstream sources cascades to downstream sections.
< 24h — Pulsing dot
1–7 days — Amber dot
> 7 days — Red dot
When a primary source (diagnostic, tracking sync, or pulse scan) goes stale, all sections that depend on it display a cascade warning identifying the stale upstream source and its last update date.
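The three freshness bands map directly to a small helper. Thresholds follow the list above; the function name is illustrative:

```python
def freshness_indicator(age_hours):
    """Map data age to the portal's freshness dot."""
    if age_hours < 24:
        return "pulsing"        # < 24h
    if age_hours <= 7 * 24:
        return "amber"          # 1-7 days
    return "red"                # > 7 days
```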
Metrics derived from small sample sizes are flagged with a low-sample warning. The threshold is determined by the upstream diagnostic system and surfaced as a status on each metric's sample context.
When sample size is low, the metric is labeled n=X (low) — directional only to indicate that while the direction may be informative, the precise value should not be relied upon for decisions.
The portal assembles community intelligence from four primary systems. Each contributes a specific data domain.
Cult Grader
Slopper
Lead Identifier
SablePlatform
Different data sources update on different schedules. Freshness is surfaced per-section in the portal via colored indicators.
The freshness indicator on each section reflects the last update from its upstream source, not when the portal last loaded.
When an upstream data source goes stale, all portal sections that depend on it inherit a staleness warning. This prevents misleading freshness signals on derived data.
Cascade Rule
if source.status = stale → all dependent sections show warning
Each section's dependency on upstream sources is defined in the cascade map. A stale diagnostic, for example, marks all grade, metric, recommendation, and language sections as stale.
Cascade warnings identify the specific stale source and its last update date, so you can distinguish between “this section’s own data is old” and “this section depends on something that’s old.”
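A sketch of the cascade rule over a hypothetical cascade map. The section names and dependency edges below are illustrative (the docs confirm only that a stale diagnostic marks grade, metric, recommendation, and language sections):

```python
# Hypothetical cascade map: section -> upstream sources it depends on
CASCADE_MAP = {
    "grades": ["diagnostic"],
    "metrics": ["diagnostic"],
    "recommendations": ["diagnostic"],
    "language": ["diagnostic", "pulse_scan"],
    "leads": ["tracking_sync"],
}

def cascade_warnings(stale_sources):
    """Return, for each affected section, which of its upstream sources are stale."""
    return {section: [s for s in deps if s in stale_sources]
            for section, deps in CASCADE_MAP.items()
            if any(s in stale_sources for s in deps)}
```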
The portal operates in two modes, determined by its deployment configuration.
In live mode, sections that cannot reach their data source display the specific reason (unavailable, timeout, authentication) rather than falling back to sample data.