@gsingal
Created April 11, 2026 18:13
KPI Sub-Metrics: 100 candidates → ranked → reviewed → final 20 (April 2026)

100 Candidate Sub-Metrics → Ranked → Top 30 → Proposed 20

Starting from all available data sources (production DB, Stripe, PostHog, GSC, PSI, Rollbar, Postmark, GitHub), we generated 100 candidate metrics and ranked them by decision utility × collectibility × uniqueness.


Scoring Criteria

Each metric is scored 1–5 on three axes:

  • Decision utility (D): Would this metric change a decision this month? (5 = definitely, 1 = nice to know)
  • Collectibility (C): Can we automate collection today? (5 = SQL/API query, 1 = requires new infrastructure)
  • Uniqueness (U): Does it tell us something the 8 heroes don't? (5 = completely new signal, 1 = redundant)

Composite score = D × 0.5 + C × 0.3 + U × 0.2 (weighted toward decision utility)
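The weighting above maps directly to code. A minimal sketch of the scoring and ranking step, in the camelCase style of the doc's collector helpers (the `Candidate` type and sample metric here are illustrative, not part of the real pipeline):

```typescript
// Composite scoring sketch for the D/C/U rubric above.
// Weights come from the doc; the types and example metric are assumptions.
type Candidate = {
  name: string;
  d: number; // decision utility, 1-5
  c: number; // collectibility, 1-5
  u: number; // uniqueness, 1-5
};

const WEIGHTS = { d: 0.5, c: 0.3, u: 0.2 };

function compositeScore({ d, c, u }: Candidate): number {
  const raw = d * WEIGHTS.d + c * WEIGHTS.c + u * WEIGHTS.u;
  return Math.round(raw * 10) / 10; // one decimal place, as in the tables
}

// Rank a candidate list by composite score, descending.
function rank(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => compositeScore(b) - compositeScore(a));
}

// Example: a D=4, C=5, U=4 metric scores 2.0 + 1.5 + 0.8 = 4.3.
const example: Candidate = { name: "Revenue per listing", d: 4, c: 5, u: 4 };
console.log(compositeScore(example)); // 4.3
```

Because D carries half the weight, a metric that is trivially collectible but decision-irrelevant (D=1, C=5, U=3) still only scores 2.6, below nearly everything that would change a decision this month.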


The 100 Candidates

Revenue & Pricing (15)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 1 | New MRR added this month | Stripe | 5 | 5 | 5 | 5.0 | |
| 2 | Churned MRR lost this month | Stripe | 5 | 5 | 5 | 5.0 | |
| 3 | Revenue per listing (MRR / paid listings) | Stripe+SQL | 4 | 5 | 4 | 4.3 | ~$14.14 |
| 4 | Plan tier distribution (% monthly/quarterly/annual/premium) | Stripe | 4 | 5 | 3 | 4.1 | 15.6%/13.9%/44.2%/17.9% |
| 5 | Net new subscriptions/week | Stripe | 4 | 5 | 3 | 4.1 | |
| 6 | Checkout completion rate | PostHog | 4 | 3 | 4 | 3.9 | |
| 7 | Premium upgrade rate (standard → premium) | Stripe | 4 | 4 | 4 | 4.0 | |
| 8 | Coupon redemption count/month | Stripe | 3 | 5 | 3 | 3.6 | 1 |
| 9 | Average revenue per user (ARPU) | Stripe | 3 | 5 | 3 | 3.6 | ~$18/mo |
| 10 | Failed payment amount/month | Stripe | 4 | 5 | 4 | 4.3 | |
| 11 | Failed payment recovery rate | Stripe | 4 | 4 | 5 | 4.2 | |
| 12 | Expansion MRR (upgrades) | Stripe | 3 | 4 | 4 | 3.5 | |
| 13 | Contraction MRR (downgrades) | Stripe | 3 | 4 | 5 | 3.7 | |
| 14 | Free-to-paid conversion rate | SQL | 3 | 4 | 4 | 3.5 | |
| 15 | Trial conversion rate | Stripe | 2 | 4 | 3 | 2.8 | |

Churn & Retention (15)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 16 | No-inquiry churn rate (0 inquiries → canceled) | SQL | 5 | 4 | 5 | 4.7 | ~80.1% of 0-inq listers churn |
| 17 | Got-inquiry churn rate (≥1 inquiry → canceled) | SQL | 5 | 4 | 5 | 4.7 | ~66.3% |
| 18 | Churn by billing cycle (monthly/quarterly/annual) | Stripe | 4 | 5 | 4 | 4.3 | 87.7%/65.2%/49.3% |
| 19 | 30-day cohort retention | SQL | 4 | 4 | 4 | 4.0 | |
| 20 | 90-day cohort retention | SQL | 4 | 4 | 4 | 4.0 | |
| 21 | Cancellation count/month (from `subscription_cancellations`) | SQL | 3 | 5 | 2 | 3.4 | 353 |
| 22 | Reactivation rate (canceled → re-subscribed) | Stripe | 3 | 3 | 5 | 3.4 | |
| 23 | Days to cancellation (median sub lifetime) | Stripe | 3 | 4 | 4 | 3.5 | |
| 24 | Involuntary churn rate (payment failure → canceled) | Stripe | 3 | 4 | 4 | 3.5 | |
| 25 | Lister NPS or satisfaction proxy | Manual | 5 | 1 | 5 | 3.8 | |
| 26 | Support ticket volume/week | Manual | 3 | 2 | 4 | 2.9 | |
| 27 | Churn by listing age (first 30d vs 30-90d vs 90d+) | SQL+Stripe | 4 | 3 | 4 | 3.7 | |
| 28 | Churned lister win-back rate (re-signup within 90d) | SQL | 2 | 3 | 4 | 2.7 | |
| 29 | Subscription pause rate | Stripe | 2 | 4 | 3 | 2.7 | |
| 30 | % of churned listers who never got an inquiry | SQL | 5 | 4 | 4 | 4.5 | |

Demand & Searcher Experience (15)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 31 | Thread completion rate (2+ msgs each side) | SQL | 5 | 5 | 4 | 4.8 | ~28% |
| 32 | Messages/week | SQL | 3 | 5 | 2 | 3.4 | 4,810 |
| 33 | New conversations/week | SQL | 3 | 5 | 3 | 3.6 | ~1,750 |
| 34 | Unique searchers/week | SQL | 4 | 5 | 3 | 4.1 | ~570/wk |
| 35 | Searcher return rate (messaged again within 30d) | SQL | 4 | 4 | 4 | 4.0 | 59.3% (if replied to) |
| 36 | Mobile vs desktop inquiry rate | PostHog | 3 | 3 | 4 | 3.2 | |
| 37 | Search-to-message conversion (search → listing view → inquiry) | PostHog | 4 | 2 | 5 | 3.6 | |
| 38 | Listing view → inquiry rate | PostHog | 4 | 3 | 4 | 3.7 | |
| 39 | Zero-result search rate | PostHog | 3 | 2 | 5 | 3.1 | |
| 40 | Avg messages per conversation | SQL | 2 | 5 | 3 | 2.9 | |
| 41 | Searcher signup → first message (days) | SQL | 4 | 4 | 4 | 4.0 | |
| 42 | Inquiry-to-booking conversion (if tracked) | SQL | 5 | 2 | 5 | 4.0 | |
| 43 | Repeat searcher rate (2+ conversations, different rooms) | SQL | 3 | 4 | 4 | 3.5 | |
| 44 | Conversation abandonment rate (started, never completed) | SQL | 3 | 4 | 4 | 3.5 | |
| 45 | Mobile listing creation rate | PostHog | 3 | 4 | 3 | 3.3 | 18/wk |

Fraud & Trust (10)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 46 | Median time-to-block (hours) | SQL | 4 | 4 | 4 | 4.0 | |
| 47 | Conversations with eventually-blocked accounts | SQL | 4 | 4 | 4 | 4.0 | 41/mo |
| 48 | Dispute win rate (won / total disputes) | Stripe | 4 | 5 | 5 | 4.5 | |
| 49 | Dispute loss amount/month | Stripe | 3 | 5 | 3 | 3.6 | $35 |
| 50 | Blocked users/month | SQL | 3 | 5 | 3 | 3.6 | |
| 51 | Verification request completion rate | SQL | 3 | 4 | 4 | 3.5 | |
| 52 | Fraudulent listing lifespan (hours from creation to block) | SQL | 3 | 3 | 4 | 3.2 | |
| 53 | Disposable email signup attempts (when blocking ships) | SQL | 3 | 3 | 4 | 3.2 | |
| 54 | Reports/flags from users | SQL | 2 | 3 | 4 | 2.7 | |
| 55 | Stripe Radar risk score distribution | Stripe | 2 | 3 | 4 | 2.7 | |

Supply & Listing Quality (15)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 56 | Time-to-first-inquiry (median days, new listings) | SQL | 5 | 4 | 5 | 4.7 | 27.1% never get one |
| 57 | Listing completeness score (title + desc + photos + address) | SQL | 4 | 4 | 5 | 4.3 | |
| 58 | Paid listings with 0 inquiries in 30d (%) | SQL | 4 | 5 | 3 | 4.1 | ~28% |
| 59 | Premium vs standard inquiry rate | SQL | 4 | 4 | 4 | 4.0 | 5.63 vs 4.48/mo |
| 60 | Listings per city distribution (concentration) | SQL | 3 | 4 | 5 | 3.7 | 550 cities |
| 61 | Multi-listing listers (power hosts) | SQL | 3 | 5 | 4 | 3.7 | 131 |
| 62 | New listings/week | SQL | 3 | 5 | 3 | 3.6 | |
| 63 | Avg listing rent | SQL | 2 | 5 | 3 | 2.9 | $1,470 |
| 64 | Median listing rent | SQL | 2 | 5 | 3 | 2.9 | $1,200 |
| 65 | Listing deactivation rate (active → inactive) | SQL | 3 | 4 | 4 | 3.5 | |
| 66 | Avg description length | SQL | 2 | 5 | 3 | 2.9 | |
| 67 | Listings with reviews (%) | SQL | 2 | 4 | 4 | 2.9 | 0 (reviews not used) |
| 68 | Room availability fill rate | SQL | 3 | 3 | 4 | 3.2 | |
| 69 | Stale listings (active >180d, no edits) | SQL | 2 | 4 | 3 | 2.7 | |
| 70 | Boosted listings (active boosts) | SQL | 2 | 5 | 3 | 2.9 | 0 |

Organic Growth & SEO (15)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 71 | Organic sessions/week | PostHog | 4 | 4 | 3 | 3.8 | 6,070 |
| 72 | Blog impressions/week | GSC | 3 | 5 | 3 | 3.6 | 9,100 |
| 73 | Blog clicks/week | GSC | 3 | 5 | 3 | 3.6 | 105 |
| 74 | Blog CTR | GSC | 3 | 5 | 3 | 3.6 | ~1.2% |
| 75 | Indexed blog pages | GSC | 3 | 5 | 3 | 3.6 | 66 |
| 76 | Sublets page impressions/week | GSC | 4 | 5 | 5 | 4.5 | — (not tracked separately) |
| 77 | Homepage LCP (mobile) | PSI | 4 | 5 | 2 | 3.9 | 14,400ms |
| 78 | Search page LCP (mobile) | PSI | 3 | 5 | 2 | 3.4 | 13,300ms |
| 79 | TN keyword position | GSC | 3 | 5 | 3 | 3.6 | #12 |
| 80 | Total site impressions/week | GSC | 3 | 5 | 3 | 3.6 | |
| 81 | Organic signup rate (organic visitors → account created) | PostHog | 5 | 3 | 5 | 4.4 | |
| 82 | Blog → signup conversion | PostHog | 4 | 2 | 5 | 3.6 | |
| 83 | Landing page impressions (non-blog content pages) | GSC | 3 | 5 | 4 | 3.7 | |
| 84 | New backlinks/month | Ahrefs | 2 | 4 | 4 | 3.0 | |
| 85 | Domain rating trend | Ahrefs | 2 | 4 | 3 | 2.7 | |

Marketplace Dynamics (10)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 86 | Inquiries per active listing (supply-demand ratio) | SQL | 5 | 5 | 5 | 5.0 | ~3.4/mo |
| 87 | Top 10 cities liquidity (% with inquiry in 14d) | SQL | 4 | 4 | 5 | 4.3 | |
| 88 | Thin markets count (cities with searchers but <5 listings) | SQL | 3 | 4 | 5 | 3.7 | |
| 89 | Searcher-to-lister ratio | SQL | 3 | 5 | 4 | 3.7 | 2,289 searchers / 1,454 listers |
| 90 | Cross-market search rate (searcher looks in 2+ cities) | SQL | 2 | 3 | 5 | 2.9 | |
| 91 | Avg days listing stays active | SQL | 2 | 4 | 3 | 2.7 | |
| 92 | Seasonal inquiry volume (MoM change) | SQL | 3 | 4 | 3 | 3.3 | |
| 93 | Weekend vs weekday inquiry patterns | SQL | 1 | 4 | 3 | 2.0 | |
| 94 | Price-to-inquiry correlation by city | SQL | 3 | 3 | 4 | 3.2 | |
| 95 | Avg listings viewed before first inquiry | PostHog | 3 | 2 | 4 | 2.9 | |

Product & Operations (5)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 96 | Verification completion rate (started → approved) | SQL | 3 | 4 | 4 | 3.5 | 839 requests/30d |
| 97 | CI test pass rate | GitHub | 2 | 5 | 2 | 2.7 | 0/10 |
| 98 | Rollbar active error count | Rollbar | 2 | 5 | 2 | 2.7 | |
| 99 | Email delivery rate | Postmark | 2 | 4 | 3 | 2.7 | |
| 100 | Page load errors (client-side JS errors) | PostHog | 2 | 3 | 3 | 2.4 | |

Ranked Top 30

Sorted by composite score, filtered for uniqueness (removing near-duplicates):

| Rank | # | Metric | Score | Category | Why it matters |
|------|---|--------|-------|----------|----------------|
| 1 | 1 | New MRR added/month | 5.0 | Revenue | Half of the MRR waterfall — is acquisition working? |
| 2 | 2 | Churned MRR lost/month | 5.0 | Revenue | Other half — how much are we leaking? |
| 3 | 86 | Inquiries per active listing | 5.0 | Marketplace | The supply-demand ratio — marketplace efficiency |
| 4 | 31 | Thread completion rate | 4.8 | Demand | Do conversations turn into real housing discussions? |
| 5 | 16 | No-inquiry churn rate | 4.7 | Churn | The biggest churn driver decomposed |
| 6 | 56 | Time-to-first-inquiry (median days) | 4.7 | Supply | How fast do new listers see value? |
| 7 | 30 | % churned who never got inquiry | 4.5 | Churn | Validates the inquiry→retention link at churn time |
| 8 | 48 | Dispute win rate | 4.5 | Fraud | Is the evidence system working? |
| 9 | 76 | Sublets page impressions | 4.5 | SEO | Core product pages, separate from blog |
| 10 | 81 | Organic signup rate | 4.4 | SEO | Does organic traffic convert to accounts? |
| 11 | 3 | Revenue per listing | 4.3 | Revenue | Volume growth vs value growth |
| 12 | 10 | Failed payment amount/month | 4.3 | Revenue | Revenue at risk from payment failures |
| 13 | 18 | Churn by billing cycle | 4.3 | Churn | Validates annual tilt strategy |
| 14 | 57 | Listing completeness score | 4.3 | Supply | Quality predicts inquiry rate |
| 15 | 87 | Top 10 cities liquidity | 4.3 | Marketplace | Geographic health check |
| 16 | 11 | Failed payment recovery rate | 4.2 | Revenue | Can we save failed charges? |
| 17 | 4 | Plan tier distribution | 4.1 | Revenue | Portfolio mix monitoring |
| 18 | 5 | Net new subscriptions/week | 4.1 | Revenue | Acquisition velocity |
| 19 | 34 | Unique searchers/week | 4.1 | Demand | Demand-side volume |
| 20 | 58 | Paid listings with 0 inquiries in 30d (%) | 4.1 | Supply | Inverse of liquidity, longer window |
| 21 | 7 | Premium upgrade rate | 4.0 | Revenue | Is Premium delivering enough value to upgrade? |
| 22 | 19 | 30-day cohort retention | 4.0 | Churn | Are new users retaining better over time? |
| 23 | 35 | Searcher return rate | 4.0 | Demand | Platform stickiness for demand side |
| 24 | 41 | Signup → first message (days) | 4.0 | Demand | Activation speed |
| 25 | 46 | Median time-to-block | 4.0 | Fraud | Fraud detection speed |
| 26 | 47 | Conversations with blocked accounts | 4.0 | Fraud | Harm exposure before detection |
| 27 | 59 | Premium vs standard inquiry rate | 4.0 | Supply | Does Premium deliver value? |
| 28 | 77 | Homepage LCP | 3.9 | SEO | Page speed → rankings |
| 29 | 71 | Organic sessions/week | 3.8 | SEO | Traffic volume trend |
| 30 | 83 | Landing page impressions | 3.7 | SEO | Non-blog content pages (TN, FAQ, etc.) |

Proposed 20 Sub-Metrics

From the top 30, we removed 10 that are either too similar to a hero metric or not actionable enough this month:

Removed from top 30:

  • #20 (0-inquiry 30d %) — redundant with listing liquidity hero
  • #28 (Homepage LCP) — already collected as feature-level metric
  • #29 (Organic sessions) — already collected as feature-level metric
  • #30 (Landing page impressions) — too granular for sub-metric tier
  • #27 (Premium vs standard inquiry rate) — redundant with #14 (listing completeness predicts this)
  • #22 (30-day cohort retention) — requires new cohort infrastructure, deferred
  • #26 (Conversations with blocked accounts) — redundant with hero #7 (searchers affected)
  • #25 (Median time-to-block) — important but low action frequency
  • #21 (Premium upgrade rate) — wait for Pro Plan to ship
  • #17 (Plan tier distribution) — interesting but not decision-driving weekly

The 20:

| # | Metric | Question | Source | Why included |
|---|--------|----------|--------|--------------|
| 1 | New MRR/month | Q1 | Stripe | Acquisition health — paired with churned MRR tells the growth story |
| 2 | Churned MRR/month | Q1 | Stripe | Retention cost in dollars — more visceral than % |
| 3 | Revenue per listing | Q1 | Stripe÷SQL | Efficiency — growing because more listings or better monetization? |
| 4 | Failed payment amount/month | Q1 | Stripe | Revenue at risk, currently invisible |
| 5 | Net new subscriptions/week | Q1 | Stripe | Acquisition velocity |
| 6 | No-inquiry churn rate | Q2 | SQL | Decomposed churn — the inquiry-starved segment |
| 7 | % churned who never got inquiry | Q2 | SQL | Validates that inquiry→retention link holds at churn time |
| 8 | Churn by billing cycle | Q2 | Stripe | Validates annual tilt — monthly at 87.7% vs annual at 49.3% |
| 9 | Thread completion rate | Q3 | SQL | Quality of conversations — did both sides actually talk? |
| 10 | Unique searchers/week | Q3 | SQL | Demand-side volume — are searchers growing? |
| 11 | Searcher return rate (30d) | Q3 | SQL | Platform stickiness — do searchers come back? |
| 12 | Signup → first message (days) | Q3 | SQL | Activation speed — how fast do new users find value? |
| 13 | Dispute win rate | Q4 | Stripe | Evidence system effectiveness |
| 14 | Failed payment recovery rate | Q1 | Stripe | Can dunning save revenue? |
| 15 | Inquiries per active listing | Q5 | SQL | Supply-demand ratio — marketplace balance |
| 16 | Time-to-first-inquiry (median days) | Q5 | SQL | New lister experience — how fast do they see value? |
| 17 | Listing completeness score | Q5 | SQL | Quality floor — title + desc + photos + address |
| 18 | Top 10 cities liquidity | Q5 | SQL | Geographic health — aggregate masks city problems |
| 19 | Sublets page impressions | Q6 | GSC | Core product pages separate from blog |
| 20 | Organic signup rate | Q6 | PostHog | Conversion quality — do organic visitors become users? |

Distribution by question:

  • Q1 (Revenue): 6 sub-metrics
  • Q2 (Churn): 3 sub-metrics
  • Q3 (Demand): 4 sub-metrics
  • Q4 (Fraud): 1 sub-metric
  • Q5 (Supply): 4 sub-metrics
  • Q6 (Organic): 2 sub-metrics

This gives Q1 (revenue) the most sub-metrics because it's the weakest-covered question today, and Q4 (fraud) the fewest because fraud is currently under control (37 affected searchers/month).


What These 20 Add to the 8 Heroes

| Hero metric | What it tells you | What the sub-metrics add |
|-------------|-------------------|--------------------------|
| MRR ($36K) | Are we making money? | WHERE the money comes from (new vs churn vs failed), HOW efficiently (per listing) |
| Churn (25%) | Are we losing people? | WHO is churning (no-inquiry vs got-inquiry), WHY (billing cycle, inquiry status) |
| Reply rate (51%) | Is the marketplace delivering? | HOW deep (thread completion), HOW fast (signup→message), WHO returns |
| Fraud searchers (37) | Are people being harmed? | HOW effectively we fight back (dispute win rate) |
| Liquidity (55.7%) | Is supply useful? | HOW fast (time-to-first-inquiry), HOW balanced (per-listing ratio, geographic), WHAT quality (completeness score) |
| Organic funnel | Is growth sustainable? | WHERE from (sublets vs blog), WHAT quality (signup rate) |

Generated 2026-04-11. Scores based on decision utility for a 3-person marketplace team at $36K MRR.

Final Sub-Metrics: 20 Below the Fold

Process: 100 candidates → ranked → reviewed by strategic subagent + Codex adversarial review → revised.

Changes From Draft

| Change | Reason | Source |
|--------|--------|--------|
| Dropped Net new subs/week | Redundant with New MRR (both measure acquisition) | Both reviewers |
| Dropped % churned who never got inquiry | Redundant with No-inquiry churn rate (same signal, different angle) | Strategic review |
| Dropped Failed payment amount | Redundant with recovery rate (recovery is the action metric) | Both reviewers |
| Dropped Sublets page impressions | Low actionability (what would you do differently?) | Codex |
| Added Median time-to-block | Fraud detection speed — #1 strategic priority needs >1 metric | Both reviewers |
| Added Conversations with blocked accounts | Fraud exposure — how much harm before detection | Strategic review |
| Added Search-to-inquiry conversion | THE marketplace efficiency metric — connects demand to outcomes | Codex (critical missing) |
| Added 30-day cohort retention | Are product changes improving retention for NEW users? | Strategic review |
| Swapped Dispute win rate for Fraud blocked accounts/month | $35/month disputes not decision-driving; detection volume is | Strategic review |

The Final 20

| # | Metric | Question | Source | What decision it informs |
|---|--------|----------|--------|--------------------------|
| **Q1: Revenue** | | | | |
| 1 | New MRR added/month | Q1 | Stripe | Is acquisition working in dollar terms? |
| 2 | Churned MRR lost/month | Q1 | Stripe | How much are we leaking? Paired with #1 = net growth |
| 3 | Revenue per listing | Q1 | Stripe÷SQL | Growing from volume or monetization? |
| 4 | Failed payment recovery rate | Q1 | Stripe | Is dunning saving revenue? (amount shown as context) |
| **Q2: Churn** | | | | |
| 5 | No-inquiry churn rate | Q2 | SQL | What % of inquiry-starved listers churn? (the biggest driver) |
| 6 | Churn by billing cycle | Q2 | Stripe | Is the annual tilt reducing churn? (87.7% monthly vs 49.3% annual) |
| 7 | 30-day cohort retention | Q2 | SQL | Are product changes improving retention for new cohorts? |
| **Q3: Demand** | | | | |
| 8 | Thread completion rate | Q3 | SQL | Do conversations turn into real housing discussions? |
| 9 | Unique searchers/week | Q3 | SQL | Is demand-side volume growing? |
| 10 | Searcher return rate (30d) | Q3 | SQL | Do searchers come back? (59.3% if replied to vs 37.9% ghosted) |
| 11 | Signup → first message (days) | Q3 | SQL | How fast do new users find value? (activation speed) |
| 12 | Search-to-inquiry conversion | Q3 | PostHog | THE marketplace efficiency metric — do searches turn into messages? |
| **Q4: Fraud** | | | | |
| 13 | Median time-to-block (hours) | Q4 | SQL | How fast do we detect and neutralize fraud? |
| 14 | Conversations with blocked accounts | Q4 | SQL | How much exposure do searchers have before we catch fraudsters? |
| 15 | Blocked accounts/month | Q4 | SQL | Detection volume — is fraud increasing or decreasing? |
| **Q5: Supply** | | | | |
| 16 | Inquiries per active listing | Q5 | SQL | Supply-demand ratio — is the marketplace balanced? |
| 17 | Time-to-first-inquiry (median days) | Q5 | SQL | How fast do new listers see value? (leading retention indicator) |
| 18 | Listing completeness score | Q5 | SQL | Quality floor — title + desc + photos + address |
| **Q6: Organic** | | | | |
| 19 | Organic signup rate | Q6 | PostHog | Do organic visitors become users? (conversion quality) |
| 20 | Blog clicks/week | Q6 | GSC | Is content driving traffic? (clicks > impressions for actual engagement) |

Distribution

| Question | Heroes | Sub-metrics | Total |
|----------|--------|-------------|-------|
| Q1: Revenue | 2 (MRR, Annual %) | 4 | 6 |
| Q2: Churn | 1 (Churn rate) | 3 | 4 |
| Q3: Demand | 1 (Reply rate) + 1 (Response time) | 5 | 7 |
| Q4: Fraud | 1 (Searchers harmed) | 3 | 4 |
| Q5: Supply | 1 (Liquidity) | 3 | 4 |
| Q6: Organic | 1 (Funnel) | 2 | 3 |
| **Total** | **8** | **20** | **28** |

What Each Sub-Metric Adds to Its Hero

| Hero | Sub-metrics | What they add |
|------|-------------|---------------|
| MRR ($36K) | New MRR, Churned MRR, Revenue/listing, Payment recovery | WHERE money comes from, HOW efficiently, WHAT leaks |
| Annual % (53.7%) | (shares Q1 tile) | |
| Churn (25%) | No-inquiry churn, Churn by cycle, Cohort retention | WHO churns, WHY, and is it IMPROVING over time? |
| Reply rate (51%) | Thread completion, Searchers/wk, Return rate, Activation, Search→inquiry | HOW deep, HOW many, WHO returns, HOW fast, WHERE they convert |
| Response time (34h) | (shares Q3 tile) | |
| Fraud searchers (37) | Time-to-block, Convos with blocked, Blocked/month | HOW fast detected, HOW much exposure, WHAT volume |
| Liquidity (55.7%) | Inquiries/listing, Time-to-first-inquiry, Completeness | HOW balanced, HOW fast, WHAT quality |
| Organic funnel | Signup rate, Blog clicks | WHAT converts, WHAT engages |

UX Design: Below the Fold

Per reviewer feedback, the 20 metrics are displayed as a compact table (not cards):

  • One row per metric: name | value | trend arrow (↑↓→) | status dot (green/yellow/red/gray)
  • Grouped by question with Q-label headers
  • Thumbs up/down on each row (lightweight — signals which metrics are useful)
  • No sparklines below the fold — sparklines live in the hero tiles only
  • Scannable in 10 seconds — the goal is "anything weird?" not deep analysis

Thumbs up/down gets reviewed monthly: metrics with net-negative votes get replaced. This creates a natural pruning mechanism.
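That pruning rule is simple enough to sketch; the `VoteTally` shape and the sample tallies below are invented for illustration, not the real data model:

```typescript
// Monthly pruning sketch: any metric whose thumbs up/down votes net
// negative over the month is flagged for replacement.
// Type and sample data are assumptions, not the real schema.
type VoteTally = { metric: string; up: number; down: number };

function metricsToReplace(tallies: VoteTally[]): string[] {
  return tallies
    .filter((t) => t.up - t.down < 0) // net-negative votes only
    .map((t) => t.metric);
}

const month: VoteTally[] = [
  { metric: "Thread completion rate", up: 6, down: 1 },
  { metric: "Blog clicks/week", up: 1, down: 4 },
];
console.log(metricsToReplace(month)); // ["Blog clicks/week"]
```

A zero net score keeps the metric; only metrics people actively vote against get cycled out, which keeps the table stable month to month.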


Collectibility Assessment

| Source | Metrics | Status |
|--------|---------|--------|
| SQL (production RO) | 13 | `collectSql()` exists |
| Stripe (paginated) | 5 | `collectStripe()` with `sgetAll()` |
| PostHog (HogQL) | 1 | ⚠️ Search-to-inquiry needs event instrumentation |
| GSC | 1 | `collectGsc()` exists |

19 of 20 collectible today. #12 (search-to-inquiry) needs PostHog event instrumentation for search events — tracked as a dependency on the PostHog Messaging Instrumentation feature.
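The collection layer this implies can be sketched as a dispatch on source. The collector names `collectSql()`, `collectStripe()`, and `collectGsc()` appear above, but everything else here (the `Source` union, the spec/reading shapes, the stub bodies) is assumed for illustration:

```typescript
// Sketch of per-source collector dispatch. Collector names come from the
// status column above; signatures, types, and stub bodies are assumptions.
type Source = "sql" | "stripe" | "posthog" | "gsc";

type MetricSpec = { id: number; name: string; source: Source; query: string };
type Reading = { id: number; value: number | null; note?: string };

// Stubs standing in for the real collectSql()/collectStripe()/collectGsc()
// helpers; each would run the spec's query against its backing source.
const collectors: Record<Source, (spec: MetricSpec) => Reading> = {
  sql: (s) => ({ id: s.id, value: 0 }),
  stripe: (s) => ({ id: s.id, value: 0 }),
  gsc: (s) => ({ id: s.id, value: 0 }),
  // Search-to-inquiry has no search events yet, so PostHog yields null.
  posthog: (s) => ({ id: s.id, value: null, note: "needs instrumentation" }),
};

function collect(spec: MetricSpec): Reading {
  return collectors[spec.source](spec);
}

const searchToInquiry: MetricSpec = {
  id: 12,
  name: "Search-to-inquiry conversion",
  source: "posthog",
  query: "-- HogQL, once search events exist",
};
console.log(collect(searchToInquiry).note); // "needs instrumentation"
```

Keeping the dispatch table keyed by `Source` means adding the 20th metric later is a data change (a new `MetricSpec`), not a code change, once the PostHog events ship.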


Reviewed by strategic subagent + Codex adversarial. Both reviews converged on: add fraud metrics, remove redundancies, add search-to-inquiry conversion. 9 changes applied from draft.
