| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 86 | Inquiries per active listing (supply-demand ratio) | SQL | 5 | 5 | 5 | 5.0 | ~3.4/mo |
| 87 | Top 10 cities liquidity (% with inquiry in 14d) | SQL | 4 | 4 | 5 | 4.3 | — |
| 88 | Thin markets count (cities with searchers but <5 listings) | SQL | 3 | 4 | 5 | 3.7 | — |
| 89 | Searcher-to-lister ratio | SQL | 3 | 5 | 4 | 3.7 | 2,289 searchers / 1,454 listers |
| 90 | Cross-market search rate (searcher looks in 2+ cities) | SQL | 2 | 3 | 5 | 2.9 | — |
| 91 | Avg days listing stays active | SQL | 2 | 4 | 3 | 2.7 | — |
| 92 | Seasonal inquiry volume (MoM change) | SQL | 3 | 4 | 3 | 3.3 | — |
| 93 | Weekend vs weekday inquiry patterns | SQL | 1 | 4 | 3 | 2.0 | — |
| 94 | Price-to-inquiry correlation by city | SQL | 3 | 3 | 4 | 3.2 | — |
| 95 | Avg listings viewed before first inquiry | PostHog | 3 | 2 | 4 | 2.9 | — |
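Most of the metrics above are sourced from SQL, and the top-scoring one (inquiries per active listing) reduces to a single query. A minimal sketch against an in-memory SQLite database — the `listings`/`inquiries` schema and the `status` values are assumptions for illustration, not the production schema, and the real query would add a 30-day window filter:

```python
import sqlite3

# Hypothetical schema — the real table and column names will differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE listings  (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE inquiries (id INTEGER PRIMARY KEY, listing_id INTEGER);
    INSERT INTO listings  VALUES (1, 'active'), (2, 'active'), (3, 'expired');
    INSERT INTO inquiries VALUES (1, 1), (2, 1), (3, 2);
""")

# Supply-demand ratio: inquiries received by active listings,
# divided by the number of active listings.
row = conn.execute("""
    SELECT CAST(COUNT(i.id) AS REAL)
           / (SELECT COUNT(*) FROM listings WHERE status = 'active')
    FROM inquiries i
    JOIN listings l ON l.id = i.listing_id
    WHERE l.status = 'active'
""").fetchone()
print(row[0])  # 3 inquiries across 2 active listings -> 1.5
```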
### Product & Operations (5)

| # | Metric | Source | D | C | U | Score | Current value |
|---|--------|--------|---|---|---|-------|---------------|
| 96 | Verification completion rate (started → approved) | SQL | 3 | 4 | 4 | 3.5 | 839 requests/30d |
| 97 | CI test pass rate | GitHub | 2 | 5 | 2 | 2.7 | 0/10 |
| 98 | Rollbar active error count | Rollbar | 2 | 5 | 2 | 2.7 | — |
| 99 | Email delivery rate | Postmark | 2 | 4 | 3 | 2.7 | — |
| 100 | Page load errors (client-side JS errors) | PostHog | 2 | 3 | 3 | 2.4 | — |
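Verification completion rate is a plain funnel ratio — approved over started within the window. A tiny sketch with made-up status values (the real pipeline's states and the 839-requests/30d volume are not reflected here):

```python
from collections import Counter

# Illustrative final statuses for verification requests started in a window.
statuses = ["approved", "approved", "rejected", "approved", "abandoned"]
counts = Counter(statuses)

started = sum(counts.values())               # everything that entered the funnel
completion_rate = counts["approved"] / started
print(f"{completion_rate:.0%}")              # 3 approved / 5 started -> 60%
```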
## Ranked Top 30

Sorted by composite score, filtered for uniqueness (removing near-duplicates):

| Rank | # | Metric | Score | Category | Why it matters |
|------|---|--------|-------|----------|----------------|
| 1 | 1 | New MRR added/month | 5.0 | Revenue | Half of the MRR waterfall — is acquisition working? |
| 2 | 2 | Churned MRR lost/month | 5.0 | Revenue | The other half — how much are we leaking? |
| 3 | 86 | Inquiries per active listing | 5.0 | Marketplace | The supply-demand ratio — marketplace efficiency |
| 4 | 31 | Thread completion rate | 4.8 | Demand | Do conversations turn into real housing discussions? |
| 5 | 16 | No-inquiry churn rate | 4.7 | Churn | The biggest churn driver, decomposed |
| 6 | 56 | Time-to-first-inquiry (median days) | 4.7 | Supply | How fast do new listers see value? |
| 7 | 30 | % churned who never got inquiry | 4.5 | Churn | Validates the inquiry→retention link at churn time |
| 8 | 48 | Dispute win rate | 4.5 | Fraud | Is the evidence system working? |
| 9 | 76 | Sublets page impressions | 4.5 | SEO | Core product pages, separate from blog |
| 10 | 81 | Organic signup rate | 4.4 | SEO | Does organic traffic convert to accounts? |
| 11 | 3 | Revenue per listing | 4.3 | Revenue | Volume growth vs value growth |
| 12 | 10 | Failed payment amount/month | 4.3 | Revenue | Revenue at risk from payment failures |
| 13 | 18 | Churn by billing cycle | 4.3 | Churn | Validates the annual-tilt strategy |
| 14 | 57 | Listing completeness score | 4.3 | Supply | Quality predicts inquiry rate |
| 15 | 87 | Top 10 cities liquidity | 4.3 | Marketplace | Geographic health check |
| 16 | 11 | Failed payment recovery rate | 4.2 | Revenue | Can we save failed charges? |
| 17 | 4 | Plan tier distribution | 4.1 | Revenue | Portfolio mix monitoring |
| 18 | 5 | Net new subscriptions/week | 4.1 | Revenue | Acquisition velocity |
| 19 | 34 | Unique searchers/week | 4.1 | Demand | Demand-side volume |
| 20 | 58 | Paid listings 0 inquiries 30d (%) | 4.1 | Supply | Inverse of liquidity, longer window |
| 21 | 7 | Premium upgrade rate | 4.0 | Revenue | Is Premium delivering enough value to upgrade? |
| 22 | 19 | 30-day cohort retention | 4.0 | Churn | Are new users retaining better over time? |
| 23 | 35 | Searcher return rate | 4.0 | Demand | Platform stickiness for the demand side |
| 24 | 41 | Signup → first message (days) | 4.0 | Demand | Activation speed |
| 25 | 46 | Median time-to-block | 4.0 | Fraud | Fraud detection speed |
| 26 | 47 | Conversations with blocked accounts | 4.0 | Fraud | Harm exposure before detection |
| 27 | 59 | Premium vs standard inquiry rate | 4.0 | Supply | Does Premium deliver value? |
| 28 | 77 | Homepage LCP | 3.9 | SEO | Page speed → rankings |
| 29 | 71 | Organic sessions/week | 3.8 | SEO | Traffic volume trend |
| 30 | 83 | Landing page impressions | 3.7 | SEO | Non-blog content pages (TN, FAQ, etc.) |
## Proposed 20 Sub-Metrics

From the top 30, removing 10 that are either too similar to a hero metric or not actionable enough this month.

Removed from top 30:

- #20 (0-inquiry 30d %) — redundant with the listing-liquidity hero
- #28 (Homepage LCP) — already collected as a feature-level metric
- #29 (Organic sessions) — already collected as a feature-level metric
- #30 (Landing page impressions) — too granular for the sub-metric tier
- #27 (Premium vs standard inquiry rate) — redundant with #14 (listing completeness predicts this)
- #22 (30-day cohort retention) — requires new cohort infrastructure, deferred
- #26 (Conversations with blocked accounts) — redundant with hero #7 (searchers affected)
- #25 (Median time-to-block) — important but low action frequency
- #21 (Premium upgrade rate) — wait for the Pro Plan to ship
- #17 (Plan tier distribution) — interesting but not decision-driving weekly
The 20:

| # | Metric | Question | Source | Why included |
|---|--------|----------|--------|--------------|
| 1 | New MRR/month | Q1 | Stripe | Acquisition health — paired with churned MRR, tells the growth story |
| 2 | Churned MRR/month | Q1 | Stripe | Retention cost in dollars — more visceral than % |
| 3 | Revenue per listing | Q1 | Stripe÷SQL | Efficiency — growing because of more listings or better monetization? |
| 4 | Failed payment amount/month | Q1 | Stripe | Revenue at risk, currently invisible |
| 5 | Net new subscriptions/week | Q1 | Stripe | Acquisition velocity |
| 6 | No-inquiry churn rate | Q2 | SQL | Decomposed churn — the inquiry-starved segment |
| 7 | % churned who never got inquiry | Q2 | SQL | Validates that the inquiry→retention link holds at churn time |
| 8 | Churn by billing cycle | Q2 | Stripe | Validates the annual tilt — monthly at 87.7% vs annual at 49.3% |
| 9 | Thread completion rate | Q3 | SQL | Quality of conversations — did both sides actually talk? |
| 10 | Unique searchers/week | Q3 | SQL | Demand-side volume — are searchers growing? |
| 11 | Searcher return rate (30d) | Q3 | SQL | Platform stickiness — do searchers come back? |
| 12 | Signup → first message (days) | Q3 | SQL | Activation speed — how fast do new users find value? |
| 13 | Dispute win rate | Q4 | Stripe | Evidence system effectiveness |
| 14 | Failed payment recovery rate | Q1 | Stripe | Can dunning save revenue? |
| 15 | Inquiries per active listing | Q5 | SQL | Supply-demand ratio — marketplace balance |
| 16 | Time-to-first-inquiry (median days) | Q5 | SQL | New lister experience — how fast do they see value? |
| 17 | Listing completeness score | Q5 | SQL | Quality floor — title + desc + photos + address |
| 18 | Top 10 cities liquidity | Q5 | SQL | Geographic health — the aggregate masks city-level problems |
| 19 | Sublets page impressions | Q6 | GSC | Core product pages, separate from blog |
| 20 | Organic signup rate | Q6 | PostHog | Conversion quality — do organic visitors become users? |
Distribution by question:

- Q1 (Revenue): 6 sub-metrics
- Q2 (Churn): 3 sub-metrics
- Q3 (Demand): 4 sub-metrics
- Q4 (Fraud): 1 sub-metric
- Q5 (Supply): 4 sub-metrics
- Q6 (Organic): 2 sub-metrics

This gives Q1 (revenue) the most sub-metrics because it's the weakest-covered question today, and Q4 (fraud) the fewest because fraud is currently under control (37 affected searchers/month).
## What These 20 Add to the 8 Heroes

| Hero metric | What it tells you | What the sub-metrics add |
|-------------|-------------------|--------------------------|
| MRR ($36K) | Are we making money? | WHERE the money comes from (new vs churn vs failed), HOW efficiently (per listing) |
| Churn (25%) | Are we losing people? | WHO is churning (no-inquiry vs got-inquiry), WHY (billing cycle, inquiry status) |
| Reply rate (51%) | Is the marketplace delivering? | HOW deep (thread completion), HOW fast (signup→message), WHO returns |
| Fraud searchers (37) | Are people being harmed? | HOW effectively we fight back (dispute win rate) |
| Liquidity (55.7%) | Is supply useful? | HOW fast (time-to-first-inquiry), HOW balanced (per-listing ratio, geographic), HOW high-quality (completeness score) |
| Organic funnel | Is growth sustainable? | WHERE it comes from (sublets vs blog), WHAT quality (signup rate) |
Generated 2026-04-11. Scores are based on decision utility for a 3-person marketplace team at $36K MRR.

Per reviewer feedback, the 20 metrics are displayed as a compact table (not cards):

- One row per metric: name | value | trend arrow (↑↓→) | status dot (green/yellow/red/gray)
- Grouped by question, with Q-label headers
- Thumbs up/down on each row (lightweight — signals which metrics are useful)
- No sparklines below the fold — sparklines live in the hero tiles only
- Scannable in 10 seconds — the goal is "anything weird?", not deep analysis

Thumbs up/down votes get reviewed monthly: metrics with net-negative votes are replaced, which creates a natural pruning mechanism.
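The row spec above can be pinned down with a small formatter. This is only a sketch — the ±2% flat band for the trend arrow, the 80%-of-target yellow threshold, and the gray value for missing data are invented defaults, not agreed thresholds:

```python
from typing import Optional

def trend_arrow(current: float, previous: float, tol: float = 0.02) -> str:
    """Map week-over-week change to ↑ / ↓ / →, treating small moves as flat."""
    if previous == 0:
        return "→"
    change = (current - previous) / previous
    return "↑" if change > tol else "↓" if change < -tol else "→"

def status_dot(current: Optional[float], target: float) -> str:
    """Green at/above target, yellow within 20% below, red below that, gray if no data."""
    if current is None:
        return "gray"
    if current >= target:
        return "green"
    return "yellow" if current >= 0.8 * target else "red"

def render_row(name: str, current: float, previous: float, target: float) -> str:
    """One scannable dashboard row: name | value | trend arrow | status dot."""
    return (f"{name:<32} {current:>7.1f}  "
            f"{trend_arrow(current, previous)}  {status_dot(current, target)}")

print(render_row("Inquiries per active listing", 3.4, 3.1, 3.0))
```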
## Collectibility Assessment

| Source | Metrics | Status |
|--------|---------|--------|
| SQL (production RO) | 13 | ✅ collectSql() exists |
| Stripe (paginated) | 5 | ✅ collectStripe() with sgetAll() |
| PostHog (HogQL) | 1 | ⚠️ Search-to-inquiry needs event instrumentation |
| GSC | 1 | ✅ collectGsc() exists |

19 of 20 are collectible today. #12 (search-to-inquiry) needs PostHog event instrumentation for search events — tracked as a dependency on the PostHog Messaging Instrumentation feature.
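`collectStripe()` itself isn't reproduced here, but the pagination it needs follows Stripe's standard cursor pattern: request a page, resume from the last item's id (`starting_after`), stop when `has_more` is false. A generic sketch with a toy in-memory endpoint standing in for the real client (`fake_fetch` is hypothetical):

```python
from typing import Callable, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Drain a cursor-paginated list endpoint (Stripe-style: each page
    reports has_more, and item ids double as cursors)."""
    cursor = None
    while True:
        page = fetch_page(cursor)           # e.g. wraps a list(...) API call
        yield from page["data"]
        if not page["has_more"] or not page["data"]:
            return
        cursor = page["data"][-1]["id"]     # resume after the last item seen

# Toy endpoint: five items served two at a time.
ITEMS = [{"id": f"sub_{i}"} for i in range(5)]

def fake_fetch(cursor: Optional[str]) -> dict:
    start = 0 if cursor is None else ITEMS.index({"id": cursor}) + 1
    return {"data": ITEMS[start:start + 2], "has_more": start + 2 < len(ITEMS)}

print([item["id"] for item in paginate(fake_fetch)])  # sub_0 … sub_4, in order
```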
Reviewed by the strategic subagent and an adversarial Codex pass. Both reviews converged on the same changes: add fraud metrics, remove redundancies, add search-to-inquiry conversion. 9 changes were applied to the draft.