Metrics That Matter at the Early Stage
Pageviews, total signups, social media followers. These numbers feel good. They also tell you almost nothing about whether your product is working.
I track five things for AssessAI. Everything else is noise.
1. activation rate
What percentage of people who sign up actually use the product? For AssessAI, activation means: created at least one assessment and sent it to at least one candidate.
A signup who never creates an assessment isn't a user. They're a row in the database. If my activation rate is 20%, I don't have a growth problem — I have an onboarding problem. The funnel is leaking before people even experience the product.
This number tells me where to focus. Low activation? Fix onboarding. High activation? Focus on retention.
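Here's the calculation as a TypeScript sketch. The Signup shape is hypothetical, a stand-in for flags you'd derive from your assessment and invite tables:

```ts
// Hypothetical shape: in practice, derive these flags from your
// assessments and invites tables. Not AssessAI's actual schema.
type Signup = {
  userId: string;
  createdAssessment: boolean; // created at least one assessment
  invitedCandidate: boolean;  // sent it to at least one candidate
};

// Activation = created an assessment AND sent it to a candidate.
function activationRate(signups: Signup[]): number {
  if (signups.length === 0) return 0;
  const activated = signups.filter(
    (s) => s.createdAssessment && s.invitedCandidate
  ).length;
  return activated / signups.length;
}
```

The definition matters more than the code: pick the narrowest action that proves someone actually experienced the product, and count only that.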
2. completion rate
What percentage of candidates who start an assessment actually finish it? If they're dropping off mid-assessment, something is broken — the UX is confusing, the questions are too hard, the time limit is too aggressive, or the interface is frustrating.
I track this per assessment and per question. If question 7 has a 40% drop-off rate, that specific question needs work. Maybe it's poorly worded. Maybe the expected response format is unclear. The granular data tells me exactly where.
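The per-question version is a small computation over attempt data. A sketch, assuming a hypothetical Attempt record that stores how far each candidate got:

```ts
// Hypothetical record: one row per candidate attempt, tracking the
// furthest question reached (1-indexed) and whether they finished.
type Attempt = {
  candidateId: string;
  lastQuestionReached: number;
  finished: boolean;
};

// Of the candidates who reached question q, what fraction stopped there?
function dropOffByQuestion(attempts: Attempt[], questionCount: number): number[] {
  const rates: number[] = [];
  for (let q = 1; q <= questionCount; q++) {
    const reached = attempts.filter((a) => a.lastQuestionReached >= q);
    const stalled = reached.filter(
      (a) => a.lastQuestionReached === q && !a.finished
    );
    rates.push(reached.length > 0 ? stalled.length / reached.length : 0);
  }
  return rates;
}
```

A spike in the output array points straight at the question that needs rewording.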
3. time to value
How long does it take from signup to the moment a hiring manager sees a scored assessment? If it's 45 minutes of setup before they get any value, most people will leave before experiencing the product.
I measure this in minutes. The goal is under 15. Create an assessment (2 min), invite a candidate (1 min), candidate completes it (their time), scoring happens (30 seconds), results appear. The setup overhead needs to be near zero.
Every feature I build gets evaluated against this: does it increase time to value or decrease it? Features that add setup steps need to deliver proportionally more value.
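Measuring time to value is just timestamp arithmetic. A sketch with a hypothetical OrgJourney record; I'd report the median rather than the mean, so one slow outlier doesn't hide that most orgs get value quickly:

```ts
// Hypothetical record, derived from event logs: when an org signed up
// and when it first saw a scored assessment (null if it never did).
type OrgJourney = {
  signedUpAt: Date;
  firstScoredResultAt: Date | null;
};

// Minutes from signup to first scored result, for orgs that got there.
function timeToValueMinutes(journeys: OrgJourney[]): number[] {
  return journeys
    .filter((j) => j.firstScoredResultAt !== null)
    .map((j) => (j.firstScoredResultAt!.getTime() - j.signedUpAt.getTime()) / 60_000);
}

// Median, not mean: one org that took a week shouldn't mask the typical case.
function median(values: number[]): number {
  if (values.length === 0) return 0;
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```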
4. repeat usage
Does a company that runs one assessment come back to run more? This is the retention signal. One assessment could be a trial. Three assessments means the product is working for them.
I track assessments per organization over time. If an org runs one assessment and never returns, I want to know why. Did the results disappoint them? Did they switch to a competitor? Did the hiring need end? Each reason requires a different response.
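The bucketing itself is straightforward. A sketch, assuming a hypothetical AssessmentRun row for each assessment an org runs:

```ts
// Hypothetical record: one row per assessment run, tagged by organization.
type AssessmentRun = { orgId: string; ranAt: Date };

// Bucket orgs by usage: one run could be a trial; three or more
// means the product is working for them.
function repeatUsageBuckets(runs: AssessmentRun[]) {
  const counts = new Map<string, number>();
  for (const run of runs) {
    counts.set(run.orgId, (counts.get(run.orgId) ?? 0) + 1);
  }
  let trial = 0;
  let evaluating = 0;
  let retained = 0;
  for (const n of counts.values()) {
    if (n === 1) trial++;
    else if (n === 2) evaluating++;
    else retained++;
  }
  return { trial, evaluating, retained };
}
```

The trial bucket is the one to interrogate: every org stuck at one assessment is a conversation I should be having.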
5. qualitative feedback
This isn't a metric. It's better than a metric.
I talk to every early user. Short calls — 15 minutes. Three questions: What made you sign up? What almost made you leave? Would you recommend this to another hiring manager?
The answers don't show up in any dashboard. They're the most valuable data I collect. One hiring manager told me: "The AI collaboration scoring is interesting but I can't explain the rubric to my team." That's a product problem no amount of quantitative data would surface.
what I don't track
Total signups. Without activation rate for context, this number is meaningless. 10,000 signups with 2% activation is worse than 200 signups with 50% activation: fifty times the signups for only twice the activated users, and a product that fails 98% of the people it attracts.
Pageviews. Unless I'm running a media company, pageviews don't correlate with business outcomes.
Time on site. More time on site might mean engagement. It might also mean confusion. Without context, it's uninterpretable.
Social media metrics. Likes, shares, and followers are social proof. They're not a product-market fit signal. I've seen products with 50k Twitter followers and zero revenue.
the meta-principle
Early stage, every metric should answer one of two questions:
- Is the product working? (Activation, completion, repeat usage)
- Where is it broken? (Drop-off points, time to value, qualitative feedback)
If a metric doesn't help answer either question, it's a vanity metric. Looks good on a pitch deck. Doesn't help you make better product decisions.
Track fewer things. Understand them deeply. Act on what they tell you. That's it.