Product Thinking vs. Coding
The best engineer I ever worked with rarely wrote code first.
At Avoca, we had a feature request: let restaurants set custom hold music for callers. The obvious implementation was a file upload system, a storage layer, and audio streaming logic. Three of us started sketching the architecture.
She asked one question: "How many restaurants actually have custom hold music files ready to upload?"
The answer was almost none. What they wanted was to pick from a few options, not upload their own. We shipped a dropdown with eight pre-loaded tracks in two days. The original architecture would have taken two weeks and served the same user need.
That's product thinking. It's the gap between "can you build it" and "should you build it this way."
the skill nobody tests for
I've sat on both sides of hiring tables — as a candidate doing LeetCode problems and as an interviewer watching candidates do LeetCode problems. The correlation between "can solve dynamic programming challenges" and "ships features users love" is close to zero.
What actually predicts great engineering work:
Problem decomposition. Given a vague requirement like "make the dashboard faster," can you identify the three specific bottlenecks and prioritize which one to fix first? Or do you immediately start optimizing random queries?
Tradeoff reasoning. Every decision has costs. Caching improves speed but adds staleness risk. Microservices improve deployment independence but add network complexity. Can you articulate the tradeoffs before committing to a direction?
User model accuracy. Do you understand who's using the feature and what they're actually trying to accomplish? The hold music story is a small example. The pattern plays out constantly at every scale.
Scope management. The ability to ship the 80% version first and iterate. Not because you're cutting corners, but because you understand that shipping fast, getting feedback, and iterating produces better products than building the "complete" version in isolation.
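To make the caching tradeoff above concrete: a time-to-live cache is fast precisely because it accepts staleness, and the TTL value is the staleness budget you're explicitly signing up for. A minimal sketch (hypothetical helper, not from any real product):

```typescript
// Minimal in-memory TTL cache. Reads are instant, but a value can be
// up to `ttlMs` out of date -- that's the tradeoff, stated in one number.
type Entry<V> = { value: V; expiresAt: number };

class TtlCache<K, V> {
  private store = new Map<K, Entry<V>>();
  constructor(private ttlMs: number) {}

  get(key: K): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: caller must re-fetch fresh data
      return undefined;
    }
    return entry.value; // fast, but possibly up to ttlMs stale
  }

  set(key: K, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

The point isn't the code, which is trivial. It's that choosing `ttlMs` is a product decision: how stale can this data be before a user notices or gets hurt? That question can't be answered from inside the editor.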
why AI makes this more important, not less
In 2026, AI writes production code. It writes tests. It writes database migrations. It can even do basic architecture if you give it enough context.
What AI can't do: talk to users. Understand why a feature matters. Decide which of five possible approaches to take based on business context. Recognize that the feature as spec'd doesn't solve the user's actual problem.
The mechanical skill of writing code is depreciating. The judgment skill of deciding what to build is appreciating. The engineers who combine both — who can use AI tools effectively and think clearly about products — are the ones who'll thrive.
This is literally why I built AssessAI. The industry tests for the depreciating skill and ignores the appreciating one.
how to build the muscle
Product thinking isn't innate talent. It's a practice.
Talk to users. Actually talk to them. Not surveys. Not analytics dashboards. Conversations. "What were you trying to do? What happened? What did you expect to happen?" Five user conversations will teach you more about your product than five weeks of analytics.
Ask "why" before "how." When you get a feature request, ask why the user needs it. Often the stated request ("add a CSV export") isn't the real need ("I need to share this data with my finance team"). The real need might have a simpler solution ("add a shareable link with read-only access").
Ship the smallest version first. Not a prototype. Not an "MVP" in scare quotes. The smallest thing that delivers value. You'll learn more from real usage in one week than from planning documents in one month.
Study products you use. When an app frustrates you, diagnose why. When an app delights you, figure out how. The patterns transfer. Good product thinking in a note-taking app teaches you about good product thinking in general.
The best engineers I've worked with weren't the fastest coders. They were the ones who built the right thing on the first try because they understood the problem before they opened their editor.