My Code Review Checklist
At Blinq I reviewed code from a team of engineers across three time zones. Most code review advice is vague — "look for bugs" or "check the style." Here's the actual checklist I used, in priority order.
1. does it do what the ticket says?
Before reading a single line of code, I read the PR description and the linked ticket. Then I check: does this diff actually solve the stated problem? Not "does the code work" — does it solve the right problem?
I've approved technically flawless PRs that implemented the wrong thing. The code was clean, tested, and well-structured. It just didn't address the user's actual need. Now I check this first.
2. correctness at the boundaries
Most bugs live at boundaries: input validation, API responses, database queries, auth checks. I read these sections carefully and ask:
- What happens with empty input?
- What happens with null or undefined?
- What if the database query returns zero rows?
- What if the API returns a 500?
- What if the user isn't authenticated?
If the code doesn't handle these cases explicitly, I flag it. "Happy path works" is necessary but insufficient.
```typescript
// This handles only the happy path
const user = await getUser(userId);
return user.name;
```

```typescript
// This handles the boundary
const user = await getUser(userId);
if (!user) {
  throw new NotFoundError(`User ${userId} not found`);
}
return user.name;
```

Simple example, but this pattern — unchecked return values from async operations — accounts for maybe 30% of the bugs I catch in review.
3. naming tells you everything
If I can't understand what a function does from its name, it's either named wrong or doing too much. Both are problems.
Bad names I see constantly:
- handleData — what data? Handle how?
- processItem — what processing?
- utils.ts — the junk drawer of codebases
- temp, data2, newArr — local variables that give zero context
Good naming is documentation. calculateAssessmentScore, validateCandidateResponse, formatTimestampForDisplay. You read the function name and you know what it does, what it takes, and what it returns. The implementation should confirm that expectation, not surprise you.
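As a quick sketch of the difference (the signature and the percentage formula here are invented for illustration, not a real scoring rule):

```typescript
// Before: the name reveals nothing about inputs, outputs, or intent.
// function handleData(data: any): any { ... }

// After: the name and signature read like documentation.
// (Formula is illustrative only.)
function calculateAssessmentScore(
  correctAnswers: number,
  totalQuestions: number
): number {
  if (totalQuestions === 0) return 0; // empty-assessment boundary
  return Math.round((correctAnswers / totalQuestions) * 100);
}
```

You can predict the implementation from the signature alone, which is the expectation the body should confirm.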
4. the test gap
I check what's tested and what isn't. Specifically:
- Are the boundary conditions from step 2 tested?
- If new behavior was added, are there tests for it?
- If behavior changed, were existing tests updated?
- Are the tests actually asserting something meaningful, or just checking that the function doesn't throw?
I watch out for tests that pass by accident:
```typescript
// This test always passes — it asserts nothing
it("should process the data", async () => {
  const result = await processData(input);
  expect(result).toBeDefined(); // almost everything is "defined"
});
```

A test that can't fail isn't a test. It's a comment.
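For contrast, here's the shape of a test that can fail. The function and values are invented, and the two-line `it`/`expect` stand-ins are only there so the snippet runs outside a test framework; in a real project those come from Jest:

```typescript
// Minimal stand-ins for Jest's it/expect so this runs standalone.
const it = (name: string, fn: () => void) => fn();
const expect = (actual: unknown) => ({
  toBe(expected: unknown) {
    if (actual !== expected) {
      throw new Error(`expected ${String(expected)}, got ${String(actual)}`);
    }
  },
});

// Hypothetical function under test.
function calculateTotal(items: { price: number; qty: number }[]): number {
  return items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

// This test can fail: it pins the exact value, not just "defined".
// Dropping qty from the implementation would break it immediately.
it("sums price * qty across items", () => {
  expect(calculateTotal([{ price: 5, qty: 2 }, { price: 3, qty: 1 }])).toBe(13);
});
```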
5. complexity budget
Every function gets a complexity budget. If a function has more than one nested conditional, a loop inside a conditional, or more than 30 lines, I ask: can this be split?
Not because short functions are inherently better, but because long, nested functions are where bugs hide. They have more possible execution paths than a human can track during review. If I can't hold the entire function in my head, I can't review it confidently.
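One way that split tends to look (all names here are invented): pull each responsibility into a function small enough to read at a glance, then compose them.

```typescript
// Before: imagine one long function that validated, scored, and aggregated
// submissions in nested loops. After: three functions with one job each.
// All names and rules are illustrative.
type Submission = { answers: number[] };

function isComplete(s: Submission): boolean {
  return s.answers.length > 0 && s.answers.every((a) => a >= 0);
}

function scoreSubmission(s: Submission): number {
  return s.answers.reduce((sum, a) => sum + a, 0);
}

function scoreAllSubmissions(subs: Submission[]): number[] {
  return subs.filter(isComplete).map(scoreSubmission);
}
```

Each piece now fits in your head on its own, and the composition in `scoreAllSubmissions` is a single readable line.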
6. the "what if this is wrong?" test
For critical code paths — auth, payments, scoring, data deletion — I mentally invert the logic and ask: what's the worst thing that happens if this code has a bug?
If the answer is "wrong color on a button," I move on. If the answer is "unauthorized users see other people's data," I read every line twice and check the tests three times.
Risk-weighted reviewing. Spend your attention budget where the consequences are highest.
7. does it match the codebase?
Consistency matters more than personal preference. If the codebase uses camelCase, don't introduce snake_case. If there's a useQuery wrapper, don't raw-dog fetch in a new component. If there's an established error handling pattern, follow it.
Inconsistency in a codebase adds cognitive load for everyone who reads it later. The best code in a PR is code that looks like it was written by the same person who wrote everything else.
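A sketch of the idea — `apiGet` and its signature are hypothetical, standing in for whatever wrapper a codebase already has: new code routes through the shared helper so error handling stays in one place instead of being reinvented per component.

```typescript
// Hypothetical shared helper: centralizes status checks so components
// never call fetch directly. The Fetcher type lets us inject a fake one.
type Fetcher = (path: string) => Promise<{ status: number; body: string }>;

async function apiGet(path: string, fetcher: Fetcher): Promise<string> {
  const res = await fetcher(path);
  if (res.status >= 400) {
    throw new Error(`GET ${path} failed with status ${res.status}`);
  }
  return res.body;
}
```

A PR that bypasses a helper like this isn't wrong in isolation; it's wrong because every reader now has to check two error-handling paths instead of one.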
what I don't do
I don't nitpick formatting. That's what Prettier and ESLint are for. If the linter passes, the style is fine. I don't argue about tabs vs. spaces or trailing commas. Automate that and move on.
I don't demand explanatory comments on clear code. If the function name and type signature explain the behavior, a comment just adds noise to maintain.
I don't block PRs on preferences. If the code is correct, tested, handles edge cases, and matches the codebase patterns, it ships. Even if I would have written it differently.