
MCP Servers Explained


Model Context Protocol (MCP) is the thing that turned Claude Code from a smart autocomplete into an actual engineering platform. Instead of you copying data between tools manually, MCP lets the AI connect directly to services and take actions in them.

I run 9 MCP servers daily. Here's what they do and why the architecture matters.

what MCP actually is

MCP is a standard protocol that lets AI models connect to external services. Instead of the AI only reading and writing files, it can query a database, create a task in Asana, deploy to Vercel, or search the web — all through a consistent interface.

Think of it like USB for AI tools. Each MCP server is a driver that exposes a service's capabilities in a way the model can use. The model doesn't need to know the Supabase API or the Asana API. It just knows the MCP tools: execute_sql, create_task, deploy.
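Concretely, every tool invocation has the same shape on the wire. MCP is built on JSON-RPC 2.0, and a tool call is a `tools/call` request carrying a tool name and arguments. A minimal sketch (the `execute_sql` arguments are illustrative, not the server's exact schema):

```python
import json

# Sketch of what an MCP tool invocation looks like on the wire.
# The host serializes a JSON-RPC 2.0 request like this and sends it
# to the server; only the tool name and argument schema vary per server.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

msg = make_tool_call(1, "execute_sql", {"query": "select count(*) from users"})
print(msg)
```

This uniformity is the whole point: the model learns one calling convention and every service plugs into it.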

my 9 servers

Supabase — database operations, auth management, edge functions. I describe what I want in the schema; Claude Code writes the migration and applies it. execute_sql for queries, apply_migration for schema changes, get_logs for debugging.

Asana — task management. Every feature, bug fix, and improvement is an Asana task. Claude Code creates tasks when starting work, updates them with progress, and marks them complete when done. This happens through auto-hooks — I don't manually manage tasks.

Notion — knowledge base. Specs, PRDs, research findings, meeting notes. When I need to reference a spec during implementation, Claude Code searches Notion and pulls the relevant content into context.

Chrome — browser automation. Navigate pages, fill forms, read content, take screenshots, execute JavaScript. This is how I test the frontend without switching to a browser. The AI navigates the candidate assessment flow, fills in responses, and verifies the UI behaves correctly.

Vercel — deployments. Check build logs, view deployment status, get runtime errors. When a deployment fails, Claude Code reads the build logs through MCP and diagnoses the issue without me logging into the Vercel dashboard.

Exa — web search. When I need to research a library, find a code pattern, or check how others solved a problem, Exa provides semantic search results with full content. Better than Google for technical research because it returns actual page content, not just links.

NotebookLM — research automation. Create notebooks from multiple sources, generate audio summaries, query synthesized information. I used this heavily during market research for AssessAI — feeding in competitor docs, research papers, and industry reports, then querying the notebook for specific insights.

Railway and Cloudflare — backend deployments and edge compute. Not as heavily used yet, but connected and ready.

the practical workflow

Here's what happens when I run /product-iterate to work on a feature:

  1. Claude Code reads the task from Asana (MCP: Asana)
  2. It checks the spec in Notion for context (MCP: Notion)
  3. It searches for existing patterns in the codebase and online (MCP: Exa)
  4. It writes the code and tests
  5. It runs the migrations if needed (MCP: Supabase)
  6. It opens Chrome and tests the feature visually (MCP: Chrome)
  7. It updates the Asana task with progress (MCP: Asana)
  8. It deploys to a preview environment (MCP: Vercel)
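The steps above can be sketched as one orchestration loop. Everything here is illustrative: `call` is a hypothetical dispatcher standing in for Claude Code's MCP invocations, and the tool names are shorthand for the real servers' tools, not their exact APIs:

```python
# Illustrative sketch of the /product-iterate flow. `call(server, tool, args)`
# is a stand-in for an MCP tool invocation; tool names are simplified.
def product_iterate(task_id: str, call) -> None:
    task = call("asana", "get_task", {"task_id": task_id})            # 1. read the task
    call("notion", "search", {"query": task["name"]})                 # 2. pull the spec
    call("exa", "search", {"query": f"patterns for {task['name']}"})  # 3. research
    # 4. write the code and tests (the model edits files directly, no MCP needed)
    call("supabase", "apply_migration", {"name": "add_feature"})      # 5. migrate
    call("chrome", "navigate", {"url": "http://localhost:3000"})      # 6. visual test
    call("asana", "update_task", {"task_id": task_id})                # 7. log progress
    call("vercel", "deploy", {"target": "preview"})                   # 8. preview deploy

# Demo with a recording stub in place of real MCP servers:
calls = []
def record(server, tool, args):
    calls.append((server, tool))
    return {"name": "demo-task"}

product_iterate("123", record)
print(len(calls))  # 7 MCP invocations (step 4 touches no server)
```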

Eight steps. Six different services. All orchestrated from a single terminal session. I don't open a browser, log into a dashboard, or copy-paste between tools.

setting up an MCP server

Each server is configured in Claude Code's settings with a transport method (stdio or SSE) and auth credentials. Example for Supabase:

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server"],
      "env": {
        "SUPABASE_ACCESS_TOKEN": "${SUPABASE_ACCESS_TOKEN}"
      }
    }
  }
}
```

The server runs as a subprocess. Claude Code communicates with it over stdin/stdout. The model sees the available tools (like execute_sql, apply_migration) and can call them with the right parameters.
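The discovery step works the same way as tool calls: the client sends a `tools/list` request over the server's stdin and reads the tool catalog back from stdout. A sketch with an in-process stand-in for the subprocess (the two tools listed are examples; a real server advertises its own catalog, and real framing is negotiated at initialization):

```python
import json

# Stand-in for an MCP server subprocess: receives one JSON-RPC line on
# "stdin", returns one JSON-RPC line on "stdout".
def fake_server(line: str) -> str:
    req = json.loads(line)
    if req["method"] == "tools/list":
        result = {"tools": [
            {"name": "execute_sql", "description": "Run a SQL query"},
            {"name": "apply_migration", "description": "Apply a schema migration"},
        ]}
    else:
        result = {}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Client side: ask the "subprocess" what tools it exposes.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
response = json.loads(fake_server(request))
tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)
```

Once the model has this catalog, every advertised tool becomes something it can call with structured parameters, exactly like the `execute_sql` examples above.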

Most MCP servers take under 5 minutes to set up. The Anthropic ecosystem has servers for ~50 services now. Supabase, Vercel, Notion, GitHub, Slack, Linear, Stripe — if you use a popular dev tool, there's probably an MCP server for it.

the gotchas

Auth scope. MCP servers inherit whatever permissions their auth token has. If your Supabase token has admin access, the AI has admin access. Be intentional about scoping permissions.

Rate limits. MCP calls count against the service's API rate limits. If the AI is making 50 Supabase queries while exploring a problem, that's 50 API calls. Not usually an issue, but worth knowing.

Latency. Each MCP call adds network latency. For a quick SQL query, that's fine. For browsing 10 pages in Chrome, the latency compounds. I use MCP for targeted operations, not for bulk data exploration.

why this matters

The direction is clear: AI development tools will talk to services directly, not through human intermediaries. MCP is the standard that makes this work. The model doesn't just write code — it manages tasks, queries databases, deploys applications, and tests UI flows.

For solo developers, this is a force multiplier. The 9 MCP connections mean I operate like a team with a project manager (Asana), a database admin (Supabase), a DevOps engineer (Vercel), a QA tester (Chrome), and a researcher (Exa). All in one terminal session.

