Tutorial

Building a custom MCP server: how I gave Claude superpowers in my workflow

By Flávio Emanuel · 9 min read

I had a dumb problem that ate up my time constantly. Whenever I pulled Claude in to solve a complex Supabase query for a client, I had to:

  1. Copy the database schema
  2. Paste it in the conversation
  3. Describe the data structure
  4. Wait for Claude to generate the query
  5. Test it manually on Supabase
  6. Go back and tweak if it was wrong

On a three-day sprint, that easily cost me 3 to 4 hours. Not completely wasted time, but time I should’ve been spending on something else.

Then I discovered MCP (Model Context Protocol) in late 2024. After tinkering with a few pre-built servers, I decided to build my own. The result? Claude now sees the client’s database in real time, understands the structure on its own, and writes correct queries. The payoff: about 40% less time on those tasks.

What is MCP, exactly?

MCP is a protocol that lets you connect external applications (databases, APIs, tools) straight into Claude Desktop or via the API. Think of it like a plugin, but standardized and working both locally and in production.

Before MCP, if I wanted Claude to have real access to Supabase data, I had to:

  • Manual integration via prompts (repetitive)
  • Build a custom API just for that
  • Use webhooks (too complex)

Now? I write an MCP server, connect it to Claude Desktop in 2 minutes, done. Claude has direct access.

How it works in practice

MCP works on a client-server model. Claude Desktop is the client. You write a server (running locally or on a machine you control) that exposes “resources” and “tools” that Claude can use.

Resources are data Claude can read. Tools are actions it can execute.

In my case:

  • Resource: Supabase schema (tables, columns, types)
  • Tool: execute a SELECT query (with security validations)
  • Tool: see the structure of a specific table

When you open a conversation and want Claude to wrestle with complex SQL, you mention the context and that’s it. Claude sees the resources, understands the structure, and writes the query.

Building the MCP step by step

I’ll show how I built mine. I assume you know Node and TypeScript at a basic level.

First, install the MCP SDK:

npm init -y
npm install @modelcontextprotocol/sdk @supabase/supabase-js dotenv
npm install -D typescript @types/node

Create a server.ts file (you’ll compile it to server.js later):

import {
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ListToolsRequestSchema,
  ReadResourceRequestSchema,
  Tool,
} from "@modelcontextprotocol/sdk/types.js";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createClient } from "@supabase/supabase-js";
import * as dotenv from "dotenv";

dotenv.config();

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// The second argument declares which capabilities the server offers;
// without it, Claude Desktop won't ask for resources or tools.
const server = new Server(
  { name: "supabase-mcp", version: "1.0.0" },
  { capabilities: { resources: {}, tools: {} } }
);

server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: "supabase://schema",
        name: "Supabase Database Schema",
        description: "Current database schema and structure",
        mimeType: "application/json",
      },
    ],
  };
});

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri === "supabase://schema") {
    // Heads up: PostgREST doesn't expose information_schema by default.
    // You may need a view or RPC in the public schema that mirrors it.
    const { data: tables } = await supabase
      .from("information_schema.tables")
      .select("*")
      .eq("table_schema", "public");

    return {
      contents: [
        {
          uri: request.params.uri,
          mimeType: "application/json",
          text: JSON.stringify(tables, null, 2),
        },
      ],
    };
  }

  throw new Error(`Unknown resource: ${request.params.uri}`);
});

const queryTool: Tool = {
  name: "execute_query",
  description:
    "Execute a safe SQL SELECT query on the Supabase database. Use only for SELECT, no mutations.",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "SQL query to execute",
      },
    },
    required: ["query"],
  },
};

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [queryTool],
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "execute_query") {
    const query = (request.params.arguments?.query as string) ?? "";

    if (!query.trim().toUpperCase().startsWith("SELECT")) {
      throw new Error("Only SELECT queries are allowed");
    }

    // "execute_query" is a custom Postgres function (RPC) you define
    // yourself, ideally running under a read-only role.
    const { data, error } = await supabase.rpc("execute_query", {
      query,
    });

    if (error) {
      throw new Error(`Query error: ${error.message}`);
    }

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(data, null, 2),
        },
      ],
    };
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

That’s the basics. The server exposes a resource (the database schema) and a tool (execute safe queries).
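The third tool from my earlier list, seeing the structure of a specific table, follows the same pattern. Here's a sketch of what its definition could look like; the name describe_table and the wording are my own choices, not something from the SDK, and the CallTool handler would mirror execute_query:

```typescript
// Sketch of the table-structure tool. "describe_table" is a
// hypothetical name; its handler would query information_schema.columns
// for the given table, the same way execute_query runs its RPC.
const describeTableTool = {
  name: "describe_table",
  description:
    "Return the columns, data types, and nullability of one table " +
    "in the public schema. Input: the table name as a string.",
  inputSchema: {
    type: "object",
    properties: {
      table_name: {
        type: "string",
        description: "Name of the table to describe (public schema only)",
      },
    },
    required: ["table_name"],
  },
};
```

The detailed description matters more than it looks: it's the only thing Claude reads before deciding when and how to call the tool.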

Connecting to Claude Desktop

In your ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or equivalent on Windows, add:

{
  "mcpServers": {
    "supabase": {
      "command": "node",
      "args": ["/path/to/your/server.js"],
      "env": {
        "SUPABASE_URL": "your-url",
        "SUPABASE_ANON_KEY": "your-key"
      }
    }
  }
}

Compile the TypeScript (npx tsc), point the config at the generated server.js, and restart Claude Desktop. Done.

The real gain I saw

Before I had this MCP, when I needed a complex query on Supabase (like, “return all customers who never bought and became leads more than 60 days ago”), I’d pass Claude:

  • The description in natural language
  • The schema (3 tables: clients, leads, purchases)
  • Explanation of relationships

Claude would write the query. I’d test it manually. Sometimes it was wrong.

Now, I open Claude, mention the MCP, and say:

“I need a query that brings customers with no purchases in more than 60 days”

Claude sees the actual database structure, understands the data types, and writes a query with correct JOINs, right on the first try. I save 10 to 15 minutes per complex query. In a month, across roughly 8 client projects, that added up to about 2 days saved.

Security: what you can’t ignore

An MCP with database access is dangerous without validations. I put in:

  1. Only SELECT is allowed (in tool handler)
  2. All queries run on a Supabase user with limited permissions
  3. Rate limiting (max 10 queries per minute)
  4. Logging everything (query, result, timestamp)
  5. No exposure of internal schemas (information_schema stays local)

The rate limiter and audit log are just a few lines:

const RATE_LIMIT = 10; // max queries
const TIME_WINDOW = 60 * 1000; // per minute

const queryLog: Array<{ timestamp: number; query: string }> = [];

function checkRateLimit(query: string): boolean {
  const now = Date.now();
  const recent = queryLog.filter((log) => now - log.timestamp < TIME_WINDOW);

  if (recent.length >= RATE_LIMIT) {
    return false;
  }

  // Record the actual query so the log doubles as an audit trail.
  queryLog.push({ timestamp: now, query });
  return true;
}

Simple, but effective.
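If you want to go further than the startsWith("SELECT") check, a stricter guard is easy to sketch. The keyword list here is my own and not exhaustive; treat it as defense in depth, never as a substitute for running queries under a read-only database role:

```typescript
// Stricter than startsWith("SELECT"): also rejects stacked statements,
// SQL comments, and mutation keywords hidden inside CTEs.
function isSafeSelect(query: string): boolean {
  const normalized = query.trim().toLowerCase();
  if (!normalized.startsWith("select") && !normalized.startsWith("with")) {
    return false;
  }
  // No statement stacking, no comment tricks.
  if (/;|--|\/\*/.test(normalized)) {
    return false;
  }
  // No mutation keywords anywhere (catches "WITH x AS (DELETE ...)").
  return !/\b(insert|update|delete|drop|alter|truncate|grant|create)\b/.test(
    normalized
  );
}
```

A query like "SELECT 1; DROP TABLE clients" passes the original check but fails this one.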

Use cases beyond Supabase

I’ve built two different MCPs so far:

  1. Supabase (this one)
  2. One that pulls data from a custom CRM a client uses

The second one was great for generating reports. Claude understood the CRM structure and surfaced insights I’d have spent hours digging for with manual queries.

You can build an MCP server for anything with an API: Stripe, GitHub, your own database, custom webhooks.

Cost and performance

A Supabase query via MCP has no extra cost. The MCP runs locally (or on your server). Each execution through the Claude API counts as a normal call.

Performance: a query that used to take 5 minutes for me to describe, test, and fix now takes 30 seconds.

What I learned

Three things that aren’t obvious:

  1. User inputs still need validation in the tool handler. Claude will try to use the tool within guidelines, but human error happens. Always validate.

  2. MCP doesn’t replace real authentication. If the resource you expose is sensitive, put a security layer in front of the MCP server.

  3. Documentation is critical. The better you describe a resource or tool, the better Claude uses it. If you say “database schema”, Claude understands little. If you say “clients table with columns: id (uuid), email (text), created_at (timestamp), subscription_status (enum: active, inactive, cancelled)”, Claude navigates better.
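To make that concrete, here's how I'd describe a resource for that clients table. The URI scheme, the purchases.client_id foreign key, and the wording are illustrative choices of mine, not SDK requirements:

```typescript
// A richly described resource: Claude reads this text to decide
// how to join and filter. Column list matches the example above.
const clientsResource = {
  uri: "supabase://schema/clients",
  name: "clients table",
  description:
    "Clients table. Columns: id (uuid, primary key), email (text), " +
    "created_at (timestamp), subscription_status " +
    "(enum: active, inactive, cancelled). " +
    "Related: purchases.client_id references clients.id.",
  mimeType: "application/json",
};
```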

Next steps

Now I’m thinking about building MCPs for:

  • Push notifications to Slack (when Claude needs to flag something)
  • Pull payment metrics from Stripe
  • Read Google Drive files (so Claude has document context)

The idea is to turn Claude from a chatbot into a tool integrated directly into my dev workflow.

Checklist: building an MCP

  • Install MCP SDK and dependencies (@modelcontextprotocol/sdk, dotenv, typescript)
  • Create server.ts with basic handlers (ListResources, ReadResource)
  • Expose resources (schema, data, structure)
  • Create tools with clear descriptions
  • Implement security validations (query types, rate limit, permissions)
  • Test locally with Claude Desktop
  • Configure in claude_desktop_config.json
  • Add logging to audit usage
  • Document each resource and tool clearly
  • Consider performance (schema cache, indexes)

Deployment gotchas

Running an MCP server locally works fine. Deploying it to production surfaces new problems: the server might rely on local file paths that don’t exist there, or assume environment variables that are only set on your machine.

Solution: build with production assumptions from day one. Use environment variables, not hardcoded paths. Use external databases, not local JSON files. Test on a real server before deploying to production.
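A small helper makes the "assume environment variables exist" failure mode explicit. A sketch; requireEnv is my own name, not part of any SDK:

```typescript
// Fail fast at startup when required config is missing, instead of
// crashing mid-request in production.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup, before creating any clients:
// const supabaseUrl = requireEnv("SUPABASE_URL");
// const supabaseKey = requireEnv("SUPABASE_ANON_KEY");
```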

Another gotcha: logging. Add logs so you can understand what’s happening in production, but don’t log everything (excess logging slows things down and buries the signal). Log errors and important state changes.

Error handling in MCP servers

A robust MCP server handles errors gracefully. What happens if the database is down? If an API call fails? If input validation fails?

Solution: wrap everything in try-catch. Return meaningful error messages. Log the error server-side.

try {
  const result = await processRequest(input);
  return { success: true, data: result };
} catch (error) {
  // Log the full error server-side; return only a generic message.
  logger.error("Request failed", error);
  return { success: false, error: "Processing failed. Please try again." };
}

Never expose internal errors to the client (they could leak secrets). Always return a generic error message to the client. Log the actual error server-side.

Read also: 5 prompts that save my week | Vibe coding vs AI pair programming | Deliver projects faster with AI

Next step

Need a dev who truly delivers?

Whether it's a one-time project, team reinforcement, or a long-term partnership. Let's talk.

Chat on WhatsApp

I reply within 2 hours during business hours.