
January 13, 2026

AI · 14 min read

Connecting Claude to External Tools with MCP: A Real-World Example


I wrote a tutorial on building MCP servers a while back. It covered the fundamentals: how MCP works, how to define tools and resources, how to set up transports. But the example was a GitHub Issues integration, useful for learning but not exactly a client project.

This article is different. We're going to build something you'd actually deploy for a client: a Claude-powered customer analytics agent that connects to a real database and a real CRM via MCP. The agent can query customer data, analyze purchase patterns, generate reports, and update CRM records, all through natural language.

This is the kind of project that sells Claude integrations to businesses. The CEO types "show me which customers are at risk of churning" and gets an actual answer backed by real data.

The architecture

Before writing code, let's sketch the system:

┌─────────────────┐
│  Claude Desktop  │  (or your custom chat UI)
│  / Custom Agent  │
└────────┬────────┘
         │ MCP Protocol
         ▼
┌─────────────────┐
│   MCP Server    │  (our TypeScript server)
│                 │
│  ┌───────────┐  │
│  │ Tools     │  │  query_customers, analyze_churn,
│  │           │  │  get_revenue_report, update_crm_status
│  └───────────┘  │
│  ┌───────────┐  │
│  │ Resources │  │  customer segments, KPI dashboard
│  └───────────┘  │
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐ ┌────────┐
│ Postgres│ │  CRM   │
│   DB    │ │  API   │
└────────┘ └────────┘

The MCP server sits between Claude and your data layer. Claude discovers the available tools, decides which ones to use based on the user's question, and chains them together to produce a complete answer.

Setting up the project

mkdir customer-analytics-mcp
cd customer-analytics-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod pg
npm install -D typescript @types/node @types/pg
npx tsc --init
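The server code below uses ES-module imports with explicit .js extensions, so two tweaks to the generated defaults are needed: add "type": "module" to package.json, and point tsconfig.json at Node-style module resolution with a dist output directory. A minimal sketch (adjust targets to taste):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true
  },
  "include": ["src"]
}
```

Build with npx tsc; the compiled entry point lands at dist/index.js.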

Project structure:

src/
├── index.ts                # Server entry point
├── tools/
│   ├── query-customers.ts
│   ├── analyze-churn.ts
│   ├── revenue-report.ts
│   └── update-crm.ts
├── resources/
│   └── segments.ts
├── database.ts             # PostgreSQL client
├── crm-client.ts           # CRM API wrapper
└── types.ts

Types first

// src/types.ts

export interface Customer {
  id: string;
  name: string;
  email: string;
  plan: 'free' | 'starter' | 'pro' | 'enterprise';
  monthlyRevenue: number;
  signupDate: string;
  lastActiveDate: string;
  totalOrders: number;
}

export interface ChurnRiskAssessment {
  customerId: string;
  customerName: string;
  riskLevel: 'low' | 'medium' | 'high' | 'critical';
  daysSinceActive: number;
  factors: string[];
}

export interface RevenueReport {
  period: string;
  totalRevenue: number;
  customerCount: number;
  averageRevenue: number;
  topCustomers: Array<{ name: string; revenue: number }>;
  growthRate: number;
}

export interface ServerConfig {
  databaseUrl: string;
  crmApiKey: string;
  crmBaseUrl: string;
}

The database layer

We wrap PostgreSQL in a clean client class. In a real project, you'd probably use an ORM like Prisma or Drizzle, but raw pg keeps the example focused.

// src/database.ts

import { Pool } from 'pg';
import type { Customer, ChurnRiskAssessment, RevenueReport } from './types.js';

export class DatabaseClient {
  private readonly pool: Pool;

  constructor(connectionString: string) {
    this.pool = new Pool({ connectionString });
  }

  async queryCustomers(filters: {
    plan?: string;
    minRevenue?: number;
    maxRevenue?: number;
    inactiveDays?: number;
    limit?: number;
  }): Promise<Customer[]> {
    const conditions: string[] = [];
    const params: unknown[] = [];
    let paramIndex = 1;

    if (filters.plan) {
      conditions.push(`plan = $${paramIndex++}`);
      params.push(filters.plan);
    }
    if (filters.minRevenue !== undefined) {
      conditions.push(`monthly_revenue >= $${paramIndex++}`);
      params.push(filters.minRevenue);
    }
    if (filters.maxRevenue !== undefined) {
      conditions.push(`monthly_revenue <= $${paramIndex++}`);
      params.push(filters.maxRevenue);
    }
    if (filters.inactiveDays !== undefined) {
      conditions.push(
        `last_active_date < NOW() - make_interval(days => $${paramIndex++})`
      );
      params.push(filters.inactiveDays);
    }

    const whereClause =
      conditions.length > 0 ? `WHERE ${conditions.join(' AND ')}` : '';
    params.push(filters.limit ?? 50);
    const limitClause = `LIMIT $${paramIndex++}`;

    const query = `
      SELECT id, name, email, plan, monthly_revenue, signup_date,
             last_active_date, total_orders
      FROM customers
      ${whereClause}
      ORDER BY monthly_revenue DESC
      ${limitClause}
    `;

    const result = await this.pool.query(query, params);

    return result.rows.map((row: Record<string, unknown>) => ({
      id: row.id as string,
      name: row.name as string,
      email: row.email as string,
      plan: row.plan as Customer['plan'],
      monthlyRevenue: Number(row.monthly_revenue),
      signupDate: String(row.signup_date),
      lastActiveDate: String(row.last_active_date),
      totalOrders: Number(row.total_orders),
    }));
  }

  async getChurnRiskCustomers(
    inactiveDays: number = 30
  ): Promise<ChurnRiskAssessment[]> {
    const query = `
      SELECT id, name, email, plan, monthly_revenue,
             last_active_date, total_orders,
             EXTRACT(DAY FROM NOW() - last_active_date) as days_inactive
      FROM customers
      WHERE last_active_date < NOW() - make_interval(days => $1)
      ORDER BY monthly_revenue DESC
      LIMIT 50
    `;

    const result = await this.pool.query(query, [inactiveDays]);

    return result.rows.map((row: Record<string, unknown>) => {
      const daysInactive = Number(row.days_inactive);
      const revenue = Number(row.monthly_revenue);

      const factors: string[] = [];
      if (daysInactive > 90) factors.push('Inactive for 90+ days');
      else if (daysInactive > 60) factors.push('Inactive for 60+ days');
      else if (daysInactive > 30) factors.push('Inactive for 30+ days');
      if (revenue > 500) factors.push('High-value customer');
      if (Number(row.total_orders) < 3) factors.push('Low engagement (few orders)');

      let riskLevel: ChurnRiskAssessment['riskLevel'] = 'low';
      if (daysInactive > 90) riskLevel = 'critical';
      else if (daysInactive > 60) riskLevel = 'high';
      else if (daysInactive > 30) riskLevel = 'medium';

      return {
        customerId: row.id as string,
        customerName: row.name as string,
        riskLevel,
        daysSinceActive: daysInactive,
        factors,
      };
    });
  }

  async getRevenueReport(periodMonths: number = 1): Promise<RevenueReport> {
    const summaryQuery = `
      SELECT
        COUNT(*) as customer_count,
        SUM(monthly_revenue) as total_revenue,
        AVG(monthly_revenue) as avg_revenue
      FROM customers
      WHERE last_active_date > NOW() - make_interval(months => $1)
    `;

    const topCustomersQuery = `
      SELECT name, monthly_revenue
      FROM customers
      ORDER BY monthly_revenue DESC
      LIMIT 5
    `;

    const [summaryResult, topResult] = await Promise.all([
      this.pool.query(summaryQuery, [periodMonths]),
      this.pool.query(topCustomersQuery),
    ]);

    const summary = summaryResult.rows[0] as Record<string, unknown>;

    return {
      period: `Last ${periodMonths} month(s)`,
      totalRevenue: Number(summary.total_revenue) || 0,
      customerCount: Number(summary.customer_count) || 0,
      averageRevenue: Number(summary.avg_revenue) || 0,
      topCustomers: topResult.rows.map((row: Record<string, unknown>) => ({
        name: row.name as string,
        revenue: Number(row.monthly_revenue),
      })),
      growthRate: 0, // Would compare with previous period in production
    };
  }

  async close(): Promise<void> {
    await this.pool.end();
  }
}

The CRM client

This wraps a generic CRM API (think HubSpot, Salesforce, or a custom system). The important thing is that the MCP server can both read and write CRM data.

// src/crm-client.ts

export class CrmClient {
  private readonly apiKey: string;
  private readonly baseUrl: string;

  constructor(apiKey: string, baseUrl: string) {
    this.apiKey = apiKey;
    this.baseUrl = baseUrl;
  }

  async updateCustomerStatus(
    customerId: string,
    status: string,
    notes: string
  ): Promise<{ success: boolean; updatedAt: string }> {
    const response = await fetch(
      `${this.baseUrl}/contacts/${customerId}/status`,
      {
        method: 'PATCH',
        headers: {
          Authorization: `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ status, notes }),
      }
    );

    if (!response.ok) {
      throw new Error(`CRM API error: ${response.status} ${response.statusText}`);
    }

    const data = (await response.json()) as { updated_at: string };
    return {
      success: true,
      updatedAt: data.updated_at,
    };
  }

  async addNote(
    customerId: string,
    note: string,
    noteType: string
  ): Promise<{ noteId: string }> {
    const response = await fetch(
      `${this.baseUrl}/contacts/${customerId}/notes`,
      {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ content: note, type: noteType }),
      }
    );

    if (!response.ok) {
      throw new Error(`CRM API error: ${response.status} ${response.statusText}`);
    }

    const data = (await response.json()) as { id: string };
    return { noteId: data.id };
  }
}
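One addition worth considering before production, and not part of the client above: CRM APIs fail transiently, so wrapping calls in a retry helper with exponential backoff avoids surfacing one-off network errors to Claude. The names backoffDelayMs and withRetry are illustrative, not from any SDK:

```typescript
// Delay before the nth retry: doubles each attempt, capped at maxMs.
export function backoffDelayMs(
  attempt: number,
  baseMs: number = 250,
  maxMs: number = 5_000
): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Runs an async operation, retrying on thrown errors with backoff.
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts: number = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Wait before the next attempt; skip the wait after the final failure.
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) =>
          setTimeout(resolve, backoffDelayMs(attempt))
        );
      }
    }
  }
  throw lastError;
}
```

Usage would look like withRetry(() => crm.updateCustomerStatus(id, status, notes)).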

Defining the MCP tools

Now the fun part. Each tool connects Claude to a specific capability. Notice how the tool descriptions tell Claude when and why to use each tool, not just what it does.

// src/tools/query-customers.ts

import { z } from 'zod';

export const queryCustomersSchema = z.object({
  plan: z
    .enum(['free', 'starter', 'pro', 'enterprise'])
    .optional()
    .describe('Filter by subscription plan.'),
  minRevenue: z
    .number()
    .optional()
    .describe('Minimum monthly revenue in dollars.'),
  maxRevenue: z
    .number()
    .optional()
    .describe('Maximum monthly revenue in dollars.'),
  inactiveDays: z
    .number()
    .optional()
    .describe('Only include customers inactive for at least this many days.'),
  limit: z
    .number()
    .max(100)
    .optional()
    .describe('Maximum number of results. Defaults to 50.'),
});

// src/tools/analyze-churn.ts

import { z } from 'zod';

export const analyzeChurnSchema = z.object({
  inactiveDays: z
    .number()
    .min(7)
    .default(30)
    .describe(
      'Consider customers inactive for this many days as potential churn risks. Defaults to 30.'
    ),
});

// src/tools/revenue-report.ts

import { z } from 'zod';

export const revenueReportSchema = z.object({
  periodMonths: z
    .number()
    .min(1)
    .max(24)
    .default(1)
    .describe('Number of months to include in the report. Defaults to 1.'),
});

// src/tools/update-crm.ts

import { z } from 'zod';

export const updateCrmSchema = z.object({
  customerId: z
    .string()
    .describe('The customer ID to update.'),
  status: z
    .enum(['active', 'at_risk', 'churned', 'win_back'])
    .describe('The new status to set in the CRM.'),
  notes: z
    .string()
    .max(1000)
    .describe('Notes explaining why the status was changed. Be specific.'),
});
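To make the schemas concrete: when Claude decides to flag a customer, the arguments it generates must validate against updateCrmSchema. A tool call might carry a payload like this (all values are illustrative):

```json
{
  "customerId": "cus_8f3a2b",
  "status": "at_risk",
  "notes": "Inactive for 72 days despite a $450/mo pro plan; flagged by churn analysis."
}
```

If Claude sends a status outside the enum or notes over 1,000 characters, validation fails before your handler runs.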

Wiring it all together

The server entry point registers all tools and resources, then starts listening. This is where everything connects.

// src/index.ts

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { DatabaseClient } from './database.js';
import { CrmClient } from './crm-client.js';
import { queryCustomersSchema } from './tools/query-customers.js';
import { analyzeChurnSchema } from './tools/analyze-churn.js';
import { revenueReportSchema } from './tools/revenue-report.js';
import { updateCrmSchema } from './tools/update-crm.js';
import type { ServerConfig } from './types.js';

function loadConfig(): ServerConfig {
  const databaseUrl = process.env.DATABASE_URL;
  const crmApiKey = process.env.CRM_API_KEY;
  const crmBaseUrl = process.env.CRM_BASE_URL;

  if (!databaseUrl || !crmApiKey || !crmBaseUrl) {
    throw new Error(
      'Missing required environment variables: DATABASE_URL, CRM_API_KEY, CRM_BASE_URL'
    );
  }

  return { databaseUrl, crmApiKey, crmBaseUrl };
}

function createServer(database: DatabaseClient, crm: CrmClient): McpServer {
  const server = new McpServer({
    name: 'customer-analytics',
    version: '1.0.0',
  });

  // Tools
  server.tool(
    'query_customers',
    'Search and filter customers by plan, revenue, and activity. Use this to find specific customer segments or answer questions about the customer base.',
    queryCustomersSchema.shape,
    async ({ plan, minRevenue, maxRevenue, inactiveDays, limit }) => {
      const customers = await database.queryCustomers({
        plan,
        minRevenue,
        maxRevenue,
        inactiveDays,
        limit,
      });
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(
              { count: customers.length, customers },
              null,
              2
            ),
          },
        ],
      };
    }
  );

  server.tool(
    'analyze_churn_risk',
    'Identify customers at risk of churning based on inactivity. Returns a risk assessment for each customer with risk level, days inactive, and contributing factors. Use this when asked about churn, retention, or at-risk customers.',
    analyzeChurnSchema.shape,
    async ({ inactiveDays }) => {
      const assessments = await database.getChurnRiskCustomers(inactiveDays);
      const summary = {
        total: assessments.length,
        critical: assessments.filter((a) => a.riskLevel === 'critical').length,
        high: assessments.filter((a) => a.riskLevel === 'high').length,
        medium: assessments.filter((a) => a.riskLevel === 'medium').length,
        assessments,
      };
      return {
        content: [{ type: 'text', text: JSON.stringify(summary, null, 2) }],
      };
    }
  );

  server.tool(
    'get_revenue_report',
    'Generate a revenue report for a given period. Includes total revenue, customer count, average revenue per customer, top customers by revenue, and growth rate. Use this for financial questions or business reviews.',
    revenueReportSchema.shape,
    async ({ periodMonths }) => {
      const report = await database.getRevenueReport(periodMonths);
      return {
        content: [{ type: 'text', text: JSON.stringify(report, null, 2) }],
      };
    }
  );

  server.tool(
    'update_crm_status',
    'Update a customer\'s status in the CRM and add a note. Use this after analyzing churn risk to mark customers as at_risk, or after a win-back campaign to update their status. Always include a clear note explaining the change.',
    updateCrmSchema.shape,
    async ({ customerId, status, notes }) => {
      const result = await crm.updateCustomerStatus(customerId, status, notes);
      return {
        content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
      };
    }
  );

  // Resources
  server.resource(
    'customer-segments',
    'analytics://segments',
    { description: 'Overview of customer segments by plan tier with counts and revenue' },
    async () => {
      const segments = await Promise.all(
        (['free', 'starter', 'pro', 'enterprise'] as const).map(
          async (plan) => {
            const customers = await database.queryCustomers({ plan });
            const totalRevenue = customers.reduce(
              (sum, customer) => sum + customer.monthlyRevenue,
              0
            );
            return {
              plan,
              count: customers.length,
              totalMonthlyRevenue: totalRevenue,
              averageRevenue:
                customers.length > 0 ? totalRevenue / customers.length : 0,
            };
          }
        )
      );

      return {
        contents: [
          {
            uri: 'analytics://segments',
            mimeType: 'application/json',
            text: JSON.stringify(segments, null, 2),
          },
        ],
      };
    }
  );

  return server;
}

async function main(): Promise<void> {
  const config = loadConfig();
  const database = new DatabaseClient(config.databaseUrl);
  const crm = new CrmClient(config.crmApiKey, config.crmBaseUrl);
  const server = createServer(database, crm);
  const transport = new StdioServerTransport();

  await server.connect(transport);
}

main().catch((error: unknown) => {
  process.stderr.write(`Fatal error: ${String(error)}\n`);
  process.exit(1);
});

Connecting to Claude Desktop

Add the server to your Claude Desktop configuration. On macOS this lives at ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows, %APPDATA%\Claude\claude_desktop_config.json:

{
  "mcpServers": {
    "customer-analytics": {
      "command": "node",
      "args": ["/path/to/customer-analytics-mcp/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/analytics",
        "CRM_API_KEY": "your-crm-api-key",
        "CRM_BASE_URL": "https://api.your-crm.com/v1"
      }
    }
  }
}

Restart Claude Desktop. Now you can have conversations like:

You: "Which enterprise customers haven't been active in the last 60 days?"

Claude calls query_customers with { plan: "enterprise", inactiveDays: 60 }, then presents the results in a readable format.

You: "Run a churn analysis and flag the high-risk ones in the CRM"

Claude calls analyze_churn_risk, reviews the results, then calls update_crm_status for each high-risk customer with a clear note explaining why.

You: "Give me this month's revenue breakdown"

Claude calls get_revenue_report and formats the data into a clean summary with highlights.

The power here is that Claude chains these tools together autonomously. Ask "Which high-value customers are at risk and what should we do about them?" and Claude will query customers, analyze churn, cross-reference the data, and give you actionable recommendations, all in one conversation.

Safety: read vs. write operations

Notice that three of our four tools are read-only (query, analyze, report) and only one writes data (update CRM). This is intentional.

For production MCP servers, follow this principle: be generous with read access, cautious with write access.

For write operations, consider adding confirmation steps:

server.tool(
  'update_crm_status',
  'Update a customer\'s status in the CRM. This is a WRITE operation — always confirm with the user before executing.',
  updateCrmSchema.shape,
  async ({ customerId, status, notes }) => {
    // The tool description tells Claude to confirm first.
    // Claude will typically ask "Should I update customer X to status Y?"
    // before actually calling this tool.
    const result = await crm.updateCustomerStatus(customerId, status, notes);
    return {
      content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
    };
  }
);

By putting "always confirm with the user before executing" in the tool description, Claude will ask for approval before making changes. The user stays in control.

Deploying for a client

When you're ready to deploy this for a client, here's the production checklist:

Authentication. Don't hardcode database credentials. Use environment variables, and for the MCP HTTP transport, add Bearer token authentication.
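For the token check itself, a minimal sketch assuming a single shared secret (the helper name isAuthorized is mine, not part of the MCP SDK):

```typescript
import { timingSafeEqual } from 'node:crypto';

// Returns true only when the Authorization header carries the expected
// bearer token. Uses a constant-time comparison to avoid timing leaks.
export function isAuthorized(
  header: string | undefined,
  expectedToken: string
): boolean {
  if (!header?.startsWith('Bearer ')) return false;
  const provided = Buffer.from(header.slice('Bearer '.length));
  const expected = Buffer.from(expectedToken);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  if (provided.length !== expected.length) return false;
  return timingSafeEqual(provided, expected);
}
```

Call it in your HTTP handler before dispatching to the MCP server, and return 401 on failure.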

Rate limiting. Add rate limits to prevent the agent from hammering your database. A simple token bucket or sliding window works fine.
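A token bucket fits in a few lines. This sketch takes the clock as a parameter so it is easy to test; the class name is illustrative:

```typescript
// Each tool call costs one token; tokens refill continuously at a fixed
// rate up to a maximum burst capacity.
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSecond: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the call is allowed, false if the caller should back off.
  tryConsume(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Check the bucket at the top of each tool handler and return a "rate limit exceeded, try again shortly" message instead of hitting the database.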

Audit logging. Log every tool call with the user identity, timestamp, tool name, inputs, and outputs. This is critical for compliance and debugging.

function createAuditLog(
  userId: string,
  toolName: string,
  input: unknown,
  output: unknown
): void {
  const logEntry = {
    timestamp: new Date().toISOString(),
    userId,
    toolName,
    input,
    output,
    serverVersion: '1.0.0',
  };
  // In production: write to a database, send to a logging service, etc.
  process.stderr.write(JSON.stringify(logEntry) + '\n');
}

Error boundaries. Wrap each tool handler in a try-catch that returns a useful error message instead of crashing the server:

async function safeToolHandler<T>(
  handler: () => Promise<T>,
  toolName: string
): Promise<{ content: Array<{ type: 'text'; text: string }> }> {
  try {
    const result = await handler();
    return {
      content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
    };
  } catch (error: unknown) {
    const message = error instanceof Error ? error.message : String(error);
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(
            { error: `${toolName} failed: ${message}` },
            null,
            2
          ),
        },
      ],
    };
  }
}

Testing. Use the MCP Inspector (npx @modelcontextprotocol/inspector node dist/index.js) to test every tool manually before connecting Claude. This catches schema issues, database errors, and edge cases without burning API tokens.

Why clients love this

Here's the pitch: instead of building a traditional analytics dashboard with charts and filters that nobody uses, you give the client a natural language interface to their data. They ask questions in plain English and get answers backed by real queries.

The MCP server is the backend. Claude is the frontend. The client never sees SQL, never clicks through dropdowns, never waits for a dashboard to load. They just ask.

And because it's MCP, the same server works with Claude Desktop, a custom chat UI, or any other MCP-compatible client. Build once, connect anywhere.

Conclusion

Connecting Claude to external tools with MCP is where AI integration gets real. Instead of generating text about data, Claude works with actual data. Instead of suggesting actions, Claude takes them (with appropriate safeguards).

The patterns in this article apply to any domain:

  • E-commerce: Query orders, analyze return rates, update inventory
  • Healthcare: Look up patient records, check medication interactions, schedule appointments
  • Finance: Run portfolio analysis, check compliance, generate reports
  • DevOps: Query logs, check service health, trigger deployments

The architecture is always the same: MCP server with typed tools, a database or API layer underneath, read operations for analysis, and carefully gated write operations for actions.

Start with read-only tools. Get the queries right. Then add write operations one at a time, with confirmation steps and audit logging. That's how you build AI integrations that clients trust with their real data.
