
MCP Servers Explained: The Protocol Connecting AI to Everything

March 17, 2026

The Model Context Protocol (MCP) is the missing layer between AI models and the tools they need to be useful. Instead of building custom integrations for every tool and every model, MCP gives you a single protocol that lets any AI host talk to any tool server. Think of it as USB-C for AI — one standard connector, infinite peripherals.

What MCP Actually Is

MCP is an open protocol that standardizes how AI applications provide context to language models. Before MCP, every integration was bespoke. Want Claude to read your files? Custom code. Want GPT to query your database? Different custom code. Want either of them to do both? Double the work.

MCP replaces this with a client-server architecture where:

  • Hosts are AI applications (Claude Desktop, IDEs, custom apps) that want to access external tools
  • Clients maintain 1:1 connections with MCP servers, handling protocol negotiation
  • Servers expose specific capabilities — tools, resources, and prompts — through a standardized interface

The protocol runs over JSON-RPC 2.0 and supports two transports: stdio for local server processes, and HTTP (with Server-Sent Events for streaming) for remote servers.
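For example, a tool invocation travels as an ordinary JSON-RPC request. Abbreviated, with an illustrative id and arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "London" }
  }
}
```

The server's response carries the tool result in the matching `id`, so the host can correlate concurrent calls.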

The Architecture

┌───────────────────────────────────────┐
│           Host Application            │
│         (Claude Desktop, IDE)         │
│                                       │
│   ┌──────────┐    ┌──────────┐        │
│   │ Client A │    │ Client B │  ...   │
│   └────┬─────┘    └────┬─────┘        │
└────────┼───────────────┼──────────────┘
         │               │
  ┌─────▼────────┐ ┌────▼─────────┐
  │   Server A   │ │   Server B   │
  │ (filesystem) │ │  (database)  │
  └──────────────┘ └──────────────┘

Each server exposes three types of capabilities:

  1. Tools — Functions the AI can call (e.g., read_file, query_database, send_message)
  2. Resources — Data the AI can read (e.g., file contents, database schemas, API docs)
  3. Prompts — Reusable prompt templates the server provides

The host application discovers what each server offers through capability negotiation during the initial handshake.
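That handshake is an `initialize` request/response pair. Abbreviated, with illustrative names and version strings:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

The server replies with the capabilities it supports:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": {}, "resources": {} },
    "serverInfo": { "name": "weather-server", "version": "1.0.0" }
  }
}
```

The client then sends a `notifications/initialized` notification and can call methods like `tools/list` to enumerate what the server offers.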

Building a Custom MCP Server

Let's build a practical MCP server that provides weather data. This demonstrates the core patterns you'll use for any integration.

First, install the SDK along with zod, which the server uses for input schemas:

npm install @modelcontextprotocol/sdk zod

Now create the server:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "get_weather",
  "Get current weather for a city",
  {
    city: z.string().describe("City name"),
    units: z
      .enum(["celsius", "fahrenheit"])
      .default("celsius")
      .describe("Temperature units"),
  },
  async ({ city, units }) => {
    const response = await fetch(
      `https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${encodeURIComponent(city)}`
    );

    if (!response.ok) {
      return {
        content: [
          { type: "text", text: `Failed to fetch weather for ${city}` },
        ],
        isError: true,
      };
    }

    const data = await response.json();
    const temp =
      units === "celsius" ? data.current.temp_c : data.current.temp_f;
    const unit = units === "celsius" ? "C" : "F";

    return {
      content: [
        {
          type: "text",
          text: `Weather in ${data.location.name}: ${temp}°${unit}, ${data.current.condition.text}`,
        },
      ],
    };
  }
);

// Define a resource
server.resource(
  "supported-cities",
  "weather://cities",
  async (uri) => ({
    contents: [
      {
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify([
          "London",
          "New York",
          "Tokyo",
          "Sydney",
          "Berlin",
        ]),
      },
    ],
  })
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Register it in your Claude Desktop config (claude_desktop_config.json):

{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["path/to/weather-server.js"],
      "env": {
        "WEATHER_API_KEY": "your-key-here"
      }
    }
  }
}

That's it. Claude can now check the weather by calling your tool.

Real-World MCP Servers

The power of MCP becomes obvious when you look at the ecosystem of servers already available.

Filesystem Server

Gives AI controlled access to your local files. You define which directories are allowed, and the server exposes tools like read_file, write_file, list_directory, and search_files. The AI can read your project structure, edit files, and search for patterns — all within the boundaries you set.
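That directory allowlist reduces to a path-containment check before any read or write. A minimal sketch (the function name is illustrative, not taken from the reference server):

```typescript
import * as path from "node:path";

// Resolve the requested path against the allowed root, then verify the
// result never escapes it (blocks "../" traversal and absolute paths).
function isInsideRoot(root: string, requested: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(resolvedRoot, requested);
  const rel = path.relative(resolvedRoot, resolved);
  return rel !== ".." && !rel.startsWith(".." + path.sep) && !path.isAbsolute(rel);
}
```

Running this check on every tool call, after resolution and before touching the disk, is what makes "within the boundaries you set" enforceable rather than aspirational.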

Database Server

Exposes your database schema as a resource and provides a query tool. The AI can inspect tables, write SQL queries, and return results. Works with PostgreSQL, MySQL, SQLite — whatever driver you wire up. The schema resource means the AI understands your data model before writing any queries.
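A read-only guard on the query tool can be as simple as a prefix check, though it should never be the only layer. A sketch under that assumption:

```typescript
// Coarse read-only filter for a database server's query tool.
// Illustrative only: a real deployment should also use a read-only
// database connection or role, since prefix checks are fallible
// (e.g. writable CTEs in PostgreSQL).
function isReadOnlyQuery(sql: string): boolean {
  const normalized = sql.trim().toUpperCase();
  const allowed =
    normalized.startsWith("SELECT") || normalized.startsWith("EXPLAIN");
  // Reject statement stacking like "SELECT 1; DROP TABLE users"
  return allowed && !normalized.includes(";");
}
```

The guard runs before the driver ever sees the SQL; anything it rejects comes back to the AI as a tool error rather than reaching the database.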

GitHub Server

Connects AI to your repositories. Tools include create_issue, create_pull_request, search_code, list_commits, and more. You can ask the AI to review recent commits, create issues from bug reports, or search your codebase for specific patterns across repos.

Slack Server

Lets AI read and send messages in Slack. Tools for listing channels, reading message history, posting messages, and searching. Useful for building AI assistants that can participate in team conversations or summarize channel activity.

Browser Automation Server

Uses Puppeteer or Playwright under the hood. The AI can navigate to URLs, take screenshots, click elements, fill forms, and extract data. This turns any AI host into a web automation platform.

Chaining Servers Together

The real power comes from combining multiple servers. With filesystem, GitHub, and a database server all connected, you can ask an AI to:

  1. Read your project's database migration files
  2. Compare them against the current database schema
  3. Generate a pull request with any missing migrations

Each step uses a different server, but the AI orchestrates the flow naturally through its reasoning.

Security Considerations

MCP servers are powerful, which makes security critical.

Principle of least privilege. Each server should have the minimum permissions it needs. A filesystem server should only access specific directories. A database server should use a read-only connection unless writes are explicitly needed.

Input validation. Always validate and sanitize inputs from the AI. The zod schemas in tool definitions help, but you should also validate at the application level. Never pass AI-generated input directly to shell commands or SQL queries without sanitization.
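Schema validation tells you a value is a string; it does not tell you the string is safe for the place you are about to put it. An application-level check might look like this (pattern and limits are illustrative):

```typescript
// Hypothetical allowlist check applied after schema validation and
// before the value reaches a URL, query, or command line.
// Permits Unicode letters, spaces, periods, apostrophes, hyphens; max 80 chars.
function assertSafeCityName(city: string): string {
  if (!/^[\p{L} .'-]{1,80}$/u.test(city)) {
    throw new Error("Rejected suspicious city name");
  }
  return city;
}
```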

Environment variables for secrets. API keys and credentials should come from environment variables, never hardcoded in server code. The MCP config supports an env block for this.

Transport security. For remote servers using HTTP+SSE, always use HTTPS. Consider authentication tokens to prevent unauthorized access to your MCP endpoints.

Audit logging. Log every tool invocation with timestamps, inputs, and outputs. When an AI agent is taking actions on your behalf — sending messages, modifying files, running queries — you need a trail.
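One wrinkle for stdio servers: stdout is the protocol channel, so logs must go to stderr or a file. A sketch of a logging wrapper (names are illustrative, not part of the SDK):

```typescript
// Hypothetical audit wrapper: emits one JSON line to stderr per tool
// invocation, whether it succeeds or fails.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

function withAuditLog(name: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const timestamp = new Date().toISOString();
    try {
      const result = await handler(args);
      console.error(JSON.stringify({ timestamp, tool: name, args, ok: true }));
      return result;
    } catch (err) {
      console.error(
        JSON.stringify({ timestamp, tool: name, args, ok: false, error: String(err) })
      );
      throw err; // still surface the failure to the host
    }
  };
}
```

Wrapping each handler at registration time keeps the logging concern out of the tool logic itself.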

Sandboxing. Run MCP servers with restricted permissions. Use Docker containers, restricted user accounts, or OS-level sandboxing to limit blast radius if something goes wrong.
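One practical pattern is to launch the server in a container straight from the host config. The flags below drop capabilities and make the container filesystem read-only; the image name is hypothetical:

```json
{
  "mcpServers": {
    "weather": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--read-only", "--cap-drop", "ALL",
        "-e", "WEATHER_API_KEY",
        "my-org/weather-mcp:latest"
      ]
    }
  }
}
```

The `-i` flag matters: stdio transport needs the container's stdin kept open.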

When MCP Makes Sense (and When It Doesn't)

MCP is the right choice when you want to give AI applications structured access to external systems. It's particularly strong for:

  • Building tool-use capabilities into AI applications
  • Standardizing integrations across multiple AI providers
  • Creating reusable, shareable tool packages
  • Maintaining security boundaries between AI and sensitive systems

It's overkill if you just need to pass some text to an API. A simple function call or REST endpoint is fine for one-off integrations. MCP shines when you're building an ecosystem of tools that multiple AI applications can share.

Getting Started

The fastest path to understanding MCP is to use it:

  1. Install Claude Desktop or another MCP-compatible host
  2. Add the filesystem server to your config
  3. Point it at a project directory
  4. Ask the AI to explore and explain your codebase
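For step 2, a working config entry for the reference filesystem server (published on npm) looks like this; substitute the directory you want to expose:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```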

Once you see how naturally the AI uses tools through MCP, you'll start thinking about what other systems you want to connect. That's the protocol doing its job — fading into the background while making AI genuinely useful.

The MCP specification and SDKs are available on GitHub. TypeScript and Python SDKs are the most mature, with community SDKs for Rust, Go, and other languages growing quickly. Whatever you're building, MCP probably has a place in your stack.
