The Full Developer's Guide to Model Context Protocol

Yusuf Ishola · March 13, 2025

Anthropic's Model Context Protocol (MCP) is gaining massive traction as a game-changing standard for connecting Large Language Models (LLMs) to external data sources and tools.


Let's take a deep dive into exactly what MCP is and how you can begin using it in your projects.


What is Model Context Protocol (MCP)?

MCP is an open protocol that standardizes how applications provide context to LLMs.

Think of MCP like a "USB-C port for AI applications". Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to external data sources and tools.

MCP Architecture

The protocol follows a client-server architecture with three main components:

  1. Hosts: LLM applications (like Claude Desktop or IDEs) that initiate connections
  2. Clients: Components within hosts that maintain connections with servers
  3. Servers: Lightweight programs that expose specific capabilities through the protocol


For example, in a workflow using Claude to analyze company sales data stored in a PostgreSQL database:

  • The host is the Claude Desktop app
  • The client is the MCP client module within Claude Desktop that manages connections to MCP servers
  • The server is the PostgreSQL MCP server that securely connects to the company database

For details, check out the official documentation.
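Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch of what flows over the wire when a client connects, here is the shape of the initialize request it sends first (field values like the protocolVersion string are illustrative; check the spec for the current values):

```typescript
// Sketch of the JSON-RPC 2.0 request an MCP client sends on connect.
// The protocolVersion string evolves with the spec; "2024-11-05" is illustrative.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

function buildInitializeRequest(clientName: string, clientVersion: string): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05",
      capabilities: {},
      clientInfo: { name: clientName, version: clientVersion },
    },
  };
}

const req = buildInitializeRequest("claude-desktop", "1.0.0");
console.log(JSON.stringify(req));
```

The server replies with its own info and capabilities, after which the client can start listing and calling the server's resources, tools, and prompts.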

Why MCP Exists

Before MCP, developers had to build custom integrations from scratch whenever they wanted LLMs to access external systems (files, APIs, databases).

Each implementation was different, required significant code, and couldn't be reused across applications. MCP provides a standardized framework for these integrations that works consistently across implementations, with a common communication protocol between systems and AI tools.

Core Capabilities of MCP

MCP offers three primary capabilities:

  1. Resources: File-like data that LLMs can read (API responses, file contents)
  2. Tools: Functions that LLMs can call to perform actions
  3. Prompts: Pre-written templates that help users accomplish specific tasks
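As a rough sketch of how these three capabilities look to a client, here are simplified descriptor shapes (abridged from the spec's schema; real descriptors carry more optional fields, and the example tool is hypothetical):

```typescript
// Simplified descriptor shapes for MCP's three capability types.
// Abridged; the spec's schema defines additional optional fields.

interface ResourceDescriptor {
  uri: string;        // e.g. "file:///logs/app.log"
  name: string;
  mimeType?: string;
}

interface ToolDescriptor {
  name: string;
  description?: string;
  inputSchema: { type: "object"; properties?: Record<string, unknown> };
}

interface PromptDescriptor {
  name: string;
  description?: string;
  arguments?: { name: string; required?: boolean }[];
}

// A hypothetical tool descriptor a web-search server might advertise:
const searchTool: ToolDescriptor = {
  name: "web_search",
  description: "Search the web for a query",
  inputSchema: { type: "object", properties: { query: { type: "string" } } },
};
console.log(searchTool.name);
```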

What's the difference between MCP and function calling? 💡

Function calling is how LLMs decide what code to run when you ask them to perform tasks. Every LLM provider implements function calling differently in their APIs. MCP creates a standardized layer that sits on top of function calling—it handles discovering available functions, executing them securely, and returning results consistently regardless of which LLM you're using.
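Concretely: instead of each provider's bespoke function-call payload, an MCP client invokes any server's tool with the same `tools/call` request shape. A sketch (the id and tool name are illustrative):

```typescript
// A provider-agnostic tools/call request: the same shape works against any
// MCP server, regardless of which LLM decided to invoke the tool.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const call = buildToolCall(2, "web_search", { query: "Anthropic MCP" });
console.log(JSON.stringify(call));
```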

Getting Started with MCP

We'll walk you through three implementation approaches to getting started with MCP, based on your needs and experience level.

  • Basic 🟢: Using pre-built MCP servers with Claude Desktop
  • Intermediate 🟡: Building custom MCP servers with Cloudflare Workers
  • Advanced 🔴: Creating custom MCP clients and servers from scratch

Basic Setup: Connecting Pre-built MCP Servers (🟢)

Let's set up a simple MCP configuration to access file system and web search capabilities with Claude Desktop.

Step 1: Install Claude Desktop

Download and install Claude Desktop.

Step 2: Configure MCP Servers

Create or edit the configuration file at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add the following configuration:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your_brave_api_key_here"
      }
    }
  }
}

Step 3: Restart Claude Desktop

After saving the configuration, restart Claude Desktop for the changes to take effect.

Step 4: Test Your Setup

Now you can ask Claude to use these tools:

  • "Search the web for the latest news on Anthropic and save a summary to my documents folder."
  • "List all files in my allowed files directory."

When Claude needs to use an MCP tool, it will request your permission before proceeding.

Intermediate Setup: Building Custom MCP Servers with Cloudflare Workers (🟡)

Cloudflare Workers offer a simplified approach to building MCP servers. Let's create an image generation server:

Step 1: Set Up Cloudflare Worker and Configure to Support MCP

# Create a new Cloudflare project
npx create-cloudflare@latest mcp-imagegen
cd mcp-imagegen

# Install the workers-mcp package
npm install workers-mcp

# Configure to support MCP
npx workers-mcp setup

Step 2: Implement the Image Generation Server

Replace the content of src/index.ts with:

import { WorkerEntrypoint } from 'cloudflare:workers'
import { ProxyToSelf } from 'workers-mcp'

export default class ImageGenerator extends WorkerEntrypoint<Env> {
  /**
   * Generate an image using an AI model.
   * @param prompt {string} A text description of the image you want to generate.
   * @param steps {number} The number of diffusion steps; higher values can improve quality but take longer.
   * @return {Response} The generated image as a JPEG response.
   */
  async generateImage(prompt: string, steps: number = 30): Promise<Response> {
    const response = await this.env.AI.run('@cf/black-forest-labs/flux-1-schnell', {
      prompt,
      steps,
    });
    
    // Convert from base64 string
    const binaryString = atob(response.image);
    // Create byte representation
    const img = Uint8Array.from(binaryString, (m) => m.codePointAt(0)!);
    
    return new Response(img, {
      headers: {
        'Content-Type': 'image/jpeg',
      },
    });
  }

  /**
   * @ignore
   */
  async fetch(request: Request): Promise<Response> {
    return new ProxyToSelf(this).fetch(request)
  }
}
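The base64-to-bytes conversion in the worker above can be tried in isolation (`atob` is available globally in Workers and in modern Node):

```typescript
// Decode a base64 string into raw bytes, as the worker does with response.image.
function base64ToBytes(b64: string): Uint8Array {
  const binaryString = atob(b64);
  return Uint8Array.from(binaryString, (m) => m.codePointAt(0)!);
}

const bytes = base64ToBytes("AQID"); // base64 for the bytes 0x01 0x02 0x03
console.log(Array.from(bytes)); // → [1, 2, 3]
```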

Step 3: Deploy Your Worker

npm run deploy

Step 4: Configure Claude Desktop (Optional)

To add your newly created MCP server to Claude Desktop, for example, update your claude_desktop_config.json to include your Cloudflare Worker:

{
  "mcpServers": {
    "image-generator": {
      "command": "npx",
      "args": ["wrangler", "dev", "--local"]
    }
  }
}

Now you can ask Claude to generate images, and it will use your Cloudflare Worker to do so!

For more information on configuring MCPs with other Clients, such as Cursor, check out the documentation.

💡 Hot tip: Let LLMs do the heavy lifting

You can use AI tools like Claude to build MCP servers. Just dump useful info such as docs and code samples into Claude and have it create your MCP server for you!

Advanced Setup: Creating Custom MCP Clients (🔴)

Building an MCP client allows you to create a custom application to connect to MCP servers. For detailed instructions, visit Anthropic's official guide to learn how to build a client in Python, NodeJS, or Java.
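If you'd rather see the moving parts than reach straight for an SDK, here is a rough sketch of the stdio transport's framing: each JSON-RPC message is a single line of JSON written to the server process's stdin, and responses come back line-by-line on its stdout. This is a simplification; the official SDKs handle spawning the server, lifecycle, and error handling for you.

```typescript
// Minimal sketch of MCP's stdio framing: one JSON-RPC message per line.
// A real client would spawn the server (e.g. with node:child_process), write
// these strings to its stdin, and parse each stdout line as a JSON response.

let nextId = 1;

function encodeMessage(method: string, params: Record<string, unknown>): string {
  return JSON.stringify({ jsonrpc: "2.0", id: nextId++, method, params }) + "\n";
}

// The opening exchange a client would write:
const init = encodeMessage("initialize", {
  protocolVersion: "2024-11-05",
  capabilities: {},
  clientInfo: { name: "my-client", version: "0.1.0" },
});
const listTools = encodeMessage("tools/list", {});
console.log(init + listTools);
```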

Common MCPs and Their Use Cases

  • FileSystem (adoption: high): Access and manipulate files securely. Example commands: "Create a summary of the text files in my project folder." / "Find all Python files containing database connections."
  • GitHub Integration (adoption: medium): Access repositories and manage issues. Example commands: "Find all open issues related to authentication in my repository." / "Show me the recent commits to the main branch."
  • Database Integration (adoption: medium): Query databases securely. Example commands: "Show me the schema of the users table." / "Find all customers who made purchases in the last month."
  • Web Search (adoption: high): Search the web for information. Example commands: "Find recent articles about AI safety." / "Research the latest developments in quantum computing."

Where to Find MCPs

Here are several sources for finding pre-built MCP servers for your LLM applications:

  • The official MCP website and servers repository
  • Package registries such as NPM and PyPI
  • Glama AI
  • Smithery
  • Cursor Directory

UPDATE: OpenAI joins the MCP ecosystem‼️

OpenAI has announced MCP support for the OpenAI Agents SDK, with plans to extend this to the OpenAI API and ChatGPT desktop app in the 'coming months'. This is a significant development: it brings MCP's standardized approach to the leading LLM provider's ecosystem, making it easier to build servers that work seamlessly with OpenAI models, and it may nudge other providers to follow suit.

How to Debug and Troubleshoot MCPs

  1. Use the MCP Inspector - An interactive tool for directly testing MCP servers, their resources, tools, and prompts outside of any client application

  2. Use Claude Desktop's Developer Tools - Access Chrome DevTools within Claude Desktop to inspect client-side behavior, network requests, and console logs

  3. Analyze Logs - View detailed logs generated by MCP servers and Claude Desktop to identify connection issues and runtime errors

  4. Use the MCP CLI - Use command-line tools to inspect and test MCP servers and their capabilities

  5. Implement Server-Side Logging - Implement custom logging in your MCP server to track execution flow, input validation, and error states

  6. Use Network Analysis Tools - For HTTP-based MCP servers, use proxies and network analyzers to inspect the communication between clients and servers

  7. Perform Standalone Testing - Test MCP servers in isolation before integrating them with clients to identify server-specific issues

  8. Environment Validation - Verify that environment variables, file paths, and permissions are correctly configured
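For point 5, one gotcha worth knowing: stdio-based MCP servers use stdout for protocol messages, so diagnostic logs must go to stderr or they will corrupt the protocol stream. A minimal sketch of such a logger (the function names are my own):

```typescript
// Minimal server-side logger for a stdio MCP server.
// Protocol traffic owns stdout, so diagnostics must go to stderr.
function formatLog(level: "info" | "error", message: string): string {
  return `${new Date().toISOString()} [${level}] ${message}`;
}

function log(level: "info" | "error", message: string): void {
  process.stderr.write(formatLog(level, message) + "\n");
}

log("info", "tool web_search invoked");
```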

For details, check out the official documentation.

Easily Debug LLM Workflows with Helicone ⚡️

With MCP enabling powerful agent development, bugs are inevitable. Helicone helps you trace your agent’s actions effortlessly, pinpointing potential issues. Start debugging smarter today.

Conclusion

MCP represents a significant step forward in connecting AI models to external systems.

By providing a standardized way for LLMs to interact with data sources and tools, MCP makes it easier to build powerful, context-aware AI applications—including, of course, powerful agentic systems.


Frequently Asked Questions

What is MCP and how does it differ from function calling?

MCP (Model Context Protocol) is a standardized way for AI models to connect to data sources and tools. While function calling is about LLMs translating prompts into structured instructions, MCP standardizes how those instructions are executed across different systems.

Does MCP only work with Claude?

Currently, Claude is the primary AI assistant that supports MCP natively, but the protocol is open and designed to work with any LLM. More providers may adopt it in the future.

Can I build my own MCP servers?

Yes, you can build custom MCP servers using either the standard SDKs from Anthropic (available in TypeScript and Python) or simplified approaches like Cloudflare Workers.

Is MCP secure?

MCP includes security features like user-in-the-loop approval for tool execution and resource access. Server implementations should also implement proper validation and access controls.

Where can I find pre-built MCP servers?

Pre-built servers are available from various sources, including the MCP website, NPM, PyPI, Glama AI, Smithery, and Cursor Directory.


Questions or feedback?

Is the information out of date? Please raise an issue or contact us; we'd love to hear from you!