Saturday, July 19, 2025

MCP: The basics

What is MCP?

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.



Please note: each MCP client maintains a 1:1 connection with a single MCP server. A single client does not talk to multiple servers; instead, a host application that needs several servers runs one client per server.
 
In very simple terms, MCP provides a standard way for a server to expose tools, resources, and prompts to an LLM application.
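To make "exposing a tool" concrete, here is a sketch of the kind of tool definition an MCP server advertises to clients. The field names (`name`, `description`, `inputSchema`) follow the shape of an MCP `tools/list` result; the tool itself (`get_weather`) is a made-up example, not part of any real server.

```python
import json

# Illustrative tool definition, as an MCP server would advertise it.
# The schema for the tool's arguments is plain JSON Schema.
weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# A server returns this inside its response to a "tools/list" request:
tools_list_result = {"tools": [weather_tool]}
print(json.dumps(tools_list_result, indent=2))
```

Because the shape is standardized, any MCP client can discover this tool and hand it to the model without knowing anything about the server ahead of time.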
 

Why MCP?

While typical models (like Claude or ChatGPT) excel at responding to natural language, they have been constrained by their isolation from real-world data and systems.

The Model Context Protocol (MCP) addresses this challenge by providing a standardized way for LLMs to connect with external data sources and tools—essentially a “universal remote” for AI apps. Released by Anthropic as an open-source protocol, MCP builds on existing function calling by eliminating the need for custom integration between LLMs and other apps. This means developers can build more capable, context-aware applications without reinventing the wheel for each combination of AI model and external system. 


In short, the Model Context Protocol aims to extend the reach of LLMs beyond their inherent limitations. An LLM has a finite context window, and MCP mitigates that limitation by giving it a standardized way to pull in external context on demand instead of requiring everything up front.
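Under the hood, MCP messages are JSON-RPC 2.0. As a sketch of what "pulling in external context" looks like on the wire, here is a client request for a resource; `resources/read` is the MCP method name, while the file URI is a made-up example.

```python
import json

# A JSON-RPC 2.0 request asking an MCP server to read a resource.
# The host can then place the returned contents into the model's prompt --
# context the model could not have seen on its own.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "file:///notes/meeting.md"},  # illustrative URI
}

wire_message = json.dumps(request)
print(wire_message)
```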

Architecture

At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:

  • MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
  • MCP Clients: Protocol clients that maintain 1:1 connections with servers
  • MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
  • Local Data Sources: Your computer’s files, databases, and services that MCP servers can securely access
  • Remote Services: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
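The topology above can be sketched in a few lines of toy code. This is not the real SDK, just an illustration of the roles: one host, one client per server, and each client bound 1:1 to exactly one server. All class and tool names here are invented for the example.

```python
class MCPServer:
    """A lightweight program exposing specific capabilities."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # the capabilities this server exposes

class MCPClient:
    """Maintains a 1:1 connection with a single server."""
    def __init__(self, server):
        self.server = server
    def list_tools(self):
        return self.server.tools

class Host:
    """A host (e.g. Claude Desktop, an IDE) runs one client per server."""
    def __init__(self):
        self.clients = []
    def connect(self, server):
        self.clients.append(MCPClient(server))
    def all_tools(self):
        # The host aggregates capabilities across all connected servers.
        return [t for c in self.clients for t in c.list_tools()]

host = Host()
host.connect(MCPServer("filesystem", ["read_file", "write_file"]))
host.connect(MCPServer("github", ["create_issue"]))
print(host.all_tools())  # ['read_file', 'write_file', 'create_issue']
```

Note how the 1:1 client-server rule and "host connects to many servers" coexist: the host simply owns one client per server and merges what they offer.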

Why does it matter?

At first glance, MCP might seem like infrastructure plumbing. But in reality, it unlocks some major shifts in how we build with LLMs:
  • Tool interoperability: Want your model to use a calendar, database, and CRM? With MCP, these tools can all speak the same context language.
  • Composable agents: Context is now modular. You can plug in a user profile, domain-specific knowledge, or even dynamic system prompts — all cleanly separated.
  • Portability: Applications that conform to MCP aren’t locked into a specific LLM. Whether it’s OpenAI, Anthropic, or a local model, the context wiring stays the same.
  • Safer, more predictable AI: With structured context, you get less hallucination, more traceability, and clearer guardrails.
 
  



References

  • https://modelcontextprotocol.io/introduction 
  • https://www.descope.com/learn/post/mcp
  • https://cloud.google.com/blog/topics/developers-practitioners/build-and-deploy-a-remote-mcp-server-to-google-cloud-run-in-under-10-minutes