October 29, 2025
Practical Guide to the Model Context Protocol: Everything You Need to Know About Tools and MCP Servers
8-minute read

Nearly a year ago, Anthropic introduced the Model Context Protocol (MCP), a communication protocol for delivering contextual information to language models. Over time, it has become a standard in the AI ecosystem, both for applications that want to leverage language models more effectively and for people looking to extend their AI models' capabilities.
However, as the protocol gains ground, its spread also raises questions and creates some confusion. In this article, we'll clarify its various use cases and highlight the best practices to follow depending on your needs.
What is the MCP protocol for? What are the possible use cases? How do you use this protocol in your applications? What are its limitations? Which needs does it address?
Introduction
There's no denying that the use of Large Language Models (LLMs) is growing rapidly in every sector, from general-public AI chatbots like ChatGPT (which currently holds a near-monopoly) to highly customized integrations in complex applications, including coding assistants like Cursor, Zed, and others.
Yet, these powerful tools have long struggled with two major problems:
- Lack of context (e.g., database data, documents, etc.)
- Inability to perform actions (e.g., send an email, edit a document, create an event, etc.)
To address the first (context), solutions such as Retrieval-Augmented Generation (RAG) were introduced. You can think of RAG as a library that gives LLMs access to additional contextual information, alongside fine-tuning methods that train LLMs to specialize in a particular field. We won't cover those topics here, but if you're interested in how these solutions work, check out How Does RAG Work?.
Note: There is no standard for RAG implementations. Each integration must be tailored to its specific platform.
To tackle the second (the inability to perform actions), the idea of tools emerged.
What is a tool?
A tool is a function that an LLM can call to accomplish a specific task. For instance, a tool could send an email, update a document, create a calendar event, and so on. Calling a tool lets the model execute a specific function on a server and return a result, which is either displayed in the conversation or used as additional context for the LLM's reply. The typical flow looks like this:
- The user makes a request to the LLM
- The LLM analyzes the request and decides whether a tool can handle it
  - If not: the LLM replies directly
  - If so: the LLM calls the appropriate tool
- The server running the tool performs the requested action
- The server returns the result to the LLM or the user, completing the response
This diagram is simplified to illustrate how a tool works. In practice, a tool may involve more complex steps, multiple types of responses, or even interpretation by the LLM.
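To make this concrete, here is a minimal sketch of what a tool looks like from the host's side: a schema the LLM can read and a handler the server executes. The shape is illustrative rather than tied to any particular SDK, and the send_email tool is hypothetical.

```typescript
// A hypothetical tool definition: a name, a description the LLM reads,
// a JSON Schema for its parameters, and a handler that does the work.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
  handler: (args: Record<string, unknown>) => Promise<string>;
}

const sendEmail: ToolDefinition = {
  name: "send_email",
  description: "Sends an email to a recipient",
  inputSchema: {
    type: "object",
    properties: {
      to: { type: "string" },
      subject: { type: "string" },
      body: { type: "string" },
    },
    required: ["to", "subject", "body"],
  },
  // Stubbed handler: a real server would call an email service here.
  handler: async (args) => `Email sent to ${args.to}`,
};

// When the LLM decides to call a tool, the host dispatches by name
// and feeds the result back into the conversation.
async function dispatchToolCall(
  tools: ToolDefinition[],
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```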
Where does a tool run?
As shown in the illustration above, a tool is invoked by an LLM and runs on the server that implements it. That server handles the specific functionality and returns a result.
This is why "classic" tools cannot be directly executed by a remote agent (by "remote," we mean an agent that is neither hosted on the same server nor running in the same process as the tool's server). Tools are therefore extremely useful for applications that implement their own AI agent.
What is the MCP protocol?
As we saw earlier, the tool mechanism is quite straightforward, but it's limited by the fact that execution is confined to its environment: tools can only be called and run where they're hosted.
To expose tools (and more) to an AI agent, the Model Context Protocol (MCP) was created. MCP is a communication protocol that bridges AI agents to tools provided by an MCP server.
The protocol is structured around two main components:
- The MCP server, which implements and publishes the available tools
- The MCP client, which calls these tools and receives their results

Note: MCP servers can also expose resources and prompts in addition to tools. However, many remote MCP hosts don't support all of these features.
Communication between client and server follows the JSON-RPC standard, a lightweight way to send instructions (actions/procedures) between the two.
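To illustrate, here's roughly what a tool invocation looks like on the wire, shown as TypeScript object literals. The tool name and arguments are made up, but tools/call is the MCP method used for this exchange.

```typescript
// A `tools/call` request from the MCP client, as a JSON-RPC 2.0 message.
// The tool name and arguments are illustrative.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "send_email",
    arguments: { to: "jane@example.com", subject: "Hello", body: "Hi!" },
  },
};

// The server's reply carries the same id and the tool's result,
// here as a single text content block.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Email sent to jane@example.com" }],
  },
};
```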
Separating things in this way gives us flexibility: we can use the MCP protocol in various contexts, and access our tools from different environments.
MCP Hosts
In this article, we'll use the term MCP host for any entity that communicates both with an LLM and an MCP client. We'll distinguish between a remote MCP host and a local MCP host: an important distinction, as they have different constraints and capabilities.
Remote MCP host
A remote MCP host is often a web service offering an AI chatbot, like ChatGPT or Claude AI (web versions). To know whether an AI agent supports the MCP protocol, check its capabilities in the web UI: look for mentions of Connected apps or Connectors.
While MCP is becoming a standard in the AI ecosystem, not every web AI chatbot supports it yet.
Local MCP host
A local MCP host is an agent running in your own environment, on your own machine, so it can take actions there. Examples include coding assistants like Cursor and Zed, or desktop versions of conversational agents such as Claude Desktop.
The Different Types of Transports
There are various transport methods for MCP client-server communication. The best transport depends on your application's requirements and constraints.
Stdio
The stdio transport uses standard input/output (stdin/stdout) to enable local communication between the MCP client and server. With this approach, everything runs locally: the client and server are on the same machine, so the server can, for example, read or write files on the user's device.
This fits naturally with a local MCP host. Remote MCP hosts do not use this transport.
The stdio option is ideal for development assistants or desktop applications that require direct access to a user's local resources.
The MCP server is launched locally as a subprocess of the local MCP host.
Note: The stdio transport exposes native APIs to the local MCP host. Make sure your local MCP host comes from a trustworthy and reputable source.
If you're interested in implementing an MCP server with stdio transport, see Integrating MCP into a React App.
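In the meantime, here's a minimal sketch of a stdio server using the official TypeScript SDK (@modelcontextprotocol/sdk); the server name and the read_note tool are our own examples.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { z } from "zod";

const server = new McpServer({ name: "local-notes", version: "1.0.0" });

// A sample tool with local file access, exactly the kind of
// capability that makes stdio suited to local MCP hosts.
server.tool(
  "read_note",
  "Reads a note from the user's machine",
  { path: z.string() },
  async ({ path }) => ({
    content: [{ type: "text", text: await readFile(path, "utf-8") }],
  })
);

// The local MCP host spawns this process as a subprocess
// and talks to it over stdin/stdout.
await server.connect(new StdioServerTransport());
```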
Streamable HTTP
The streamable HTTP transport uses the HTTP protocol for MCP client-server communication. This lets you decouple the host and the server (hosting them separately), so both remote and local MCP hosts can use MCP servers.
The MCP server should be deployed on a web server and accessible via a URL; the host can then call its tools using HTTP requests. Streamable HTTP is ideal for remote MCP hosts.
For instance, in a SaaS context, you might host the MCP server on your own infrastructure (or create a /mcp endpoint on your API), allowing users to add a connector to their remote MCP host, such as Claude AI or ChatGPT, and use your tools.
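As a sketch of that scenario, here's what such a /mcp endpoint might look like with Express and the official TypeScript SDK, following its stateless pattern (one server/transport pair per request); the endpoint and server names are placeholders.

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server and transport per request,
  // with no session id tracked between calls.
  const server = new McpServer({ name: "saas-tools", version: "1.0.0" });
  // ...register your tools on `server` here...
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```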
Security
Sometimes you may need to authenticate or authorize access to your exposed tools. In these cases, you can secure access to your MCP server with an authentication token or API key. This transport supports anything from basic methods to more advanced ones such as OAuth 2.
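As a simple illustration, a hypothetical bearer-token middleware could sit in front of the /mcp endpoint from the previous sketch; the MCP_API_KEY variable is an assumption, and a real deployment might use OAuth 2 instead.

```typescript
import type { Request, Response, NextFunction } from "express";

// A hypothetical bearer-token check guarding the /mcp endpoint.
// Replace the static comparison with your real token or OAuth validation.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  if (token !== process.env.MCP_API_KEY) {
    res.status(401).json({ error: "Unauthorized" });
    return;
  }
  next();
}

// Usage: app.post("/mcp", requireAuth, async (req, res) => { ... });
```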
How can you use the MCP protocol in your applications?
Before working with MCP, it's essential to understand how it works and the options it offers. Next, define your application's needs and choose the transport that best suits your case.
If your app is a SaaS, you likely want to make your tools accessible to the general public. In this scenario, use the streamable HTTP transport to let users connect their remote MCP host, like Claude AI or ChatGPT, to your server.
Users with local MCP hosts can also use your service with this transport method.
If you build a script or standalone application, you'll want to use the stdio transport, allowing users to connect their local MCP host directly to your MCP server.
If your solution is strictly for internal use, you may not need MCP at all, unless you want to share tools across several internal AI agents. In that case, use the stdio transport for agent-server communication (when they're on the same machine).
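For that internal-agent case, here's roughly what the client side looks like with the official TypeScript SDK: the agent spawns the server as a subprocess and talks to it over stdio. The command, file name, and empty arguments are placeholders.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// The host spawns the MCP server as a subprocess; command and args
// point at whatever executable implements your server.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"],
});

const client = new Client({ name: "my-internal-agent", version: "1.0.0" });
await client.connect(transport);

// List the tools the server exposes, then call one.
const { tools } = await client.listTools();
const result = await client.callTool({
  name: tools[0].name,
  arguments: {}, // fill in according to the tool's input schema
});
```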
Protocol adoption
The effectiveness of an MCP setup depends on adoption by remote MCP hosts: if a remote host doesn't support the protocol, the MCP client can't use your server's tools. More and more remote MCP hosts are adopting the protocol, though the connectors feature is typically paywalled, as in ChatGPT or Claude AI. This paywall remains a key barrier to adoption by the general public.
Registry
There's an official registry of available MCP servers, which you can find on the Model Context Protocol website. It helps you discover MCP servers and use them within your AI agents.
If more and more companies adopt MCP servers, we can expect further registries to emerge, making it as easy for users to find and connect to MCP servers as traditional search engines made it to find websites. If that happens, being listed on those registries could become a significant business concern.
Conclusion
To sum up: the MCP protocol is becoming a standard that's changing the AI landscape. By enabling LLMs to access contextual data and perform real-world actions, it transforms language models from mere text generators into true assistants capable of acting on your behalf.
Choosing the right transport depends on your use case: stdio for development assistants and desktop apps, streamable HTTP for SaaS solutions and integration with remote hosts like Claude AI or ChatGPT.
Though remote MCP host support remains limited (and often behind a paywall), adoption is clearly accelerating.
At Premier Octet, we closely follow these developments. If you'd like to explore this technology for your business or have questions about implementation, feel free to contact us.


