Kritim Yantra
Aug 27, 2025
Ever shipped your first PHP project, only to watch it crawl in production because… caching, config, and surprise “works on my machine” moments? Been there. 👋
Now imagine a different headache: you’re wiring your app to a dozen APIs (search, DB, files, Slack, weather). Each one has its own auth, SDK, and edge cases. You glue it all together…and then a product manager asks to “also make it work inside chat.” 😵💫
Here’s the little secret: instead of hardwiring every service directly into your app or chatbot, you can expose them once—cleanly—through the Model Context Protocol (MCP). Then any MCP-aware client (CLI, desktop chat app, your own bot) can use them securely and consistently. Think “USB for AI tools”: plug in a server, any client can use it.
MCP (Model Context Protocol) is a standard for connecting clients (like chat UIs or bots) to servers that expose:
- Tools: functions a client can call with JSON arguments.
- Resources: data a client can read (files, records, documents).
- Prompts: reusable prompt templates.
Clients list what a server offers, call tools with JSON inputs, and receive structured results. No more bespoke, per-app plugins. One protocol, many integrations.
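Under the hood this is JSON-RPC 2.0. A `tools/call` exchange looks roughly like the sketch below, written as Python dicts — field names follow the MCP spec, but the values are illustrative:

```python
# Rough shape of an MCP tools/call exchange (JSON-RPC 2.0).
# Field names per the MCP spec; values are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "todo_add", "arguments": {"text": "buy milk"}},
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        # Results come back as a list of typed content parts
        "content": [{"type": "text", "text": '{"status": "ok", "count": 1}'}],
        "isError": False,
    },
}

print(request["method"])  # tools/call
```

The client never needs to know how `todo_add` is implemented — it only sees the tool's name, schema, and structured result.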
Why it’s awesome
- Standardized: a single way to list and call tools.
- Composable: connect multiple servers at once.
- Portable: the same server works in a CLI, a desktop chat app, or your own UI.
- Secure by design: clients decide what a server can access.
We’ll create:
1. A tiny MCP server (`server.py`) with a few everyday tools.
2. A minimal text-based MCP client (`client.py`) to list and call them.
The steps below use Windows commands (on macOS/Linux, swap `.venv\Scripts\activate` for `source .venv/bin/activate`).
You can use pip or the super-fast uv (recommended).

**Option A: uv (fastest)**

```shell
# 1) Install uv if you don’t have it
pip install uv

# 2) Create a project
uv init mcp-starter
cd mcp-starter
uv venv
.venv\Scripts\activate

# 3) Add MCP + HTTP client + dotenv
uv add "mcp[cli]" httpx python-dotenv
```

**Option B: pip**

```shell
py -m venv .venv
.venv\Scripts\activate
pip install "mcp[cli]" httpx python-dotenv
```
Pro Tip: `mcp[cli]` gives you handy dev tools like `mcp dev` for quick testing in the Inspector or terminal.
Create `server.py`:
```python
# server.py
from __future__ import annotations
import os, json, time
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("DailyHelper")

# DATA = Path.home() / ".mcp_demo_todos.json"
DATA = Path(__file__).parent / ".mcp_demo_todos.json"
if not DATA.exists():
    DATA.write_text("[]", encoding="utf-8")

def _load():
    return json.loads(DATA.read_text(encoding="utf-8"))

def _save(items):
    DATA.write_text(json.dumps(items, indent=2), encoding="utf-8")

@mcp.tool()
def time_now() -> str:
    """Get the current Unix time (seconds)."""
    return str(int(time.time()))

@mcp.tool()
def todo_add(text: str) -> dict:
    """Add a todo item."""
    items = _load()
    items.append({"id": len(items) + 1, "text": text, "done": False})
    _save(items)
    return {"status": "ok", "count": len(items)}

@mcp.tool()
def todo_list() -> list[dict]:
    """List all todo items."""
    return _load()

@mcp.tool()
def todo_done(id: int) -> dict:
    """Mark a todo as done by id."""
    items = _load()
    for it in items:
        if it["id"] == id:
            it["done"] = True
            _save(items)
            return {"status": "ok", "id": id}
    return {"status": "not_found", "id": id}

if __name__ == "__main__":
    mcp.run()
```
That’s a real MCP server with three tools you can use every day (time + simple todos). It uses FastMCP, the official Python SDK’s high-level server.
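Because the tool bodies are plain Python, you can sanity-check the todo logic outside MCP entirely. Here's a standalone sketch of the same add/done flow, with an in-memory list standing in for the JSON file:

```python
# Standalone sketch of the todo logic from server.py.
# An in-memory list replaces the JSON file on disk.
def todo_add(items: list[dict], text: str) -> dict:
    items.append({"id": len(items) + 1, "text": text, "done": False})
    return {"status": "ok", "count": len(items)}

def todo_done(items: list[dict], id: int) -> dict:
    for it in items:
        if it["id"] == id:
            it["done"] = True
            return {"status": "ok", "id": id}
    return {"status": "not_found", "id": id}

items: list[dict] = []
todo_add(items, "buy milk")
print(todo_done(items, 1))   # marks the first item done
print(todo_done(items, 99))  # unknown id falls through to not_found
```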
Create `client.py` (super simple, text-based):
```python
# client.py
import asyncio, json, sys
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

HELP = """Commands:
  tools               - list tools
  call <tool> <json>  - call a tool with JSON, e.g.
                        call todo_add {"text":"buy milk"}
  quit                - exit
"""

class MiniClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()

    async def connect(self, server_script: str):
        params = StdioServerParameters(command="python", args=[server_script])
        (read, write) = await self.exit_stack.enter_async_context(stdio_client(params))
        self.session = await self.exit_stack.enter_async_context(ClientSession(read, write))
        await self.session.initialize()

    async def list_tools(self):
        res = await self.session.list_tools()
        for t in res.tools:
            print(f"- {t.name}: {t.description}")

    async def call_tool(self, name: str, payload: dict):
        res = await self.session.call_tool(name, payload)
        # Convert each content part into plain JSON-serializable data
        output = []
        for c in res.content:
            if hasattr(c, "text"):  # TextContent
                output.append(c.text)
            else:
                # Fallback: stringify the whole object
                output.append(str(c))
        print(json.dumps(output, indent=2, ensure_ascii=False))

    async def run(self):
        print("Mini MCP Client. Type 'help' for commands.")
        while True:
            try:
                line = input("> ").strip()
            except EOFError:
                break
            if not line:
                continue
            if line == "help":
                print(HELP)
            elif line == "quit":
                break
            elif line == "tools":
                await self.list_tools()
            elif line.startswith("call "):
                try:
                    _, tool, rest = line.split(" ", 2)
                    payload = json.loads(rest)
                except Exception as e:
                    print(f"Usage: call <tool> <json> ({e})")
                    continue
                await self.call_tool(tool, payload)
            else:
                print("Unknown command. Type 'help'.")

    async def close(self):
        await self.exit_stack.aclose()

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py server.py")
        sys.exit(1)
    c = MiniClient()
    try:
        await c.connect(sys.argv[1])
        await c.run()
    finally:
        await c.close()

if __name__ == "__main__":
    asyncio.run(main())
```
Run them in two terminals:

```shell
# Terminal 1
.venv\Scripts\activate
python server.py
```

```shell
# Terminal 2
.venv\Scripts\activate
python client.py server.py
```

Then, in the client, try:

```
tools
call todo_add {"text":"buy milk"}
call todo_list {}
call todo_done {"id":1}
```
Why this matters: You just proved the protocol end-to-end with a text-based chat. The same server can be plugged into other MCP clients (e.g., GUI chat apps) without rewriting your tools.
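For instance, Claude Desktop registers stdio servers through a small entry in its `claude_desktop_config.json` — roughly like the sketch below (the server name and path are placeholders for your own setup):

```json
{
  "mcpServers": {
    "daily-helper": {
      "command": "python",
      "args": ["C:\\mcp-starter\\server.py"]
    }
  }
}
```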
Got a Perplexity API key? Let’s add a tool that asks the web and returns a concise answer.
Important note (as of Aug 27, 2025): Perplexity’s API is OpenAI-compatible for chat completions, but it doesn’t expose function-calling/tool-calls the way OpenAI/Anthropic do. We’ll just call Perplexity inside our MCP tool and return the text.
Install the OpenAI SDK with `pip install openai` (the official OpenAI SDK works with Perplexity by setting a different `base_url`). Then add to `server.py`:

```python
import os
from openai import OpenAI

PPLX_KEY = os.getenv("PPLX_API_KEY")
client = OpenAI(api_key=PPLX_KEY, base_url="https://api.perplexity.ai") if PPLX_KEY else None

@mcp.tool()
def ask_web(q: str, model: str = "sonar-pro") -> str:
    """Ask the web via Perplexity and return a short answer."""
    if not client:
        return "Set PPLX_API_KEY in your environment first."
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": q}]
    )
    return resp.choices[0].message.content
```
Then:

```shell
setx PPLX_API_KEY "your_key_here"
# restart terminal so the env var is visible, then:
python server.py

# In the client:
call ask_web {"q":"What is MCP in one sentence?"}
```
Perplexity is great for fresh, web-grounded answers, and this pattern lets any MCP client query the web through your server—without baking Perplexity logic into every UI.
Warning: Don’t put long-lived secrets directly in code. Use environment variables or a secrets manager and pass them in via the client’s install/dev tooling.
There’s a friendly MCP Inspector that lets you poke your server, see tools/resources, and run them interactively (great for debugging). You can launch dev mode via the MCP CLI and iterate quickly.
```shell
# Run your server in dev with helper tooling
uv run mcp dev server.py
```
Think of your app as a smart assistant in a workshop. MCP is the standardized pegboard on the wall: every tool (drill, saw, glue gun = API/database/LLM) hangs in a predictable spot. Any assistant (CLI, desktop chat, web bot) can grab the tool it needs without rummaging through boxes of incompatible adapters.
A realistic starter project:

- Personal helper: `todo_*`, `snippet_save/list`, `ask_web`.
- DevOps helper: `deploy_status`, `log_tail`, `pagerduty_ack`.
- Data access: `sql_query_readonly`, `s3_list`, `s3_get`.

All exposed via your MCP server. Use a desktop chat client for casual queries, and keep the text client around for quick terminal checks. As you add tools, every client instantly benefits.
Quick recap:

```shell
pip install "mcp[cli]"
python server.py
python client.py server.py
uv run mcp dev server.py   # if using uv
```

1) Do I need Claude or OpenAI to use MCP?
No. MCP is the protocol. You can test with the text client or the Inspector. If you want the model to choose tools automatically, you’ll typically use a model that supports tool calls (e.g., Anthropic’s Claude).
2) Can I use Perplexity?
Yes—for Q&A via your own tool (like `ask_web`). But Perplexity’s API doesn’t expose function-calling/tool-calls, so it won’t natively “decide” to call your tools for you. You can still route calls manually or build simple rules.
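A “simple rules” router can be as small as a few keyword checks that map user text to a tool name and JSON payload. A minimal sketch — the tool names come from our server, but the routing rules themselves are made up for illustration:

```python
# Minimal keyword router: maps free-form user text to an MCP
# tool name plus payload. Rules are illustrative, not exhaustive.
def route(text: str) -> tuple[str, dict]:
    t = text.lower().strip()
    if t.startswith("add "):
        return "todo_add", {"text": text[4:]}
    if "list" in t and "todo" in t:
        return "todo_list", {}
    if t.startswith("done "):
        return "todo_done", {"id": int(t.split()[1])}
    # Fall back to a web search for anything else
    return "ask_web", {"q": text}

print(route("add buy milk"))       # ('todo_add', {'text': 'buy milk'})
print(route("show my todo list"))  # ('todo_list', {})
print(route("done 2"))             # ('todo_done', {'id': 2})
```

Feed the returned pair straight into the client's `call_tool` and you have a crude but working "auto-tool-choice" layer on top of any chat model.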
3) Docker on Windows?
Absolutely. Put `server.py` in a lightweight Python image, expose it with MCP’s supported transports (stdio is simplest from the host; HTTP transport is also available). Keep secrets as environment variables.
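A minimal image sketch (filenames assume the project layout above; pin versions as you see fit):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY server.py .
RUN pip install --no-cache-dir "mcp[cli]" httpx python-dotenv
# stdio transport: the MCP client launches/attaches to this process
CMD ["python", "server.py"]
```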
Your turn: try adding one more tool—maybe `weather(city)`—and call it from the text client. What would you add next to your MCP toolbox?