Model Context Protocol (MCP) - 10 Module Tutorial for AI Developers
This tutorial introduces undergraduates to building AI agents and tool servers using the Model Context Protocol (MCP). It includes full code examples and a live Python demo for a simple AI use case.
Module 1: What is Model Context Protocol?
MCP defines how AI models communicate with external tools, memory, or other agents. It’s similar to how a web API allows two programs to interact, but tailored for AI systems that reason with context.
Module 2: MCP Architecture
MCP defines entities like models, tools, and schemas using structured JSON. Each model can call tools through a protocol that defines requests, responses, and context sharing.
{
  "tool": {
    "name": "calculator",
    "input_schema": {
      "a": "number",
      "b": "number"
    },
    "output_schema": {
      "result": "number"
    }
  }
}
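To make the schema concrete, the snippet below sketches (in plain Python) what a call against this calculator tool could look like. The envelope field names and the handler function are illustrative assumptions, not part of any official specification:

```python
# Hypothetical MCP-style tool call: the request carries the tool name and an
# input matching the input_schema; the handler validates and responds with a
# body matching the output_schema. Field names here are illustrative only.
request = {
    "tool": "calculator",
    "input": {"a": 2, "b": 3},
}

def handle_calculator(inp):
    """Check the input against the schema, then compute the result."""
    assert isinstance(inp["a"], (int, float)), "a must be a number"
    assert isinstance(inp["b"], (int, float)), "b must be a number"
    return {"result": inp["a"] + inp["b"]}

response = handle_calculator(request["input"])
print(response)  # {'result': 5}
```

The key idea is that both sides agree on the schema up front, so the model can construct valid inputs and interpret outputs without guessing.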
Module 3: Creating a Tool Server
# ==============================
# File: calculator_server.py
# Description: Simple MCP Tool Server Example
# ==============================
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/tool', methods=['POST'])
def tool():
    # Parse the JSON payload and apply the tool's operation.
    data = request.json
    a, b = data.get('a'), data.get('b')
    if a is None or b is None:
        return jsonify({'error': 'both "a" and "b" are required'}), 400
    return jsonify({'result': a + b})

if __name__ == '__main__':
    app.run(port=5000)
Module 4: Model Registration
The model advertises available tools to the MCP runtime using a registry file.
{
  "model": "SimpleModel",
  "tools": ["calculator"],
  "version": "1.0.0"
}
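At startup, a runtime could load this registry and verify that every advertised tool is actually available before routing any calls. A minimal sketch (the KNOWN_TOOLS table and the inline registry string are assumptions for illustration; a real runtime would read the registry from a file):

```python
import json

# Hypothetical table of tools this runtime actually serves.
KNOWN_TOOLS = {"calculator"}

# Inline copy of the registry file from above, for a self-contained example.
registry = json.loads("""
{
  "model": "SimpleModel",
  "tools": ["calculator"],
  "version": "1.0.0"
}
""")

# Refuse to start if the model advertises a tool we cannot serve.
missing = [t for t in registry["tools"] if t not in KNOWN_TOOLS]
if missing:
    raise RuntimeError(f"registry advertises unavailable tools: {missing}")
print(f"{registry['model']} v{registry['version']} ready: {registry['tools']}")
```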
Module 5: Connecting AI Model to Tool
# ==============================
# File: test_connection.py
# ==============================
import requests

payload = {"a": 5, "b": 3}
response = requests.post("http://localhost:5000/tool", json=payload)
print(response.json())  # expected: {'result': 8}
Module 6: AI Real Use Case — Sentiment Analysis Tool
We will train a lightweight model to classify text as positive or negative and expose it via MCP.
# ==============================
# File: sentiment_train.py
# ==============================
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
import joblib
texts = ["I love this!", "This is terrible", "Amazing work", "Horrible experience"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative
vec = CountVectorizer()
X = vec.fit_transform(texts)
model = MultinomialNB()
model.fit(X, labels)
joblib.dump((vec, model), 'sentiment.pkl')  # persist both for the server
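Before wiring the model into a server, it is worth a quick sanity check that the pipeline round-trips. The sketch below re-uses the same toy data, so reproducing the training labels is only a smoke test, not a measure of accuracy:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Same toy corpus as sentiment_train.py
texts = ["I love this!", "This is terrible", "Amazing work", "Horrible experience"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
model = MultinomialNB()
model.fit(vec.fit_transform(texts), labels)

# The model should at least reproduce the labels of phrases it was trained on.
for text, expected in zip(texts, labels):
    pred = model.predict(vec.transform([text]))[0]
    print(f"{text!r} -> {pred} (expected {expected})")
```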
Module 7: Serving the Model via MCP Tool
# ==============================
# File: sentiment_server.py
# ==============================
from flask import Flask, request, jsonify
import joblib

# Load the vectorizer and classifier saved by sentiment_train.py
vec, model = joblib.load('sentiment.pkl')
app = Flask(__name__)

@app.route('/analyze', methods=['POST'])
def analyze():
    text = request.json.get('text')
    X = vec.transform([text])
    pred = model.predict(X)[0]
    label = 'Positive' if pred == 1 else 'Negative'
    return jsonify({'sentiment': label})

if __name__ == '__main__':
    app.run(port=6000)
Module 8: Running the MCP Demo in Python
Follow these steps to run your full MCP demo locally:
- Install dependencies: pip install flask scikit-learn joblib requests
- Run python sentiment_train.py to train and save the model.
- Start the server: python sentiment_server.py
- Send a POST request from Python:
# ==============================
# File: test_sentiment.py
# ==============================
import requests
text = {"text": "I love AI"}
response = requests.post("http://localhost:6000/analyze", json=text)
print(response.json())
Output should look like:
{"sentiment": "Positive"}
Module 9: Integrating MCP with a Larger AI System
You can now link this MCP tool to a larger AI assistant framework, allowing agents to analyze text sentiment dynamically based on context.
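As a sketch of how an agent loop might route requests to registered tools, the dispatcher below maps tool names to callables. The dispatch table and the local stub are illustrative assumptions: in a real deployment the "sentiment" entry would wrap an HTTP POST to the running sentiment_server.py, but here it is stubbed locally so the example runs stand-alone:

```python
# Illustrative agent-side dispatcher (not part of MCP itself).

def sentiment_stub(payload):
    # Local stand-in for POST http://localhost:6000/analyze.
    text = payload["text"].lower()
    return {"sentiment": "Positive" if "love" in text else "Negative"}

# Hypothetical tool table an agent runtime might build from the registry.
TOOLS = {"sentiment": sentiment_stub}

def agent_step(tool_name, payload):
    """Route one tool request, mirroring how a runtime dispatches MCP calls."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](payload)

print(agent_step("sentiment", {"text": "I love AI"}))  # {'sentiment': 'Positive'}
```

An agent would call agent_step whenever its reasoning decides a tool is needed, feeding the response back into its context.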
Module 10: Summary & Next Steps
- Learn to expose new AI capabilities through MCP tools.
- Register and test tool endpoints via JSON schemas.
- Experiment by connecting this to LLM frameworks like Ollama or LangChain.