Thursday, 30 October 2025

#2 ABC's of MCP [Model Context Protocol]

MCP 12-Module Tutorial for AI Agents

Model Context Protocol (MCP) - 12 Module Tutorial for AI Developers

This tutorial introduces undergraduates to building AI agents and tool servers using the Model Context Protocol (MCP). It includes full code examples, a live Python demo, and an Ollama LLM integration.

(Existing 10 modules content here)

Module 11: Connecting MCP Sentiment Tool to Ollama

We’ll connect the MCP Sentiment Tool to Ollama so that a local LLM (like llama3 or mistral) can automatically query our sentiment API for context-aware analysis.

File: ollama_mcp_integration.py
# ==============================
# File: ollama_mcp_integration.py
# Description: Connect Ollama LLM with MCP Sentiment Tool
# ==============================
import requests

def query_ollama(prompt):
    # "stream": False makes /api/generate return a single JSON object;
    # by default it streams newline-delimited JSON chunks, which would
    # break response.json() below.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120
    )
    response.raise_for_status()
    output = response.json()
    return output.get('response', 'No response')

def analyze_sentiment(text):
    sentiment_resp = requests.post("http://localhost:6000/analyze", json={"text": text})
    return sentiment_resp.json().get('sentiment')

if __name__ == '__main__':
    user_input = "I love studying AI systems!"
    sentiment = analyze_sentiment(user_input)
    ollama_prompt = f"User said: '{user_input}' (Sentiment: {sentiment}). Reply empathetically."
    reply = query_ollama(ollama_prompt)
    print("LLM Response:", reply)

This Python script first calls the MCP sentiment API, then sends the text (and sentiment context) to the Ollama LLM running locally.
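Note that Ollama's /api/generate endpoint streams newline-delimited JSON chunks unless "stream": false is set in the request body. If you want token-by-token output instead of a single reply, you can consume that stream directly. A minimal sketch (the helper names are illustrative; the NDJSON shape with 'response' fragments and a final 'done': true chunk follows Ollama's documented streaming format):

```python
import json

def collect_stream(lines):
    """Join 'response' fragments from Ollama's NDJSON stream lines."""
    parts = []
    for line in lines:
        if not line:
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

def query_ollama_stream(prompt, model="llama3"):
    """Stream a full reply (requires a running Ollama server)."""
    import requests
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt},  # streaming is the default
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        return collect_stream(resp.iter_lines(decode_unicode=True))
```

You could also print each fragment as it arrives inside collect_stream for a typewriter-style display.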

Module 12: Building an MCP-Aware Conversational Agent

This agent extends the integration to automatically decide when to invoke the MCP sentiment tool and when to reply directly through the LLM.

File: conversational_agent.py
# ==============================
# File: conversational_agent.py
# Description: LLM-driven conversational agent using MCP sentiment API
# ==============================
import requests

def query_ollama(prompt):
    # "stream": False returns one JSON object instead of a stream of
    # newline-delimited chunks.
    res = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120
    )
    res.raise_for_status()
    return res.json().get('response', '')

def get_sentiment(text):
    res = requests.post("http://localhost:6000/analyze", json={"text": text})
    return res.json().get('sentiment', 'Neutral')

def chat_with_agent():
    print("🤖 MCP-Aware Agent Online. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        sentiment = get_sentiment(user_input)
        context_prompt = f"The user message: '{user_input}' (Sentiment: {sentiment}). Respond helpfully."
        response = query_ollama(context_prompt)
        print(f"Agent: {response}\n")

if __name__ == '__main__':
    chat_with_agent()

How it works:

  • Each message is first analyzed by the MCP Sentiment Tool.
  • The LLM receives both the message and the sentiment label.
  • The LLM responds more naturally by considering emotional tone.

To Run:

  1. Start your sentiment server: python sentiment_server.py
  2. Ensure Ollama is running locally with an LLM like llama3
  3. Run: python conversational_agent.py
  4. Chat interactively — the agent will classify and respond contextually!

© 2025 Model Context Protocol Tutorial. Created by ChatGPT (GPT-5).

#1 ABC's of MCP [Model Context Protocol]

MCP 10-Module Tutorial for AI Agents

Model Context Protocol (MCP) - 10 Module Tutorial for AI Developers

This tutorial introduces undergraduates to building AI agents and tool servers using the Model Context Protocol (MCP). It includes full code examples and a live Python demo for a simple AI use case.

Module 1: What is Model Context Protocol?

MCP defines how AI models communicate with external tools, memory, or other agents. It’s similar to how a web API allows two programs to interact, but tailored for AI systems that reason with context.
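Concretely, MCP messages are JSON-RPC 2.0 objects exchanged between a client and a tool server. A sketch of what a tool-invocation request looks like on the wire (method and field names follow the public MCP specification, but treat this as an illustration, not a complete client):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: ask a calculator tool to add two numbers.
print(make_tool_call(1, "calculator", {"a": 5, "b": 3}))
```

The server replies with a matching JSON-RPC response carrying the same id, which is how the model ties each result back to the request that produced it.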

Module 2: MCP Architecture

MCP defines entities like models, tools, and schemas using structured JSON. Each model can call tools through a protocol that defines requests, responses, and context sharing.

File: calculator_schema.json
{
  "tool": {
    "name": "calculator",
    "input_schema": {
      "a": "number",
      "b": "number"
    },
    "output_schema": {
      "result": "number"
    }
  }
}
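The schema above is an informal shorthand for teaching purposes. MCP tool definitions conventionally describe their inputs with JSON Schema (using a camelCase inputSchema field); the same calculator tool restated in that style, as a sketch:

```json
{
  "name": "calculator",
  "description": "Add two numbers",
  "inputSchema": {
    "type": "object",
    "properties": {
      "a": {"type": "number"},
      "b": {"type": "number"}
    },
    "required": ["a", "b"]
  }
}
```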

Module 3: Creating a Tool Server

File: calculator_server.py
# ==============================
# File: calculator_server.py
# Description: Simple MCP Tool Server Example
# ==============================
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/tool', methods=['POST'])
def tool():
    data = request.get_json()
    a, b = data.get('a'), data.get('b')
    # Reject malformed requests instead of raising a TypeError on a + b.
    if a is None or b is None:
        return jsonify({'error': "both 'a' and 'b' are required"}), 400
    return jsonify({'result': a + b})

if __name__ == '__main__':
    app.run(port=5000)

Module 4: Model Registration

The model advertises available tools to the MCP runtime using a registry file.

File: model_registry.json
{
  "model": "SimpleModel",
  "tools": ["calculator"],
  "version": "1.0.0"
}
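Before invoking a tool, a runtime can consult this registry to confirm the model actually advertises it. A small sketch (helper names are illustrative, not part of MCP):

```python
import json

def load_registry(path="model_registry.json"):
    """Load the model's tool registry from the file created in Module 4."""
    with open(path) as f:
        return json.load(f)

def has_tool(registry, tool_name):
    """Return True if the model advertises the named tool."""
    return tool_name in registry.get("tools", [])
```

A runtime would call has_tool(load_registry(), "calculator") before dispatching a calculator request, and fail fast otherwise.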

Module 5: Connecting AI Model to Tool

File: test_connection.py
# ==============================
# File: test_connection.py
# ==============================
import requests

payload = {"a": 5, "b": 3}
response = requests.post("http://localhost:5000/tool", json=payload)
print(response.json())  # Expected: {'result': 8}

Module 6: AI Real Use Case — Sentiment Analysis Tool

We will train a lightweight model to classify text as positive or negative and expose it via MCP.

File: sentiment_train.py
# ==============================
# File: sentiment_train.py
# ==============================
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
import joblib

texts = ["I love this!", "This is terrible", "Amazing work", "Horrible experience"]
labels = [1, 0, 1, 0]
vec = CountVectorizer()
X = vec.fit_transform(texts)
model = MultinomialNB()
model.fit(X, labels)
joblib.dump((vec, model), 'sentiment.pkl')
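As a quick sanity check of the pipeline before serving it, you can classify a held-out phrase. This sketch retrains inline on the same four examples (rather than loading sentiment.pkl) so that it runs standalone:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Same tiny corpus as sentiment_train.py.
texts = ["I love this!", "This is terrible", "Amazing work", "Horrible experience"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

def classify(text):
    """Map a sentence to 'Positive' or 'Negative' via the trained model."""
    return 'Positive' if model.predict(vec.transform([text]))[0] == 1 else 'Negative'

print(classify("I love it"))
```

With only four training sentences, expect the classifier to generalize poorly; it exists to demonstrate the serving pattern, not real sentiment analysis.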

Module 7: Serving the Model via MCP Tool

File: sentiment_server.py
# ==============================
# File: sentiment_server.py
# ==============================
from flask import Flask, request, jsonify
import joblib

vec, model = joblib.load('sentiment.pkl')
app = Flask(__name__)

@app.route('/analyze', methods=['POST'])
def analyze():
    text = request.json.get('text')
    X = vec.transform([text])
    pred = model.predict(X)[0]
    label = 'Positive' if pred == 1 else 'Negative'
    return jsonify({'sentiment': label})

if __name__ == '__main__':
    app.run(port=6000)

Module 8: Running the MCP Demo in Python

Follow these steps to run your full MCP demo locally:

  1. Install dependencies: pip install flask scikit-learn joblib requests
  2. Run python sentiment_train.py to train and save the model.
  3. Start the server: python sentiment_server.py
  4. Send a POST request from Python:
File: test_sentiment.py
# ==============================
# File: test_sentiment.py
# ==============================
import requests
text = {"text": "I love AI"}
response = requests.post("http://localhost:6000/analyze", json=text)
print(response.json())

Output should look like:

Output (JSON)
{"sentiment": "Positive"}

Module 9: Integrating MCP with a Larger AI System

You can now link this MCP tool to a larger AI assistant framework, allowing agents to analyze text sentiment dynamically based on context.
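One common pattern is a dispatcher that decides per message whether the sentiment tool is worth calling at all. A hypothetical sketch (the keyword heuristic and function names are illustrative, not part of MCP):

```python
import requests

SENTIMENT_URL = "http://localhost:6000/analyze"

def needs_sentiment(message):
    """Cheap heuristic: only analyze messages that carry opinion words."""
    opinion_words = {"love", "hate", "great", "terrible", "awful", "amazing"}
    return any(word in message.lower() for word in opinion_words)

def enrich(message):
    """Attach a sentiment label only when the message seems opinionated."""
    if not needs_sentiment(message):
        return {"text": message}
    resp = requests.post(SENTIMENT_URL, json={"text": message}, timeout=5)
    return {"text": message, "sentiment": resp.json().get("sentiment")}
```

A real agent would replace the keyword check with the LLM's own tool-selection step, but the shape is the same: gate the tool call, then merge its output into the context passed to the model.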

Module 10: Summary & Next Steps

  • Learn to expose new AI capabilities through MCP tools.
  • Register and test tool endpoints via JSON schemas.
  • Experiment by connecting this to LLM frameworks like Ollama or LangChain.

© 2025 Model Context Protocol Tutorial. Created by ChatGPT (GPT-5).
