Thursday, 30 October 2025

#2 ABC's of MCP [Model Context Protocol]

MCP 12-Module Tutorial for AI Agents

Model Context Protocol (MCP) - 12-Module Tutorial for AI Developers

This tutorial introduces undergraduates to building AI agents and tool servers using the Model Context Protocol (MCP). It includes full code examples, a live Python demo, and an Ollama LLM integration.

(Existing content of Modules 1–10 goes here.)

Module 11: Connecting MCP Sentiment Tool to Ollama

We’ll connect the MCP Sentiment Tool to Ollama so that a local LLM (such as llama3 or mistral) receives sentiment labels from our API as extra context for more empathetic, context-aware replies.

File: ollama_mcp_integration.py
# ==============================
# File: ollama_mcp_integration.py
# Description: Connect Ollama LLM with MCP Sentiment Tool
# ==============================
import requests

def query_ollama(prompt):
    # Ollama streams its reply by default; "stream": False asks for one JSON body.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json().get('response', 'No response')

def analyze_sentiment(text):
    # Ask the MCP sentiment server (from the earlier modules) for a label.
    sentiment_resp = requests.post(
        "http://localhost:6000/analyze", json={"text": text}, timeout=10
    )
    sentiment_resp.raise_for_status()
    return sentiment_resp.json().get('sentiment', 'Neutral')

if __name__ == '__main__':
    user_input = "I love studying AI systems!"
    sentiment = analyze_sentiment(user_input)
    ollama_prompt = f"User said: '{user_input}' (Sentiment: {sentiment}). Reply empathetically."
    reply = query_ollama(ollama_prompt)
    print("LLM Response:", reply)

This Python script first calls the MCP sentiment API, then sends the text (and sentiment context) to the Ollama LLM running locally.
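
The script assumes the sentiment server built in the earlier modules is listening on port 6000. If you are jumping straight into this module, here is a minimal stand-in with the same interface (POST /analyze accepting {"text": ...} and returning {"sentiment": ...}). The file name, the use of Flask, and the keyword lists are all placeholder assumptions; prefer the real MCP tool from Modules 1–10 when available.

File: sentiment_server_stub.py
# ==============================
# File: sentiment_server_stub.py
# Description: Minimal stand-in for the MCP sentiment server (illustrative only)
# ==============================
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tiny placeholder keyword lists; the real tool from the earlier modules is smarter.
POSITIVE = {"love", "great", "happy", "excellent", "good"}
NEGATIVE = {"hate", "bad", "sad", "terrible", "awful"}

@app.route("/analyze", methods=["POST"])
def analyze():
    text = request.get_json(force=True).get("text", "").lower()
    words = set(text.replace("!", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "Positive" if score > 0 else "Negative" if score < 0 else "Neutral"
    return jsonify({"sentiment": sentiment})

if __name__ == '__main__':
    app.run(port=6000)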

Module 12: Building an MCP-Aware Conversational Agent

This agent extends the integration into an interactive chat loop. The version below calls the MCP sentiment tool on every message before replying through the LLM; a variant in which the LLM itself decides when to invoke the tool is sketched after the "How it works" notes.

File: conversational_agent.py
# ==============================
# File: conversational_agent.py
# Description: LLM-driven conversational agent using MCP sentiment API
# ==============================
import requests

def query_ollama(prompt):
    # "stream": False returns one JSON body rather than newline-delimited chunks.
    res = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    res.raise_for_status()
    return res.json().get('response', '')

def get_sentiment(text):
    res = requests.post("http://localhost:6000/analyze", json={"text": text}, timeout=10)
    res.raise_for_status()
    return res.json().get('sentiment', 'Neutral')

def chat_with_agent():
    print("🤖 MCP-Aware Agent Online. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        sentiment = get_sentiment(user_input)
        context_prompt = f"The user message: '{user_input}' (Sentiment: {sentiment}). Respond helpfully."
        response = query_ollama(context_prompt)
        print(f"Agent: {response}\n")

if __name__ == '__main__':
    chat_with_agent()

How it works:

  • Each message is first analyzed by the MCP Sentiment Tool.
  • The LLM receives both the message and the sentiment label.
  • The LLM responds more naturally by considering emotional tone.
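
The loop above invokes the sentiment tool on every turn, which keeps the code simple. To get closer to Module 12's goal of letting the agent decide when the tool is worth calling, one option is to ask the LLM for a routing decision first. The sketch below is illustrative only: the file name routing_agent.py and the one-word YES/NO protocol are assumptions, not part of the original modules, and small local models may need a stricter prompt to answer reliably.

File: routing_agent.py
# ==============================
# File: routing_agent.py
# Description: Illustrative variant where the LLM decides when to call the sentiment tool
# ==============================
import requests

def query_ollama(prompt):
    res = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    res.raise_for_status()
    return res.json().get('response', '')

def get_sentiment(text):
    res = requests.post("http://localhost:6000/analyze", json={"text": text}, timeout=10)
    res.raise_for_status()
    return res.json().get('sentiment', 'Neutral')

def needs_sentiment(user_input):
    # Crude one-word routing decision made by the LLM itself.
    decision = query_ollama(
        "Answer with exactly one word, YES or NO. Would knowing the emotional tone "
        f"of this message help you reply well? Message: '{user_input}'"
    )
    return decision.strip().upper().startswith("YES")

def respond(user_input):
    if needs_sentiment(user_input):
        sentiment = get_sentiment(user_input)
        prompt = f"The user message: '{user_input}' (Sentiment: {sentiment}). Respond helpfully."
    else:
        prompt = f"The user message: '{user_input}'. Respond helpfully."
    return query_ollama(prompt)

if __name__ == '__main__':
    print(respond("I failed my exam today."))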

To Run:

  1. Start your sentiment server: python sentiment_server.py
  2. Ensure Ollama is running locally with a pulled model (e.g. ollama pull llama3)
  3. Run: python conversational_agent.py
  4. Chat interactively; the agent will classify each message and respond contextually. (A streaming variant of query_ollama is sketched below.)
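
A practical note on step 4: query_ollama waits for the complete reply before printing anything, so long answers can feel sluggish. Ollama's /api/generate endpoint streams newline-delimited JSON by default, and the sketch below (a drop-in replacement for query_ollama, assuming the same endpoint and model) prints tokens as they arrive.

import json
import requests

def query_ollama_streaming(prompt):
    # Without "stream": False, Ollama sends one JSON object per line as it generates.
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt},
        stream=True,
        timeout=120,
    ) as res:
        res.raise_for_status()
        parts = []
        for line in res.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            token = chunk.get('response', '')
            print(token, end='', flush=True)
            parts.append(token)
            if chunk.get('done'):
                break
        print()
        return ''.join(parts)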

© 2025 Model Context Protocol Tutorial. Created by ChatGPT (GPT-5).
