Draft Project Overview #10

Open
dazzaji opened this issue Dec 11, 2024 · 0 comments
dazzaji commented Dec 11, 2024

Project Overview

The mcp-agent-router project demonstrates a multi-agent system using the Model Context Protocol (MCP). It features a gateway agent that routes tasks to specialized downstream servers based on user input. The current implementation includes two downstream servers: server-a (personal health trainer) and server-b (professional work assistant). All servers and agents leverage Claude as their LLM.

How It Works

  1. User Input: The user provides a query or task related to either personal health or professional productivity.

  2. Gateway Agent: The gateway agent (gateway-agent/service.py) receives the user input and determines which downstream server is most appropriate based on keywords in the input. The current logic directs health-related queries (containing keywords like "weight," "sleep," or "exercise") to server-a and professional productivity queries (containing keywords like "work," "meeting," or "deadline") to server-b (see the routing sketch after this list).

  3. Downstream Servers: The selected downstream server (server-a or server-b) receives the task from the gateway agent. Each server loads a pre-defined user profile (user_profile.json), incorporates it as context in its Claude prompt, and sends the prompt to Claude (see the server sketch after this list).

  4. LLM Response: Claude generates a response based on the user input and the user profile.

  5. Return to User: The downstream server's response is returned to the gateway agent, and finally back to the user.
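
To make step 2 concrete, here is a minimal sketch of the kind of keyword routing the gateway performs. The helper names, the /task endpoint, the request/response JSON shape, and the fallback choice are assumptions for illustration; the actual logic lives in gateway-agent/service.py.

```python
# Hypothetical routing sketch (not the repository code): pick a downstream
# server from keywords in the user input, then forward the task over HTTP.
import requests

HEALTH_KEYWORDS = {"weight", "sleep", "exercise"}
WORK_KEYWORDS = {"work", "meeting", "deadline"}

SERVER_A_URL = "http://localhost:5000"  # personal health trainer
SERVER_B_URL = "http://localhost:5001"  # professional work assistant


def route(user_input: str) -> str:
    """Return the URL of the downstream server that should handle the input."""
    text = user_input.lower()
    if any(keyword in text for keyword in HEALTH_KEYWORDS):
        return SERVER_A_URL
    if any(keyword in text for keyword in WORK_KEYWORDS):
        return SERVER_B_URL
    # Fallback is an assumption; the real gateway may handle this case differently.
    return SERVER_B_URL


def handle(user_input: str) -> str:
    """Forward the task to the chosen server and return its reply to the user."""
    response = requests.post(f"{route(user_input)}/task", json={"input": user_input})
    response.raise_for_status()
    return response.json()["response"]
```

Steps 3 and 4 can be sketched the same way. The snippet below shows a plausible shape for a downstream server: load user_profile.json, fold it into the system prompt, and return Claude's reply. The route name, JSON shape, and model ID are assumptions, not the repository's actual code.

```python
# Hypothetical downstream server sketch in the style of server-a:
# user profile as context plus a single Claude call per request.
import json

from anthropic import Anthropic
from dotenv import load_dotenv
from flask import Flask, jsonify, request

load_dotenv()  # reads ANTHROPIC_API_KEY from the local .env file

app = Flask(__name__)
client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

with open("user_profile.json") as f:
    USER_PROFILE = json.load(f)


@app.route("/task", methods=["POST"])
def task():
    user_input = request.get_json()["input"]
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # model ID is an assumption
        max_tokens=1024,
        system=(
            "You are a personal health trainer. Use this user profile as context:\n"
            + json.dumps(USER_PROFILE)
        ),
        messages=[{"role": "user", "content": user_input}],
    )
    return jsonify({"response": message.content[0].text})


if __name__ == "__main__":
    app.run(port=5000)  # server-b would listen on port 5001
```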

Running the Project

Prerequisites

  • Python 3.11: Ensure Python 3.11 is installed and used for the virtual environment.
  • Required Packages: Install the necessary packages:

        pip install flask anthropic requests python-dotenv uv mcp

  • Anthropic API Key: Set your Anthropic API key in .env files in the root, server-a, and server-b directories (a quick verification sketch follows this list):

        ANTHROPIC_API_KEY=<your-key>

  • User Profile Data: Create user_profile.json files in both the server-a and server-b directories (see the examples in the repository).
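
If you want to confirm the key is visible before starting anything, a quick check like the one below works. This is just a convenience sketch run from a directory containing a .env file; it is not part of the repository.

```python
# Optional sanity check (not repository code): confirm python-dotenv can
# find ANTHROPIC_API_KEY in the .env file of the current directory.
import os

from dotenv import load_dotenv

load_dotenv()
assert os.getenv("ANTHROPIC_API_KEY"), "ANTHROPIC_API_KEY not found in .env"
print("Anthropic API key loaded.")
```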

Running the Servers

  1. Start Servers: Open separate terminal windows or tabs for each server. Activate the virtual environment in each and run:

        # Server A (port 5000)
        python server-a/server.py

        # Server B (port 5001)
        python server-b/server.py

  2. Start Gateway Agent: In another terminal with an activated virtual environment, run:

        uv run gateway-agent/service.py