Project Overview
The mcp-agent-router project demonstrates a multi-agent system using the Model Context Protocol (MCP). It features a gateway agent that routes tasks to specialized downstream servers based on user input. The current implementation includes two downstream servers: server-a (personal health trainer) and server-b (professional work assistant). All servers and agents use Claude as their LLM.
How It Works
User Input: The user provides a query or task related to either personal health or professional productivity.
Gateway Agent: The gateway agent (gateway-agent/service.py) receives the user input. It determines which downstream server is most appropriate based on keywords present in the input. The current logic directs health-related queries (containing keywords like "weight," "sleep," "exercise") to server-a and professional productivity queries (containing keywords like "work," "meeting," "deadline") to server-b. A sketch of this routing logic follows these steps.
Downstream Servers: The selected downstream server (server-a or server-b) receives the task from the gateway agent. Each server loads a pre-defined user profile (user_profile.json) and incorporates it as context in its Claude prompt. The server then sends the prompt to Claude. A sketch of this server-side handling also follows these steps.
LLM Response: Claude generates a response based on the user input and the user profile.
Return to User: The downstream server's response is returned to the gateway agent, and finally back to the user.
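The routing decision in step 2 can be pictured as a simple keyword check. The Python sketch below is illustrative only and is not the actual contents of gateway-agent/service.py; the keyword lists come from the description above, and the fallback behaviour is an assumption.

```python
# Illustrative keyword routing (not the actual gateway-agent/service.py).
HEALTH_KEYWORDS = {"weight", "sleep", "exercise"}
WORK_KEYWORDS = {"work", "meeting", "deadline"}

def route(user_input: str) -> str:
    """Pick the downstream server for a piece of user input."""
    text = user_input.lower()
    if any(kw in text for kw in HEALTH_KEYWORDS):
        return "server-a"  # personal health trainer
    if any(kw in text for kw in WORK_KEYWORDS):
        return "server-b"  # professional work assistant
    return "server-b"      # assumed default when no keyword matches

# Example: route("How can I improve my sleep?") returns "server-a".
```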
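On the server side, steps 3 and 4 amount to: load user_profile.json, fold it into the prompt as context, and call Claude. The sketch below uses the anthropic Python SDK and is a hedged approximation; the function name, system prompt, and model id are assumptions rather than code from the repository.

```python
import json

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def handle_task(task: str) -> str:
    # Load the pre-defined profile shipped with the server.
    with open("user_profile.json") as f:
        profile = json.load(f)

    # Pass the profile to Claude as context alongside the user's task.
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model id is an assumption
        max_tokens=1024,
        system="You are a personal health trainer. User profile: "
               + json.dumps(profile),
        messages=[{"role": "user", "content": task}],
    )
    return message.content[0].text
```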
Running the Project
Prerequisites
Python 3.11: Ensure Python 3.11 is installed and used for the virtual environment.
.env files: Create .env files in the root, server-a, and server-b directories.
user_profile.json files: Create user_profile.json files in both the server-a and server-b directories (see the examples in the repository).
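The exact contents of these files are not shown in this excerpt. As a rough guide, the .env files typically hold the Anthropic API key (the variable name below is the SDK's default and an assumption for this project), and user_profile.json holds whatever background the server should treat as context; the structure below is purely illustrative.

```
ANTHROPIC_API_KEY=your-api-key-here
```

```json
{
  "name": "Alex",
  "age": 34,
  "goals": ["sleep 8 hours", "exercise 3x per week"]
}
```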
Running the Servers
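Start-up commands are not included in this excerpt. Assuming each component is a plain Python script run from the Python 3.11 virtual environment, and that the downstream servers follow the same service.py layout as the gateway agent (an assumption), launching the system would look roughly like:

```bash
# Illustrative only; entry-point filenames for server-a and server-b are assumed.
python server-a/service.py
python server-b/service.py
python gateway-agent/service.py
```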