Making gateway agent run LLM call to judge routing instead of hard-wired decision #9

Open
dazzaji opened this issue Dec 11, 2024 · 0 comments

Comments

@dazzaji
Owner

dazzaji commented Dec 11, 2024

Adding a Claude API Call for Routing

To use Claude for routing, add a new MCP tool to the gateway agent that sends the user input to Claude and asks it to decide the route. Here's how:

  1. New Tool in Gateway Agent: Add a new tool to gateway-agent/service.py. This tool will receive the user input and call Claude to determine the routing.

  2. Prompt for Claude: Create a prompt to send to Claude. The prompt should ask Claude to classify the user input as either "personal-trainer" or "work-assistant".

  3. Process Claude's Response: Receive Claude's response and extract the routing decision.

  4. Route the Task: Based on Claude's classification, call the appropriate downstream server using the method in your current code.

Example Code Snippet (gateway-agent/service.py):

# ... existing code (make sure `sys`, `anthropic`, and `mcp.types.TextContent` are imported at the top of the file)

@server.call_tool()  # new routing tool handler
async def claude_route(name: str, arguments: dict) -> list[TextContent]:
    if name != "route_by_claude":
        raise ValueError(f"Unknown tool: {name}")
        
    user_input = arguments.get("user_input")
    if not user_input:
        raise ValueError("Missing 'user_input' argument.")

    claude_prompt = f"""Classify the following user query as either "personal-trainer" or "work-assistant". Reply with only the matching label.

    User Query: {user_input}"""

    # Replace with your actual call to Claude
    try:
        claude_response = get_claude_response(claude_prompt) 
        print(f"Claude response: {claude_response}", file=sys.stderr) # Log Claude's decision

        if "personal-trainer" in claude_response.lower():
            target_server = SERVER_A
        elif "work-assistant" in claude_response.lower():
            target_server = SERVER_B
        else:
            # Fall back to the work assistant if Claude's reply matches neither label
            target_server = SERVER_B

        result = ask_mcp_server(target_server["url"], target_server["tool_name"], user_input)
        return [TextContent(type="text", text=result)]
        
    except Exception as e:
        print(f"Error during Claude routing: {e}", file=sys.stderr)
        return [TextContent(type="text", text="Error during routing")]

# Helper that asks Claude for the classification.
# Requires `import anthropic` and an ANTHROPIC_API_KEY in the environment.
def get_claude_response(prompt: str) -> str:
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-haiku-20241022",  # or whichever model you prefer
        max_tokens=30,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


# ... rest of existing code
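
The gateway's MCP client will only see the new tool if it is also advertised from the server's list_tools handler. A minimal sketch, assuming the MCP Python SDK's Tool type (`from mcp.types import Tool`); the description and input schema below are illustrative, and if service.py already defines a list_tools handler, add the entry there instead of creating a second handler:

@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="route_by_claude",
            description="Use Claude to decide whether a query goes to the personal-trainer or the work-assistant server.",
            inputSchema={
                "type": "object",
                "properties": {
                    "user_input": {"type": "string", "description": "The user query to route."}
                },
                "required": ["user_input"],
            },
        )
    ]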

This covers the core change needed for the mcp-agent-router gateway agent to route via an LLM call instead of a hard-wired decision. Consult the MCP documentation for more detail on registering and calling tools.
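
For a quick end-to-end check, a client-side script along these lines could exercise the new tool, assuming the gateway agent runs over stdio and the MCP Python SDK is installed; the launch command, file path, and example query are assumptions, not taken from the repo:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed launch command for the gateway agent; adjust to match the actual setup.
    params = StdioServerParameters(command="python", args=["gateway-agent/service.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "route_by_claude",
                {"user_input": "Put together a strength workout plan for this week"},
            )
            print(result.content)  # expect a reply routed via the personal-trainer server

asyncio.run(main())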
