mcp-eval includes specialized Claude subagents that help you write, debug, and optimize tests. These subagents are AI assistants with deep knowledge of mcp-eval patterns and best practices.

Available Subagents

mcp-eval ships with several specialized subagents in src/mcp_eval/data/subagents/. You can view and copy the complete definitions below. Save these as .md files in your .claude/agents directory:

Test Writer

Expert at writing comprehensive mcp-eval tests in all styles (decorator, pytest, dataset). View the complete MCP-Eval Test Writer subagent definition.

Test Generator

Generates complete test suites with diverse scenarios and comprehensive coverage. View the complete MCP-Eval Test Generator subagent definition.

Debugger

Expert at debugging test failures, analyzing OTEL traces, and troubleshooting configuration issues. View the complete MCP-Eval Debugger subagent definition.

Config Expert

Expert at configuring mcp-eval and managing mcpeval.yaml files for optimal performance. View the complete MCP-Eval Config Expert subagent definition.

Setup

For Claude Code:
mkdir -p .claude/agents
cp path/to/mcp_eval/data/subagents/*.md .claude/agents/
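
If you installed mcp-eval from PyPI rather than working from a source checkout, the packaged subagent files live inside the installed package. Assuming mcp-eval is importable in your active environment, you can locate them with:
python -c "import mcp_eval, os; print(os.path.join(os.path.dirname(mcp_eval.__file__), 'data', 'subagents'))"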

Using Subagents in Claude Code

Once configured, Claude Code will automatically discover and use these subagents when appropriate. You can also explicitly request them:

Writing Tests

"Use the mcp-eval-test-writer subagent to create comprehensive tests for my fetch server"

Debugging Failures

"Use the mcp-eval-debugger subagent to help me understand why my tests are failing"

Configuration Help

"Use the mcp-eval-config-expert subagent to set up my mcpeval.yaml correctly"

Using Subagents for Test Generation

The test generation subagents work together to create high-quality tests:
  1. test-scenario-designer - Designs comprehensive test scenarios
  2. test-assertion-refiner - Enhances assertions for better coverage
  3. test-code-emitter - Generates syntactically correct Python code
These can be used manually or integrated into the mcp-eval generate workflow.
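
If you prefer the automated path, the mcp-eval CLI exposes a generate command that drives these roles for you. The available options vary by version, so list them before relying on any specific flag:
mcp-eval generate --help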

Subagent Examples

Test Writer Example

The mcp-eval-test-writer subagent can help create tests in any style:
# Decorator style
# (import paths are indicative; adjust them to your installed mcp-eval version)
from mcp_eval import task, Expect

@task("Fetch and validate")
async def test_fetch_validate(agent: Agent, session: Session):
    response = await agent.generate_str("Fetch example.com")
    await session.assert_that(
        Expect.tools.was_called("fetch"),
        response=response,
    )

# Pytest style
# (mcp_agent and mcp_session are fixtures provided by mcp-eval's pytest integration)
import pytest

@pytest.mark.asyncio
async def test_fetch_with_error(mcp_agent, mcp_session):
    response = await mcp_agent.generate_str("Fetch invalid-url")
    await mcp_session.assert_that(
        Expect.content.contains("error"),
        response=response,
    )
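
The subagent can also emit dataset-style suites. The snippet below is a rough sketch only, assuming a Dataset/Case API with evaluator objects along the lines described in the mcp-eval docs; verify the exact class names and signatures against your installed version:
# Dataset style (rough sketch; class names and signatures may differ)
from mcp_eval import Case, Dataset
from mcp_eval.evaluators import ResponseContains, ToolWasCalled

dataset = Dataset(
    name="Fetch server smoke tests",
    cases=[
        Case(
            name="fetch_example_domain",
            inputs="Fetch https://example.com and summarize it",
            evaluators=[ToolWasCalled("fetch"), ResponseContains("Example Domain")],
        ),
    ],
)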

Debugger Example

The mcp-eval-debugger helps diagnose issues:
  • Analyzes OTEL traces to find performance bottlenecks
  • Identifies assertion failures and suggests fixes
  • Troubleshoots configuration problems
  • Explains error messages and stack traces

Config Expert Example

The mcp-eval-config-expert helps with configuration:
# Optimized configuration for parallel execution
execution:
  max_concurrency: 10
  timeout_seconds: 60
  fail_fast: true

agents:
  definitions:
    - name: "fetch_agent"
      provider: anthropic
      model: claude-3-5-sonnet-20241022
      instruction: "You are a helpful assistant that can fetch URLs"
      server_names: ["fetch"]

Best Practices

  1. Use the right subagent for the task - Each subagent is specialized for specific aspects of mcp-eval
  2. Combine subagents - Use multiple subagents together for complex tasks
  3. Provide context - Give subagents information about your server’s capabilities
  4. Review generated code - Subagents provide excellent starting points, but review and customize as needed
  5. Keep subagents updated - Pull the latest mcp-eval version for improved subagents

Integration with mcp-agent

If you’re using mcp-agent, these subagents are compatible with its agent loading system. Configure your mcp-agent.config.yaml to include the mcp-eval subagents search path.
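
For example, a hypothetical mcp-agent.config.yaml fragment might point the loader at the directory you copied the subagents into. The key names below are illustrative only, not the confirmed mcp-agent schema; check the mcp-agent documentation for the exact fields:
# Hypothetical sketch: field names are illustrative only
agents:
  search_paths:
    - ".claude/agents"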

Contributing Subagents

To contribute new subagents:
  1. Create a markdown file following the format in src/mcp_eval/data/subagents/
  2. Include the frontmatter with name, description, and tools (see the sketch after this list)
  3. Write clear instructions for the subagent’s expertise
  4. Test the subagent with real mcp-eval tasks
  5. Submit a pull request
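
As a rough sketch, a subagent file uses the same frontmatter-plus-instructions layout as other Claude Code agents; the field values below are illustrative placeholders:
---
name: mcp-eval-my-subagent
description: When Claude should delegate to this subagent, in one line
tools: Read, Write, Bash
---

You are an expert at <a specific mcp-eval task>. When invoked, gather the
relevant context, apply mcp-eval best practices, and report your results.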