mcp-eval is built on mcp-agent — a simple, composable framework for building effective agents with Model Context Protocol using patterns from Anthropic’s Building Effective Agents guide.
## Understanding mcp-agent
mcp-agent provides the core agent infrastructure that powers mcp-eval's testing capabilities:

- Simple & Composable: Build agents using proven patterns like Parallel, Router, Evaluator-Optimizer, and Swarm workflows
- Full MCP Support: Agents can use any MCP tools, resources, and prompts from connected servers
- Production-Ready: The same agent patterns you test can be deployed to production
- Model-Agnostic: Works with OpenAI, Anthropic, and other LLM providers

mcp-eval tests can exercise the full range of mcp-agent capabilities:
- Multi-server connections
- Complex workflow orchestration
- Human-in-the-loop interactions
- Durable execution with state management

Learn more:
- mcp-agent Documentation
- mcp-agent Examples - Production-ready agent patterns
- Building Effective Agents - Anthropic’s guide
## Ways to define agents
- Config‑defined AgentSpec (root agents or discovered subagents)
- Programmatic AgentSpec
- Programmatic Agent instance
- Programmatic AugmentedLLM
- Factory (safe for parallel tests)
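
Why a factory is the safe choice for parallel tests: sharing one agent instance leaks conversation state across tests, while a factory builds a fresh agent per test. A plain-Python illustration of the principle (`FakeAgent`, `run_test`, and `agent_factory` are stand-ins, not the mcp-eval API):

```python
class FakeAgent:
    """Stand-in for an agent that accumulates conversation state."""
    def __init__(self):
        self.history = []

def run_test(agent, prompt):
    # Each "test" sends one prompt and inspects the agent's state.
    agent.history.append(prompt)
    return len(agent.history)

# Sharing one instance: the second test sees the first test's state.
shared = FakeAgent()
assert run_test(shared, "test 1") == 1
assert run_test(shared, "test 2") == 2   # leaked state from test 1

# A factory builds a fresh agent per test, so tests stay isolated
# even when they run concurrently.
def agent_factory():
    return FakeAgent()

assert run_test(agent_factory(), "test 1") == 1
assert run_test(agent_factory(), "test 2") == 1  # isolated
```

The same reasoning applies to shared `Agent` or `AugmentedLLM` instances: any mutable per-conversation state makes a shared instance unsafe under parallel execution.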
## Decorator order

When combining decorators, place `@with_agent(...)` above `@task(...)`.
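
The order matters because Python applies stacked decorators bottom-up: the decorator nearest the function runs first, and the one above it wraps the result. A plain-Python sketch of that mechanism (these dummy decorators are illustrative, not the real `@task`/`@with_agent`):

```python
applied = []

def task(fn):
    # Dummy stand-in: records that it ran, returns the function unchanged.
    applied.append("task")
    return fn

def with_agent(fn):
    applied.append("with_agent")
    return fn

@with_agent   # applied second: wraps what @task produced
@task         # applied first: closest to the function
def my_test():
    pass

# Bottom-up application order: task first, then with_agent.
assert applied == ["task", "with_agent"]
```

So placing `@with_agent(...)` on top means the task is constructed first and the agent binding wraps it, rather than the other way around.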
## Discovery

Define AgentSpecs in `mcp-agent.config.yaml` or enable subagent search paths. Reference a spec by name with `use_agent("SpecName")` or `@pytest.mark.mcp_agent("SpecName")`.
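
Conceptually, discovery builds a name-to-spec registry that name-based lookup resolves at test time. A minimal plain-Python sketch of that idea (the dict-based registry and these function names are illustrative, not mcp-agent's implementation):

```python
# Registry populated from config files / subagent search paths (illustrative).
_registry = {}

def register_spec(spec):
    _registry[spec["name"]] = spec

def resolve_agent(name):
    # Resolve a spec by name, the way use_agent("SpecName") does.
    try:
        return _registry[name]
    except KeyError:
        raise LookupError(f"No AgentSpec named {name!r} was discovered") from None

register_spec({
    "name": "Fetcher",
    "instruction": "Fetch and summarize web pages",
    "server_names": ["fetch"],
})

spec = resolve_agent("Fetcher")
assert spec["server_names"] == ["fetch"]
```

The practical consequence: a spec's `name` is its lookup key, so names must be unique across config-defined and discovered subagents.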
Examples: `agent_definition_examples.py`