Weave MCP Server + Client Linked Traces
This repo is adapted from the example in Arize-ai/phoenix and modified to export traces to wandb/weave. Note: there is a race condition that sometimes causes the tool call to fail at the OpenAI call step. This bug was also present in the original example and was not introduced by weave.
Set up your environment:
First, run `cp .env.example .env`
Then follow the instructions and set the relevant keys in your new .env file.
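How these keys are consumed depends on this repo's client code, but as a rough sketch of the usual pattern (standard OpenTelemetry only; the env variable names and auth header below are illustrative assumptions, not taken from this repo's .env.example):

```python
# Sketch only: env vars from .env feeding an OTLP span exporter.
# Variable names and the auth header scheme are assumptions for illustration.
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

endpoint = os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]   # your trace backend's OTLP URL
api_key = os.environ.get("WANDB_API_KEY", "")          # hypothetical credential key

exporter = OTLPSpanExporter(
    endpoint=endpoint,
    headers={"Authorization": f"Bearer {api_key}"},     # header format is backend-specific
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```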
Install Dependencies:
- uv: `uv sync`
- pip: `pip install -r requirements.txt`
Run the client and export traces:
- uv: `uv run client.py`
- python: `python client.py`
(From Arize) How to Implement End-to-End Tracing for MCP Client-Server Applications
This tutorial shows you how to propagate OpenTelemetry (OTEL) context between an MCP client and server for complete observability. The openinference-instrumentation-mcp package makes this possible by providing instrumentation for both client and server components.
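As a rough sketch of what enabling that instrumentation looks like (assuming the instrumentor follows the usual openinference pattern and that you register a standard OTEL tracer provider; swap the console exporter for whichever OTLP exporter points at your backend):

```python
# Sketch: enable MCP instrumentation once an OTEL tracer provider is registered.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

from openinference.instrumentation.mcp import MCPInstrumentor

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))  # replace with your OTLP exporter
trace.set_tracer_provider(provider)

# Run this in both the client and the server process so trace context is
# injected on the way out and extracted on the way in.
MCPInstrumentor().instrument(tracer_provider=provider)
```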
What is MCP and Why Do You Need Distributed Tracing?
One of the main benefits of Anthropic's Model Context Protocol (MCP) architecture is connecting AI models with information across different services, machines, and programming languages. This distributed approach delivers several advantages:
- Expanded AI Capabilities: Connect models to specialized knowledge and data sources beyond their training data
- Plug-and-Play Components: Add new context providers without retraining your models
- Multi-Language Support: Implement context providers in any programming language while maintaining compatibility
The challenge? When requests flow through multiple services, debugging becomes difficult. How do you track a request's complete journey to identify where problems occur?
How to Use OpenTelemetry for MCP Tracing
OpenTelemetry solves cross-service tracing challenges by:
- Preserving Context Across Services: Maintaining trace IDs and relationships between different components
- Working Across Network Boundaries: Automatically handling context in network requests
- Supporting Multiple Languages: Using standardized formats compatible with any programming language
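The first point is easiest to see with the raw propagation primitives. The instrumentation package handles this for you, but a minimal hand-rolled sketch of what happens looks like:

```python
# Sketch of W3C trace-context propagation between two processes.
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer(__name__)

# Client side: serialize the active trace context into a carrier dict
# (in practice this rides along in the MCP request metadata).
carrier = {}
with tracer.start_as_current_span("client_operation"):
    inject(carrier)  # adds a "traceparent" entry for the active span

# Server side: restore the context and continue the same trace.
ctx = extract(carrier)
with tracer.start_as_current_span("server_operation", context=ctx):
    ...  # work here appears as a child of client_operation in the same trace
```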
When your client calls the server with proper instrumentation:
- The client creates a span to track the operation
- OTEL context automatically travels with the MCP request
- The server continues the same trace without interruption
- All context providers inherit this trace context
- You see the complete interaction as one connected trace in Phoenix
This visibility is essential for troubleshooting complex AI systems, optimizing performance bottlenecks, and understanding how different components affect your application's behavior.
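To make the first step above concrete, a client-side span around an MCP tool call might look like the sketch below (the span name, tool name, and arguments are made-up placeholders):

```python
# Sketch: a client-side span wrapping an MCP tool call. With the MCP
# instrumentation active, the context of this span travels with the request,
# so server-side spans attach to the same trace.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

async def ask_agent(session, question: str):
    # "agent_turn" and "search_docs" are illustrative names only.
    with tracer.start_as_current_span("agent_turn") as span:
        span.set_attribute("input.value", question)
        return await session.call_tool("search_docs", {"query": question})
```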
Setup
When properly instrumented, trace context is automatically propagated across the MCP client-server boundary, allowing you to:
- Track requests from client to server in a single trace
- Observe latency at different stages of the request lifecycle
- Debug issues that span across service boundaries
Env Setup
- Navigate to this directory: `cd tutorials/mcp/tracing_between_mcp_client_and_server`
- Install the required dependencies: `pip install -r requirements.txt`
Running the Example
- Run Phoenix locally, or connect to an instance online.
- Update your .env file with OPENAI_API_KEY and your PHOENIX_COLLECTOR_ENDPOINT. If you're using an online Phoenix instance or have auth enabled, also set your PHOENIX_API_KEY.
- Run the MCP client, which will spin up the server at run time in a separate process (see the sketch after this list): `python client.py`
- Ask questions of the agent.
- View the traces in Phoenix:

![Traces](https://storage.googleapis.com/arize-phoenix-assets/assets/images/mcp_tracing_example_screenshot.png)
How It Works
The openinference-instrumentation-mcp package automatically:
- Creates spans for MCP client operations
- Injects trace context into MCP requests
- Extracts and continues the trace context on the server side
- Associates the context with any OTEL spans created on the server side
This allows you to see the complete request flow as a single trace, even though it crosses service boundaries.
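For example, a server-side tool can open its own spans and, because the incoming MCP request carries the client's trace context, they show up as children of the client span. A minimal sketch (the tool is invented for illustration, and it assumes the server also registers a tracer provider and enables the MCP instrumentation at startup):

```python
# Sketch: a server-side tool whose spans join the client-initiated trace.
# Assumes the server process has set up a tracer provider and enabled the
# MCP instrumentation before handling requests.
from mcp.server.fastmcp import FastMCP
from opentelemetry import trace

mcp = FastMCP("docs-server")          # server name is illustrative
tracer = trace.get_tracer(__name__)

@mcp.tool()
def search_docs(query: str) -> str:
    """Illustrative tool: look up documentation for a query."""
    # Because the extracted MCP context is active, this span becomes a child
    # of the client's span rather than the root of a new trace.
    with tracer.start_as_current_span("search_docs.lookup") as span:
        span.set_attribute("query", query)
        return f"No results for {query!r} (placeholder)"

if __name__ == "__main__":
    mcp.run(transport="stdio")
```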