TRMX MCP Playground
The Postman for MCPs
A powerful, open-source desktop tool for exploring, debugging, and monitoring Model Context Protocol (MCP) servers—with built-in LLM integration.
Learn more and join our community at trmx.ai.
Table of Contents
- What is TRMX MCP Playground?
- Why We Built It
- Welcome: Open Source & Developer Community
- Key Features
- Installation
- Usage Guide
- Integration Examples
- Use Cases
- Troubleshooting
- Command-Line Exploration
- Contributing
- License & Acknowledgments
What is TRMX MCP Playground?
TRMX MCP Playground is your go-to tool for building, testing, debugging, and monitoring MCP servers.
Think of it as "Postman, but for MCPs."
It’s designed for developers working with the Model Context Protocol (MCP)—a standard that lets LLMs (Large Language Models) discover and use external tools, resources, and prompts in a consistent way.
- Learn more about MCP: pai.dev/model-context-protocol-and-why-it-matters-for-ai-agents-88e0e0a7bb73
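Under the hood, MCP speaks JSON-RPC 2.0: a client discovers what a server offers with requests such as tools/list, and the server replies with structured descriptions. A simplified exchange (the read_file tool and its schema are illustrative, not from a real server):

```json
// request
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "read_file",
        "description": "Read a file from disk",
        "inputSchema": {
          "type": "object",
          "properties": { "path": { "type": "string" } }
        }
      }
    ]
  }
}
```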
Why We Built It
We noticed the developer experience for MCP was broken—testing, debugging, and monitoring MCP servers was painful.
Our vision is twofold:
- Stage 1: Build a local MCP playground for developers to easily build, test, debug, and monitor their local MCPs.
- Stage 2: Launch a serverless MCP platform so anyone can deploy, scale, and share MCPs without managing infrastructure.
Read more about our motivation and roadmap:
Welcome: Open Source & Developer Community
We made TRMX MCP Playground open source because we believe the future of MCP should be built by the community, for the community.
We’re not just looking for users—we’re looking for collaborators, contributors, and pioneers.
- Official Working Group: We run an open, official working group to guide the project’s direction, set priorities, and build the next generation of MCP tools together.
- Weekly Meetings: Join our regular sessions to discuss features, share feedback, and help shape the roadmap. Calendar updates coming soon!
- Open Collaboration: All contributions are welcome—code, docs, ideas, and feedback.
If you want to help define how MCP servers are built and used, you’re in the right place.
Join us:
Key Features
- MCP Server Debugging: Connect to any MCP server and explore its tools, resources, and prompts.
- Built-in LLM Integration: Connect directly with LLM providers like Fireworks AI and Groq (more coming soon).
- Tool & Resource Exploration: List and inspect all available tools, resources, and prompts from MCP servers.
- Multiple Parallel Connections: Connect to and monitor multiple MCP servers at once.
- Comprehensive Logging: Detailed local logs for all operations and responses.
- Modern Interface: Clean, intuitive UI for easy interaction.
- Open Source: 100% open, MIT-licensed, and built with community feedback.
Installation
Prerequisites
- Node.js (v20.16.0 or higher)
- npm (10.8.1 or higher)
Installation Steps
# Clone the repository
git clone https://github.com/rosaboyle/mcp-playground.git
# Navigate to the project directory
cd mcp-playground
# Install dependencies
npm install
# Build the project
npm run build
# Start the application
npm start
Usage Guide
Setting Up API Keys
You can easily set up your API keys through the application's user interface:
- Open the application and navigate to the "Providers" section
- Click "Set API Key" for the provider you want to configure
- Enter your API key in the dialog box and save
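Alternatively, keys can be supplied via a .env file in the project root (see Troubleshooting). A sketch with assumed variable names; confirm the exact names against the provider integration docs:

```
# Assumed variable names -- confirm against FIREWORKS_MCP_INTEGRATION.md / GROQ_INTEGRATION.md
FIREWORKS_API_KEY=your-fireworks-key
GROQ_API_KEY=your-groq-key
```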

Adding MCP Servers
To add a new MCP server through the user interface:
- Navigate to "MCP Playground" in the application
- Click "Add Server" and fill in the server details: name, command, arguments, and environment variables
- Click "Save" to store the server configuration
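Saved server configurations live in mcp.json (referenced under Troubleshooting). A sketch of one entry, using field names common to MCP client configs; the exact schema the Playground writes may differ, so check a file the app generates:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {}
    }
  }
}
```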
Development
# Start the application in development mode
npm run dev
Testing
Run the test suite:
npm test
For detailed information on testing, see TESTING.md.
Integration Examples
Fireworks AI Integration
MCP Playground allows Fireworks AI models to:
- Discover available tools from MCP servers
- Call these tools when appropriate
- Process the results from tool calls
- Generate coherent responses based on the tool outputs
For more details, see FIREWORKS_MCP_INTEGRATION.md.
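The discover-and-call loop above hinges on translating MCP tool definitions into the OpenAI-compatible tools array that Fireworks chat completions accept. A minimal sketch (the McpTool shape follows the MCP spec's tools/list result; toOpenAiTools is an illustrative helper, not the app's actual code):

```typescript
// Map MCP tool definitions (as returned by tools/list) to the
// OpenAI-style "tools" array used by Fireworks function calling.
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>; // JSON Schema
}

function toOpenAiTools(tools: McpTool[]) {
  return tools.map((t) => ({
    type: "function" as const,
    function: {
      name: t.name,
      description: t.description ?? "",
      parameters: t.inputSchema,
    },
  }));
}

// One MCP tool becomes one function spec the model can choose to call.
const specs = toOpenAiTools([
  {
    name: "read_file",
    description: "Read a file from disk",
    inputSchema: { type: "object", properties: { path: { type: "string" } } },
  },
]);
console.log(specs[0].function.name); // "read_file"
```

When the model responds with a tool call, the arguments are passed back to the MCP server via tools/call and the result is fed into the next completion request.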
Groq Integration
The Groq integration enables:
- Initializing the Groq client with an API key
- Making real API calls to the Groq API
- Streaming chat completions from Groq AI models
- Forwarding events from the stream to the renderer process
For more details, see GROQ_INTEGRATION.md.
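Streaming responses from Groq arrive as OpenAI-style Server-Sent Events, one JSON chunk per "data:" line. A sketch of the parsing step (extractDeltas is an illustrative helper; in the app, the resulting deltas would be forwarded to the renderer over IPC):

```typescript
// Extract content deltas from an OpenAI-compatible SSE stream
// (the format Groq's chat completions endpoint uses when stream=true).
function extractDeltas(sseText: string): string[] {
  const deltas: string[] = [];
  for (const line of sseText.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const content = chunk.choices?.[0]?.delta?.content;
    if (typeof content === "string") deltas.push(content);
  }
  return deltas;
}

const sample =
  'data: {"choices":[{"delta":{"content":"Hel"}}]}\n' +
  'data: {"choices":[{"delta":{"content":"lo"}}]}\n' +
  "data: [DONE]\n";
console.log(extractDeltas(sample).join("")); // "Hello"
```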
Use Cases
- API Testing: Debug and test your MCP server implementations
- Tool Development: Develop and test new tools for MCP servers
- LLM Integration: Test how different LLMs interact with MCP tools
- Education: Learn about the Model Context Protocol
- Development: Build applications that leverage MCP and LLMs
Troubleshooting
Common Issues
- Connection Errors: Ensure your MCP server is running and the command/args in mcp.json are correct
- API Key Issues: Verify that you've set the correct API keys, either via the "Providers" section in the app or in your .env file
- Tool Call Failures: Check the server logs for errors in tool implementation
For specific integration issues:
- See FIREWORKS_INTEGRATION.md for Fireworks-specific help
- See GROQ_INTEGRATION.md for Groq-specific help
For a list of known issues and limitations, see KNOWN_ISSUES.md.
Command-Line Exploration
For advanced users or troubleshooting, you can also explore MCP servers via command line:
npm run mcp-client
This will:
- Connect to the configured MCP server
- List all available tools
- List all available resources
- List all available prompts
Testing LLM Integrations via Command Line
The application supports testing various LLM providers with MCP servers via command line:
Fireworks AI + MCP
# Test the Fireworks MCP integration
npx ts-node src/test_fireworks_mcp.ts
# Run in interactive mode
npx ts-node src/test_fireworks_mcp.ts --interactive
Groq + MCP
# Test the Groq integration
npx ts-node src/test_groq.ts
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
See CONTRIBUTING.md for detailed guidelines on how to contribute to this project.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
MIT License - see the LICENSE file for details.
Acknowledgments
- Thanks to all contributors who have helped build and improve this tool
- Special thanks to the MCP community for developing and promoting this standard
