
MCP SPARQL Server

License: AGPL-3.0 | Python: 3.8+

A flexible and powerful SPARQL-enabled server for MCP (Model Context Protocol)

🌟 Overview

MCP SPARQL Server is a high-performance, configurable server that connects to any SPARQL endpoint and provides enhanced functionality, including result formatting and caching. It is built on top of the MCP (Model Context Protocol) framework to provide a seamless, language-agnostic interface for querying semantic data.

✨ Features

  • Universal Endpoint Support: Connect to any SPARQL-compliant endpoint
  • Full SPARQL Support: Execute any valid SPARQL query (SELECT, ASK, CONSTRUCT, DESCRIBE)
  • Intelligent Result Formatting:
    • Standard JSON (compatible with standard SPARQL clients)
    • Simplified JSON (easier to work with in applications)
    • Tabular format (ready for display in UI tables)
  • High-Performance Caching:
    • Multiple cache strategies (LRU, LFU, FIFO)
    • Configurable TTL (time-to-live)
    • Cache management tools
  • Flexible Deployment Options:
    • Run in foreground mode
    • Run as a background daemon
    • Deploy as a systemd service
  • Comprehensive Configuration:
    • Command-line arguments
    • Environment variables
    • No hardcoded values

📋 Requirements

  • Python 3.8 or newer
  • SPARQLWrapper library
  • mcp framework
  • pydantic for configuration
  • python-daemon for background execution

🚀 Installation

From Source

# Clone the repository
git clone https://github.com/yet-ai/mcp-server-sparql.git
cd mcp-server-sparql

# Install the package
pip install -e .

From PyPI

pip install mcp-server-sparql

Using the Installation Script

For a full installation with systemd service setup:

# Download the repository
git clone https://github.com/yet-ai/mcp-server-sparql.git
cd mcp-server-sparql

# Run the installation script (as root for systemd service)
sudo ./install.sh

🔍 Usage

Basic Usage

Start the server by specifying a SPARQL endpoint:

mcp-server-sparql --endpoint https://dbpedia.org/sparql

Running as a Daemon

To run the server as a background process:

mcp-server-sparql --endpoint https://dbpedia.org/sparql --daemon \
  --log-file /var/log/mcp-sparql.log \
  --pid-file /var/run/mcp-sparql.pid

Using with Systemd

If installed with systemd support:

  1. Configure your endpoint in the environment file:

    sudo nano /etc/mcp-sparql/env
    
  2. Start the service:

    sudo systemctl start sparql-server
    
  3. Enable on boot:

    sudo systemctl enable sparql-server
    

Client Query Examples

After starting the server, you can query it using the MCP client:

Basic Query

echo '{"query_string": "SELECT * WHERE { ?s ?p ?o } LIMIT 5"}' | mcp claude

Query with Specific Format

echo '{"query_string": "SELECT * WHERE { ?s ?p ?o } LIMIT 5", "format": "tabular"}' | mcp claude

Complex Query Example

echo '{
  "query_string": "PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?name ?email WHERE { ?person foaf:name ?name . OPTIONAL { ?person foaf:mbox ?email } } LIMIT 5",
  "format": "simplified"
}' | mcp claude

Cache Management

# Get cache statistics
echo '{"action": "stats"}' | mcp cache

# Clear the cache
echo '{"action": "clear"}' | mcp cache

⚙️ Configuration

Command-line Arguments

Argument | Description | Default
--endpoint URL | SPARQL endpoint URL | Required
--timeout SECONDS | Request timeout in seconds | 30
--format FORMAT | Result format (json, simplified, tabular) | json
--cache-enabled BOOL | Enable result caching | true
--cache-ttl SECONDS | Cache time-to-live in seconds | 300
--cache-max-size SIZE | Maximum cache size | 100
--cache-strategy STRATEGY | Cache replacement strategy (lru, lfu, fifo) | lru
--pretty-print | Pretty print JSON output | false
--include-metadata BOOL | Include query metadata in results | true
--daemon | Run as a background daemon | false
--log-file FILE | Log file location when running as a daemon | /var/log/mcp-sparql-server.log
--pid-file FILE | PID file location when running as a daemon | /var/run/mcp-sparql-server.pid

Environment Variables

Variable | Description | Default
SPARQL_ENDPOINT | SPARQL endpoint URL | None (required)
SPARQL_TIMEOUT | Request timeout in seconds | 30
SPARQL_FORMAT | Default result format | json
SPARQL_CACHE_ENABLED | Enable caching | true
SPARQL_CACHE_TTL | Cache time-to-live in seconds | 300
SPARQL_CACHE_MAX_SIZE | Maximum cache size | 100
SPARQL_CACHE_STRATEGY | Cache replacement strategy | lru
SPARQL_PRETTY_PRINT | Pretty print JSON output | false
SPARQL_INCLUDE_METADATA | Include query metadata in results | true
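The requirements list pydantic for configuration, so one plausible way these environment variables map onto a settings object is sketched below. The class and function names are illustrative only and are not taken from the project's actual config.py.

import os
from pydantic import BaseModel

class SparqlServerConfig(BaseModel):
    # Field names mirror the table above; defaults match the documented defaults.
    endpoint: str
    timeout: int = 30
    format: str = "json"
    cache_enabled: bool = True
    cache_ttl: int = 300
    cache_max_size: int = 100
    cache_strategy: str = "lru"
    pretty_print: bool = False
    include_metadata: bool = True

def config_from_env():
    # SPARQL_ENDPOINT is required; everything else falls back to its default.
    env = os.environ
    return SparqlServerConfig(
        endpoint=env["SPARQL_ENDPOINT"],
        timeout=int(env.get("SPARQL_TIMEOUT", "30")),
        format=env.get("SPARQL_FORMAT", "json"),
        cache_enabled=env.get("SPARQL_CACHE_ENABLED", "true").lower() == "true",
        cache_ttl=int(env.get("SPARQL_CACHE_TTL", "300")),
        cache_max_size=int(env.get("SPARQL_CACHE_MAX_SIZE", "100")),
        cache_strategy=env.get("SPARQL_CACHE_STRATEGY", "lru"),
        pretty_print=env.get("SPARQL_PRETTY_PRINT", "false").lower() == "true",
        include_metadata=env.get("SPARQL_INCLUDE_METADATA", "true").lower() == "true",
    )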

📊 Result Formats

The server supports three different output formats:

1. JSON Format (default)

Returns the standard SPARQL JSON results format with optional metadata.

{
  "head": {
    "vars": ["s", "p", "o"]
  },
  "results": {
    "bindings": [
      {
        "s": { "type": "uri", "value": "http://example.org/resource" },
        "p": { "type": "uri", "value": "http://example.org/property" },
        "o": { "type": "literal", "value": "Example Value" }
      }
    ]
  },
  "metadata": {
    "variables": ["s", "p", "o"],
    "count": 1,
    "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 1"
  }
}

2. Simplified Format

Returns a simplified JSON structure that's easier to work with, converting variable bindings into simple key-value objects.

{
  "type": "SELECT",
  "results": [
    {
      "s": "http://example.org/resource",
      "p": "http://example.org/property",
      "o": "Example Value"
    }
  ],
  "metadata": {
    "variables": ["s", "p", "o"],
    "count": 1,
    "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 1"
  }
}
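To make the relationship between the two formats concrete, here is a small illustrative helper (not the project's own formatter) that flattens standard SPARQL JSON bindings into the simplified key-value rows shown above:

def simplify_bindings(sparql_json):
    # Each binding maps a variable to {"type": ..., "value": ...};
    # the simplified format keeps only the value.
    return [
        {var: cell["value"] for var, cell in binding.items()}
        for binding in sparql_json["results"]["bindings"]
    ]

Applied to the standard JSON example above, this yields exactly the results array of the simplified example.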

3. Tabular Format

Returns results in a tabular format with columns and rows, suitable for table display.

{
  "type": "SELECT",
  "columns": [
    { "name": "s", "label": "s" },
    { "name": "p", "label": "p" },
    { "name": "o", "label": "o" }
  ],
  "rows": [
    [
      "http://example.org/resource",
      "http://example.org/property",
      "Example Value"
    ]
  ],
  "metadata": {
    "variables": ["s", "p", "o"],
    "count": 1,
    "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 1"
  }
}
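The tabular shape can be derived from the same bindings. The sketch below is illustrative only; it assumes variables left unbound in a row are emitted as null/None, which may differ from the project's formatter.

def to_tabular(sparql_json):
    variables = sparql_json["head"]["vars"]
    columns = [{"name": v, "label": v} for v in variables]
    rows = [
        [binding.get(v, {}).get("value") for v in variables]  # None if unbound
        for binding in sparql_json["results"]["bindings"]
    ]
    return {"type": "SELECT", "columns": columns, "rows": rows}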

🔄 Cache Strategies

The server supports three cache replacement strategies:

1. LRU (Least Recently Used)

Evicts the least recently accessed items first. This is the default strategy and works well for most scenarios, as it prioritizes keeping recently accessed items in the cache.

2. LFU (Least Frequently Used)

Evicts the least frequently accessed items first. This strategy is good for scenarios where some queries are much more common than others, as it prioritizes keeping frequently accessed items in the cache.

3. FIFO (First In First Out)

Evicts the oldest items first, regardless of access patterns. This strategy is simpler and can be useful when you want a purely time-based caching approach.
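As a rough illustration of how the default strategy combines recency-based eviction with the TTL setting, here is a minimal LRU-with-TTL cache. The project's own implementation lives under sparql_server/cache/ and may differ in detail.

import time
from collections import OrderedDict

class LRUCache:
    def __init__(self, max_size=100, ttl=300):
        self.max_size = max_size
        self.ttl = ttl
        self._store = OrderedDict()  # key -> (value, inserted_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, inserted_at = entry
        if time.time() - inserted_at > self.ttl:  # entry has expired
            del self._store[key]
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        if key in self._store:
            del self._store[key]
        elif len(self._store) >= self.max_size:
            self._store.popitem(last=False)  # evict least recently used
        self._store[key] = (value, time.time())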

🔍 Advanced SPARQL Examples

The server supports all SPARQL features. Here are some example queries you can try:

Basic Triple Pattern

SELECT * WHERE { ?s ?p ?o } LIMIT 10

Filtering by Property Type

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?subject ?label
WHERE {
    ?subject rdf:type rdfs:Class ;
             rdfs:label ?label .
}
LIMIT 10

Using Regular Expressions

PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?name
WHERE {
    ?person foaf:name ?name .
    FILTER(REGEX(?name, "Smith", "i"))
}
LIMIT 10

Complex Query with Multiple Patterns

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?city ?name ?population ?country ?countryName
WHERE {
    ?city a dbo:City ;
          rdfs:label ?name ;
          dbo:population ?population ;
          dbo:country ?country .
    ?country rdfs:label ?countryName .
    FILTER(?population > 1000000)
    FILTER(LANG(?name) = 'en')
    FILTER(LANG(?countryName) = 'en')
}
ORDER BY DESC(?population)
LIMIT 10

⚠️ Troubleshooting

Common Issues

  • Connection refused: Check that the SPARQL endpoint URL is correct and accessible (the snippet below shows a quick way to verify this directly)
  • Query timeout: Increase the timeout value with the --timeout option
  • Memory issues with large result sets: Add a LIMIT clause to your queries or reduce the cache size
  • Permission denied for log/pid files: Check directory permissions or run with appropriate privileges
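Since the server is built on SPARQLWrapper, a quick way to check whether a connection problem lies with the endpoint itself (rather than the server) is to query the endpoint directly. This is only a verification aid, not part of the server:

from SPARQLWrapper import SPARQLWrapper, JSON

# Query the endpoint directly, bypassing the MCP server, to confirm that the
# endpoint is reachable and answering.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 1")
sparql.setReturnFormat(JSON)
print(sparql.query().convert())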

Logging

When running in foreground mode, logs are output to the console. When running as a daemon, logs are written to the specified log file (default: /var/log/mcp-sparql-server.log).

To increase verbosity, you can set the Python logging level in the source code.
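No dedicated verbosity flag is documented, so raising the level means editing the source. A standard Python logging call such as the following, placed wherever logging is initialized, is one way to do it (the exact location in the project's source is not specified here):

import logging

# DEBUG is the most verbose standard level; the server's default level is not
# documented, so adjust to taste.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)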

🛠️ Development

Project Structure

mcp-server-sparql/
├── sparql_server/             # Main package
│   ├── core/                  # Core functionality
│   │   ├── __init__.py        # Package exports
│   │   ├── config.py          # Configuration management
│   │   └── server.py          # Main SPARQL server
│   ├── formatters/            # Result formatters
│   │   ├── __init__.py        # Package exports
│   │   ├── formatter.py       # Base formatter class
│   │   ├── json_formatter.py  # JSON formatter
│   │   ├── simplified_formatter.py # Simplified JSON formatter
│   │   └── tabular_formatter.py # Tabular formatter
│   ├── cache/                 # Caching implementation
│   │   ├── __init__.py        # Package exports
│   │   ├── query_cache.py     # Base cache interface
│   │   ├── lru_cache.py       # LRU cache implementation
│   │   ├── lfu_cache.py       # LFU cache implementation
│   │   └── fifo_cache.py      # FIFO cache implementation
│   └── __init__.py            # Package exports
├── server.py                  # Main entry point
├── setup.py                   # Package setup
├── install.sh                 # Installation script
├── requirements.txt           # Python dependencies
├── sparql-server.service      # Systemd service file
├── README.md                  # This file
└── LICENSE                    # License file

Running Tests

python test_server.py

🔒 Security Considerations

  • The server does not implement authentication or authorization; it relies on the security of the underlying SPARQL endpoint
  • For production use, consider deploying behind a secure proxy
  • Be careful with untrusted queries as they could potentially be resource-intensive

📄 License

This project is licensed under a dual-license model:

  • Open Source: GNU Affero General Public License v3.0 (AGPL-3.0) for open source use
  • Commercial: Proprietary commercial license available for commercial or proprietary use

See the LICENSE file for complete details.

This software was conceived and developed by Temkit Sid-Ali for Yet.lu.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📬 Contact


Built with ❤️ by Temkit Sid-Ali for Yet.lu
© 2025 Yet.lu - All rights reserved

Server Config

{
  "mcpServers": [
    {
      "name": "sparql",
      "command": "python3",
      "args": [
        "/path/to/server.py",
        "--endpoint",
        "https://data.legilux.public.lu/sparqlendpoint",
        "--format",
        "simplified",
        "--cache-enabled",
        "true",
        "--cache-ttl",
        "300",
        "--cache-strategy",
        "lru"
      ],
      "env": {
        "SPARQL_ENDPOINT": "https://data.legilux.public.lu/sparqlendpoint",
        "SPARQL_TIMEOUT": "30",
        "SPARQL_MAX_RESULTS": "1000",
        "SPARQL_CACHE_ENABLED": "true",
        "SPARQL_CACHE_TTL": "300",
        "SPARQL_CACHE_MAX_SIZE": "100",
        "SPARQL_CACHE_STRATEGY": "lru",
        "PYTHONPATH": "/path/to/project/directory"
      },
      "transport": "stdio"
    }
  ],
  "defaultServer": "sparql"
}