Scrapfly MCP Server
Key Features & Capabilities
- Live web data access
- Anti-bot handling & proxy infrastructure
- Structured data extraction
- Screenshot capture
- Secure authentication options (OAuth2 or API key)
Works with your favorite AI tools: Claude Desktop, Cursor, Cline, Windsurf, LangChain, LlamaIndex, CrewAI, OpenAI function calling, n8n, Make, and Zapier.
Practical Use Cases
- Job aggregation
- Price monitoring
- Content research & aggregation
- Competitor intelligence & market analysis
- Real-estate analysis
- RAG & LLM data ingestion
- Automated multi-step workflows
Tools
instructions
Purpose: Provide guidance and required parameters for successful scraping calls.
Features: Supplies best-practice recommendations, parameter selection hints, error-handling guidance, and the mandatory pow value needed before calling web_get_page or web_scrape.
Example usage:
{ "tool": "scraping_instruction_enhanced" }
web_get_page
Purpose: Fetch a webpage quickly using optimized defaults.
Features: Automatic JavaScript rendering, anti-bot handling, clean text/markdown output, and sensible default parameters for most sites.
Example usage:
{
  "tool": "web_get_page",
  "parameters": {
    "url": "https://news.ycombinator.com",
    "pow": "obtained_from_instruction_tool",
    "format": "markdown",
    "format_options": ["only_content"]
  }
}
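Under the hood, an MCP client delivers a call like the one above as a JSON-RPC 2.0 `tools/call` request. The sketch below builds such a request in Python; the request envelope follows the MCP specification, while the helper function and the placeholder `pow` value are illustrative (the real `pow` must come from the instructions tool first).

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# "obtained_from_instruction_tool" is a placeholder: the instructions tool
# must be called first to obtain the mandatory pow value.
request = build_tool_call("web_get_page", {
    "url": "https://news.ycombinator.com",
    "pow": "obtained_from_instruction_tool",
    "format": "markdown",
    "format_options": ["only_content"],
})
print(json.dumps(request, indent=2))
```

In practice the MCP client library (or host application such as Claude Desktop) constructs and transports these requests for you; the sketch only shows the wire-level shape.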
web_scrape
Purpose: Fully customizable scraping for complex sites and workflows.
Features: Supports browser automation, login/auth flows, custom HTTP methods and headers, cookie management, multi-step interactions, and AI-powered data extraction.
Example usage:
{
  "tool": "web_scrape",
  "parameters": {
    "url": "https://web-scraping.dev/login",
    "pow": "obtained_from_instruction_tool",
    "render_js": true,
    "js_scenario": [
      { "fill": { "selector": "input[name='username']", "value": "myuser" } },
      { "fill": { "selector": "input[name='password']", "value": "mypass" } },
      { "click": { "selector": "button[type='submit']" } },
      { "wait_for_navigation": { "timeout": 5000 } }
    ]
  }
}
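Multi-step `js_scenario` lists like the login flow above can also be assembled programmatically. The Python helper below builds the same sequence; the step vocabulary (`fill`, `click`, `wait_for_navigation`) comes from the example, while the helper itself is an illustrative sketch, not part of any Scrapfly SDK.

```python
def login_scenario(username: str, password: str, timeout_ms: int = 5000) -> list:
    """Build a js_scenario list: fill credentials, submit, wait for navigation.

    Selectors are site-specific; these match the web-scraping.dev login form
    used in the example above.
    """
    return [
        {"fill": {"selector": "input[name='username']", "value": username}},
        {"fill": {"selector": "input[name='password']", "value": password}},
        {"click": {"selector": "button[type='submit']"}},
        {"wait_for_navigation": {"timeout": timeout_ms}},
    ]

scenario = login_scenario("myuser", "mypass")
```

Generating scenarios this way keeps credentials out of hand-written JSON and makes the same flow reusable across sites that share a form layout.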
screenshot
Purpose: Capture visual snapshots of webpages.
Features: Supports full-page screenshots or element-level captures using CSS selectors.
Example usage:
{
  "tool": "screenshot",
  "parameters": {
    "url": "https://web-scraping.dev/pricing",
    "capture": ".pricing-table",
    "format": "png",
    "options": ["load_images", "block_banners"]
  }
}
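Assuming the screenshot tool returns the image as base64-encoded data (an assumption — check the tool's actual response schema before relying on it), a client might persist the capture like this:

```python
import base64
from pathlib import Path

def save_screenshot(b64_data: str, path: str) -> int:
    """Decode base64 image data, write it to disk, return bytes written.

    Assumes the MCP tool result carries base64-encoded image bytes; verify
    against the real response format.
    """
    raw = base64.b64decode(b64_data)
    Path(path).write_bytes(raw)
    return len(raw)

# Tiny demo payload (the 8-byte PNG magic header, not a real screenshot):
demo = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
n = save_screenshot(demo, "pricing.png")
```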
info_account
Purpose: Retrieve real-time details about your Scrapfly account and project.
Features: Returns account metadata, project configuration, subscription status, credit usage, remaining quota, and concurrency limits.
Example usage:
{ "tool": "info_account" }
Setup Guide
Claude Desktop
- Locate Your Configuration File
Claude Desktop stores its configuration in a JSON file. Open the file for your operating system:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows:
%APPDATA%\Claude\claude_desktop_config.json
- Choose Authentication Method
Select your preferred authentication method:
OAuth2 (recommended):
Add the following configuration to your claude_desktop_config.json file:
{
  "mcpServers": {
    "scrapfly": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scrapfly.io/mcp"
      ]
    }
  }
}
API Key:
Add the following configuration to your claude_desktop_config.json file:
{
  "mcpServers": {
    "scrapfly": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scrapfly.io/mcp?key=YOUR-API-KEY"
      ]
    }
  }
}
- Restart Claude Desktop
After saving the configuration file, completely quit and restart Claude Desktop to apply the changes.
- Verify the Integration
After restarting, check that the MCP tools are available:
- Look for the hammer icon (🔨) in the bottom right corner of the chat window
- Click the hammer icon to see available MCP tools - you should see Scrapfly tools listed
- Try a test prompt: "Can you scrape https://news.ycombinator.com and show me the top 5 posts?"
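The manual config edit above can also be scripted. The Python sketch below merges the OAuth2 Scrapfly entry into an existing claude_desktop_config.json without clobbering other configured servers; the entry values are taken from the setup guide, while the merge helper is illustrative.

```python
import json
from pathlib import Path

# Entry from the OAuth2 configuration shown above.
SCRAPFLY_ENTRY = {
    "command": "npx",
    "args": ["mcp-remote", "https://mcp.scrapfly.io/mcp"],
}

def add_scrapfly(config_path: str) -> dict:
    """Merge the Scrapfly MCP server into the Claude Desktop config,
    preserving any servers already configured."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["scrapfly"] = SCRAPFLY_ENTRY
    path.write_text(json.dumps(config, indent=2))
    return config

cfg = add_scrapfly("claude_desktop_config.json")
```

Remember to point `config_path` at the OS-specific location from step 1, and restart Claude Desktop afterwards as described above.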
Documentation
Server Config
{
  "mcpServers": {
    "scrapfly": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scrapfly.io/mcp"
      ]
    }
  }
}