Created by shashwat-sec

Lacework Alerts MCP Server

An MCP (Model Context Protocol) server built with FastMCP that exposes Lacework API v2 alert operations as tools for AI agents and LLM integrations.

Quick Start (New Machine Setup)

# 1. Clone the repo
git clone <repo-url>
cd lacework_mcp_server

# 2. Create a virtual environment (Python 3.10+)
python3 -m venv .venv
source .venv/bin/activate

# 3. Install dependencies
pip install -e .
# or manually:
# pip install fastmcp httpx

# 4. Configure Lacework credentials (pick one)

# Option A – Config file
cat > ~/.lacework.json <<'EOF'
{
  "account": "yourcompany.lacework.net",
  "keyId": "YOUR_ACCESS_KEY_ID",
  "secret": "YOUR_SECRET_KEY"
}
EOF

# Option B – Environment variables
export LACEWORK_ACCOUNT="yourcompany"
export LACEWORK_KEY_ID="YOUR_ACCESS_KEY_ID"
export LACEWORK_SECRET="YOUR_SECRET_KEY"

# Environment variables take precedence over the config file.

# 5. Run the server
python lacework_mcp_server.py
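
The precedence rule in step 4 (environment variables override ~/.lacework.json) can be sketched as follows; the function name and return shape are illustrative, not the server's actual internals:

```python
import json
import os
from pathlib import Path

def resolve_credentials(config_path: Path = Path.home() / ".lacework.json") -> dict:
    """Illustrative sketch of the documented precedence:
    environment variables win; ~/.lacework.json is the fallback."""
    creds = {}
    if config_path.exists():
        data = json.loads(config_path.read_text())
        creds = {
            "account": data.get("account"),
            "key_id": data.get("keyId"),
            "secret": data.get("secret"),
        }
    # Environment variables override anything read from the file.
    for env_var, field in [
        ("LACEWORK_ACCOUNT", "account"),
        ("LACEWORK_KEY_ID", "key_id"),
        ("LACEWORK_SECRET", "secret"),
    ]:
        if os.environ.get(env_var):
            creds[field] = os.environ[env_var]
    return creds
```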

Tools

| Tool | Description |
| --- | --- |
| list_alerts | List alerts within an optional time range (supports relative times like 2h, last 2 hours) |
| search_alerts | Search alerts with filters (severity, status, alert type) and flexible time inputs (30m, last 2 hours, 2024-06-01) |
| get_alert_details | Get detailed info for a specific alert (Details, Investigation, Events, RelatedAlerts, Integrations, Timeline, ObservationTimeline) |
| get_alert_timeline | Shortcut – get the timeline for an alert |
| get_alert_investigation | Shortcut – get investigation details for an alert |
| get_alert_entities | List entities (machines, IPs) associated with an alert |
| get_alert_entity_details | Get enriched context for a specific entity (VirusTotal, network activity, etc.) |
| post_alert_comment | Post a comment on an alert's timeline |
| close_alert | Close an alert with a reason code |
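
list_alerts and search_alerts accept relative time inputs. The server's actual parser is internal to lacework_mcp_server.py; a minimal sketch of the accepted shapes (2h, last 2 hours, 2024-06-01) might look like:

```python
import re
from datetime import datetime, timedelta, timezone

# Hypothetical helper: illustrates the accepted input shapes only.
_UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def parse_relative_time(value: str) -> datetime:
    """Turn '2h', '30m', 'last 2 hours', or '2024-06-01' into a UTC datetime."""
    value = value.strip().lower()
    # 'last 2 hours' / 'last 30 minutes' style
    m = re.fullmatch(r"last\s+(\d+)\s+(minute|hour|day)s?", value)
    if m:
        return datetime.now(timezone.utc) - timedelta(**{m.group(2) + "s": int(m.group(1))})
    # compact '2h' / '30m' / '7d' style
    m = re.fullmatch(r"(\d+)([mhd])", value)
    if m:
        return datetime.now(timezone.utc) - timedelta(**{_UNITS[m.group(2)]: int(m.group(1))})
    # otherwise treat it as an absolute ISO date like 2024-06-01
    return datetime.fromisoformat(value).replace(tzinfo=timezone.utc)
```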

Running

Standalone (stdio – local)

source .venv/bin/activate
python lacework_mcp_server.py

Remote (SSE / Streamable HTTP)

Run the server on a remote host so AI agents can connect over HTTP and pass credentials per request:

# SSE transport (default host 0.0.0.0, port 8000)
python lacework_mcp_server.py --transport sse --port 8000

# Streamable HTTP transport
python lacework_mcp_server.py --transport streamable-http --host 0.0.0.0 --port 9000
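
A plausible sketch of how these flags could map onto a FastMCP entry point; the real script's option handling may differ:

```python
import argparse

def build_arg_parser() -> argparse.ArgumentParser:
    """Sketch of the CLI flags shown above; names and defaults
    mirror the README examples, not the script's actual code."""
    parser = argparse.ArgumentParser(description="Lacework MCP server")
    parser.add_argument("--transport",
                        choices=["stdio", "sse", "streamable-http"],
                        default="stdio")
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8000)
    return parser

args = build_arg_parser().parse_args(["--transport", "sse", "--port", "8000"])
# With FastMCP this would typically feed into something like:
#   mcp.run(transport=args.transport, host=args.host, port=args.port)
```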

When running remotely, callers pass Lacework credentials as tool parameters instead of relying on server-side config:

{
  "name": "search_alerts",
  "arguments": {
    "start_time": "last 2 hours",
    "severity": "Critical",
    "lacework_account": "mycompany",
    "lacework_key_id": "MY_KEY_ID",
    "lacework_secret": "MY_SECRET"
  }
}

All three credential fields (lacework_account, lacework_key_id, lacework_secret) are optional on every tool. When omitted, the server falls back to its local config (env vars / ~/.lacework.json). Clients for different Lacework accounts are cached so tokens are reused across calls.
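
The per-account client caching described above could be sketched like this; LaceworkClient and get_client are hypothetical names used only to illustrate the reuse behavior:

```python
# One client per Lacework account/key pair, so bearer tokens are
# reused across tool calls instead of being re-fetched every time.
_client_cache: dict[tuple, "LaceworkClient"] = {}

class LaceworkClient:
    def __init__(self, account: str, key_id: str, secret: str):
        self.account, self.key_id, self.secret = account, key_id, secret
        self.token = None  # filled in lazily by the auth flow

def get_client(account: str, key_id: str, secret: str) -> LaceworkClient:
    key = (account, key_id)
    if key not in _client_cache:
        _client_cache[key] = LaceworkClient(account, key_id, secret)
    return _client_cache[key]
```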

With Claude Desktop / VS Code

Add to your MCP settings (e.g. ~/.claude/claude_desktop_config.json or .vscode/mcp.json):

Local (with ~/.lacework.json present):

{
  "mcpServers": {
    "lacework": {
      "command": "/path/to/lacework_mcp_server/.venv/bin/python",
      "args": [
        "/path/to/lacework_mcp_server/lacework_mcp_server.py"
      ]
    }
  }
}

Local (without ~/.lacework.json – pass creds via env):

{
  "mcpServers": {
    "lacework": {
      "command": "/path/to/lacework_mcp_server/.venv/bin/python",
      "args": [
        "/path/to/lacework_mcp_server/lacework_mcp_server.py"
      ],
      "env": {
        "LACEWORK_ACCOUNT": "yourcompany",
        "LACEWORK_KEY_ID": "YOUR_KEY_ID",
        "LACEWORK_SECRET": "YOUR_SECRET"
      }
    }
  }
}

Remote (server running elsewhere via SSE):

{
  "mcpServers": {
    "lacework": {
      "url": "http://your-server-host:8000/sse"
    }
  }
}

For remote servers, credentials are passed as tool parameters on each call (lacework_account, lacework_key_id, lacework_secret).

API Reference

Based on the Lacework API v2 documentation:

  • Authentication: Uses POST /api/v2/access/tokens with automatic token refresh
  • Alerts: Full CRUD via /api/v2/Alerts endpoints
  • Rate limits: 480 requests/hour per functionality
  • Time ranges: Max 7 days per request; default is last 24 hours
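
The token flow in the first bullet can be sketched as a small cache that refreshes shortly before expiry. The endpoint and X-LW-UAKS header come from the Lacework API v2 docs; the TokenManager class and injectable fetch callable are illustrative:

```python
import time

class TokenManager:
    """Illustrative cache around POST /api/v2/access/tokens.
    `fetch` is injectable so the refresh logic can run offline;
    a real fetcher would use httpx, e.g.:
        httpx.post(f"https://{account}/api/v2/access/tokens",
                   headers={"X-LW-UAKS": secret,
                            "Content-Type": "application/json"},
                   json={"keyId": key_id, "expiryTime": 3600})
    """
    def __init__(self, fetch, margin: int = 60):
        self._fetch = fetch          # returns (token, lifetime_seconds)
        self._margin = margin        # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def token(self) -> str:
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, lifetime = self._fetch()
            self._expires_at = time.time() + lifetime
        return self._token
```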
