
Luma AI MCP Server 🎥

Created by bobtista, 9 months ago

A Model Context Protocol server for Luma AI's Dream Machine API.

Overview

This MCP server integrates with Luma AI's Dream Machine API (v1) to provide tools for generating, managing, and manipulating AI-generated videos and images via Large Language Models. It implements the Model Context Protocol (MCP) to enable seamless interaction between AI assistants and Luma's creative tools.

Features ✨

  • Text-to-video generation
  • Advanced video generation with keyframes
  • Image-to-video conversion
  • Video extension and interpolation
  • Image generation with reference images
  • Audio addition to videos
  • Video upscaling
  • Credit management
  • Generation tracking and status checking

Tools 🛠️

  1. ping

    • Checks whether the Luma API is reachable
    • No parameters required
  2. create_generation

    • Creates a new video generation
    • Input:
      • prompt (string, required): Text description of the video to generate
      • model (string, optional): Model to use (default: "ray-2")
        • Available models: "ray-1-6", "ray-2", "ray-flash-2"
      • resolution (string, optional): Video resolution (choices: "540p", "720p", "1080p", "4k")
      • duration (string, optional): Video duration (only "5s" and "9s" are currently supported)
      • aspect_ratio (string, optional): Video aspect ratio (e.g., "16:9", "1:1", "9:16", "4:3", "3:4", "21:9", "9:21")
      • loop (boolean, optional): Whether to make the video loop
      • keyframes (object, optional): Start and end frames for advanced video generation:
        • frame0 and/or frame1 with either:
          • {"type": "image", "url": "image_url"} for image keyframes
          • {"type": "generation", "id": "generation_id"} for video keyframes
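Assembled as a tool-call argument dict, the parameters above might look like this (a minimal sketch following the parameter list; the prompt and image URL are placeholders):

```python
def build_create_generation_request(
    prompt,
    model="ray-2",
    resolution=None,
    duration=None,
    aspect_ratio=None,
    loop=None,
    keyframes=None,
):
    """Build the argument dict for create_generation.

    Only `prompt` is required; optional fields are omitted when unset,
    mirroring the parameter list above.
    """
    request = {"prompt": prompt, "model": model}
    if resolution is not None:
        request["resolution"] = resolution
    if duration is not None:
        request["duration"] = duration
    if aspect_ratio is not None:
        request["aspect_ratio"] = aspect_ratio
    if loop is not None:
        request["loop"] = loop
    if keyframes is not None:
        request["keyframes"] = keyframes
    return request

# A looping 5-second clip that starts from an image keyframe:
req = build_create_generation_request(
    "a lighthouse at dawn, aerial shot",
    resolution="720p",
    duration="5s",
    aspect_ratio="16:9",
    loop=True,
    keyframes={"frame0": {"type": "image", "url": "https://example.com/start.jpg"}},
)
```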
  3. get_generation

    • Gets the status of a generation
    • Input:
      • generation_id (string, required): ID of the generation to check
    • Output includes:
      • Generation ID
      • State (queued, dreaming, completed, failed)
      • Failure reason (if failed)
      • Video URL (if completed)
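A small helper can turn that output into a one-line status summary. This is a sketch: the exact response keys (`id`, `state`, `failure_reason`, and a video URL under `assets`) are assumptions based on the output fields listed above.

```python
def summarize_generation(generation):
    """Summarize a get_generation result as a single status line.

    Assumes the response carries `id`, `state`, and, depending on state,
    `failure_reason` or a video URL under `assets` (hypothetical keys
    based on the output fields listed above).
    """
    state = generation.get("state")
    if state == "failed":
        reason = generation.get("failure_reason", "unknown")
        return f"{generation['id']}: failed ({reason})"
    if state == "completed":
        url = (generation.get("assets") or {}).get("video", "<no url>")
        return f"{generation['id']}: completed -> {url}"
    return f"{generation['id']}: {state}"  # queued or dreaming
```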
  4. list_generations

    • Lists all generations
    • Input:
      • limit (number, optional): Maximum number of generations to return (default: 10)
      • offset (number, optional): Number of generations to skip
  5. delete_generation

    • Deletes a generation
    • Input:
      • generation_id (string, required): ID of the generation to delete
  6. upscale_generation

    • Upscales a video generation to higher resolution
    • Input:
      • generation_id (string, required): ID of the generation to upscale
      • resolution (string, required): Target resolution for the upscaled video (one of "540p", "720p", "1080p", or "4k")
    • Note:
      • The generation must be in a completed state to be upscaled
      • The target resolution must be higher than the original generation's resolution
      • Each generation can only be upscaled once
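The three constraints above can be checked client-side before calling the tool. A sketch, with the resolution ordering taken from the valid values listed in this document:

```python
RESOLUTION_ORDER = ["540p", "720p", "1080p", "4k"]  # lowest to highest

def can_upscale(state, current_resolution, target_resolution, already_upscaled=False):
    """Return (ok, reason) for a prospective upscale_generation request,
    applying the three constraints noted above."""
    if state != "completed":
        return False, "generation must be completed"
    if already_upscaled:
        return False, "each generation can only be upscaled once"
    if RESOLUTION_ORDER.index(target_resolution) <= RESOLUTION_ORDER.index(current_resolution):
        return False, "target resolution must be higher than the original"
    return True, "ok"
```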
  7. add_audio

    • Adds AI-generated audio to a video generation
    • Input:
      • generation_id (string, required): ID of the generation to add audio to
      • prompt (string, required): Text prompt for the audio generation
      • negative_prompt (string, optional): Negative prompt for the audio generation
      • callback_url (string, optional): URL to notify when audio processing is complete
  8. generate_image

    • Generates an image from a text prompt with optional reference images
    • Input:
      • prompt (string, required): Text description of the image to generate
      • model (string, optional): Model to use for image generation (default: "photon-1")
        • Available models: "photon-1", "photon-flash-1"
      • aspect_ratio (string, optional): Image aspect ratio (same options as video)
      • image_ref (array, optional): Reference images to guide generation
        • Each ref: {"url": "image_url", "weight": optional_float}
      • style_ref (array, optional): Style reference images
        • Each ref: {"url": "image_url", "weight": optional_float}
      • character_ref (object, optional): Character reference images
        • Format: {"identity_name": {"images": ["url1", "url2", ...]}}
      • modify_image_ref (object, optional): Image to modify
        • Format: {"url": "image_url", "weight": optional_float}
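Putting the reference formats together, a generate_image call with an image reference and a character reference might be assembled like this (a sketch; the URLs and the "hero" identity name are placeholders):

```python
def build_generate_image_request(prompt, model="photon-1", aspect_ratio=None,
                                 image_ref=None, style_ref=None,
                                 character_ref=None, modify_image_ref=None):
    """Build the argument dict for generate_image, dropping unset fields."""
    request = {"prompt": prompt, "model": model}
    optional = {
        "aspect_ratio": aspect_ratio,
        "image_ref": image_ref,
        "style_ref": style_ref,
        "character_ref": character_ref,
        "modify_image_ref": modify_image_ref,
    }
    request.update({k: v for k, v in optional.items() if v is not None})
    return request

req = build_generate_image_request(
    "a knight in a sunlit forest",
    aspect_ratio="1:1",
    image_ref=[{"url": "https://example.com/pose.jpg", "weight": 0.7}],
    character_ref={"hero": {"images": ["https://example.com/face1.jpg"]}},
)
```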
  9. get_credits

    • Gets credit information for the current user
    • No parameters required
    • Returns available credit balance in USD cents
  10. get_camera_motions

    • Gets all supported camera motions
    • No parameters required
    • Returns: List of available camera motion strings

Setup for Claude Desktop 🖥️

  1. Get your Luma API key from Luma AI (sign up or log in to generate one)

  2. Add this to your Claude Desktop configuration file:

    • On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • On Windows: %APPDATA%\Claude\claude_desktop_config.json
    {
      "mcpServers": {
        "luma": {
          "command": "uv",
          "args": [
            "run",
            "--project",
            "/path/to/your/luma-ai-mcp-server",
            "-m",
            "luma_ai_mcp_server"
          ],
          "env": {
            "LUMA_API_KEY": "your-luma-api-key-here"
          }
        }
      }
    }
    

    Replace:

    • /path/to/your/luma-ai-mcp-server with the actual path to your server directory
    • your-luma-api-key-here with your actual Luma API key
  3. Restart Claude Desktop

  4. That's it! You can now use Luma AI tools directly in Claude Desktop conversations.

Quick Troubleshooting 🛠️

If you're having issues:

  1. Check your API key is correct
  2. Make sure the path to the server is correct
  3. View logs with: tail -n 20 -f ~/Library/Logs/Claude/mcp*.log

Advanced Video Generation Types 🎬

The Luma API supports various types of advanced video generation through keyframes:

  1. Starting from an image: Provide frame0 with type: "image" and an image URL
  2. Ending with an image: Provide frame1 with type: "image" and an image URL
  3. Extending a video: Provide frame0 with type: "generation" and a generation ID
  4. Reverse extending a video: Provide frame1 with type: "generation" and a generation ID
  5. Interpolating between videos: Provide both frame0 and frame1 with type: "generation" and generation IDs
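The five patterns above map directly to keyframes objects; a sketch with placeholder URLs and generation IDs:

```python
def image_frame(url):
    """Keyframe referencing a still image."""
    return {"type": "image", "url": url}

def generation_frame(generation_id):
    """Keyframe referencing an existing generation."""
    return {"type": "generation", "id": generation_id}

# 1. Start from an image
start_from_image = {"frame0": image_frame("https://example.com/first.jpg")}
# 2. End with an image
end_with_image = {"frame1": image_frame("https://example.com/last.jpg")}
# 3. Extend a video
extend = {"frame0": generation_frame("existing-generation-id")}
# 4. Reverse-extend a video
reverse_extend = {"frame1": generation_frame("existing-generation-id")}
# 5. Interpolate between two videos
interpolate = {
    "frame0": generation_frame("first-generation-id"),
    "frame1": generation_frame("second-generation-id"),
}
```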

API Limitations and Notes 📝

  • Duration: Currently, the API only supports durations of "5s" or "9s"
  • Resolution: Valid values are "540p", "720p", "1080p", and "4k"
  • Models:
    • Video generation:
      • "ray-2" (default) - Best quality, slower
      • "ray-flash-2" - Faster generation
      • "ray-1-6" - Legacy model
    • Image generation:
      • "photon-1" (default) - Best quality, slower
      • "photon-flash-1" - Faster generation
  • Generation types: Video, image, and advanced (with keyframes)
  • Aspect Ratios: "1:1" (square), "16:9" (landscape), "9:16" (portrait), "4:3" (standard), "3:4" (standard portrait), "21:9" (ultrawide), "9:21" (ultrawide portrait)
  • States: "queued", "dreaming", "completed", "failed"
  • Upscaling:
    • Video generations can only be upscaled when they're in a "completed" state
    • Target resolution must be higher than the original generation's resolution
    • Each generation can only be upscaled once
  • API Key: Required in environment variables
  • API Version: Uses Dream Machine API v1

License 📄

MIT
