
Enterprise Task Queue MCP Server

Created by wizd, 7 months ago
TaskQueue MCP is an enterprise-grade agent execution engine built on BullMQ as its task-queue infrastructure, with an architecture that separates two loops: the user conversation loop and the agent execution loop. By decoupling these loops, the engine can run complex workflows lasting hours or even days while retaining full execution transparency and control. Backed by BullMQ's high-performance distributed task processing, TaskQueue MCP is well suited to multi-system integration, complex data processing, and long-running automation in enterprise environments. Task execution is traceable and auditable throughout, with support for human intervention at key checkpoints, balancing AI autonomy against the controllability that enterprise applications require. Whether for cross-system business-process automation, large-scale codebase refactoring, or complex data-analysis projects, the engine provides a solid, reliable execution foundation for AI agents in the enterprise.
Content

MCP Task Manager


MCP Task Manager (npm package: taskqueue-mcp) is a Model Context Protocol (MCP) server for AI task management. This tool helps AI assistants handle multi-step tasks in a structured way, with optional user approval checkpoints.

Features

  • Task planning with multiple steps
  • Progress tracking
  • User approval of completed tasks
  • Project completion approval
  • Task details visualization
  • Task status state management
  • Enhanced CLI for task inspection and management

Basic Setup

Usually you will set the tool configuration in Claude Desktop, Cursor, or another MCP client as follows:

{
  "tools": {
    "taskqueue": {
      "command": "npx",
      "args": ["-y", "taskqueue-mcp"]
    }
  }
}

To use the CLI utility, you can install the package globally and then use the following command:

npx taskqueue --help

This will show the available commands and options.

Advanced Configuration

The task manager supports multiple LLM providers for generating project plans. You can configure one or more of the following environment variables depending on which providers you want to use:

  • OPENAI_API_KEY: Required for using OpenAI models (e.g., GPT-4)
  • GOOGLE_GENERATIVE_AI_API_KEY: Required for using Google's Gemini models
  • DEEPSEEK_API_KEY: Required for using Deepseek models

To generate project plans using the CLI, set these environment variables in your shell:

export OPENAI_API_KEY="your-api-key"
export GOOGLE_GENERATIVE_AI_API_KEY="your-api-key"
export DEEPSEEK_API_KEY="your-api-key"

Or you can include them in your MCP client configuration to generate project plans with MCP tool calls:

{
  "tools": {
    "taskqueue": {
      "command": "npx",
      "args": ["-y", "taskqueue-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key",
        "GOOGLE_GENERATIVE_AI_API_KEY": "your-api-key",
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}

Available MCP Tools

The TaskManager now uses a direct tools interface with specific, purpose-built tools for each operation:

Project Management Tools

  • list_projects: Lists all projects in the system
  • read_project: Gets details about a specific project
  • create_project: Creates a new project with initial tasks
  • delete_project: Removes a project
  • add_tasks_to_project: Adds new tasks to an existing project
  • finalize_project: Finalizes a project after all tasks are done

Task Management Tools

  • list_tasks: Lists all tasks for a specific project
  • read_task: Gets details of a specific task
  • create_task: Creates a new task in a project
  • update_task: Modifies a task's properties (title, description, status)
  • delete_task: Removes a task from a project
  • approve_task: Approves a completed task
  • get_next_task: Gets the next pending task in a project
  • mark_task_done: Marks a task as completed with details
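
As a rough illustration (the field names mirror the data schema described later; the exact argument shape accepted by the server may differ), a create_project call from an assistant might carry arguments along these lines:

// Hypothetical arguments for a create_project tool call -- a sketch only,
// not the server's authoritative input schema.
const createProjectArgs = {
  initialPrompt: "Add input validation to the signup form",
  projectPlan: "Audit the form, add client- and server-side checks, write tests",
  autoApprove: false, // require manual approval of each task via the CLI
  tasks: [
    { title: "Audit existing fields", description: "List fields and current validation gaps" },
    { title: "Implement validation", description: "Add client- and server-side checks" },
    { title: "Write tests", description: "Cover the new validation paths" }
  ]
};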

Task Status and Workflows

Tasks have a status field that can be one of:

  • not started: Task has not been started yet
  • in progress: Task is currently being worked on
  • done: Task has been completed (requires completedDetails)

Status Transition Rules

The system enforces the following rules for task status transitions:

  • Tasks follow a specific workflow with defined valid transitions:
    • From not started: Can only move to in progress
    • From in progress: Can move to either done or back to not started
    • From done: Can move back to in progress if additional work is needed
  • When a task is marked as "done", the completedDetails field must be provided to document what was completed
  • Approved tasks cannot be modified
  • A project can only be approved when all tasks are both done and approved

These rules help maintain the integrity of task progress and ensure proper documentation of completed work.
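
The sketch below is not the package's actual implementation; it simply restates the rules above in TypeScript, with illustrative names:

// Illustrative restatement of the status rules; not taskqueue-mcp's own code.
type TaskStatus = "not started" | "in progress" | "done";

const allowedTransitions: Record<TaskStatus, TaskStatus[]> = {
  "not started": ["in progress"],
  "in progress": ["done", "not started"],
  "done": ["in progress"]
};

function canTransition(from: TaskStatus, to: TaskStatus, completedDetails?: string): boolean {
  if (!allowedTransitions[from].includes(to)) return false;
  // Marking a task "done" requires documenting what was completed.
  if (to === "done" && !completedDetails) return false;
  return true;
}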

Usage Workflow

A typical workflow for an LLM using this task manager would be:

  1. create_project: Start a project with initial tasks
  2. get_next_task: Get the first pending task
  3. Work on the task
  4. mark_task_done: Mark the task as complete with details
  5. Wait for approval (user must call approve_task through the CLI)
  6. get_next_task: Get the next pending task
  7. Repeat steps 3-6 until all tasks are complete
  8. finalize_project: Complete the project (requires user approval)
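
Schematically, that loop might look like the sketch below. Here callTool is a stand-in for whatever mechanism your MCP client or agent framework uses to invoke tools, and the response fields (projectId, task.id) are assumptions for illustration only:

// Schematic agent loop; callTool and the response fields are illustrative assumptions.
async function runProject(callTool: (name: string, args: object) => Promise<any>) {
  const project = await callTool("create_project", {
    initialPrompt: "Refactor the logging module",
    tasks: [{ title: "Inventory call sites", description: "Find every logger usage" }]
  });

  while (true) {
    const next = await callTool("get_next_task", { projectId: project.projectId });
    if (!next?.task) break; // no pending task left

    // ... do the actual work for next.task here ...

    await callTool("mark_task_done", {
      projectId: project.projectId,
      taskId: next.task.id,
      completedDetails: "Summary of what was done"
    });
    // With autoApprove set to false, the user approves via the CLI before continuing.
  }

  await callTool("finalize_project", { projectId: project.projectId });
}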

CLI Commands

To use the CLI, you will need to install the package globally:

npm install -g taskqueue-mcp

Alternatively, you can run the CLI through npx without a global install, using the --package=taskqueue-mcp flag to tell npx which package the taskqueue command comes from:

npx --package=taskqueue-mcp taskqueue --help

Task Approval

By default, all tasks and projects will be auto-approved when marked "done" by the AI agent. To require manual human task approval, set autoApprove to false when creating a project.

When manual approval is required, approval is controlled exclusively by the human user through the CLI:

npx taskqueue approve-task -- <projectId> <taskId>

Options:

  • -f, --force: Force approval even if the task is not marked as done

Note: Tasks must be marked as "done" with completed details by the AI agent before they can be approved (unless using --force).

Listing Tasks and Projects

The CLI provides a command to list all projects and tasks:

npx taskqueue list-tasks

To view details of a specific project:

npx taskqueue list-tasks -- -p <projectId>

This command displays information about all projects in the system or a specific project, including:

  • Project ID and initial prompt
  • Completion status
  • Task details (title, description, status, approval)
  • Progress metrics (approved/completed/total tasks)

Data Schema and Storage

File Location

The task manager stores data in a JSON file that must be accessible to both the server and CLI.

The default platform-specific location is:

  • Linux: ~/.local/share/taskqueue-mcp/tasks.json
  • macOS: ~/Library/Application Support/taskqueue-mcp/tasks.json
  • Windows: %APPDATA%\taskqueue-mcp\tasks.json

Using a custom file path for task data is not recommended: you must set the same path for both the MCP server and the CLI, or they will not see the same data. If you do need a custom path, set the TASK_MANAGER_FILE_PATH environment variable in your MCP client configuration:

{
  "tools": {
    "taskqueue": {
      "command": "npx",
      "args": ["-y", "taskqueue-mcp"],
      "env": {
        "TASK_MANAGER_FILE_PATH": "/path/to/tasks.json"
      }
    }
  }
}

Then, before running the CLI, you should export the same path in your shell:

export TASK_MANAGER_FILE_PATH="/path/to/tasks.json"

Data Schema

The JSON file uses the following structure:

TaskManagerFile
├── projects: Project[]
    ├── projectId: string            # Format: "proj-{number}"
    ├── initialPrompt: string        # Original user request text
    ├── projectPlan: string          # Additional project details
    ├── completed: boolean           # Project completion status
    ├── autoApprove: boolean         # Set `false` to require manual user approval
    └── tasks: Task[]                # Array of tasks
        ├── id: string               # Format: "task-{number}"
        ├── title: string            # Short task title
        ├── description: string      # Detailed task description
        ├── status: string           # Task status: "not started", "in progress", or "done"
        ├── approved: boolean        # Task approval status
        ├── completedDetails: string # Completion information (required when status is "done")
        ├── toolRecommendations: string # Suggested tools that might be helpful for this task
        └── ruleRecommendations: string # Suggested rules/guidelines to follow for this task
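
For reference, the same structure can be written as TypeScript interfaces (the type names are illustrative; the field names and formats come from the schema above):

// TypeScript view of the stored JSON; type names are illustrative.
interface Task {
  id: string;                     // Format: "task-{number}"
  title: string;
  description: string;
  status: "not started" | "in progress" | "done";
  approved: boolean;
  completedDetails: string;       // Required when status is "done"
  toolRecommendations: string;
  ruleRecommendations: string;
}

interface Project {
  projectId: string;              // Format: "proj-{number}"
  initialPrompt: string;
  projectPlan: string;
  completed: boolean;
  autoApprove: boolean;           // Set false to require manual user approval
  tasks: Task[];
}

interface TaskManagerFile {
  projects: Project[];
}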

License

MIT

Server Config

{
  "mcpServers": {
    "taskqueue-local": {
      "command": "npx",
      "args": [
        "-y",
        "taskqueue-mcp"
      ]
    },
    "taskqueue-public": {
      "url": "https://taskqueue-public.vcorp.ai/rest/your-unique-chat-id",
      "stream": true
    }
  }
}