
## Agent Smith

Auto-generate AGENTS.md from your codebase. Stop writing AGENTS.md by hand: run agentsmith and it scans your codebase to generate a comprehensive context file that AI coding tools read automatically.

### What is AGENTS.md?

AGENTS.md is an open standard for giving AI coding assistants context about your project. It is adopted by 60,000+ projects and supported by:

* Cursor
* GitHub Copilot
* Claude Code
* VS Code
* Gemini CLI
* And 20+ more tools

AI tools automatically discover and read AGENTS.md files - no configuration needed.

### What agentsmith does

Instead of writing AGENTS.md manually, agentsmith scans your codebase and generates it:

```sh
npx @jpoindexter/agent-smith
```

```
agentsmith
Scanning /Users/you/my-project...
✓ Found 279 components
✓ Found 5 components with CVA variants
✓ Found 37 color tokens
✓ Found 14 custom hooks
✓ Found 46 API routes (8 with schemas)
✓ Found 87 environment variables
✓ Detected Next.js (App Router)
✓ Detected shadcn/ui (26 Radix packages)
✓ Found cn() utility
✓ Found mode/design-system
✓ Detected 6 code patterns
✓ Found existing CLAUDE.md
✓ Found .ai/ folder (12 files)
✓ Found prisma schema (28 models)
✓ Scanned 1572 files (11.0 MB, 365,599 lines)
✓ Found 17 barrel exports
✓ Found 15 hub files (most imported)
✓ Found 20 Props types
✓ Found 40 test files (12% component coverage)
✓ Generated AGENTS.md ~11K tokens (9% of 128K context)
```

### Install

```sh
# Run directly (no install needed)
npx @jpoindexter/agent-smith

# Or install globally
npm install -g @jpoindexter/agent-smith
```

### Usage

```sh
# Generate AGENTS.md in current directory
agentsmith

# Generate for a specific directory
agentsmith ./my-project

# Preview without writing (dry run)
agentsmith --dry-run

# Custom output file
agentsmith --output CONTEXT.md

# Force overwrite existing file
agentsmith --force
```

### Output Modes

```sh
# Default - comprehensive output (~11K tokens)
agentsmith

# Compact - fewer details (~20% smaller)
agentsmith --compact

# Compress - signatures only (~40% smaller)
agentsmith --compress

# Minimal - ultra-compact (~3K tokens)
agentsmith --minimal

# XML format (industry standard, matches Repomix)
agentsmith --xml

# Include file tree visualization
agentsmith --tree
```
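The scan step is essentially static analysis over the source tree. As an illustrative sketch only (not agentsmith's actual implementation), here is how a scanner might collect one of the statistics reported above, the distinct environment variables a project references:

```python
import re
from pathlib import Path

# Matches process.env.FOO (JS/TS) and os.environ["FOO"] (Python) references.
ENV_RE = re.compile(
    r"process\.env\.([A-Z][A-Z0-9_]*)"
    r"|os\.environ\[['\"]([A-Z][A-Z0-9_]*)['\"]\]"
)

def scan_env_vars(root: str) -> set[str]:
    """Collect distinct environment-variable names referenced under root."""
    found: set[str] = set()
    for path in Path(root).rglob("*"):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx", ".py"}:
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for m in ENV_RE.finditer(text):
            found.add(m.group(1) or m.group(2))
    return found
```

A real scanner would use proper parsers per language rather than regexes, but the shape is the same: walk the tree, extract facts, then summarize them into the generated AGENTS.md.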

## Altinity MCP

**Altinity MCP Server** is a production-ready MCP server that lets AI agents and LLMs interact seamlessly with ClickHouse. It exposes your ClickHouse database as a set of standardized tools and resources that adhere to the MCP protocol, making it easy for agents built on OpenAI, Claude, or other platforms to query, explore, and analyse your data.

### Why use this server?

* Seamless AI-agent integration: designed so that agents built with OpenAI can call your database as if it were a tool.
* Flexible transport support: STDIO for local workflows, HTTP for traditional REST-style calls, plus streaming via SSE for interactive flows.
* Full tooling and protocol support: built-in tools for schema introspection, SQL execution, and resource discovery.
* Enterprise-grade security: supports JWE/JWT authentication and TLS for both the ClickHouse connection and the MCP endpoints.
* Open-source and extensible: you can customise it, extend it, and embed it into your stack.

### Key Features

* **Transport Options**:
  * **STDIO**: runs locally via standard input/output; ideal for embedded agents or local workflows.
  * **HTTP**: exposes MCP tools as HTTP endpoints, enabling web, backend, and agent access.
  * **SSE (Server-Sent Events)**: enables streaming responses; useful when you want the agent to receive results in chunks, respond interactively, or present live data.
* **OpenAPI Integration**: when HTTP or SSE mode is enabled, the server can generate a full OpenAPI (v3) specification describing all tools and endpoints. This makes it easy for OpenAI-based agents (or other LLM platforms) to discover and call your tools programmatically.
* **Security & Authentication**: optional JWE token authentication, JWT signing, and TLS support for both the MCP server and the underlying ClickHouse connection.
* **Dynamic Resource Discovery**: the server can introspect the ClickHouse schema and automatically generate MCP “resources” (tables, views, sample data) so agents understand your data context without manual intervention.
* **Configuration Flexibility**: configure via environment variables, a YAML/JSON configuration file, or CLI flags. Hot-reload support lets you adjust the configuration without a full restart.

### Use Cases

* AI assistant integrated with OpenAI: for example, an agent built on OpenAI’s API reads your schema via the OpenAPI spec, selects the right tool, calls the MCP server’s HTTP/SSE endpoint, and returns analytic results to the user.
* Streaming analytics: for large result sets or interactive analytics flows, SSE streaming delivers progressive results and keeps your UI or agent responsive.
* Secure enterprise access: instead of giving agents full database credentials, you expose access through the MCP server with fine-grained auth, limit enforcement, TLS, and tool-level control.
* Schema-aware LLM workflows: because the server exposes table and column metadata and sample rows as resources, the LLM can reason about your data structure, reducing errors and generating better SQL.
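Over the HTTP transport, MCP tool calls are JSON-RPC 2.0 messages. As a minimal client-side sketch, here is what building and sending such a request might look like; the tool name `execute_query`, the endpoint URL, and bearer-token auth are assumptions for illustration (a real client should first discover the actual tool names via the server's `tools/list` method):

```python
import json
import urllib.request

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> bytes:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }).encode("utf-8")

def call_mcp(endpoint: str, token: str, body: bytes) -> dict:
    """POST the message to the MCP HTTP endpoint with bearer auth."""
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Hypothetical SQL tool call; the query runs with only the access the
# server's tool-level controls grant, not raw database credentials.
payload = build_tool_call(
    "execute_query", {"query": "SELECT count() FROM system.tables"}
)
```

This is the pattern the "secure enterprise access" use case relies on: the agent only ever sees the MCP endpoint and its token, while connection details and TLS to ClickHouse stay inside the server.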