
Tag: #dex — 57 results found

Intelligence Aeternum Data Portal

AI training dataset marketplace — 2M+ museum artworks across 7 world-class institutions with on-demand 111-field Golden Codex AI enrichment. x402 USDC micropayments on Base L2. First monetized art/provenance MCP server. Research-backed: dense metadata improves VLM capability by +25.5% (DOI: 10.5281/zenodo.18667735).

The complete creative AI pipeline exposed as MCP tools. From generation to permanent storage — every stage available via x402 USDC micropayments on Base L2.

- Generation: SD 3.5 Large + T5-XXL — Stable Diffusion 3.5 Large with the T5-XXL text encoder on an NVIDIA L4 GPU. High-fidelity image generation with superior prompt adherence. LoRA support (Artiswa v2 style transfer).
- Upscaling: ESRGAN x4 Upscaler — Real-ESRGAN x4plus on an NVIDIA L4 GPU (24 GB VRAM). Takes 1024px to 4096px in ~1.15s. Production-grade super-resolution for print and archival quality.
- AI Enrichment: Golden Codex Metadata Creation (Nova) — 111-field deep visual analysis powered by Gemini VLM. Color harmony, composition, symbolism, emotional journey, provenance chain, archetypal resonance. 2,000-6,000 tokens per artwork. Research-backed: +25.5% VLM improvement (DOI: 10.5281/zenodo.18667735).
- Metadata Infusion: Atlas XMP/IPTC/C2PA Infusion — embed Golden Codex metadata directly into image files via ExifTool. XMP-gc namespace, gzip+base64 compressed payload, SHA-256 Soulmark hash, C2PA Content Credentials. Strip-proof: metadata is recoverable via the hash registry even if the XMP is removed.
- Verification: Aegis Provenance Verification — "Shazam for Art." Perceptual-hash lookup against a 100K+ scale LSH index (16x4 bands). Verify any image's provenance chain in <500ms. Free tier available.
- Dataset Access: Alexandria Aeternum — 2M+ museum artworks across 7 world-class institutions (Met, Rijksmuseum, Smithsonian, NGA, Chicago, Cleveland, Paris). Search, preview, and purchase enriched training data. Human_Standard and Hybrid_Premium tiers with auto-generated AB 2013 + EU AI Act compliance manifests.
- Permanent Storage: Arweave Permanent Storage — store artifacts on Arweave L1 for 200+ year permanence. No AR tokens needed — pay in USDC via x402 and we handle the rest. Native AR SDK, direct L1 posting, transaction ID returned for on-chain verification. Your art outlives every server.
- NFT Minting: Mintra Blockchain Minting — mint provenance-tracked NFTs on Polygon. Metadata-rich tokens with the full Golden Codex schema on-chain. Archivus (Arweave) + Mintra (Polygon) pipeline: permanent storage → immutable ownership in one call.

Pricing — Genesis Epoch: 20% off all services for 90 days. Volume discounts auto-apply per wallet (100+: 25% off; 500+: 37% off; 2000+: 50% off). Enterprise packages from $8,000.
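The Atlas step describes a gzip+base64 compressed payload plus a SHA-256 "Soulmark" hash. The actual XMP-gc field layout is not public, so the following is only a minimal sketch of that general packing scheme (function names and the sample metadata are illustrative, not the real Atlas API):

```python
import base64
import gzip
import hashlib
import json

def pack_codex(metadata: dict) -> tuple[str, str]:
    """Illustrative Atlas-style packing.

    Returns (payload, soulmark):
      payload  -- gzip-compressed, base64-encoded JSON, suitable for
                  embedding in an XMP text field
      soulmark -- SHA-256 hex digest of the canonical JSON, usable as a
                  registry key even if the embedded XMP is later stripped
    """
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":")).encode()
    payload = base64.b64encode(gzip.compress(canonical)).decode("ascii")
    soulmark = hashlib.sha256(canonical).hexdigest()
    return payload, soulmark

def unpack_codex(payload: str) -> dict:
    """Reverse the packing: base64-decode, gunzip, parse JSON."""
    return json.loads(gzip.decompress(base64.b64decode(payload)))

# Hypothetical two-field record standing in for a 111-field Golden Codex entry.
meta = {"title": "Example", "color_harmony": "analogous"}
payload, soulmark = pack_codex(meta)
assert unpack_codex(payload) == meta
```

Hashing the canonical JSON (rather than the compressed payload) is what makes the "strip-proof" claim plausible: the same metadata always yields the same Soulmark, so a registry lookup can re-attach the record after the XMP is removed.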


Instagit - Let Your Agents Instantly Understand Any GitHub Repo

Works with Claude Code, Claude Desktop, Cursor, OpenClaw, and any MCP-compatible client. The @latest tag ensures you always get the most recent version.

Why: Agents that integrate with external libraries are flying blind. They read docs (if they exist), guess at APIs, and hallucinate patterns that don't match the actual code. The result: broken integrations, wrong function signatures, outdated usage patterns, hours of debugging. When an agent can actually analyze the source code of a library or service it's integrating with, everything changes. It sees the real function signatures, the actual data flow, the patterns the maintainers intended. Integration becomes dramatically easier and less error-prone because the agent is working from ground truth, not guesses.

What Agents Can Do With This
- Integrate with any library correctly the first time — "How do I set up authentication with this SDK?" gets answered from the actual code, not outdated docs or training data. Your agent sees the real constructors, the real config options, the real error types.
- Migrate between versions without the guesswork — Point your agent at both the old and new version of a library. It can diff the actual implementations and generate a migration plan that accounts for every breaking change.
- Debug issues across repository boundaries — When a bug spans your code and a dependency, your agent can read both codebases and trace the issue to its root cause, even into libraries you've never opened.
- Generate integration code that actually works — Instead of producing plausible-looking code that fails at runtime, your agent writes integration code based on the real API surface: actual method names, actual parameter types, actual return values.
- Evaluate libraries before committing — "Should we use library A or B?" Your agent can analyze both implementations, compare their approaches to error handling, test coverage, and architectural quality, and give you a grounded recommendation.
- Onboard to unfamiliar codebases in minutes — Point your agent at any repo and ask how things work. It answers from the code itself, with file paths and line numbers, not from memory that may be months out of date.
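The description mentions the `@latest` tag without showing the client configuration it belongs to. For an npx-launched MCP server, a Claude Desktop entry would typically look like the following — note the `instagit` package name is an assumption inferred from the title, not confirmed by this listing:

```json
{
  "mcpServers": {
    "instagit": {
      "command": "npx",
      "args": ["-y", "instagit@latest"]
    }
  }
}
```

Pinning `@latest` trades reproducibility for freshness; pin an exact version instead if you need stable behavior across runs.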

MCP Server for Bitrix24

# mcp-bitrix24

MCP server for Bitrix24 Tasks, Workgroups, and Users. Implements MCP/JSON-RPC over STDIO.

## Features

- Tasks: create, update, close, reopen, list
- Workgroups: create, list
- Users: list, current user, available fields
- Task fields: available fields + validation for `create_task.fields`

## Requirements

- Node.js >= 18
- Bitrix24 webhook URL

## Install / Build

```bash
npm install
npm run build
```

Run via npm:

```bash
npx mcp-bitrix24
```

## Configuration

Set the Bitrix24 webhook URL via environment variable:

```
BITRIX24_WEBHOOK_URL=https://<your-domain>/rest/<user_id>/<webhook>/
```

Example Codex MCP config:

```toml
[mcp_servers.bitrix24]
command = "npx"
args = ["-y", "mcp-bitrix24"]

[mcp_servers.bitrix24.env]
BITRIX24_WEBHOOK_URL = "https://<your-domain>/rest/<user_id>/<webhook>/"
```

## Tools

### Tasks

- `create_task`
  - Input: `title` (string, required), `description?` (string), `responsible_id?` (number), `group_id?` (number), `fields?` (object)
  - Output: `{ task_id: number }`
  - Note: if `fields` is provided, keys are validated against `get_task_fields`.
- `update_task`
  - Input: `task_id` (number, required) + at least one of: `title?`, `description?`, `responsible_id?`, `group_id?`
  - Output: `{ task_id: number }`
- `close_task`
  - Input: `task_id` (number, required)
  - Output: `{ task_id: number }`
- `reopen_task`
  - Input: `task_id` (number, required)
  - Output: `{ task_id: number }`
- `list_tasks`
  - Input: `responsible_id?` (number), `group_id?` (number), `start?` (number), `limit?` (number)
  - Output: `{ tasks: [{ id, title, status }] }`
- `get_task_fields`
  - Input: `{}`
  - Output: `{ fields: { [field: string]: object } }`
- `list_task_history`
  - Input: `task_id` (number, required), `filter?` (object), `order?` (object)
  - Output: `{ list: [{ id, createdDate, field, value, user }] }`

### Workgroups

- `create_group`
  - Input: `name` (string, required), `description?` (string)
  - Output: `{ group_id: number }`
- `list_groups`
  - Input: `limit?` (number)
  - Output: `{ groups: [{ id, name }] }`

### Users

- `list_users`
  - Input: `filter?` (object), `sort?` (string), `order?` ("ASC" | "DESC"), `admin_mode?` (boolean), `start?` (number), `limit?` (number)
  - Output: `{ users: [{ id, name, last_name, email?, active }] }`
  - Note: `filter` supports Bitrix24 `user.get` filters (including prefixes like `>=`, `%`, `@`, etc.). `start` controls paging (Bitrix returns 50 records per page); `limit` is a local slice applied after the API response.
- `get_user_fields`
  - Input: `{}`
  - Output: `{ fields: { [field: string]: string } }`
- `get_current_user`
  - Input: `{}`
  - Output: `{ user: { id, name, last_name, email?, active } }`

## Architecture

Clean architecture layers:

- `mcp/` — protocol, transport, server
- `adapters/` — MCP tools mapping to domain
- `domain/` — entities, services, ports
- `infrastructure/` — Bitrix24 REST client

## Development Notes

- Input validation uses `zod`.
- Transport: STDIO only.
- Build: `tsc` (`npm run build`).

## Contributing

See `CONTRIBUTING.md` for guidelines.
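Since the server speaks standard MCP JSON-RPC over STDIO, invoking a tool such as `create_task` is an ordinary `tools/call` request. A minimal sketch (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_task",
    "arguments": {
      "title": "Prepare Q3 report",
      "responsible_id": 7
    }
  }
}
```

A successful call returns a result containing `{ task_id: number }`, per the tool's documented output shape.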