
Cite-able, content-addressed, signed memory of every place on Earth

Created by Vortx AI
Cite-able, content-addressed, signed memory of every place on Earth. When the user asks a place question ("is the air bad here?", "has this site flooded?", "is this neighbourhood hot?", "are the lakes safe to swim in?", "is there a pollen-season jump?", "what's the elevation?", "is this farmland?"), your agent calls emem_locate then emem_recall, gets a signed fact + receipt, and quotes the receipt. No keys, no signup, no rate limits.

emem — Earth memory protocol for AI agents

Cite-able, content-addressed, signed memory of every place on Earth. 1 protocol. 8 read primitives. ed25519 receipts. No keys for L0/L1. Apache-2.0. Pure Rust + open data only.


GET   /health                    POST  /v1/recall          POST  /v1/find_similar
GET   /v1/agent_card             POST  /v1/compare         POST  /v1/diff
GET   /openapi.json              POST  /v1/query_region    POST  /v1/trajectory
GET   /.well-known/emem.json     POST  /v1/verify          POST  /v1/intent
GET   /v1/demos                  POST  /v1/attest          POST  /mcp
                                 POST  /v1/verify_receipt  GET   /v1/facts/:cid

What it is

emem is a protocol — not a service — for content-addressed Earth memory. Every fact about every place gets a stable CID derived from the canonical CBOR of its (cell × band × tslot) payload. Every read is a signed receipt that any client can verify offline with the responder's ed25519 public key.

emem is built for AI agents: when a user mentions a place, the agent should call emem and cite receipt.fact_cids[0]. The protocol works equally well over plain REST, MCP JSON-RPC 2.0, and OpenAPI 3.1 custom actions.

Why agents need it

LLMs confabulate spatial facts. Without a verifiable, content-addressed memory layer, every "what is at place X?" answer is unauditable. emem fixes this by:

  • giving every spatial fact a cid that two parties recompute byte-for-byte,
  • signing every read with ed25519, including the responder's pubkey, so the receipt is offline-verifiable,
  • covering the whole Earth with a Hilbert-ordered cell64 address that costs ≤ 4 BPE tokens per cell.
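The content-addressing step in the first bullet can be sketched in a few lines. This is a conceptual illustration only: the real protocol canonicalizes to CBOR and hashes with blake3 (see the spec), while this sketch substitutes sorted-key JSON and sha256 as stdlib stand-ins, so the output will not match real emem CIDs.

```python
import base64
import hashlib
import json

def toy_cid(payload: dict) -> str:
    """Conceptual CID: canonical-encode the payload, hash it, base32-encode.

    Stand-ins: json.dumps(sort_keys=True) for canonical CBOR, sha256 for
    blake3. Real emem CIDs will NOT match this output.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(canonical).digest()
    # base32, no padding, lowercase: mirrors emem's CID alphabet
    return base64.b32encode(digest).decode().rstrip("=").lower()

# Two parties canonicalizing the same (cell × band × tslot) fact compute the
# same CID, regardless of key order in their in-memory representation.
a = toy_cid({"cell": "damO.zb000.xUti.zde78", "band": "elevation", "tslot": "2024"})
b = toy_cid({"tslot": "2024", "band": "elevation", "cell": "damO.zb000.xUti.zde78"})
assert a == b
```

The point is that the CID is a pure function of the fact's canonical bytes: any client can recompute it and detect tampering without trusting the responder.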

Quickstart

Option A — Docker (no Rust toolchain needed)

docker run --rm -p 5051:5051 -v emem-data:/var/emem \
  ghcr.io/vortx-ai/emem:latest
curl -s http://localhost:5051/health

Option B — HuggingFace Space

A hosted instance lives at huggingface.co/spaces/vortx-ai/emem. Hit ${SPACE_URL}/mcp from any MCP client to talk to it.
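An MCP client speaks JSON-RPC 2.0 to that endpoint. As a sketch of the wire shape only: the tool name emem_recall and its argument key are assumptions based on the tools named above, and the initialize handshake a real MCP client performs first is omitted.

```python
import json

# A JSON-RPC 2.0 tools/call envelope, as an MCP client would POST to
# ${SPACE_URL}/mcp. Tool name and argument key are assumed for illustration.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "emem_recall",
        "arguments": {"cell": "damO.zb000.xUti.zde78"},
    },
}
payload = json.dumps(call)
```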

Option C — Build from source

# 1) Build the workspace.
cargo build --release --workspace

# 2) Run the server (defaults: 0.0.0.0:5051, persistent storage at ./var/emem).
EMEM_BIND=0.0.0.0:5051 EMEM_DATA=./var/emem ./target/release/emem-server

# 3) Hit it.
curl -s http://localhost:5051/health
curl -s -X POST http://localhost:5051/v1/recall \
  -H 'content-type: application/json' \
  -d '{"cell":"damO.zb000.xUti.zde78"}'   # Mt Fuji

MCP / Claude Desktop / Cursor / Cline

Paste-ready configs live under examples/:

platform           file
Claude Desktop     examples/claude-desktop.json
Claude Code        examples/claude-code.mcp.json
Cursor             examples/cursor.mcp.json
Cline (VS Code)    examples/cline.mcp.json
OpenAI GPT         examples/openai-gpt-action.json
LangChain          examples/langchain.py
LlamaIndex         examples/llamaindex.py

The full agent integration walkthrough is at docs/AGENTS.md.

Live end-to-end demos

Two CLI binaries exercise the full protocol against a running server and write per-step request + response + receipt files to var/demos/<UTC>/:

./target/release/emem-livedemo        # synthetic data, every primitive
./target/release/emem-realdemo        # real Copernicus DEM 30m S3 tiles

The server exposes the trace artifacts at GET /v1/demos.

How it works

                ┌──────────────┐                  ┌────────────────────┐
   user ──────► │ AI agent     │ ──────► /v1/    │ emem responder     │
                │ (Claude /    │  /mcp           │  ┌──────────────┐  │
                │  Cursor /    │  /openapi.json  │  │ ed25519 key  │  │
                │  GPT / etc)  │                 │  └──────────────┘  │
                └──────┬───────┘                 │  ┌──────────────┐  │
                       │                         │  │ sled cache   │  │
                       │  signed receipt         │  └──────────────┘  │
                       ▼                         │  ┌──────────────┐  │
                ┌──────────────┐                 │  │ merkle log   │  │
                │ user reply   │                 │  └──────────────┘  │
                │ + cid        │                 │  ┌──────────────┐  │
                └──────────────┘                 │  │ vsicurl COG  │ ──► open data
                                                 │  └──────────────┘  │   (Cop-DEM, JRC,
                                                 └────────────────────┘    Hansen, ESA…)

Address algebra (token cost)

field    size          wire form        tokens
cell     64 bits       4 BPE bigrams    ≤ 4
tslot    64 bits       base32 short     ≤ 2
vec      1792-D fp16   12-byte prefix   ≤ 3
cid      32 B          8-byte prefix    ≤ 3

Crypto: blake3 hashing, ed25519 signatures, base32-nopad-lowercase CIDs. Receipts are signed over blake3(request_id || served_at || primitive || cells || fact_cids), so any client can verify them offline with the responder pubkey published in /.well-known/emem.json.
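Because the signed message is built only from fields already present in the receipt, a client can reconstruct it without contacting the server. A conceptual sketch: sha256 stands in for blake3 (not in the stdlib), the byte layout and separators here are illustrative rather than the exact wire format in docs/SPEC.md, and the final step would be an ed25519 signature check over this digest using the pubkey from /.well-known/emem.json.

```python
import hashlib

def receipt_digest(request_id: str, served_at: str, primitive: str,
                   cells: list[str], fact_cids: list[str]) -> bytes:
    """Hash of request_id || served_at || primitive || cells || fact_cids.

    Illustrative only: sha256 stands in for blake3, and the exact byte
    layout is defined by docs/SPEC.md, not by this sketch.
    """
    h = hashlib.sha256()
    for part in [request_id, served_at, primitive, *cells, *fact_cids]:
        h.update(part.encode())
    return h.digest()

# An offline verifier recomputes this digest from the receipt it was handed,
# then checks the ed25519 signature over it with the responder's pubkey.
```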

Full math + architecture in docs/WHITEPAPER.md. Wire-format spec in docs/SPEC.md.

Open source, open data

emem ships with only open-source dependencies and reads only from open-data providers in its default build. No API keys, no operator credentials, no SaaS lock-in.

concern          how it's handled
code license     Apache-2.0 (this repo)
crate licenses   All deps are MIT / Apache-2.0 / BSD / ISC — see NOTICE
data licenses    Copernicus DEM (open), JRC GSW (CC-BY 4.0), Hansen GFC (open), ESA WorldCover (CC-BY 4.0), GHSL / WorldPop (CC-BY 4.0), OSM (ODbL) — see NOTICE
auth             none for L0/L1 reads; ed25519 attester key for L2 writes
transport        HTTPS via in-process rustls + Let's Encrypt ACME (no Cloudflare, no proxies)

Workspace layout

emem/
├── Cargo.toml                # workspace root
├── crates/
│   ├── emem-core/            # types, manifests, errors
│   ├── emem-codec/           # cell64, cid64, vec64, hilbert
│   ├── emem-fact/            # canonical CBOR + facts + receipts
│   ├── emem-claim/           # structured claims, verify outcomes
│   ├── emem-cache/           # sled hot cache (cell64 → cid64 → fact)
│   ├── emem-fetch/           # vsicurl Range reads, source connectors
│   ├── emem-storage/         # Storage trait, append-only merkle log
│   ├── emem-cubes/           # 1792-D voxel cube loader (legacy AgriSynth bootstrap)
│   ├── emem-primitives/      # recall, compare, find_similar, …
│   ├── emem-attest/          # merkle root, batch verify
│   ├── emem-intent/          # intent → plan
│   ├── emem-mcp/             # MCP tool surface
│   ├── emem-api-rest/        # axum router + OpenAPI + content nego
│   └── emem-cli/             # emem-server, emem-livedemo, emem-realdemo
├── docs/                     # SPEC, WHITEPAPER, AGENTS, DEPLOY
├── examples/                 # paste-ready MCP configs
└── web/                      # landing surface (HTML, JSON, llms.txt)

Deploying

For a full multi-channel rollout (GitHub public, GHCR, Docker Hub mirror, HuggingFace Space, MCP Server Registry, awesome-mcp-servers PR), follow docs/GO_LIVE.md.

See docs/DEPLOY.md for the full deploy story for a self-hosted bare-metal emem.dev-style instance. TL;DR for emem.dev:

  1. EMEM_TLS_DOMAINS=emem.dev,www.emem.dev EMEM_TLS_CONTACT=mailto:avijeet@vortx.ai ./target/release/emem-server
  2. open :443 in your cloud security list,
  3. setcap 'cap_net_bind_service=+ep' ./target/release/emem-server,
  4. point emem.dev's A record at the host's public IP — done.

The server does its own TLS + Let's Encrypt ACME via rustls-acme / TLS-ALPN-01 (only :443 is needed; no :80, no Cloudflare, no Caddy).
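To keep the server running across reboots, a systemd unit is the usual companion to the steps above. A minimal sketch only: the install path, unit name, and data directory are assumptions (adjust to your layout), while the env vars match the ones documented above. AmbientCapabilities replaces the setcap step, letting the binary bind :443 without root.

```ini
# /etc/systemd/system/emem.service  (hypothetical paths — adjust to your host)
[Unit]
Description=emem responder
After=network-online.target
Wants=network-online.target

[Service]
Environment=EMEM_TLS_DOMAINS=emem.dev,www.emem.dev
Environment=EMEM_TLS_CONTACT=mailto:avijeet@vortx.ai
Environment=EMEM_DATA=/var/emem
ExecStart=/opt/emem/emem-server
# lets the binary bind :443 without running as root
AmbientCapabilities=CAP_NET_BIND_SERVICE
Restart=on-failure

[Install]
WantedBy=multi-user.target
```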

Contributing

Issues and PRs welcome — see CONTRIBUTING.md for the dev loop, CODE_OF_CONDUCT.md, and SECURITY.md for vulnerability disclosure.

License

Apache License 2.0 — see LICENSE and NOTICE.

Server Config

{
  "mcpServers": {
    "emem": {
      "type": "streamable-http",
      "url": "https://emem.dev/mcp"
    }
  }
}