Agent Knowledge Database

Let AI agents browse knowledge like a file system

OpenContext decomposes documents into L0/L1/L2 summary tiers. Agents drill down as needed and never waste a token. Built-in session memory lets knowledge accumulate across conversations.

How it works

Documents in, knowledge out — agents fetch exactly what they need, nothing more

📄

Ingest documents

PDF / DOCX / code / web pages

🗃

Three-tier decomposition

L0 one-liner / L1 paragraph / L2 full text

🤖

Agents pull on demand

Browse via ctx:// URIs

🧠

Session memory

Preferences and knowledge auto-extracted

Three-tier summary system

The core idea: trade a small token budget for much denser information

L0

One-line summary

One sentence capturing the core of each document. Agents can scan hundreds of docs with minimal tokens.

~20 tokens

L1

Paragraph overview

A structured summary covering key points, method, and conclusion. The ideal granularity for agents deciding whether to drill deeper.

~200 tokens

L2

Full text

The original document in full. Agents only read this when they've confirmed they need the detail, avoiding token waste.

Full content
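The token economics of the three tiers can be sketched in a few lines of Python. The budgets below are the rough figures quoted above (~20 and ~200 tokens); the ~5,000-token full text is an assumed document size for illustration, not a measured value:

```python
# Illustrative cost model for the L0/L1/L2 tiers.
# ~20 and ~200 come from the figures above; 5000 is an assumed full-text size.
TIER_TOKENS = {"L0": 20, "L1": 200, "L2": 5000}

def scan_cost(num_docs: int, drill_l1: int, read_l2: int) -> int:
    """Tokens spent scanning num_docs at L0, then drilling into a few at L1/L2."""
    return (num_docs * TIER_TOKENS["L0"]
            + drill_l1 * TIER_TOKENS["L1"]
            + read_l2 * TIER_TOKENS["L2"])

# Scan 100 docs, read 5 paragraph overviews, open 1 full text:
tiered = scan_cost(100, drill_l1=5, read_l2=1)  # 100*20 + 5*200 + 1*5000 = 8000
naive = 100 * TIER_TOKENS["L2"]                 # 500000 if every doc is read in full
print(tiered, naive)
```

Under these assumptions, tiered browsing across 100 documents costs roughly 8,000 tokens versus 500,000 for reading everything in full.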

How agents use it

File-system semantics with native MCP protocol support

Agents interact with OpenContext via MCP or CLI
# Browse the knowledge base (like ls)
oc ls ctx://resources/docs

# Quick scan: one-line summary per paper
oc read ctx://resources/docs/paper.pdf --level L0
# → "Proposes diffusion decoding as a unified alternative to autoregressive OCR."

# Need more detail? Read L1
oc read ctx://resources/docs/paper.pdf --level L1

# Semantic search
oc find "attention mechanism"

# Keyword search
oc grep "transformer"

Core capabilities

Designed from scratch for agents — not a veneer over a vector database

🗃

ctx:// file-system semantics

Every document has a unique URI (e.g. ctx://docs/paper.pdf). Agents navigate knowledge the way they navigate a file system — no vector IDs to remember.
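Because ctx:// URIs follow standard URI syntax, agents (or tooling around them) can pick them apart with an ordinary URI parser. The collection/path layout below is inferred from the examples on this page, not from an OpenContext specification:

```python
from urllib.parse import urlparse

def parse_ctx_uri(uri: str) -> tuple[str, str]:
    """Split a ctx:// URI into its top-level collection and document path.
    The layout (collection as the authority, document path below it) is
    an assumption based on the examples on this page."""
    parts = urlparse(uri)
    if parts.scheme != "ctx":
        raise ValueError(f"not a ctx:// URI: {uri}")
    return parts.netloc, parts.path.lstrip("/")

collection, path = parse_ctx_uri("ctx://resources/docs/paper.pdf")
print(collection, path)  # resources docs/paper.pdf
```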

🔎

Semantic + keyword search

Vector search for semantic matches, regex for exact matches. Hierarchical drill-down walks automatically from L0 down to L2.

🧠

Session memory

Agent conversations are mined automatically for user preferences and lessons from past runs, which are persisted to the knowledge base and loaded again in the next session.

🔌

Native MCP protocol support

A built-in MCP server plugs directly into Claude Desktop, Cursor and other MCP clients — agents need no extra integration code.
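Registering the server in an MCP client typically means one entry in the client's config file. The snippet below uses Claude Desktop's `mcpServers` format; the `command` and `args` shown are placeholders, so check the OpenContext docs for the actual server entry point:

```json
{
  "mcpServers": {
    "opencontext": {
      "command": "oc",
      "args": ["mcp", "serve"]
    }
  }
}
```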

📄

8+ format parsers

PDF, DOCX, EPUB, code (AST-aware), HTML, images (VLM) — ingest once and unify into the L0/L1/L2 structure.

👁

Retrieval tracing and audit

Every retrieval is traced: query, candidate count, drill-down depth, reranker deltas — complete observability.
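A trace record carrying the fields listed above might look like the following sketch. The field names and types are illustrative; the actual OpenContext trace schema may differ:

```python
from dataclasses import dataclass, field

@dataclass
class RetrievalTrace:
    """Hypothetical shape for one retrieval trace entry."""
    query: str                   # the agent's search query
    candidate_count: int         # documents considered before reranking
    drill_depth: str             # deepest tier reached: "L0", "L1", or "L2"
    reranker_deltas: list[float] = field(default_factory=list)  # score changes per candidate

trace = RetrievalTrace(query="attention mechanism",
                       candidate_count=12,
                       drill_depth="L1",
                       reranker_deltas=[0.12, -0.03])
print(trace.drill_depth)  # L1
```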

Give your agent a smart knowledge base

Free and open source. One-command Docker deploy. 5 minutes to integrate.
