OpenContext decomposes documents into L0/L1/L2 summary tiers. Agents drill down as needed and never waste a token. Built-in session memory lets knowledge accumulate across conversations.
Documents in, knowledge out — agents fetch exactly what they need, nothing more
PDF / DOCX / code / web pages
L0 one-liner / L1 paragraph / L2 full text
Browse via ctx:// URIs
Preferences and knowledge auto-extracted
The core idea: trade a small token budget for much denser information
One sentence capturing the core of each document. Agents can scan hundreds of docs with minimal tokens.
Structured summary with key points, method, and conclusions — the ideal granularity for an agent deciding whether to drill deeper.
The original document in full. Agents read this only once they've confirmed they need the detail, avoiding wasted tokens.
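The three tiers can be pictured as a simple per-document record — a minimal sketch with illustrative field names, not OpenContext's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TieredDoc:
    """Illustrative three-tier view of one ingested document."""
    uri: str  # e.g. "ctx://resources/docs/paper.pdf"
    l0: str   # one-line summary
    l1: str   # structured paragraph summary
    l2: str   # full original text

    def read(self, level: str) -> str:
        # Return only the representation the agent asked for.
        return {"L0": self.l0, "L1": self.l1, "L2": self.l2}[level]

doc = TieredDoc(
    uri="ctx://resources/docs/paper.pdf",
    l0="Proposes diffusion decoding as a unified alternative to autoregressive OCR.",
    l1="Key points: ... Method: ... Conclusion: ...",
    l2="(full paper text)",
)

# An agent scanning hundreds of docs reads only the cheap L0 line per doc.
print(doc.read("L0"))
```

The point of the tradeoff: an L0 scan over the whole corpus costs roughly one sentence per document, and L2 is paid for only after L0/L1 have justified it.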
File-system semantics with native MCP protocol support
```shell
# Browse the knowledge base (like ls)
oc ls ctx://resources/docs

# Quick scan: one-line summary per paper
oc read ctx://resources/docs/paper.pdf --level L0
# → "Proposes diffusion decoding as a unified alternative to autoregressive OCR."

# Need more detail? Read L1
oc read ctx://resources/docs/paper.pdf --level L1

# Semantic search
oc find "attention mechanism"

# Keyword search
oc grep "transformer"
```
Designed from scratch for agents — not a veneer over a vector database
Every document has a unique URI (e.g. ctx://resources/docs/paper.pdf). Agents navigate knowledge the way they navigate a file system — no vector IDs to remember.
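Because ctx:// addresses follow ordinary URL syntax, your own tooling can take them apart with a standard parser — a sketch using the URI from the CLI examples above:

```python
from urllib.parse import urlparse

uri = "ctx://resources/docs/paper.pdf"
parts = urlparse(uri)

print(parts.scheme)               # ctx
print(parts.netloc)               # resources  (top-level collection)
print(parts.path)                 # /docs/paper.pdf
print(parts.path.split("/")[-1])  # paper.pdf
```

No special client library is needed just to route or group these addresses.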
Vector search for semantic matches, regex for exact matches. Hierarchical drill-down walks automatically from L0 down to L2.
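One way to picture the drill-down pattern — a hedged sketch, not OpenContext's internals; the corpus and matching logic here are stand-ins (a toy regex filter in place of real vector search):

```python
import re

# Stand-in corpus: uri -> (L0, L1, L2) text tiers.
CORPUS = {
    "ctx://resources/docs/paper.pdf": (
        "Proposes diffusion decoding as a unified alternative to autoregressive OCR.",
        "Key points: replaces autoregressive decoding with diffusion ...",
        "(full paper text, discussed in depth ...)",
    ),
    "ctx://resources/docs/notes.md": (
        "Meeting notes on deployment.",
        "Docker deploy steps ...",
        "(full notes)",
    ),
}

def drill_down(pattern: str) -> dict[str, str]:
    """Filter on cheap L0 one-liners first; fetch expensive L2 text only for hits."""
    hits = {}
    for uri, (l0, l1, l2) in CORPUS.items():
        if re.search(pattern, l0, re.IGNORECASE):  # cheap scan over one-liners
            hits[uri] = l2                         # costly full read, only on a hit
    return hits

print(list(drill_down("diffusion")))  # ['ctx://resources/docs/paper.pdf']
```

The same shape works with an embedding similarity score in place of `re.search`; the key property is that full-text reads happen only after a match at a cheaper tier.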
Agent conversations are automatically mined for user preferences and execution experience, which are persisted to the knowledge base and loaded again in the next session.
A built-in MCP server plugs directly into Claude Desktop, Cursor and other MCP clients — agents need no extra integration code.
PDF, DOCX, EPUB, code (AST-aware), HTML, images (VLM) — ingest once and unify into the L0/L1/L2 structure.
Every retrieval is traced: query, candidate count, drill-down depth, reranker deltas — complete observability.
Free and open source. One-command Docker deploy. 5 minutes to integrate.
Automatically collects trending HuggingFace papers, analyzes them with LLMs to produce summaries and categories, and refreshes daily.
Upload a PDF and AI extracts key information into a mind map — ideal for deep reading of papers and skimming technical docs.
17 AI agents working together across HR, finance, and IT ops — natural language drives every office workflow.
Transparent proxy for LLM API calls with automatic context dedup and retrieval trimming — cuts usage costs by 30-60%.
Import videos from Bilibili or upload locally; AI transcription, smart Q&A, and auto-notes make video learning far more effective.