OpenClaw Lossless Context Management (LCM) Plugin Guide

A comprehensive guide to installing, configuring, and using the lossless-claw plugin for OpenClaw — a DAG-based context engine that never throws away your conversation history.

What Problem Does It Solve?

By default, OpenClaw uses a legacy context engine that truncates or slides old messages out of the context window when conversations get long. Once those messages are gone, the agent loses access to earlier context entirely.

Lossless-claw replaces this with a fundamentally different approach:

  • Every message is persisted to a local SQLite database — nothing is ever deleted
  • Old messages are summarized into a DAG (Directed Acyclic Graph) of layered summaries
  • The agent can drill back into any summary to recover full details on demand
  • Context assembly is budget-aware, fitting the most relevant information into the model's window

The result: conversations that can run for hundreds or thousands of turns without the agent "forgetting" what happened earlier.

Installation

From npm

openclaw plugins install @martian-engineering/lossless-claw

From a Local Clone (for development)

git clone https://github.com/Martian-Engineering/lossless-claw.git
openclaw plugins install --link ./lossless-claw

Activate as the Context Engine

This step is required. Without it, the plugin loads but does not run — the default legacy engine remains active.

openclaw config set plugins.slots.contextEngine lossless-claw

Verify

openclaw plugins list

You should see lossless-claw listed as enabled, with the contextEngine slot assigned to it.

Update

openclaw plugins update @martian-engineering/lossless-claw

Or update all plugins at once:

openclaw plugins update --all

How It Works

The DAG Model

Traditional context management is linear: keep the latest N messages, discard the rest. LCM instead builds a layered DAG of summaries:

Raw messages:   [m1] [m2] [m3] ... [m20] [m21] ... [m40] ... [m80] ... [m100]
                 ↓ chunk                  ↓ chunk            ↓ chunk
Leaf (d0):     [leaf_1: m1-m20]      [leaf_2: m21-m40]   [leaf_3: ...]  [leaf_4: ...]
                 ↓                        ↓
Condensed (d1): [cond_1: leaf_1 + leaf_2]                 [cond_2: leaf_3 + leaf_4]
                 ↓                                            ↓
Condensed (d2): [cond_3: cond_1 + cond_2]
                                                    ↑
                                            still expandable

Each node in the DAG carries metadata: time range, token counts, descendant counts, and references to its sources. The agent sees summaries in the context window, and can use retrieval tools to drill into any node for full detail.
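
For illustration, a node's metadata can be pictured as the following TypeScript sketch (the field names are assumptions for exposition, not the plugin's actual schema):

// Hypothetical shape of a DAG summary node; names are illustrative only
interface SummaryNode {
  id: string;                              // e.g. "sum_abc123", as accepted by lcm_describe
  depth: number;                           // 0 = leaf summary, 1+ = condensed layers
  content: string;                         // the summary text shown in the context window
  sourceIds: string[];                     // raw message IDs (depth 0) or child summary IDs (depth > 0)
  timeRange: { from: string; to: string }; // time span the node covers
  tokenCount: number;                      // tokens this node occupies when included
  descendantCount: number;                 // raw messages covered beneath this node
}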

Lifecycle

The engine hooks into four points in OpenClaw's conversation flow:

| Phase | What Happens |
| --- | --- |
| Bootstrap | On session startup, reconciles the JSONL session file with the SQLite database. Imports any messages that appeared since the last checkpoint. |
| Assemble | Before each model call, builds the message array within the token budget: recent raw messages (the "fresh tail") plus selected summaries from the DAG. |
| After Turn | After the model responds, persists new messages and evaluates whether compaction is needed. |
| Compact | When the context exceeds the threshold, runs leaf and/or condensed summarization passes to compress older content. |
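
The four phases map naturally onto an engine interface along these lines (a hypothetical sketch; OpenClaw's real plugin API may differ):

type Message = { role: string; content: string }; // stand-in for OpenClaw's message type

interface LcmContextEngine {
  // Bootstrap: reconcile the JSONL session file with SQLite on startup
  bootstrap(sessionId: string): Promise<void>;
  // Assemble: build the model's message array within the token budget
  assemble(sessionId: string, tokenBudget: number): Promise<Message[]>;
  // After Turn: persist new messages and decide whether compaction is needed
  afterTurn(sessionId: string, newMessages: Message[]): Promise<void>;
  // Compact: summarize older content into leaf and condensed DAG nodes
  compact(sessionId: string): Promise<void>;
}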

Compaction: Three Escalation Levels

Every summarization attempt follows a fallback chain to guarantee progress:

  1. Normal — Full-fidelity prompt, temperature 0.2, target ~1200 tokens
  2. Aggressive — Tighter prompt with fewer details, temperature 0.1, lower token target
  3. Deterministic fallback — Truncates to ~512 tokens with a [Truncated for context management] marker

Even if the summarization model is down or returns garbage, compaction still succeeds.
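
In code, the escalation chain amounts to a try/catch ladder. The sketch below is illustrative; the helper functions and the aggressive token target are assumptions, not the plugin's internals:

type Message = { role: string; content: string }; // stand-in for OpenClaw's message type

// Hypothetical helpers standing in for the plugin's summarization machinery
declare function summarize(
  chunk: Message[],
  opts: { temperature: number; targetTokens: number; aggressive?: boolean }
): Promise<string>;
declare function truncateToTokens(chunk: Message[], maxTokens: number): string;

async function summarizeWithFallback(chunk: Message[]): Promise<string> {
  try {
    // Level 1: full-fidelity prompt, temperature 0.2, ~1200-token target
    return await summarize(chunk, { temperature: 0.2, targetTokens: 1200 });
  } catch {
    try {
      // Level 2: tighter prompt, temperature 0.1, lower target (600 is assumed)
      return await summarize(chunk, { temperature: 0.1, targetTokens: 600, aggressive: true });
    } catch {
      // Level 3: deterministic truncation; no model call, so it cannot fail
      return truncateToTokens(chunk, 512) + "\n[Truncated for context management]";
    }
  }
}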

Large File Handling

When a message contains a file (code paste, log dump, etc.) exceeding the largeFileTokenThreshold (default 25,000 tokens):

  1. The file content is extracted and stored on disk (~/.openclaw/lcm-files/)
  2. A ~200-token structural summary replaces the file in the message
  3. The agent can retrieve the full file via lcm_describe

This prevents a single large paste from consuming the entire context window.
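
A minimal sketch of that check, assuming hypothetical helper names (the real plugin's internals are not shown here):

// Hypothetical externalization step for oversized file content
declare function countTokens(text: string): number;
declare function writeFileToStore(path: string, content: string): void;
declare function structuralSummary(text: string): string; // ~200-token outline

const LARGE_FILE_TOKEN_THRESHOLD = 25_000; // default largeFileTokenThreshold

function externalizeIfLarge(fileContent: string, fileId: string): string {
  if (countTokens(fileContent) <= LARGE_FILE_TOKEN_THRESHOLD) {
    return fileContent; // small enough to stay inline in the message
  }
  // Full content is preserved on disk and stays retrievable via lcm_describe
  writeFileToStore(`~/.openclaw/lcm-files/${fileId}`, fileContent);
  return structuralSummary(fileContent); // compact stand-in replaces the paste
}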

Configuration

Open your config with openclaw config edit and add settings under plugins.entries.lossless-claw.config:

{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw"
    },
    "entries": {
      "lossless-claw": {
        "enabled": true,
        "config": {
          // All fields are optional — defaults are sensible
        }
      }
    }
  }
}

All settings can also be overridden via environment variables (prefix LCM_, e.g. LCM_FRESH_TAIL_COUNT=32). Environment variables take highest precedence.
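
The precedence order (environment variable, then plugin config, then built-in default) can be pictured like this; the lookup is a sketch, not the plugin's actual code:

// Hypothetical resolution order: env var > config entry > default
function resolveNumberSetting(envName: string, configValue: number | undefined, defaultValue: number): number {
  const raw = process.env[envName];          // e.g. process.env.LCM_FRESH_TAIL_COUNT
  if (raw !== undefined) return Number(raw); // env var takes highest precedence
  if (configValue !== undefined) return configValue; // then plugins.entries config
  return defaultValue;                               // then the built-in default
}

// Example: resolveNumberSetting("LCM_FRESH_TAIL_COUNT", config.freshTailCount, 20)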

Key Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| contextThreshold | 0.75 | Fraction of the model's context window that triggers compaction. At 0.75, compaction fires when 75% of the budget is consumed (see the worked example after this table). |
| freshTailCount | 20 | Number of most recent raw messages that are always included and never compacted. This is the agent's "working memory." |
| incrementalMaxDepth | -1 | How deep incremental (per-turn) condensation goes. 0 = leaf passes only, 1 = one condensation level, -1 = unlimited. |
| dbPath | ~/.openclaw/lcm.db | Path to the SQLite database. |
| summaryModel | (session model) | Model override for summarization. Use a cheaper/faster model to reduce costs (e.g., anthropic/claude-haiku-4-5). Supports cross-provider refs like openai-resp/gpt-5.4. |
| summaryProvider | (auto) | Provider override, used only when summaryModel is a bare model name. |
| expansionModel | (session model) | Model override for the lcm_expand_query sub-agent. |
| expansionProvider | (auto) | Provider override for the expansion sub-agent. |
| largeFileTokenThreshold | 25000 | Files above this token count are externalized to disk. |
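
To make contextThreshold concrete: with a hypothetical 200,000-token context window and the default threshold of 0.75, compaction fires once assembled context exceeds 150,000 tokens (200,000 × 0.75), while the freshTailCount most recent messages stay raw regardless.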

Session Filtering

| Parameter | Description |
| --- | --- |
| ignoreSessionPatterns | Glob patterns for sessions to exclude entirely. Example: ["agent:*:cron:**"] excludes all cron sessions. |
| statelessSessionPatterns | Glob patterns for sessions that can read from the database but never write. Example: ["agent:*:subagent:**"] lets sub-agents access parent context without polluting the DB. |
| skipStatelessSessions | When true, stateless sessions skip all LCM persistence. When false, they participate in reads only. |
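
Put together, a filtering setup under plugins.entries.lossless-claw.config might look like this (the patterns are the examples from the table above):

{
  "ignoreSessionPatterns": ["agent:*:cron:**"],
  "statelessSessionPatterns": ["agent:*:subagent:**"],
  "skipStatelessSessions": false
}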

Recommended Configurations

General use (balanced):

{
  "contextThreshold": 0.75,
  "freshTailCount": 32,
  "incrementalMaxDepth": -1
}

Long-running sessions (hundreds of turns):

{
  "contextThreshold": 0.8,
  "freshTailCount": 32,
  "incrementalMaxDepth": 2
}

Cost-sensitive (minimize summarization calls):

{
  "contextThreshold": 0.85,
  "freshTailCount": 16,
  "summaryModel": "anthropic/claude-haiku-4-5"
}

Agent Tools

Once active, LCM registers three tools that the agent can call to retrieve compressed context:

lcm_grep

Full-text and regex search across all persisted messages and summaries.

lcm_grep({ pattern: "database migration", mode: "full_text" })
lcm_grep({ pattern: "error.*timeout", mode: "regex", scope: "messages" })
lcm_grep({ pattern: "deployment", since: "2026-03-01", limit: 20 })
  • Fast (<100ms) — direct SQLite query
  • Supports FTS5 when available, with automatic LIKE-based fallback for CJK text
  • Scope to messages, summaries, or both
  • Filter by time range with since / before

lcm_describe

Inspect a specific summary or large file in full detail.

lcm_describe({ id: "sum_abc123" })
lcm_describe({ id: "file_xyz789" })
  • Fast (<100ms) — direct lookup
  • For summaries: returns full content, metadata, parent/child links, source message IDs, and subtree structure
  • For files: returns full file content and exploration summary

lcm_expand_query

Answer a focused question by walking the DAG through a bounded sub-agent.

lcm_expand_query({
  prompt: "What were the exact SQL migrations we discussed for the users table?",
  summaryIds: ["sum_abc123"]
})

  • Slow but powerful (~30-120 seconds) — spawns a sub-agent that traverses the DAG
  • The sub-agent has read-only access scoped to the current conversation
  • Access is time-limited (5-minute TTL) and automatically revoked
  • Best used when lcm_grep or lcm_describe are not specific enough

How the Agent Uses These Tools

The plugin injects guidance into the system prompt that teaches the agent when to use each tool:

| Need | Tool | Why |
| --- | --- | --- |
| "Did we discuss X?" | lcm_grep | Fast keyword/regex scan |
| "What does this summary contain?" | lcm_describe | Direct metadata lookup |
| "What exactly did we decide about X three days ago?" | lcm_expand_query | Deep recall with evidence |

The guidance is depth-aware: for shallow DAGs (few summaries), it is minimal. For deep DAGs (many layers of condensation), it adds explicit instructions about precision and evidence requirements.

Architecture Diagram

                        ┌─────────────────────┐
                        │   OpenClaw Gateway   │
                        └──────────┬──────────┘
                                   │
                          ┌────────▼────────┐
                          │  Agent Runtime   │
                          └────────┬────────┘
                                   │
               ┌───────────────────┼───────────────────┐
               │                   │                   │
       ┌───────▼───────┐  ┌───────▼───────┐  ┌───────▼───────┐
       │   Bootstrap    │  │   Assemble    │  │  After Turn   │
       │ (session sync) │  │ (build prompt)│  │ (persist +    │
       │                │  │               │  │  compact?)    │
       └───────┬───────┘  └───────┬───────┘  └───────┬───────┘
               │                  │                   │
               └──────────────────┼───────────────────┘
                                  │
                     ┌────────────▼────────────┐
                     │    SQLite Database       │
                     │  ┌──────────────────┐   │
                     │  │ messages          │   │
                     │  │ summaries (DAG)   │   │
                     │  │ context_items     │   │
                     │  │ large_files       │   │
                     │  └──────────────────┘   │
                     └─────────────────────────┘
                                  │
                    ┌─────────────┼─────────────┐
                    │             │             │
              ┌─────▼─────┐ ┌────▼────┐ ┌─────▼──────┐
              │ lcm_grep  │ │lcm_desc │ │lcm_expand  │
              │ (search)  │ │(inspect)│ │(sub-agent) │
              └───────────┘ └─────────┘ └────────────┘

Advantages

Nothing Is Lost

Every message is persisted. Summaries link back to source messages. The agent can always recover full details through lcm_expand_query. This is fundamentally different from sliding-window truncation where old context is gone forever.

Intelligent Compression

Depth-aware summarization prompts produce different summary styles at each level:

  • Leaf summaries preserve specific decisions, commands, errors, and rationale
  • Mid-level summaries extract themes, key decisions, and unresolved tensions
  • High-level summaries capture session arcs, major turning points, and long-term constraints

Cost Control

You can use a cheaper model for summarization (e.g., Haiku) while keeping the main conversation on a more capable model (e.g., Opus). The summaryModel and expansionModel settings make this explicit.

Crash Recovery

The bootstrap system tracks reconciliation progress with byte offsets and entry hashes. If OpenClaw crashes mid-session, the next startup picks up exactly where it left off — no duplicate ingestion, no lost messages.
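
A checkpoint record along these lines would support that guarantee (a hypothetical sketch, not the plugin's actual schema):

// Hypothetical reconciliation checkpoint persisted alongside the database
interface ReconcileCheckpoint {
  sessionFile: string;   // path to the JSONL session file being ingested
  byteOffset: number;    // resume reading from exactly here after a crash
  lastEntryHash: string; // guards against re-ingesting or altered entries
}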

Sub-agent Isolation

The expansion system uses scoped delegation grants with TTL and explicit revocation. Sub-agents get read-only access to exactly the conversations they need, with automatic cleanup on completion or timeout.
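
A grant record for this scheme might look like the following sketch (field names are assumptions):

// Hypothetical delegation grant for an expansion sub-agent
interface DelegationGrant {
  subAgentId: string;
  conversationIds: string[]; // read-only scope: only these conversations
  expiresAt: number;         // e.g. Date.now() + 5 * 60 * 1000 (5-minute TTL)
  revoked: boolean;          // set on completion or timeout
}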

Session Filtering

Glob patterns let you exclude noisy sessions (cron jobs, heartbeats) from storage, and mark sub-agent sessions as stateless so they benefit from parent context without polluting the database.

Limitations

Summarization Quality Depends on the Model

The summaries are only as good as the model producing them. Using a very cheap or small model for summarization may lose nuance. Important details can be compressed away even with good models — the lcm_expand_query tool mitigates this but adds latency.

Expansion Is Slow

lcm_expand_query spawns a sub-agent, which takes 30-120 seconds. For quick recall, lcm_grep and lcm_describe are far faster but less capable. In time-sensitive workflows, the agent may skip expansion and work from summaries alone.

Storage Growth

The SQLite database grows with every message. Long-running heavy sessions (thousands of turns with large tool outputs) can produce databases in the hundreds of megabytes. Large files externalized to disk add to this. There is no built-in garbage collection or retention policy — old conversations persist indefinitely.

Single-Model Summarization

Each summarization pass uses one model call. There is no ensemble or verification step. If the model hallucinates or misinterprets context during summarization, that error propagates into the DAG and may affect future assembly. The three-level escalation (normal, aggressive, deterministic fallback) handles failures but not subtle quality issues.

No Cross-Session Context

Each conversation is independent in the database. LCM does not automatically share context between different sessions or agents. The allConversations flag on retrieval tools allows cross-conversation search, but there is no automatic cross-pollination during assembly.

CJK Full-Text Search Limitations

FTS5 (SQLite's full-text search engine) does not tokenize Chinese, Japanese, or Korean text well. LCM falls back to LIKE-based search for CJK queries, which is slower and less precise for large databases.
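
Conceptually, the fallback is a branch on the query text; the SQL and table names below are illustrative assumptions, not the plugin's actual queries:

// Hypothetical sketch: route CJK queries to LIKE, everything else to FTS5
function buildSearchSql(pattern: string): string {
  const hasCjk = /[\u3040-\u30ff\u4e00-\u9fff\uac00-\ud7af]/.test(pattern);
  return hasCjk
    ? "SELECT id FROM messages WHERE content LIKE ?"       // linear scan, less precise
    : "SELECT id FROM messages_fts WHERE content MATCH ?"; // tokenized FTS5 index
}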

Compaction Latency

Each compaction pass requires an LLM call (typically 5-15 seconds per leaf or condensed pass). During heavy compaction (many accumulated messages), this can add noticeable delay after a turn completes. The afterTurn hook serializes compaction per-session, so it does not block other sessions.

Troubleshooting

Plugin is installed but not active

Check that the context engine slot is set:

openclaw config get plugins.slots.contextEngine

It must return lossless-claw. If it returns legacy or is empty, set it:

openclaw config set plugins.slots.contextEngine lossless-claw

Summarization auth errors

If you see LcmProviderAuthError, the model used for summarization cannot authenticate. Check:

  • Is summaryModel set to a model you have access to?
  • Does the provider require a separate API key?
  • Try unsetting summaryModel to fall back to the session model.

Database location

Default: ~/.openclaw/lcm.db. Override with the dbPath config or LCM_DB_PATH environment variable.

To inspect the database directly:

sqlite3 ~/.openclaw/lcm.db ".tables"
sqlite3 ~/.openclaw/lcm.db "SELECT COUNT(*) FROM messages"
sqlite3 ~/.openclaw/lcm.db "SELECT id, kind, depth, token_count FROM summaries ORDER BY created_at DESC LIMIT 10"

Resetting LCM state

To start fresh (removes all persisted context):

rm ~/.openclaw/lcm.db
rm -rf ~/.openclaw/lcm-files/

The database and file store will be recreated on next session startup.
