# lossless-claw

- Slot: `contextEngine`
- Storage: `~/.openclaw/lcm.db`
- Preserves: verbatim turns + summary DAG
- Search: FTS5 / BM25 + DAG navigation
`lossless-claw` is an OpenClaw plugin that replaces native conversation compaction with a SQLite-backed message archive plus a hierarchical summary DAG. Remnic ships its own Lossless Context Management (LCM) mode with a near-isomorphic schema. The two occupy different OpenClaw slots, so you can run both at once, or migrate fully into Remnic with the `remnic import-lossless-claw` command.
OpenClaw plugins register against slots. `lossless-claw` occupies the `contextEngine` slot, replacing OpenClaw's built-in conversation compaction. Remnic occupies the `memory` slot, intercepting the `before_agent_start` and `agent_end` hooks to inject recalled facts and buffer turns for extraction. The two surfaces are orthogonal.
| | lossless-claw | Remnic |
|---|---|---|
| Slot | `contextEngine` | `memory` |
| Storage | `~/.openclaw/lcm.db` | `<memoryDir>/facts/` + LCM SQLite |
| Preserves | verbatim turns + summary DAG | extracted facts, entities, lifecycle, optional LCM |
| Search | FTS5 / BM25 + DAG navigation | hybrid BM25 + vector + reranking |
Useful when you want lossless-claw to keep handling compaction while Remnic builds up extracted-fact memory. A single OpenClaw config sets both slots:

```json
{
  "plugins": {
    "slots": {
      "memory": "openclaw-remnic",
      "contextEngine": "lossless-claw"
    }
  }
}
```

Each subsystem reads and writes its own SQLite store. They never share storage and never race for the same hook surface.
Remnic ships its own LCM mode: a SQLite archive of every turn plus a hierarchical summary DAG, exposed through Remnic's recall pipeline. The schemas are near-isomorphic, so migration is a direct SQLite→SQLite transform — not a lossy distillation.
To enable it:

```json
{
  "plugins": {
    "entries": {
      "openclaw-remnic": {
        "config": {
          "lcmEnabled": true
        }
      }
    }
  }
}
```

Install the importer:

```sh
npm install -g @remnic/import-lossless-claw
# or
pnpm add -g @remnic/import-lossless-claw
```

Then run it:

```sh
# Preview without writing
remnic import-lossless-claw --src ~/.openclaw/lcm.db --dry-run

# Real run
remnic import-lossless-claw --src ~/.openclaw/lcm.db

# Restrict to specific resolved sessions
remnic import-lossless-claw --src ~/.openclaw/lcm.db \
  --session-filter sess-A --session-filter sess-B
```
The importer is idempotent: re-running it inserts zero new rows for already-imported turns. The source database is opened read-only with `fileMustExist: true`, so the source can never be mutated mid-import, even if a run fails partway.
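Those two guarantees can be sketched with Python's stdlib `sqlite3`. The table shapes and column names below are illustrative, not the actual lossless-claw or Remnic schemas:

```python
import sqlite3

def import_turns(src_path: str, dest: sqlite3.Connection) -> int:
    # Open the source read-only: a URI with mode=ro refuses all writes,
    # so a failed import can never mutate the source database.
    src = sqlite3.connect(f"file:{src_path}?mode=ro", uri=True)
    inserted = 0
    for conv_id, seq, role, content in src.execute(
        "SELECT conversation_id, seq, role, content FROM messages"
    ):
        # Idempotency: skip turns whose source identity already exists.
        hit = dest.execute(
            "SELECT 1 FROM lcm_messages"
            " WHERE json_extract(metadata, '$.conversation_id') = ?"
            "   AND json_extract(metadata, '$.source_seq') = ?",
            (conv_id, seq),
        ).fetchone()
        if hit:
            continue
        dest.execute(
            "INSERT INTO lcm_messages(role, content, metadata)"
            " VALUES (?, ?, json_object('conversation_id', ?, 'source_seq', ?))",
            (role, content, conv_id, seq),
        )
        inserted += 1
    src.close()
    return inserted
```

Re-running the function against the same source inserts nothing new, which is the property the importer relies on.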
Once the import succeeds, drop `contextEngine: "lossless-claw"` from `plugins.slots`. OpenClaw falls back to its built-in compaction; Remnic LCM is designed to complement that, not replace it.
Schema mapping between the two stores:

| lossless-claw | Remnic LCM |
|---|---|
| `messages.role`, `content`, `token_count`, `created_at` | `lcm_messages.role`, `content`, `token_count`, `created_at` |
| `messages.seq` (per-conversation) | `lcm_messages.turn_index` (session-global, reflects chronology when conversations interleave); original `seq` preserved in `metadata.source_seq` |
| `summaries.summary_id`, `depth`, `content`, `token_count` | `lcm_summary_nodes.id`, `depth`, `summary_text`, `token_count` |
| `summary_messages` (M:N) | `lcm_summary_nodes.msg_start`, `msg_end` (derived from joined turn-index range) |
| `summary_parents` (multi-parent DAG) | `lcm_summary_nodes.parent_id` (single FK, collapsed to lowest-ordinal parent; collapse count surfaced in run summary) |
| `conversations.session_id` (or `conversation_id` fallback) | `lcm_messages.session_id` |
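The `msg_start`/`msg_end` derivation can be sketched as below. The join shape and the `summary_messages` column names are assumptions for illustration:

```python
import sqlite3

def derive_spans(src: sqlite3.Connection, turn_index_of: dict) -> dict:
    # Resolve each summary's M:N membership rows to destination turn
    # indices; the node's span is the min/max of that range.
    # turn_index_of maps (conversation_id, source_seq) -> turn_index.
    spans: dict = {}
    for summary_id, conv_id, seq in src.execute(
        "SELECT sm.summary_id, m.conversation_id, m.seq"
        " FROM summary_messages sm JOIN messages m ON m.id = sm.message_id"
    ):
        t = turn_index_of[(conv_id, seq)]
        lo, hi = spans.get(summary_id, (t, t))
        spans[summary_id] = (min(lo, t), max(hi, t))
    return spans  # summary_id -> (msg_start, msg_end)
```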
When a source summary node has multiple parents, the importer collapses to the lowest-ordinal parent and reports the collapse count. With the default `summaryRollupFanIn = 4` this is rare in practice.
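The collapse rule can be sketched as follows; the `summary_parents` row shape is an assumption:

```python
def collapse_parents(edges: list) -> tuple:
    # edges: (child_id, parent_id, ordinal) rows from the source
    # summary_parents table. Keep the lowest-ordinal parent per child;
    # every dropped edge counts toward the reported collapse total.
    best: dict = {}
    collapsed = 0
    for child, parent, ordinal in edges:
        if child not in best or ordinal < best[child][0]:
            if child in best:
                collapsed += 1      # previous best parent is dropped
            best[child] = (ordinal, parent)
        else:
            collapsed += 1          # this edge is dropped
    return {c: p for c, (_, p) in best.items()}, collapsed
```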
The `message_parts` table (fine-grained tool I/O, patches, file references, step-start/finish markers) has no Remnic LCM analog. Only the rendered `messages.content` survives.
Remnic's extracted facts live under `facts/`; they aren't part of the LCM transform.
For each session that gains data, the importer writes one row to `lcm_compaction_events` with `tokens_before == tokens_after`. That equality encodes "import boundary, not a real compaction event"; Remnic's own compaction telemetry will start from this anchor on the next run. Token totals come from `SUM(token_count)` on the destination at boundary-write time, so a partial retry (where, say, only summaries land new this run) still records the correct anchor.
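A minimal sketch of the boundary write, with column names assumed from the prose above:

```python
import sqlite3

def write_import_boundary(dest: sqlite3.Connection, session_id: str) -> None:
    # Sum tokens on the destination at boundary-write time, so a partial
    # retry anchors at whatever actually landed.
    (total,) = dest.execute(
        "SELECT COALESCE(SUM(token_count), 0) FROM lcm_messages"
        " WHERE session_id = ?",
        (session_id,),
    ).fetchone()
    # tokens_before == tokens_after encodes "import boundary, not a real
    # compaction event"; later telemetry starts from this anchor.
    dest.execute(
        "INSERT INTO lcm_compaction_events"
        "(session_id, tokens_before, tokens_after) VALUES (?, ?, ?)",
        (session_id, total, total),
    )
```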
Dedup is keyed on source identity (`metadata.conversation_id` + `metadata.source_seq`) via SQLite `json_extract`, with an in-memory pre-fetch per session for O(1) lookup. Two source conversations sharing a single `session_id` both contribute messages without colliding; `turn_index` is assigned as a session-global running counter in chronological (not UUID) order.
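Putting the pre-fetch, the dedup key, and the chronological turn numbering together, a sketch under the same illustrative schemas as above:

```python
import json
import sqlite3

def merge_session(src, dest, session_id, conv_ids):
    # Pre-fetch already-imported source identities into a set: each
    # subsequent dedup check is an O(1) membership test.
    seen = set(dest.execute(
        "SELECT json_extract(metadata, '$.conversation_id'),"
        "       json_extract(metadata, '$.source_seq')"
        " FROM lcm_messages WHERE session_id = ?", (session_id,)))
    (turn,) = dest.execute(
        "SELECT COALESCE(MAX(turn_index), -1) FROM lcm_messages"
        " WHERE session_id = ?", (session_id,)).fetchone()
    # created_at, not conversation/UUID order, drives turn_index, so
    # interleaved conversations number chronologically.
    marks = ",".join("?" * len(conv_ids))
    rows = src.execute(
        "SELECT conversation_id, seq, role, content, created_at"
        f" FROM messages WHERE conversation_id IN ({marks})"
        " ORDER BY created_at", conv_ids)
    for conv_id, seq, role, content, created_at in rows:
        if (conv_id, seq) in seen:
            continue
        turn += 1
        dest.execute(
            "INSERT INTO lcm_messages"
            "(session_id, turn_index, role, content, created_at, metadata)"
            " VALUES (?, ?, ?, ?, ?, ?)",
            (session_id, turn, role, content, created_at,
             json.dumps({"conversation_id": conv_id, "source_seq": seq})))
```

Two conversations merged into one session get turn indices in `created_at` order, and a re-run inserts nothing.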