OpenClaw's built-in memory is fine for getting started, but it lacks
semantic search, entity tracking, lifecycle management, and multi-agent
isolation. @remnic/plugin-openclaw is a thin adapter that
replaces the memory slot with the Remnic engine — all data still stays on
your disk as plain markdown.
```bash
# 1. Install the plugin package
openclaw plugins install @remnic/plugin-openclaw --pin

# 2. Wire up the memory slot automatically
remnic openclaw install

# 3. Restart the gateway
launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway
# or on Linux:
systemctl restart openclaw-gateway

# 4. Verify
remnic doctor
```

`remnic openclaw install` writes `plugins.entries["openclaw-remnic"]` and `plugins.slots.memory = "openclaw-remnic"` to `~/.openclaw/openclaw.json`, and restarts the gateway for you. OpenClaw gates memory plugins on the `slots.memory` key — without it, hooks never fire.
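Reduced to just those gating keys, the relevant fragment of `~/.openclaw/openclaw.json` looks like this (a minimal sketch, not a complete config):

```jsonc
{
  "plugins": {
    // The entry registers and enables the plugin...
    "entries": { "openclaw-remnic": { "enabled": true } },
    // ...but only the memory slot makes OpenClaw fire its hooks.
    "slots": { "memory": "openclaw-remnic" }
  }
}
```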
After restart, the gateway log should contain:

```
[remnic] gateway_start fired — Remnic memory plugin is active
  id=openclaw-engram, memoryDir=~/.openclaw/workspace/memory/local
```

On macOS:

```bash
grep "gateway_start fired" ~/.openclaw/logs/gateway.log
```
If the line is missing, run `remnic doctor`. Each failing check includes a remediation hint pointing at `remnic openclaw install`.
- **Hybrid search:** BM25 + vector + reranking. Six pluggable backends. Find the right memory by meaning, not keyword.
- **Entity tracking:** Tracks people, projects, tools, and relationships as structured entities. Causal and timeline queries.
- **Lifecycle management:** Active → validated → stale → archived. Old memories drop out of recall automatically.
- **Namespaces:** Multi-agent isolation with explicit principals. Shared context for agents that should see the same memory.
- **Governance:** Review queues, shadow/apply modes, reversible transitions. Trust-zone promotion workflow.
- **Memory OS:** Proactive session archive + hierarchical summary DAG. Context never dies when the window compacts.
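The lifecycle above can be pictured as a one-way state machine. The types and helpers below are an illustrative sketch, not Remnic's actual API:

```typescript
// Sketch of the memory lifecycle: active → validated → stale → archived.
// Archived is terminal; stale and archived memories fall out of recall.
type LifecycleState = "active" | "validated" | "stale" | "archived";

const NEXT: Record<LifecycleState, LifecycleState | null> = {
  active: "validated",
  validated: "stale",
  stale: "archived",
  archived: null, // terminal state
};

interface Memory {
  id: string;
  state: LifecycleState;
}

// Advance a memory one step along the lifecycle (hypothetical helper).
function advance(m: Memory): Memory {
  const next = NEXT[m.state];
  return next ? { ...m, state: next } : m;
}

// Recall only considers active and validated memories.
function recallable(memories: Memory[]): Memory[] {
  return memories.filter(
    (m) => m.state === "active" || m.state === "validated",
  );
}
```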
```jsonc
{
  "plugins": {
    "allow": ["openclaw-remnic"],
    "slots": { "memory": "openclaw-remnic" },
    "entries": {
      "openclaw-remnic": {
        "enabled": true,
        "config": {
          // Option 1: OpenAI for extraction
          "openaiApiKey": "${OPENAI_API_KEY}"

          // Option 2: local LLM (Ollama, LM Studio)
          // "localLlmEnabled": true,
          // "localLlmUrl": "http://localhost:1234/v1",
          // "localLlmModel": "qwen2.5-32b-instruct"

          // Option 3: gateway model chain
          // "modelSource": "gateway",
          // "gatewayAgentId": "engram-llm",
          // "fastGatewayAgentId": "engram-llm-fast"
        }
      }
    }
  }
}
```

The full config reference has 90+ settings covering search backends, capture modes, namespaces, governance, and Memory OS features. See docs/config-reference.md.
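The three options are alternatives for the same role. A resolver that picks one route from the config keys shown above might look like this — the `RemnicConfig` shape and `resolveRoute` helper are assumptions for illustration, not the plugin's real internals:

```typescript
// Hypothetical resolver for the three extraction options:
// gateway model chain > local LLM > OpenAI API key.
interface RemnicConfig {
  openaiApiKey?: string;
  localLlmEnabled?: boolean;
  localLlmUrl?: string;
  localLlmModel?: string;
  modelSource?: "gateway";
  gatewayAgentId?: string;
}

type LlmRoute =
  | { kind: "openai"; apiKey: string }
  | { kind: "local"; url: string; model: string }
  | { kind: "gateway"; agentId: string };

function resolveRoute(cfg: RemnicConfig): LlmRoute {
  if (cfg.modelSource === "gateway" && cfg.gatewayAgentId) {
    return { kind: "gateway", agentId: cfg.gatewayAgentId };
  }
  if (cfg.localLlmEnabled && cfg.localLlmUrl && cfg.localLlmModel) {
    return { kind: "local", url: cfg.localLlmUrl, model: cfg.localLlmModel };
  }
  if (cfg.openaiApiKey) {
    return { kind: "openai", apiKey: cfg.openaiApiKey };
  }
  throw new Error("no LLM route configured");
}
```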
With `modelSource: "gateway"`, Remnic routes every LLM call — extraction, consolidation, reranking — through an OpenClaw agent persona's model chain instead of its own config. Define personas in `openclaw.json` → `agents.list[]` with a `primary` model and a `fallbacks[]` array. Remnic tries each in order until one succeeds, so you can build multi-provider chains like Fireworks → local LLM → cloud OpenAI.
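That try-in-order behavior can be sketched as a simple fallback loop. `ModelChain`, `callWithFallback`, and the injected `callModel` adapter here are illustrative names, not the plugin's real interfaces:

```typescript
// Sketch of a model chain: try the primary model first, then each
// fallback in order, until one call succeeds or the chain is exhausted.
interface ModelChain {
  primary: string;
  fallbacks: string[];
}

type ModelCall = (model: string, prompt: string) => Promise<string>;

async function callWithFallback(
  chain: ModelChain,
  prompt: string,
  callModel: ModelCall, // assumed provider adapter, injected for testing
): Promise<string> {
  const candidates = [chain.primary, ...chain.fallbacks];
  let lastError: unknown;
  for (const model of candidates) {
    try {
      return await callModel(model, prompt);
    } catch (err) {
      lastError = err; // this model failed; try the next one in the chain
    }
  }
  throw new Error(`all models in chain failed: ${String(lastError)}`);
}
```

Under this sketch, a Fireworks → local LLM → cloud OpenAI chain is just a `primary` plus an ordered `fallbacks` list; each provider is attempted only after every model before it has thrown.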