🛡️ NemoClaw Integration

NemoClaw's sandbox blocks memory.
One command fixes it.

NemoClaw runs OpenClaw inside a deny-by-default sandbox. The plugin installs — but memory calls get blocked at the egress layer.

We handle the egress policy automatically.

50 million tokens free

No credit card. No trial period. One command and your sandbox has memory.

The sandbox problem

WHAT HAPPENS WITHOUT THIS

1. You install mr-memory

2. You add your API key

3. Plugin calls api.memoryrouter.ai ✗ BLOCKED

OpenShell blocks the request. Silent failure. You think it's broken.

WITH OUR INSTALLER

1. Installs plugin

2. Configures API key

3. Adds egress rule to sandbox policy

4. Restarts gateway

Everything works. Memory flows through the sandbox.

Get Your Free Memory Key →

50M tokens free. No credit card required.

One command. That's it.

Installs the plugin, configures your key, applies the egress policy, and restarts the gateway.

$ npx @memoryrouter/nemoclaw setup \
  --sandbox my-assistant \
  --api-key <your-memory-key>

memoryrouter-nemoclaw setup

────────────────────────────────────────

[0] Preflight checks...

✓ openshell found

✓ openclaw found

[1] Installing mr-memory plugin...

✓ Plugin installed

[2] Configuring MemoryRouter in ~/.openclaw/openclaw.json...

✓ API key configured

[3] Applying MemoryRouter egress policy to sandbox "my-assistant"...

✓ Policy applied to sandbox "my-assistant"

[4] Restarting OpenClaw gateway...

✓ Gateway restarted

────────────────────────────────────────

✓ Setup complete!

Optional — Upload your conversation history
$ openclaw mr upload
# Your AI wakes up already knowing everything you've ever talked about.

Automatically finds your workspace files and session history.

How it works under the hood

NemoClaw architecture:

User → OpenShell Sandbox → OpenClaw → Provider

The problem:

OpenClaw → api.memoryrouter.ai ← BLOCKED (not in egress policy)

After setup:

OpenClaw → api.memoryrouter.ai ← ALLOWED (egress rule added) → memories flow 🛡️

The setup command reads your sandbox's current network policy via openshell sandbox get, merges in the MemoryRouter egress block, and re-applies the full policy via openshell policy set. Idempotent — running it twice won't duplicate anything.
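The merge step can be sketched roughly like this. A minimal illustration only: the real installer shells out to openshell, and `merge_egress` is our stand-in for its internal logic. The dict mirrors the YAML policy shown below.

```python
import copy

# The egress block the installer adds (mirrors the YAML policy in this doc).
MEMORYROUTER_BLOCK = {
    "name": "memoryrouter",
    "endpoints": [{
        "host": "api.memoryrouter.ai",
        "port": 443,
        "protocol": "rest",
        "tls": "terminate",
        "enforcement": "enforce",
        "rules": [
            {"allow": {"method": "GET", "path": "/**"}},
            {"allow": {"method": "POST", "path": "/**"}},
        ],
    }],
    "binaries": [
        {"path": "/usr/local/bin/openclaw"},
        {"path": "/usr/local/bin/node"},
    ],
}

def merge_egress(policy: dict) -> dict:
    """Merge the memoryrouter block into an existing sandbox policy.

    Keying the block by its policy name is what makes this idempotent:
    a second run overwrites the identical block instead of appending a copy.
    """
    merged = copy.deepcopy(policy)  # never mutate the policy read from openshell
    merged.setdefault("network_policies", {})["memoryrouter"] = MEMORYROUTER_BLOCK
    return merged
```

Because existing policies are preserved and the memoryrouter key is simply overwritten, running setup twice leaves the policy byte-for-byte identical.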

What gets added to your sandbox policy

Minimal egress rule. Only allows GET and POST to api.memoryrouter.ai:443. Scoped to the OpenClaw and Node binaries.

network_policies:
  memoryrouter:
    name: memoryrouter
    endpoints:
      - host: api.memoryrouter.ai
        port: 443
        protocol: rest
        tls: terminate
        enforcement: enforce
        rules:
          - allow: { method: GET, path: /** }
          - allow: { method: POST, path: /** }
    binaries:
      - path: /usr/local/bin/openclaw
      - path: /usr/local/bin/node

Why cloud memory in a sandbox?

NemoClaw's sandbox restricts filesystem writes to /sandbox/.openclaw-data and /tmp. Local vector databases won't survive sandbox rebuilds. Cloud memory persists through everything — sandbox restarts, onboard cycles, policy resets.

Infinite Space

Every conversation stored raw and complete. No compression, no summarization. Your AI remembers the details.

Instant Search

A100 GPUs running the latest embeddings model. Sub-100ms retrieval through the egress layer.

🛡️ Survives Everything

Sandbox restarts, nemoclaw onboard cycles, policy resets, compaction — your memories persist through all of it.

Get Started Free →

Install in 60 seconds. No credit card.

🔒 Your keys stay local

Relay architecture — MemoryRouter injects memories via hooks. Your API keys and provider calls never touch our servers. Stays inside the sandbox.
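The relay idea can be sketched like this. The hook name and message shape here are assumptions for illustration, not the plugin's actual API; the point is that retrieval happens inside OpenClaw before the provider call, and only the search query crosses the egress rule.

```python
def on_before_provider_call(messages: list[dict], search) -> list[dict]:
    """Inject retrieved memories as context ahead of the latest user turn.

    `search` is the only step that touches api.memoryrouter.ai;
    API keys and the provider call itself never leave the sandbox.
    """
    query = messages[-1]["content"]   # latest user turn
    memories = search(query)          # list of memory strings from the vault
    if not memories:
        return messages               # nothing relevant: pass through untouched
    context = {
        "role": "system",
        "content": "Relevant memories:\n" + "\n".join(memories),
    }
    # Slot the memories in just before the user's message.
    return messages[:-1] + [context, messages[-1]]
```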

🧠 Smart storage

Only stores direct user ↔ AI conversation. Tool use, subagent work, and internal processing are excluded automatically.

🛡️ Sandbox-safe

Minimal egress rule. Only api.memoryrouter.ai:443. No wildcard domains. No broad network access. OpenShell-approved architecture.

🔄 Fully reversible

openclaw mr off disables memory. openclaw mr delete wipes the vault. Remove the egress rule to fully uninstall.

CLI reference

# Setup (the one command)

npx @memoryrouter/nemoclaw setup --sandbox <name> --api-key <key>

# Options

--dry-run # Preview changes without applying

--skip-policy # Skip egress policy update

--skip-plugin-install # Skip plugin installation

# After setup — same as regular OpenClaw

openclaw mr status # Vault stats

openclaw mr upload # Upload workspace + sessions

openclaw mr off # Disable memory

openclaw mr delete # Clear vault

Manual setup

Prefer to do it yourself? Here are the individual steps:

# 1. Install the plugin

openclaw plugins install mr-memory

# 2. Add your key

openclaw mr <your-memory-key>

# 3. Export current policy, add memoryrouter block, re-apply

openshell sandbox get my-assistant > policy.yaml

# Edit policy.yaml — add the memoryrouter network_policies block

openshell policy set my-assistant --policy policy.yaml --wait

# 4. Restart

openclaw gateway restart

Pricing

Free

$0

50M tokens included. No credit card required. Enough for weeks of heavy daily use.

Pro

$0.10 / 1M tokens

Pay as you go after free tier. No subscription. Cancel anytime.

FAQ

Why is cloud memory faster than local?

Performance. We're querying A100s on the edge in under 100ms, something local machines can't do. We host the latest top-performing embeddings model, which most local computers don't have enough RAM to hold, let alone run inference on. In the cloud, we spin up parallel instances so retrieval stays fast no matter how much data you have, across a high-dimensional vector space that holds and searches everything in milliseconds. Local storage hits a ceiling. We don't have one.

Why not just use QMD or another local vector store?

QMD rebuilds the entire vector index before every search. The more data you have, the bigger the index and the longer it takes, leading to 30-60 second waits before your agent even starts responding. Because that search is so expensive, it's avoided and only run when necessary, so sometimes your agent responds instantly and other times it takes forever. The reason? It ran a memory search and you have a lot of data. That size penalty also forces local memory into a minimalist approach, consolidating conversations down to bullet points. But the detail is what matters. Memory should remember everything, not just the bullet points.

Isn't local memory free?

Local isn't free: you either pay with hardware or pay with degraded performance. Running embeddings locally means compromising on model quality, slower search times, and your machine grinding under the load. With MemoryRouter, you pay a fraction of the cost for the best hardware in the world (A100 GPUs, top-tier embeddings models, edge infrastructure) and only pay when you use it. It's more efficient to collectively share the best compute than to individually compromise on lesser hardware.

Is MemoryRouter an official OpenClaw plugin?

MemoryRouter is a community-built plugin that installs natively via OpenClaw's plugin system. Install with openclaw plugins install mr-memory and you're good to go.

What OpenClaw version do I need?

OpenClaw 2026.3.7 or later. That's the version that introduced the hooks the MemoryRouter plugin relies on. Run openclaw --version to check yours, and openclaw update if you need to upgrade.
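If you want to script the check, a tuple comparison over the version string is enough. This helper is ours, not part of the openclaw CLI:

```python
def meets_minimum(version: str, minimum: str = "2026.3.7") -> bool:
    """True if a dotted version string meets the hooks-era minimum.

    Tuples compare element-by-element, so "2026.3.10" correctly
    beats "2026.3.7" (a plain string comparison would get this wrong).
    """
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)
```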

Do I need to reinstall after OpenClaw updates?

Nope. The plugin persists across OpenClaw updates. Run openclaw plugins update mr-memory to get the latest version.

Does memory speed up responses?

Dramatically. MemoryRouter retrieves your memories in under 100ms and injects them before the model even starts. Same model, same prompt: responses go from 30-60 seconds down to 2-3 seconds. We call it OpenClaw Instant.

Doesn't my AI already remember things?

Your AI knows your name and your config. But does it remember the conversation where you decided to switch frameworks? The bug you spent 3 hours debugging together? The project context you've been building for weeks? MemoryRouter gives your AI persistent memory that compounds over time: every conversation, every decision, every file. It knows YOU. It knows the details. It knows the nuance. It remembers what you said yesterday in exact quotes. It's the difference between starting over every session and picking up exactly where you left off.

Is my data private?

No one sees your inference. Our OpenClaw plugin is a relay: memories are retrieved and injected locally inside OpenClaw, before anything reaches your provider. Your API keys, your inference, your data never touch our servers. We only store and retrieve memories, encrypted at rest and in transit.

Do my provider calls route through MemoryRouter?

No. The relay architecture means your API keys and provider calls never leave OpenClaw. We only handle memory storage and retrieval; your inference stays between you and your provider.

What gets stored?

Only direct conversations between you and your AI. Tool use, subagent work, and internal processing are excluded automatically.
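A rough sketch of that filter, assuming a typical role-tagged message format (the plugin's real internal representation may differ):

```python
def storable(message: dict) -> bool:
    """Keep only direct user <-> assistant turns for memory storage."""
    if message.get("role") not in ("user", "assistant"):
        return False  # system and tool messages are excluded
    if message.get("tool_calls") or message.get("subagent"):
        return False  # tool use and subagent work are excluded
    return True
```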

Can I delete my memories?

Yes. openclaw mr delete clears your entire vault instantly. You can also disable memory with openclaw mr off without deleting anything.

How is this different from MEMORY.md?

MEMORY.md is manual: you have to tell the AI to write things down, and hope it remembers to do it. MemoryRouter is automatic. Every conversation is stored and retrieved intelligently. No maintenance, no forgetting, no hoping.

Does it work with cron jobs and scheduled tasks?

Yes. Without MemoryRouter, your cron jobs and scheduled tasks run with zero context. With MemoryRouter, they remember everything you've ever discussed. Your morning email triage knows your priorities. Your code review agent knows your architecture. Every automated task runs with full context, never from scratch.

How do my workspace files get into memory?

Automatically. MemoryRouter syncs your workspace files on every gateway startup and every new session, using a smart hash manifest that only updates files that actually changed. No rebuilding the entire index every time. Add a file, start a new chat, it's already there. You can also run openclaw mr sync anytime to force a sync.
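The manifest idea can be sketched like this (illustration only; the real manifest format and upload path are internal to the plugin):

```python
import hashlib
import pathlib

def changed_files(workspace: str, manifest: dict) -> dict:
    """Return {path: sha256} for files that are new or modified.

    `manifest` maps file paths to the content hash recorded at the last
    sync; anything whose current hash differs gets re-uploaded, and
    unchanged files are skipped entirely.
    """
    changed = {}
    for path in sorted(pathlib.Path(workspace).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if manifest.get(str(path)) != digest:
            changed[str(path)] = digest  # new or modified: sync this one
    return changed
```

After a sync, the returned hashes are folded back into the manifest, so the next run over an untouched workspace returns nothing.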

Try it right now

50 million tokens free. One command to install. If it doesn't blow your mind, disable it with openclaw mr off.

Get your free memory key → 🛡️