
OpenClaw 2026.4.10: How to Actually Use Active Memory + Codex Provider in Production

OpenClaw v2026.4.10 landed with two changes that matter immediately for self-hosted operators: a bundled Codex provider path and the new Active Memory plugin.

Rather than just restating the release notes, here is a practical rollout plan that avoids the common breakage.

What changed (confirmed)

From the official openclaw/openclaw release notes for v2026.4.10:

  • Added bundled Codex provider so codex/gpt-* routes through Codex-managed auth + thread handling
  • Added optional Active Memory plugin (a memory sub-agent that runs before main reply)
  • Added openclaw exec-policy command to help align runtime exec approvals with local policy

From OpenClaw docs (/concepts/active-memory):

  • Active Memory is optional
  • It is a blocking pre-reply pass for eligible conversational sessions
  • Safe default is limiting to direct chats and one agent first
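That safe default maps directly onto the plugin's scoping fields. As a sketch (field names taken from the starter config later in this post, not from an official schema reference), the relevant slice of the Active Memory config is just:

```json
{
  "agents": ["main"],
  "allowedChatTypes": ["direct"]
}
```

Widening either list later is a one-line change, which is why starting narrow costs you almost nothing.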

Why this matters

Most teams already have memory data, but recall is often reactive (or user-triggered). Active Memory makes recall proactive before the model writes its answer. Combined with Codex provider routing improvements, this release is mostly about reliability + better context in day-to-day usage.

20-minute rollout checklist

  1. Upgrade to 2026.4.10 and restart gateway cleanly.
  2. Enable Active Memory only for main first (not every agent).
  3. Limit scope to direct chats while tuning.
  4. Turn on /verbose and /trace for one test session and inspect behavior.
  5. Keep persistTranscripts: false during initial validation.
  6. If your workflows use Codex models, verify that codex/gpt-* requests resolve as expected and do not fall back silently.
  7. Run openclaw exec-policy show and confirm policy is aligned with your current safety posture before re-enabling heavy automation.

Recommended starter config

{
  "plugins": {
    "entries": {
      "active-memory": {
        "enabled": true,
        "config": {
          "enabled": true,
          "agents": ["main"],
          "allowedChatTypes": ["direct"],
          "queryMode": "recent",
          "promptStyle": "balanced",
          "timeoutMs": 15000,
          "maxSummaryChars": 220,
          "persistTranscripts": false
        }
      }
    }
  }
}

Then restart:

openclaw gateway restart

Fast validation prompts

After enabling:

  • Ask a follow-up question that depends on recent conversation details
  • Confirm reply quality improves without manually saying "search memory"
  • Check latency impact in verbose output

If latency climbs too much, reduce scope first (chat type + agent list) before disabling entirely.
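Beyond scope, the two knobs that most directly bound the pre-reply pass are the timeout and the summary budget. A tighter variant of the starter config above (same fields; the specific values here are illustrative guesses, not official recommendations) might look like:

```json
{
  "plugins": {
    "entries": {
      "active-memory": {
        "enabled": true,
        "config": {
          "enabled": true,
          "agents": ["main"],
          "allowedChatTypes": ["direct"],
          "queryMode": "recent",
          "promptStyle": "balanced",
          "timeoutMs": 8000,
          "maxSummaryChars": 150,
          "persistTranscripts": false
        }
      }
    }
  }
}
```

Apply it the same way as the starter config, then restart the gateway and re-check latency in verbose output before loosening anything.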

Bottom line

This is a high-impact release for operators who care about practical agent reliability. If you only do one thing this week, roll out Active Memory in a controlled scope and verify Codex provider behavior in your real workflows.

It is one of the few recent OpenClaw updates that can improve answer quality without any change to your user prompts.

Protect your AI agent with Clawly

Deploy your OpenClaw agent in an isolated, hardened container with encrypted credentials and managed updates. No DevOps required.