
OpenClaw as a Drop-In OpenAI API Gateway: /v1/models + /v1/embeddings for Self-Hosted Agent Stacks

OpenClaw's latest release added a practical compatibility layer many self-hosters have been waiting for: OpenAI-style endpoints directly on the Gateway.

In the release notes, OpenClaw explicitly calls out support for:

  • GET /v1/models
  • POST /v1/embeddings
  • forwarding explicit model overrides through POST /v1/chat/completions and POST /v1/responses

This matters because many tools in the ecosystem expect these endpoints by default (chat UIs, RAG pipelines, embeddings jobs, and quick prototype scripts).

Why this is a meaningful upgrade

For technical users running self-hosted agent workflows, this removes a common integration tax:

  1. Fewer custom adapters to connect OpenClaw to OpenAI-compatible clients
  2. Cleaner migration path from hosted APIs to self-hosted control planes
  3. Agent-native routing remains intact while exposing familiar HTTP paths

OpenClaw docs clarify an important design choice: the model field targets OpenClaw agents (openclaw/default, openclaw/<agentId>), while backend provider/model overrides are passed via the x-openclaw-model header.

That gives you compatibility without losing agent-level policy and routing semantics.
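Concretely, a client sets both routing fields like this. A minimal sketch in Python using only the standard library; the gateway address, token, and backend model string are placeholders you would substitute with your own values:

```python
import json
import urllib.request

GATEWAY = "http://127.0.0.1:18789"  # hypothetical local Gateway address
TOKEN = "YOUR_TOKEN"                # your Gateway bearer token

def chat_request(messages, agent="openclaw/default", backend_model=None):
    """Build a POST /v1/chat/completions request targeting an OpenClaw agent.

    The OpenAI-style "model" field names the agent; an optional backend
    provider/model override travels in the x-openclaw-model header.
    """
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    if backend_model is not None:
        headers["x-openclaw-model"] = backend_model
    body = json.dumps({"model": agent, "messages": messages}).encode()
    return urllib.request.Request(
        f"{GATEWAY}/v1/chat/completions", data=body, headers=headers, method="POST"
    )

req = chat_request(
    [{"role": "user", "content": "ping"}],
    backend_model="openai/gpt-4o-mini",  # hypothetical backend override
)
# Send with urllib.request.urlopen(req) once the Gateway is reachable.
```

The agent stays the addressable unit, so Gateway-side policy still applies even when a request pins a specific backend model.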

Practical setup (5-minute checklist)

1) Enable the endpoint in Gateway config

{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
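If you template this config programmatically, a quick sanity check catches a typo before you restart the Gateway. A sketch in Python, mirroring the key names in the fragment above:

```python
import json

# The exact fragment from step 1, as it would appear in your Gateway config.
config_fragment = """
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
"""

cfg = json.loads(config_fragment)
enabled = cfg["gateway"]["http"]["endpoints"]["chatCompletions"]["enabled"]
print("chatCompletions enabled:", enabled)
```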

2) Verify model listing

curl -sS http://127.0.0.1:18789/v1/models \
  -H 'Authorization: Bearer YOUR_TOKEN'

You should see OpenClaw agent targets like openclaw/default.
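In code, the listing can be filtered for agent targets. A sketch, assuming the response follows the usual OpenAI list shape ({"object": "list", "data": [...]}); the sample body below is hypothetical:

```python
def agent_targets(models_response: dict) -> list[str]:
    """Pick out OpenClaw agent ids from an OpenAI-style /v1/models listing."""
    return [
        m["id"]
        for m in models_response.get("data", [])
        if m.get("id", "").startswith("openclaw/")
    ]

# Hypothetical response body, for illustration only.
sample = {"object": "list", "data": [{"id": "openclaw/default", "object": "model"}]}
print(agent_targets(sample))  # ['openclaw/default']
```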

3) Test embeddings with explicit backend override

curl -sS http://127.0.0.1:18789/v1/embeddings \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-model: openai/text-embedding-3-small' \
  -d '{
    "model": "openclaw/default",
    "input": ["alpha", "beta"]
  }'
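The response should carry one vector per input under data[*].embedding, following the usual OpenAI response shape. A typical next step in a RAG pipeline is ranking chunks by cosine similarity, which needs nothing beyond the standard library; the toy vectors below stand in for real embedding output:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output.
v_alpha = [0.2, 0.1, 0.9]
v_beta = [0.2, 0.1, 0.9]
print(round(cosine(v_alpha, v_beta), 6))  # identical vectors score 1.0
```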

4) Point OpenAI-compatible clients at OpenClaw

Use base URL:

  • http://<gateway-host>:<port>/v1

Use your Gateway bearer token as the API key.
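In code, that amounts to two settings. A minimal sketch (host, port, and token are placeholders; the commented lines show how the official openai Python SDK would consume them, assuming it is installed):

```python
def client_settings(host: str, port: int, token: str) -> dict:
    """Connection settings for an OpenAI-compatible client pointed at the Gateway."""
    return {
        "base_url": f"http://{host}:{port}/v1",
        "api_key": token,  # the Gateway bearer token doubles as the API key
    }

settings = client_settings("127.0.0.1", 18789, "YOUR_TOKEN")
print(settings["base_url"])  # http://127.0.0.1:18789/v1

# With the openai SDK this would look like:
#   from openai import OpenAI
#   client = OpenAI(**settings)
#   client.embeddings.create(model="openclaw/default", input=["alpha"])
```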

Security note you should not skip

OpenClaw docs are explicit: this HTTP surface should be treated as operator-level access to your Gateway. Keep it on loopback, behind private ingress, or on a tailnet, and avoid exposing it to the public internet.

Sources

  • OpenClaw GitHub Releases (March 2026 release notes)
  • OpenClaw docs: gateway/openai-http-api

If your stack already assumes OpenAI endpoints, this is one of the highest-leverage OpenClaw updates in recent weeks: you can plug in faster while keeping the benefits of self-hosted agent control.
