OpenClaw v2026.3.24 shipped on March 24, 2026, bringing OpenAI API compatibility to the gateway layer and tightening outbound media security to align with filesystem policies. This release adds /v1/models and /v1/embeddings endpoints alongside model override forwarding for /v1/chat/completions, making OpenClaw a drop-in replacement for OpenAI in Retrieval Augmented Generation (RAG) pipelines and third-party clients. The outbound media fix ensures host-local file access respects your configured workspaceOnly policy, preventing sandbox escapes while maintaining functionality for trusted agents. Microsoft Teams receives a full SDK migration with production UX patterns, the Control UI now shows live tool availability, and bundled skills ship with one-click install recipes. Node.js 22.14+ is now supported alongside container-aware CLI commands for Docker and Podman workflows.
Breaking Down the OpenAI Gateway Compatibility in v2026.3.24
The headline feature in v2026.3.24 is comprehensive OpenAI API compatibility at the gateway level. You can now point any OpenAI client at your OpenClaw instance and expect it to work without requiring code changes. The implementation adds three critical pieces: a /v1/models endpoint that exposes available models, a /v1/embeddings endpoint for vector generation, and proper forwarding of explicit model overrides through both /v1/chat/completions and /v1/responses. This matters because the AI ecosystem has largely standardized on OpenAI’s wire format. Vector databases, RAG frameworks, and many third-party tools hardcode those URL patterns. Previously, you needed proxy layers or client patches to bridge OpenClaw’s native API. Now the gateway handles translation transparently, simplifying integration. The model override fix specifically addresses clients that send model: "gpt-4" or custom names in the request body. OpenClaw forwards these correctly to the underlying provider rather than dropping them or forcing a default, preserving the client’s intent when chaining through multiple API layers. For developers building with local Large Language Models (LLMs) through OpenClaw, this means your existing LangChain or LlamaIndex code connects directly without needing adapter classes.
How the /v1/models Endpoint Expands OpenClaw Integration Capabilities
The /v1/models endpoint returns a JSON list of available models formatted exactly like OpenAI’s API. When you curl http://localhost:3000/v1/models, you receive an object containing data, object, and standard model metadata. This enables automatic discovery in tools like OpenWebUI, Continue.dev, or any client that populates model dropdowns dynamically. Before this release, those tools would fail during setup because they couldn’t fetch the model list. You had to manually configure endpoints or maintain forked clients. Now OpenClaw advertises its capabilities correctly, streamlining the setup process. The endpoint respects your configured providers and only lists models for which you actually have keys or access. If you have OpenAI, Anthropic, and local Ollama backends configured, the response aggregates them into a single compatible list. This reduces configuration drift between environments, ensuring your staging OpenClaw instance and production gateway expose identical schemas, so your client code behaves consistently. It also supports pagination and filtering parameters for large deployments with dozens of model variants. For developers building internal AI platforms, this means you can expose OpenClaw as a standard OpenAI-compatible endpoint to downstream teams without needing to write extensive documentation about custom APIs.
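To make the wire format concrete, here is a minimal Python sketch of the OpenAI-shaped /v1/models response an aggregating gateway produces. The provider names and the aggregation logic are illustrative assumptions, not OpenClaw's internal implementation:

```python
import time

def aggregate_models(providers):
    """Merge per-provider model lists into one OpenAI-shaped /v1/models body.

    `providers` maps a provider name to the model IDs it exposes. How
    OpenClaw actually aggregates is internal; this only shows the schema
    clients such as OpenWebUI or Continue.dev expect to receive.
    """
    data = [
        {
            "id": model_id,
            "object": "model",
            "created": int(time.time()),
            "owned_by": provider,
        }
        for provider, model_ids in sorted(providers.items())
        for model_id in model_ids
    ]
    return {"object": "list", "data": data}

# Hypothetical provider configuration -- names are illustrative only.
response = aggregate_models({
    "openai": ["gpt-4-turbo"],
    "ollama": ["nomic-embed-text", "llama3"],
})
print([m["id"] for m in response["data"]])
# ['nomic-embed-text', 'llama3', 'gpt-4-turbo']
```

Clients that populate model dropdowns only read the `data` array of `{id, object, owned_by}` entries, which is why emitting this shape is enough for automatic discovery to work.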
Understanding the Embeddings Endpoint for RAG Pipelines in v2026.3.24
Vector search is a cornerstone of effective RAG pipelines, and v2026.3.24 addresses this by adding /v1/embeddings. This endpoint accepts the standard OpenAI request format, including model, input, and optional encoding_format parameters. It returns vectors compatible with popular vector databases like Pinecone, Weaviate, Chroma, or any other vector store expecting OpenAI-shaped data. This new feature closes a critical gap where RAG frameworks would attempt to call OpenClaw for embeddings, only to receive a 404 error and fall back to direct OpenAI calls with your API keys. Now, the entire request stays within your OpenClaw infrastructure, enhancing data sovereignty and security. The implementation batches requests efficiently and supports multiple input strings per call, optimizing performance. You can configure which provider handles embeddings separately from chat models. For instance, you could use local nomic-embed-text via Ollama for cost savings on high-volume indexing, while reserving GPT-4 for more complex reasoning tasks. The response includes usage token counts for effective cost tracking. This endpoint also supports the dimensions parameter for models that allow vector truncation, enabling you to optimize storage costs in your vector database. For production deployments, this means you can finally use OpenClaw as a unified gateway for both generation and retrieval without splitting traffic across different API endpoints.
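The request side is easy to sketch. Assuming nothing beyond the standard OpenAI embeddings schema, this hypothetical helper splits a list of document chunks into batched /v1/embeddings request bodies, attaching the optional dimensions parameter only when set:

```python
def build_embedding_requests(texts, model, batch_size=16, dimensions=None):
    """Split `texts` into OpenAI-shaped /v1/embeddings request bodies.

    Sending several inputs per call is what the endpoint batches
    efficiently; `dimensions` is included only when the target model
    supports vector truncation. A sketch of the client side, not a
    definitive implementation.
    """
    requests = []
    for start in range(0, len(texts), batch_size):
        body = {
            "model": model,
            "input": texts[start:start + batch_size],
            "encoding_format": "float",
        }
        if dimensions is not None:
            body["dimensions"] = dimensions
        requests.append(body)
    return requests

# Hypothetical indexing job: 5 chunks in batches of 2, against a local model.
reqs = build_embedding_requests(
    [f"chunk {i}" for i in range(5)], model="nomic-embed-text", batch_size=2
)
print(len(reqs), [len(r["input"]) for r in reqs])
# 3 [2, 2, 1]
```

Each body would be POSTed to your OpenClaw instance's /v1/embeddings endpoint exactly as it would be to OpenAI's, which is what lets RAG frameworks use the gateway unmodified.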
Model Override Forwarding Fixes Silent Failures in OpenClaw Workflows
Previous versions of OpenClaw sometimes ignored explicit model parameters in certain gateway paths. For example, if your client sent model: "claude-3-opus" through the OpenAI compatibility layer, OpenClaw might have inadvertently overridden it with a default model specified in environment variables or its own internal configuration. v2026.3.24 rectifies this by ensuring explicit model overrides are correctly forwarded through both /v1/chat/completions and /v1/responses. The gateway now diligently inspects the request body, extracts the specified model field, and passes it to the routing layer unmodified. This is crucial for multi-model routing strategies where you intend for specific agents to interact with particular providers or model instances. You can now reliably use OpenClaw as a transparent proxy that respects the client’s intent rather than enforcing server-side defaults. This fix applies to both streaming and non-streaming requests, ensuring consistent behavior. It also preserves custom model names you define in your OpenClaw configuration. If you alias gpt-4-turbo to a local fine-tuned model, clients requesting that alias will receive your local model, while the client remains unaware of the redirection. This provides fine-grained control over model routing without requiring any modifications to your client code.
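The forwarding rule is simple to express. This Python sketch mirrors the behavior described above: preserve the client's explicit model, resolve configured aliases, and fall back to the server default only when no model was sent at all. The alias table is a hypothetical configuration, not OpenClaw's actual routing code:

```python
def resolve_model(request_body, aliases, default_model):
    """Forward the client's explicit `model` field, resolving configured
    aliases, and apply the server default only when the client sent no
    model at all -- the intent-preserving behavior fixed in v2026.3.24.
    """
    requested = request_body.get("model")
    if requested is None:
        return default_model                      # no client intent to preserve
    return aliases.get(requested, requested)      # alias, or pass through unmodified

# Hypothetical alias: clients asking for gpt-4-turbo get a local fine-tune.
aliases = {"gpt-4-turbo": "local-finetune-v2"}

print(resolve_model({"model": "claude-3-opus"}, aliases, "gpt-4o"))  # claude-3-opus
print(resolve_model({"model": "gpt-4-turbo"}, aliases, "gpt-4o"))    # local-finetune-v2
print(resolve_model({}, aliases, "gpt-4o"))                          # gpt-4o
```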
The Outbound Media Security Fix Explained for v2026.3.24
The most important security fix in this release aligns outbound media access with your configured filesystem policy. Previously, agents could access host-local files and inbound media paths even when the workspaceOnly policy was enabled, creating a sandbox escape. v2026.3.24 enforces strict consistency: when workspaceOnly is disabled, agents can read host files and send them as outbound media as expected; when it is enabled, all file access, including media attachments, is restricted to the agent's designated workspace directory. This prevents agents from accidentally or maliciously exfiltrating sensitive files from directories such as /etc or /home. The fix applies to all transport methods, including HTTP responses, email attachments, and chat platform uploads, and it uses the same policy checks as text file reading, establishing uniform security boundaries. For agents processing user uploads, inbound media stays quarantined according to your policy. This change may break workflows that relied on the previous inconsistent behavior, but it closes a significant attack surface where malicious skills or compromised prompts could leak data outside the intended workspace.
How File System Policies Now Protect Host-Local Files and Data
OpenClaw’s filesystem policy is the control that governs what agents can access and modify on disk. With v2026.3.24, this policy is now authoritative for outbound media as well, closing a gap in host-local data protection. If you configure fs: { workspaceOnly: true } in your agent’s configuration, the OpenClaw runtime blocks any attempt to read a file like /etc/passwd and attach it to an external message, such as a Slack post. The policy check now occurs at the media streaming layer, not just at the text file reading layer, so file-reading and file-sending permissions can no longer diverge. You can verify your policy with the new --dry-run flag on media operations or by checking agent logs for MEDIA_POLICY_DENY entries, which indicate blocked attempts. The fix also handles edge cases such as symbolic links pointing outside the workspace and relative path traversal. For Docker deployments, this works with volume mounts: if you mount /data into the container and designate it as the workspace, agents cannot escape to the host’s root filesystem, even when the container process has broader access. This alignment makes security audits simpler, because a single configuration key now determines your data exfiltration exposure.
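The core containment test can be sketched in a few lines of Python. This illustrates the policy logic only (resolve symlinks and ../ traversal before comparing against the workspace root); it is not OpenClaw's actual implementation, and the paths are hypothetical:

```python
import os

def within_workspace(workspace, candidate):
    """Return True when `candidate` resolves inside `workspace`.

    realpath() collapses symlinked and `..` path components before the
    containment comparison, which is exactly the property a media-layer
    policy check needs to defeat traversal and symlink escapes.
    """
    root = os.path.realpath(workspace)
    target = os.path.realpath(os.path.join(workspace, candidate))
    return os.path.commonpath([root, target]) == root

# Hypothetical workspace root for illustration.
print(within_workspace("/srv/agent-ws", "notes/report.pdf"))   # True
print(within_workspace("/srv/agent-ws", "../../etc/passwd"))   # False
```

Note that os.path.join discards the workspace prefix when `candidate` is absolute, so a request for /etc/passwd is also correctly rejected by the same comparison.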
Workspace-Only Mode vs. Extended File Access in OpenClaw: A Comparison
OpenClaw offers two distinct security postures for agent file access, allowing administrators to balance security and functionality. Workspace-only mode treats the agent’s working directory as a confined environment, similar to a chroot jail, providing maximum isolation. Extended access, conversely, permits reading from the full host filesystem, which can be necessary for system administration tasks or multi-project workflows where agents need broader access. v2026.3.24 makes this critical distinction consistent across all Input/Output (I/O) operations, including media attachments, ensuring predictable behavior.
| Feature | Workspace-Only Mode | Extended Access Mode |
|---|---|---|
| Text file reads | Restricted to workspace | Full filesystem access |
| Outbound media | Blocked outside workspace | Allowed from any path |
| Inbound media | Quarantined to workspace | Stored in workspace |
| Symbolic links | Resolved within workspace | Followed normally |
| Use case | Multi-tenant SaaS, untrusted skills | Single-user personal agent, system admin |
Choose workspace-only for production deployments where you run untrusted skills, interact with external users, or require strict data isolation. Use extended access only for personal assistants running on secured single-user machines or in controlled environments where broader file access is explicitly required and understood. The new consistency means you no longer accidentally leak files through media uploads when you believed workspace-only restrictions were fully active. To check your current configuration, use openclaw config get fs.workspaceOnly. If you are upgrading from v2026.3.22 or earlier, it is crucial to audit any existing skills that handle images or documents to ensure they do not rely on host file access that will now be blocked by the enhanced security policy.
Real-Time Tool Visibility in the Control UI Enhances Debugging
Debugging complex agent behavior often requires a clear understanding of which tools are actually available and functional at runtime. v2026.3.24 significantly improves this by adding a live “Available Right Now” section to the Control UI and updating the /tools endpoint to accurately reflect current agent capabilities rather than merely static configurations. The compact default view provides tool names and brief descriptions, while a detailed mode exposes comprehensive information such as tool schemas, required parameters, and permission contexts. This enhanced visibility is invaluable for diagnosing situations where an agent claims it cannot call a function, even when the skill appears to be installed. The UI intelligently distinguishes between globally available tools and session-specific ones loaded for the current conversation. If a tool requires an API key that has expired, it will be clearly indicated as unavailable with a red status. Furthermore, the endpoint respects runtime permission changes; if you revoke a skill mid-session, the tool list updates immediately without requiring an agent restart. For developers building complex multi-skill agents, this level of visibility prevents silent failures where the Large Language Model attempts to call a tool that exists in the codebase but lacks the necessary runtime authorization.
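The availability computation behind that view can be sketched as a partition over the tool registry. The field names below (requires_key, the granted set) are hypothetical; the real /tools schema may differ:

```python
def available_tools(registry, env, granted):
    """Partition a tool registry into available / unavailable names for a
    live "Available Right Now" view: a tool is unavailable when its
    required API key is absent or its skill grant has been revoked.
    Field names are illustrative assumptions, not OpenClaw's schema.
    """
    available, unavailable = [], []
    for tool in registry:
        missing_key = tool.get("requires_key") and tool["requires_key"] not in env
        revoked = tool["name"] not in granted
        (unavailable if missing_key or revoked else available).append(tool["name"])
    return available, unavailable

registry = [
    {"name": "weather", "requires_key": "WEATHER_API_KEY"},
    {"name": "tmux"},
    {"name": "trello", "requires_key": "TRELLO_KEY"},
]
ok, blocked = available_tools(registry, env={"WEATHER_API_KEY": "x"},
                              granted={"weather", "tmux"})
print(ok, blocked)   # ['weather', 'tmux'] ['trello']
```

Because the check runs against live state (current env, current grants) rather than static configuration, revoking a skill mid-session moves it to the blocked list immediately, matching the behavior described above.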
Microsoft Teams Integration Reaches Production Quality with SDK Migration
The Microsoft Teams integration in OpenClaw has undergone a substantial upgrade, migrating from previous webhook-based methods to the official Microsoft Teams Software Development Kit (SDK). This migration brings enterprise-grade reliability and a host of new features that align with modern collaboration platforms. New capabilities include streaming 1:1 replies that display typing indicators while the agent processes a request, welcome cards with prompt starters to guide users into effective interactions, and native AI labeling that marks messages as AI-generated content in accordance with Microsoft’s AI guidelines. The update also adds robust message editing and deletion support for messages sent by the agent, including graceful fallbacks when threading context is ambiguous. Previously, correcting a typo from the agent required sending a new message; now, the original message can be updated directly, improving user experience. The implementation adheres to best practices for AI agent User Experience (UX), providing informative status updates when tools are running, offering feedback buttons for users to rate responses, and including reflection prompts that allow the agent to explain its reasoning. These patterns reduce user confusion and build trust during long-running tasks. For IT administrators, the SDK migration means proper OAuth token handling, adherence to API rate limits, and comprehensive audit logging that satisfies corporate governance and compliance requirements. The integration can now scale to hundreds of concurrent team channels without encountering webhook rate limitations, making it suitable for large-scale deployments.
One-Click Skill Installation Streamlines OpenClaw Onboarding and Management
Setting up new skills in OpenClaw traditionally involved manual dependency installation, API key configuration, and configuration file editing. v2026.3.24 introduces one-click install recipes for a range of bundled skills, including coding-agent, gh-issues, openai-whisper-api, session-logs, tmux, trello, and weather. These recipes are metadata files that declare a skill’s dependencies and configuration requirements in a machine-readable format that both the OpenClaw Command Line Interface (CLI) and Control UI can interpret. When you enable a skill with missing dependencies, the interface detects what is missing and offers automatic installation, replacing cryptic error messages with a guided setup. The Control UI now features status-filter tabs: All, Ready, Needs Setup, and Disabled, each with a count. Clicking a skill opens a detail dialog with requirements, an activation toggle, install actions, API key entry fields, source metadata, and homepage links, replacing the previous inline card view that cluttered the dashboard. For teams onboarding new developers or managing multiple OpenClaw instances, this cuts setup time from hours to minutes: you can see exactly which API keys are missing and paste them directly into the UI instead of hunting through JSON configuration files. The recipe system is also extensible, allowing custom skills to include their own install scripts that execute within isolated sandboxes.
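The status tabs fall out of a simple classification over the recipes. As a sketch, with an assumed recipe field (required_keys) since the on-disk recipe format is not specified here:

```python
from collections import Counter

def classify_skills(recipes, configured_keys, enabled):
    """Compute per-skill status and the All / Ready / Needs Setup / Disabled
    tab counts from install recipes. The `required_keys` recipe field is a
    hypothetical stand-in for whatever the real recipe format declares.
    """
    statuses = {}
    for name, recipe in recipes.items():
        if name not in enabled:
            statuses[name] = "Disabled"
        elif set(recipe.get("required_keys", [])) - configured_keys:
            statuses[name] = "Needs Setup"      # at least one key missing
        else:
            statuses[name] = "Ready"
    counts = Counter(statuses.values())
    counts["All"] = len(recipes)
    return statuses, counts

recipes = {
    "weather": {"required_keys": ["WEATHER_API_KEY"]},
    "tmux": {},
    "trello": {"required_keys": ["TRELLO_KEY", "TRELLO_TOKEN"]},
}
statuses, counts = classify_skills(recipes, {"WEATHER_API_KEY"},
                                   enabled={"weather", "trello"})
print(statuses)  # {'weather': 'Ready', 'tmux': 'Disabled', 'trello': 'Needs Setup'}
```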
Container-Aware CLI Commands for Enhanced DevOps Workflows
Running OpenClaw within Docker or Podman containers is a common practice for achieving production isolation and portability, but debugging and management previously required complex docker exec commands. v2026.3.24 addresses this by introducing the --container flag and the OPENCLAW_CONTAINER environment variable to the CLI. When either of these is set, OpenClaw commands such as openclaw logs, openclaw config set, or openclaw skills list automatically execute within the specified container. This eliminates the need for manual context switching between host and container shells, greatly streamlining development and operational workflows. You can now script container management directly from your host’s Continuous Integration (CI) pipeline without needing to write elaborate wrapper scripts. The implementation intelligently detects whether Docker or Podman is in use and handles socket forwarding correctly, ensuring seamless communication. It also respects the container’s filesystem boundaries, so commands that modify configuration files update the persistent volume rather than the ephemeral container layer, preserving changes across restarts. For Kubernetes deployments, this feature allows for targeting specific pods with integrated kubectl functionality. The feature also supports interactive commands like openclaw chat by correctly attaching TTY, providing a fully interactive experience within the container. This bridges the gap between the security and isolation of containerized environments and the convenience of host-level tooling, allowing you to treat containerized OpenClaw instances as if they were local processes.
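The command rewriting itself amounts to prefixing the invocation with the engine's exec verb. A minimal sketch, with the engine selection simplified to a hypothetical OPENCLAW_ENGINE variable (the real CLI probes which runtime is available):

```python
import os
import shlex

def wrap_for_container(argv, container=None):
    """Prefix an openclaw CLI invocation with `docker exec` (or
    `podman exec`) when a container is targeted via flag or the
    OPENCLAW_CONTAINER environment variable. A sketch of the dispatch
    logic; OPENCLAW_ENGINE is an assumed override, not a documented knob.
    """
    target = container or os.environ.get("OPENCLAW_CONTAINER")
    if not target:
        return argv                                        # run directly on the host
    engine = os.environ.get("OPENCLAW_ENGINE", "docker")   # or "podman"
    return [engine, "exec", "-i", target] + argv

cmd = wrap_for_container(["openclaw", "logs", "--tail", "50"],
                         container="openclaw-prod")
print(shlex.join(cmd))  # docker exec -i openclaw-prod openclaw logs --tail 50
```

Interactive commands would additionally need `-t` to allocate a TTY, which is the attachment behavior the release notes describe for openclaw chat.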
Node.js Version Support and Enhanced Update Safety in v2026.3.24
Node.js version requirements have been refined in this release to improve compatibility and update safety. The minimum supported Node.js version has been lowered to 22.14, while Node 24 remains fully supported. This matters because it prevents the Node Package Manager (npm) from stranding users on older OpenClaw releases when their system Node.js is slightly behind the latest. The OpenClaw CLI now performs a preflight check against the target package’s engines.node field before attempting an openclaw update. If your current Node.js version is incompatible with the forthcoming release, you receive a clear, actionable error explaining the specific requirement rather than a failed npm installation mid-process. This is particularly beneficial on Long Term Support (LTS) Linux distributions that ship Node 22: those users can stay on their current, stable Node.js version and still receive critical security patches and bug fixes for OpenClaw. The change does not impact runtime behavior; it solely improves installation and upgrades. When you do upgrade Node.js, OpenClaw automatically detects and leverages new features such as improved stream handling. The preflight check also verifies npm registry connectivity and sufficient disk space before initiating the update, making the process more robust on constrained systems.
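The heart of such a preflight is a version comparison against the engines.node spec. This sketch handles only the single ">=X.Y.Z" form relevant here; real engines ranges can be richer (^, ~, ||), and this is not OpenClaw's actual checker:

```python
def satisfies_minimum(node_version, engines_spec):
    """Check a ">=X.Y.Z" engines.node spec against an installed Node
    version string. Only the ">=" form is handled in this sketch;
    a production check would use a full semver-range parser.
    """
    assert engines_spec.startswith(">="), "sketch handles only >= specs"
    required = tuple(int(p) for p in engines_spec[2:].split("."))
    current = tuple(int(p) for p in node_version.lstrip("v").split("."))
    return current >= required    # tuples compare component-by-component

print(satisfies_minimum("v22.14.0", ">=22.14.0"))  # True
print(satisfies_minimum("v22.11.0", ">=22.14.0"))  # False
print(satisfies_minimum("v24.1.0", ">=22.14.0"))   # True
```

Running this before invoking npm is what turns a mid-install failure into an upfront, actionable error message.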
Slack and Discord Platform Improvements Enhance Bot Interactions
Platform integrations for popular chat applications received significant polish in OpenClaw v2026.3.24, improving user experience and bot reliability. For Slack, interactive replies now restore rich reply parity for direct messages, automatically rendering simple trailing Options: lines as interactive buttons or selection menus. This fixes a regression where interactive elements only functioned within public channels, not in private direct message conversations with the agent. The setup defaults have also been improved for Slack app configuration, isolating reply controls from plugin interactive handlers to prevent event collisions and ensure smooth operation. On Discord, this release introduces optional autoThreadName: "generated" naming for automatically created threaded conversations. Instead of generic names like “Thread 1”, the agent can now generate contextual and descriptive titles based on the content of the initial message, making it much easier for users to track multiple concurrent conversations with the agent across different channels. Both Slack and Discord integrations now handle API rate limiting more gracefully, implementing exponential backoff strategies rather than failing immediately when hitting platform-imposed limits. These improvements collectively reduce the amount of custom code and configuration required to achieve production-grade bot behavior on these platforms. Users now benefit from enhanced enterprise chat features out of the box, without the need to manage separate bot frameworks or complex custom logic.
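The trailing Options: parsing can be illustrated with a short sketch. The pipe-separated syntax is an assumption on our part; the release notes say only "simple trailing Options: lines":

```python
def extract_options(reply):
    """Split a trailing 'Options: a | b | c' line off an agent reply so a
    platform adapter can render the choices as buttons or a select menu.
    The pipe-separated syntax is an assumed convention for illustration.
    """
    text = reply.rstrip()
    body, _, last = text.rpartition("\n")
    if last.startswith("Options:"):
        options = [o.strip() for o in last[len("Options:"):].split("|") if o.strip()]
        return body, options
    return text, []    # no trailing options line: render as plain text

msg = "Deploy is ready. Proceed?\nOptions: Deploy now | Schedule | Cancel"
body, options = extract_options(msg)
print(body)      # Deploy is ready. Proceed?
print(options)   # ['Deploy now', 'Schedule', 'Cancel']
```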
Security Audit Isolation for Maintainers Ensures Reproducible Tests
Previously, running OpenClaw’s test suite locally could lead to non-deterministic failures because tests might inadvertently interact with a user’s personal ~/.agents/skills directory. This created inconsistencies, particularly for maintainers and contributors. v2026.3.24 addresses this by isolating audit tests from personal skill resolution, ensuring that local installations do not interfere with maintainer pre-run checks. Tests now execute within a clean, temporary home directory, verifying only the bundled skills and explicitly defined test fixtures. This fixes scenarios where a skill installed for personal use might lack metadata required by the test suite, causing false negatives in Continuous Integration (CI) environments. The isolation extends beyond skill directories to include configuration files and environment variables, making test runs fully reproducible across different machines and development setups. For contributors submitting pull requests, this means your unique local setup will no longer cause spurious test failures, leading to a smoother contribution process. For security auditors, it guarantees that the code being audited matches the code actively being tested, without any unforeseen local modifications impacting results. This change required a refactoring of the skill resolution path to accept an explicit base directory rather than defaulting to os.homedir(). This architectural improvement also paves the way for future features, such as per-project skill directories, offering even greater flexibility and isolation.
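The isolation technique itself is straightforward to sketch: run the code under test with HOME pointed at a fresh temporary directory so anything resolving ~/.agents/skills sees an empty tree. This is an illustration of the pattern, not OpenClaw's actual test harness:

```python
import os
import tempfile

def with_isolated_home(fn):
    """Invoke `fn` with HOME redirected to a fresh temp directory, then
    restore the original value. Anything that resolves paths under
    os.homedir()/expanduser during `fn` sees an empty, reproducible home.
    """
    old = os.environ.get("HOME")
    with tempfile.TemporaryDirectory() as tmp:
        os.environ["HOME"] = tmp
        try:
            return fn()
        finally:
            if old is None:
                os.environ.pop("HOME", None)
            else:
                os.environ["HOME"] = old

# The callback observes the temporary HOME, not the real one.
print(with_isolated_home(lambda: os.environ["HOME"].startswith(tempfile.gettempdir())))
```

Passing an explicit base directory into the skill resolver, as the refactoring described above does, achieves the same end without mutating process-wide environment state, which is why it also enables per-project skill directories later.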
What This Release Means for Production AI Agents and Enterprise Adoption
v2026.3.24 is a significant step toward enterprise readiness for OpenClaw. Comprehensive OpenAI compatibility removes integration friction: organizations can drop OpenClaw into existing LLM pipelines without rewriting client libraries or modifying codebases. The outbound media fix closes a security gap that any production review would flag. The Microsoft Teams integration now meets corporate compliance standards through proper SDK usage and audit trails, making it suitable for sensitive enterprise communications. One-click installation and streamlined skill management reduce operational overhead when scaling to multiple agent instances or onboarding new team members. Container-aware CLI commands fit naturally into standard DevOps practices, enabling automated deployment and management in modern infrastructure. Together, these changes address three primary blockers for production adoption: interoperability, security, and operational complexity. You can now expose OpenClaw to external clients through its OpenAI-compatible endpoints, knowing the filesystem sandbox holds firm. This release maintains backward compatibility for well-behaved agents while tightening security for edge cases. If your organization is building AI agents that handle sensitive data or integrate with complex enterprise systems, v2026.3.24 is the release to standardize on.
Upgrading to OpenClaw v2026.3.24: Step-by-Step Guide for a Smooth Transition
To ensure a smooth and safe upgrade to OpenClaw v2026.3.24, please follow these steps carefully. First, verify that your Node.js version is at least 22.14.0 by running node --version in your terminal. If your Node.js version is older, it is essential to update Node.js before proceeding with the OpenClaw upgrade to avoid compatibility issues. Once your Node.js environment is ready, execute openclaw update to initiate the upgrade process. This command will trigger a preflight check that validates your environment against the new requirements. If you are deploying OpenClaw in a containerized environment, such as Docker, pull the latest image using docker pull openclaw/openclaw:latest. After the update completes, confirm the new version by running openclaw --version.
Next, review your filesystem policy configuration with openclaw config get fs.workspaceOnly. If workspaceOnly is set to true, audit any existing skills that send media files to ensure they are now restricted to the workspace directory. Test the new OpenAI endpoints to confirm compatibility. You can do this by running a curl command to the /v1/models endpoint, replacing your-api-key with your actual OpenClaw API key:
curl http://localhost:3000/v1/models \
-H "Authorization: Bearer your-api-key"
Verify that the response accurately lists your configured models. To test the outbound media security fix, attempt to send a file from a directory outside the OpenClaw workspace while workspaceOnly is enabled; this operation should now fail with a clear permission error. Review the Control UI’s new skills tab to ensure all your installed skills are correctly displayed as “Ready” and address any listed as “Needs Setup.” If you utilize the Microsoft Teams integration, remember to reauthorize the app to activate the new SDK features and benefit from enhanced functionality. Finally, diligently check your agent logs for any deprecation warnings or unexpected errors and address them proactively to ensure continued stability and prepare for future releases.