OpenClaw 2026.4.27 Release: Codex Computer Use, DeepInfra Integration, and Fail-Closed Security

OpenClaw 2026.4.27 ships native Codex Computer Use commands, DeepInfra provider bundling, and fail-closed MCP security checks for desktop AI agents.

OpenClaw 2026.4.27 drops today with three major shifts for builders shipping desktop AI agents. The headline: Codex Computer Use now ships with native status and install commands, plus marketplace discovery and fail-closed MCP checks that harden security when agents control your desktop. DeepInfra joins the bundled provider set, giving you model discovery, media generation, TTS, and embeddings without external API key gymnastics. Tencent Yuanbao and QQBot support expand your channel coverage into Chinese messaging ecosystems. Under the hood, manifest-first metadata cuts Gateway boot times and makes provider configurations auditable. This release focuses on production reliability: Telegram and Slack fixes, Docker GPU passthrough, mobile presence protocols, and Windows restart handoffs.

What Changed in OpenClaw 2026.4.27?

OpenClaw 2026.4.27 landed on April 27, 2026, packing production-hardening features that change how you deploy desktop-controlling AI agents. The release centers on Codex Computer Use integration, bringing first-class CLI commands for installation and status checks alongside marketplace discovery that surfaces compatible skills without manual YAML hunting. Security gets serious with fail-closed MCP checks that prevent agents from executing desktop actions when compliance validation fails. DeepInfra emerges as a first-class citizen in the provider bundle, offering turnkey access to inference, text-to-speech (TTS), and embedding endpoints. Tencent’s Yuanbao and QQBot integrations open messaging channels for Chinese markets, while manifest-first plugin architecture slashes Gateway startup latency. Reliability improvements span Telegram socket stability, Slack media handling, Docker GPU passthrough for sandboxed local models, and granular presence tracking for mobile nodes. Windows deployments gain specific fixes for restart handoffs and update synchronization. This is a stability release with significant new capabilities.

Codex Computer Use Setup Ships with Native Commands

Previous Codex integration required manual MCP server configuration and cryptic environment variable exports, a frequent source of setup failures. OpenClaw 2026.4.27 eliminates that friction with native CLI commands. Running openclaw codex install bootstraps the Computer Use environment: it configures the Model Context Protocol (MCP) server, downloads required dependencies, and validates desktop accessibility permissions. The command is idempotent, so you can rerun it safely and converge on a correct configuration. The openclaw codex status command shows whether the Codex agent can see your screen, control inputs, and access the clipboard, without launching a full chat session; this is useful for verifying operational readiness before committing to complex automation tasks.

These commands integrate seamlessly with the new marketplace discovery system. For example, when you run openclaw codex search --capability screenshot, the CLI returns vetted skills specifically designed and tested for desktop automation. The install command respects your claw.yaml sandbox settings, ensuring that even during setup, the Codex environment runs within your configured Docker or local sandbox boundaries. This design choice significantly reduces the “it works on my machine” deployment failures often encountered when moving from development laptops to production servers. The commands also handle platform-specific quirks: macOS permissions prompts are gracefully managed, Linux display server detection is automated, and Windows UI automation framework initialization is handled without requiring manual intervention, making cross-platform deployments more consistent.

Marketplace Discovery for Codex Desktop Control

Finding skills that actually work with desktop automation used to involve extensive trial and error. OpenClaw 2026.4.27 introduces marketplace discovery filtered specifically for Codex Computer Use capabilities. When you query the marketplace, tags like desktop-control, gui-automation, and screen-interpretation surface skills that have been tested against real desktop environments. The discovery API returns compatibility matrices showing which skills support macOS Accessibility, Windows UI Automation, or Linux AT-SPI, so you can select skills for your target operating system.

You can refine searches by permission requirements. For instance, if your organization’s security policy prohibits clipboard access, you can exclude skills tagged clipboard-access from the results. The marketplace integration respects the new fail-closed MCP checks: discovered skills will not even install if they request permissions that exceed your mcp-policy.json allowances, a safeguard against shadow-IT scenarios where developers install convenient but overly permissive automation tools without oversight. The discovery system also surfaces provider-owned onboarding policies, which matter for the new DeepInfra integration, where certain media generation skills may require GPU quotas or pre-configured access. You can browse, validate permissions, and install in a single workflow: openclaw marketplace install --id desktop-screenshot --verify-mcp.
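
As an illustrative sketch, the tag and permission filtering described above might look like the following. The skill-record shape (tags, permissions fields) is an assumption for this example, not OpenClaw's documented API:

```python
# Hedged sketch of marketplace filtering: keep skills carrying every required
# tag and none of the denied permissions. Record shape is hypothetical.

def filter_skills(skills, required_tags, denied_permissions):
    """Keep skills with all required tags and no denied permission."""
    return [
        skill for skill in skills
        if set(required_tags) <= set(skill["tags"])
        and not set(skill["permissions"]) & set(denied_permissions)
    ]

catalog = [
    {"id": "desktop-screenshot",
     "tags": ["desktop-control", "screen-interpretation"],
     "permissions": ["screen-read"]},
    {"id": "smart-paste",
     "tags": ["desktop-control", "gui-automation"],
     "permissions": ["clipboard-access", "input-write"]},
]

# Exclude anything that needs clipboard access, per the policy example above.
safe = filter_skills(catalog, ["desktop-control"], ["clipboard-access"])
# safe -> only "desktop-screenshot"
```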

Fail-Closed MCP Checks: Security for Codex Mode

Desktop control demands paranoid security to prevent unauthorized or malicious actions. OpenClaw 2026.4.27 implements fail-closed MCP checks for Codex-mode operations, a critical security enhancement. This means that any Model Context Protocol validation failure immediately blocks the intended action rather than merely logging a warning. Specifically, when an agent attempts to perform a desktop action such as clicking a button, typing text, or capturing a screenshot, the Gateway rigorously validates the operation against your predefined mcp-policy.json rules. If the policy engine crashes, times out, or encounters an undefined permission, the operation halts immediately, preventing potential security breaches.

This fail-closed mechanism is designed to prevent privilege escalation, a common vulnerability where a compromised skill might exploit policy engine downtime or misconfiguration to perform unauthorized actions. You can configure this fail-closed behavior explicitly in your Gateway settings to match your organization’s security posture:

mcp:
  validation_mode: "strict"
  fail_closed: true
  timeout_ms: 500

The validation_mode: "strict" setting ensures that even minor policy violations are treated as critical, while the 500 millisecond timeout ensures agents do not hang indefinitely waiting for policy checks during fast-paced automation sequences. Furthermore, all Codex Computer Use operations generate immutable audit logs stored in ~/.openclaw/audit/codex/. These logs capture detailed information, including the exact screen coordinates, input strings, and the rationale behind each policy decision. This comprehensive logging satisfies stringent compliance requirements for sensitive applications such as financial trading bots, healthcare automation, and administrative access tools, where every action needs a clear and auditable paper trail.
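
The fail-closed rule can be pictured with a short model. This is a hedged Python sketch of the semantics described above (block on crash, timeout, or undefined verdict), not OpenClaw's actual policy engine; all names here are illustrative:

```python
import concurrent.futures

# Fail-closed semantics: only an explicit allow verdict permits the action.
# A crash, a timeout, or an undefined permission all block it.

def check_action(policy, action, timeout_s=0.5):
    """Return True only when the policy returns an explicit allow verdict."""
    try:
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            verdict = pool.submit(policy, action).result(timeout=timeout_s)
    except Exception:           # policy engine crashed or timed out: block
        return False
    return verdict is True      # None (undefined permission) also blocks

def screenshot_only(action):
    # Toy policy: only screenshots have a defined verdict.
    return True if action == "screenshot" else None
```

Note the asymmetry this buys you: the only way an action proceeds is a positive, in-time verdict, so policy-engine downtime can never be exploited to slip an action through.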

DeepInfra Joins the Bundled Provider Set

The DeepInfra integration graduates from a community plugin to a fully bundled provider in OpenClaw 2026.4.27. You no longer need to clone external repositories or manage separate API keys to reach DeepInfra’s inference stack: the provider ships with the core distribution, exposing model discovery, media generation, text-to-speech (TTS), and embedding endpoints through standard OpenClaw configuration files.

Bundled status means DeepInfra adheres to OpenClaw’s provider-owned onboarding policy framework. When you configure DeepInfra in providers.yaml, the system validates your quotas and rate limits against DeepInfra’s API before the configuration is accepted, preventing startup failures where agents attempt inference against exhausted credit accounts. The integration also supports provider row aliasing: you can map a specific DeepInfra model, such as deepinfra/llama-4, to a generic alias like production-llm in your agent configurations, avoiding hardcoded vendor paths and making future model swaps easier.
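
As a sketch, the aliasing might look like this in providers.yaml. Apart from the bundled.deepinfra key (which the migration notes mention) and the deepinfra/llama-4 and production-llm names from the prose, the field names here are assumptions, not verified syntax:

```yaml
providers:
  bundled.deepinfra:
    api_key_env: DEEPINFRA_API_KEY        # assumed key name
    aliases:
      production-llm: deepinfra/llama-4   # swap the target model later
                                          # without touching agent configs
```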

DeepInfra’s media generation capabilities, including image editing and video generation, are accessible through the unified media.generate() skill interface, which simplifies building multimedia-rich agents. The DeepInfra TTS pipeline integrates with OpenClaw’s voice gateway, so voice agents can use DeepInfra’s voice personas alongside existing Azure and Google providers.

Tencent Yuanbao and QQBot Channel Expansion

OpenClaw 2026.4.27 deepens its integration with Chinese messaging platforms, specifically Tencent Yuanbao and QQBot. Yuanbao, Tencent’s dedicated AI assistant platform, now appears in the channel catalog with complete documentation and API schema entries, so you can configure Yuanbao channels through the standard OpenClaw Gateway interface, elevating Tencent’s ecosystem to a first-class citizen alongside Slack and Discord. This expansion opens deployment opportunities for AI agents across Tencent’s large Chinese-market user base.

QQBot improvements are specifically targeted at enhancing group chat scenarios, which are essential for community management, customer support, and multi-user interaction agents. The refactored pipeline now supports streaming responses in QQ groups, addressing a previous limitation where messages exceeding 2000 characters were chunked, often breaking code blocks and tables. This streaming capability ensures more natural and coherent communication. Furthermore, media upload functionality has been enhanced to handle proprietary Tencent file formats, automatically converting images and documents to QQ-compatible formats during transit. This eliminates the need for manual format conversions, simplifying agent development for multimedia content.

The streaming implementation leverages server-sent events (SSE) with an automatic fallback to polling for older QQ client versions, ensuring broad compatibility. Pipeline refactors have also been implemented to reduce memory pressure when handling high-frequency group messages, preventing the out-of-memory (OOM) crashes reported in previous versions during viral chat events. You configure QQBot channels using the same YAML structure as other providers, but with platform: qq and Tencent-specific authentication flows handled automatically by the Gateway, ensuring a consistent and secure setup process.
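
A hedged sketch of what a QQBot channel entry might look like. Apart from platform: qq, which the notes call out explicitly, the field names are assumptions for illustration:

```yaml
channels:
  qqbot:
    platform: qq
    streaming: true                  # SSE with polling fallback, per the notes
    credentials:
      app_id_env: QQ_APP_ID          # assumed; the Tencent auth flow itself
      app_secret_env: QQ_APP_SECRET  # is handled by the Gateway automatically
```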

Manifest-First Metadata Cuts Gateway Boot Time

OpenClaw 2026.4.27 introduces a fundamental architectural shift towards manifest-first metadata for plugins, which significantly reduces Gateway initialization time. Previously, the Gateway would scan plugin directories at boot, executing initialization hooks to dynamically determine capabilities, provider rows, and model aliases. This process, while flexible, could introduce noticeable delays, especially with a large number of plugins. Now, plugins are required to ship with static manifest.json files that declare this metadata upfront.

Because the Gateway reads these manifests without executing any plugin code at startup, it can build the entire capability graph in milliseconds rather than seconds. The speedup is most valuable for ephemeral Gateway instances in serverless environments, where rapid startup is critical, and for node restarts during rolling deployments in a production cluster. The manifest format covers provider rows, model catalog entries, skill aliases, and suppression flags, giving a complete declarative overview of each plugin’s offerings.
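
A hypothetical manifest.json, with field names inferred from the prose (the real schema may differ):

```json
{
  "plugin": "example-provider-plugin",
  "provider_rows": [
    { "id": "deepinfra/llama-4", "capabilities": ["chat", "embeddings"] }
  ],
  "aliases": { "fast-model": "deepinfra/llama-4" },
  "suppressions": []
}
```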

You generate these manifests using the openclaw plugin manifest --update command after modifying plugin code. This command not only creates the manifest.json file but also validates that declared capabilities accurately match the actual implementations, catching potential mismatches before deployment. The manifest-first architecture also enables more robust auditing capabilities: you can easily diff manifest.json files in version control systems to track precisely which provider aliases or capabilities have changed between releases, without needing to delve into the source code. This feature is invaluable for satisfying change management requirements in regulated industries where every API modification or capability change necessitates thorough documentation and traceability.

Auditing Provider Aliases and Suppressions

The manifest-first shift in OpenClaw 2026.4.27 makes provider configurations fully auditable through declarative files: provider rows, aliases, and suppressions are now version-controlled metadata rather than transient runtime state. Your providers.yaml file explicitly references aliases defined within plugin manifests, creating a traceable chain of custody from the vendor API configuration all the way to the agent’s runtime configuration.

When you suppress a provider row, for example, by disabling gpt-4-turbo in favor of a newer or more cost-effective model like gpt-4.1, this suppression is now explicitly recorded in the manifest. This entry includes a timestamp and a rationale field, providing a clear historical record of the decision. Compliance teams can leverage this feature to quickly scan these manifests and verify that deprecated or non-compliant models are not accessible in production environments, ensuring adherence to internal policies and external regulations. The audit trail extends to alias resolution as well: if a generic alias like fast-model is configured to point to a specific model such as deepinfra/llama-4-scout, the manifest explicitly records this mapping, eliminating ambiguity.
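
A hedged sketch of how such a suppression record might appear in a manifest. The field names are assumptions modeled on the timestamp and rationale fields described above:

```json
{
  "suppressions": [
    {
      "row": "gpt-4-turbo",
      "timestamp": "2026-04-27T09:00:00Z",
      "rationale": "Superseded by gpt-4.1; cost and latency"
    }
  ]
}
```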

You can further strengthen your configuration governance by enforcing manifest validation in your Continuous Integration (CI) pipelines. Using a command like openclaw audit providers --strict, your build process will fail if aliases reference undefined provider rows or if suppressions lack proper justification comments. This proactive approach prevents configuration drift, where production Gateways might accumulate experimental or unapproved provider settings that bypass security and compliance reviews. The manifest system seamlessly integrates with OpenClaw’s existing policy engine, ensuring that once a provider is suppressed, it cannot be re-enabled through runtime API calls without explicit policy changes, thus reinforcing security and control.

Reliability Fixes for Telegram and Slack

Production messaging stability receives significant improvements in OpenClaw 2026.4.27 through targeted fixes for both Telegram and Slack integrations. Telegram bots previously suffered from startup race conditions where the Gateway would attempt to poll for messages before completing the authentication handshake. This often resulted in 401 Unauthorized errors and subsequent retry loops that consumed valuable API quotas and delayed agent responsiveness. The fix implemented in this release rigorously sequences the initialization process: authentication is completed first, followed by webhook registration, and finally, message polling begins. This sequential approach ensures a stable and authenticated connection from the outset.

Slack socket mode, a popular choice for persistent connections, gains enhanced resilience against media transfer stalls. When agents needed to upload large files to Slack channels, the socket connection previously had a tendency to time out during the transfer, often dropping the connection and leaving the upload operation incomplete or orphaned. The 2026.4.27 release addresses this by implementing chunked transfer acknowledgments and separate keepalive threads specifically for media operations. This design maintains socket health and prevents timeouts even during the transfer of multi-megabyte files, ensuring reliable delivery of rich media content.

Both platforms benefit from improved error classification. Transient network failures, which are often temporary, now trigger an exponential backoff strategy with jitter. This prevents overwhelming the API with repeated requests during network instability and allows the system to recover gracefully. In contrast, authentication errors, which indicate a more fundamental problem, immediately surface to monitoring systems rather than being hidden in debug logs, enabling faster incident response. You can configure these retry policies on a per-channel basis, providing fine-grained control over how your agents interact with each messaging platform:

channels:
  slack:
    retry_policy:
      max_attempts: 5
      backoff_ms: 1000
      jitter: true
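
The backoff behavior behind those fields can be sketched as follows. This models exponential backoff with full jitter using the retry_policy values above; the exact algorithm OpenClaw applies is not documented here:

```python
import random

# Exponential backoff with optional full jitter, mirroring max_attempts,
# backoff_ms, and jitter from the channel config. cap_ms is an assumed bound.

def backoff_delays(max_attempts, backoff_ms, jitter=True, cap_ms=30_000):
    """Yield one delay in milliseconds per retry attempt."""
    for attempt in range(max_attempts):
        base = min(cap_ms, backoff_ms * (2 ** attempt))
        yield random.uniform(0, base) if jitter else base

# With jitter disabled: 1000, 2000, 4000, 8000, 16000 ms
deterministic = list(backoff_delays(5, 1000, jitter=False))
```

Jitter spreads simultaneous retries across time so a fleet of agents does not hammer the API in lockstep after a shared outage.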

Gateway Prewarm and Session Management

Cold start latency has been a persistent challenge for OpenClaw deployments, particularly in serverless or containerized environments where instances spin up on demand. Version 2026.4.27 introduces Gateway prewarm sequences that initialize critical subsystems before accepting live traffic. During startup, the Gateway loads frequently used model catalogs, establishes provider connection pools, and pre-warms sandbox containers if Docker mode is enabled, so the first request arrives at a Gateway that is already ready to respond.

Session handling also receives substantial improvements with sensible defaults that prevent resource leaks, a common issue in long-running applications. Previously, idle sessions could remain open indefinitely, consuming valuable memory and database connections. Now, session.max_idle_ms defaults to 300000 (five minutes), with an automatic cleanup mechanism for orphaned history contexts. This ensures that resources are reclaimed efficiently. The history truncation policy defaults to smart, intelligently preserving crucial system messages and tool definitions while rotating out older user queries, maintaining context without bloating memory.
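
Expressed as configuration, those defaults might look like this; apart from session.max_idle_ms and the smart truncation policy named in the prose, the key paths are assumptions:

```yaml
session:
  max_idle_ms: 300000    # five-minute default described above
  history:
    truncation: smart    # keeps system messages and tool definitions,
                         # rotates out older user queries
```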

Update synchronization is also enhanced for long-running Gateway instances. Previously, deploying new skills or updating provider configurations required a full restart for the Gateway to pick up the changes, interrupting service. OpenClaw 2026.4.27 implements hot reload for manifest changes, synchronizing updates across cluster nodes without dropping active connections, so configuration changes apply with zero downtime. Windows deployments specifically benefit from improved restart handoffs: the new process accepts incoming connections before the old process fully terminates, eliminating the typical 2-3 second downtime window during binary updates.

Docker GPU Passthrough for Sandbox Workloads

Sandboxed agents can now access host GPUs through the new sandbox.docker.gpus configuration option in OpenClaw 2026.4.27. This opt-in feature significantly enhances the performance capabilities of local AI models by passing the --gpus flag directly to Docker when spawning sandbox containers. This enables high-performance tasks such as local Large Language Model (LLM) inference, complex image generation, and demanding video processing to occur within isolated environments, bridging the gap between security and computational power.

You configure GPU access in your claw.yaml file with clear and flexible options:

sandbox:
  docker:
    gpus: "all"  # Or "device=0,1" for specific cards
    runtime: "nvidia" # Specify the Docker runtime, e.g., "nvidia", "amdgpu", "intel"

This feature directly addresses the previous dilemma where GPU-accelerated tasks either required using unsafe privileged containers or running workloads outside the secure sandbox boundaries. Agents can now invoke local Stable Diffusion models, perform Whisper transcriptions, or run custom PyTorch models, all while maintaining robust filesystem and network isolation. The implementation includes a crucial check: OpenClaw validates host Docker runtime capabilities before attempting GPU passthrough, gracefully falling back to CPU execution if the host lacks the NVIDIA Container Toolkit or an equivalent runtime for AMD (ROCm) or Intel (oneAPI) GPUs.
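
The fallback check can be pictured with a hedged sketch: look for a GPU-capable runtime in `docker info` output and fall back to CPU otherwise. The runtime names and parsing here are illustrative, not OpenClaw's actual detection code:

```python
# Parse a `docker info`-style "Runtimes:" line for a GPU-capable runtime.
# GPU_RUNTIMES follows the runtime names quoted in the config example above.

GPU_RUNTIMES = {"nvidia", "amdgpu", "intel"}

def pick_runtime(docker_info_output: str) -> str:
    """Return an advertised GPU runtime name, or 'cpu' when none is found."""
    for line in docker_info_output.splitlines():
        if line.strip().startswith("Runtimes:"):
            advertised = set(line.split(":", 1)[1].split())
            matches = advertised & GPU_RUNTIMES
            if matches:
                return sorted(matches)[0]
    return "cpu"
```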

Security remains tight even with GPU access: passthrough respects existing sandbox network policies and volume mounts, and the agent is granted the GPU but cannot escape the container through CUDA drivers or other hardware interfaces. This isolation supports on-premise deployments where sensitive data cannot leave the machine yet workloads still need GPU acceleration, all under OpenClaw’s agent orchestration and memory management framework.

Mobile Presence Protocol for iOS and Android

Mobile agent deployments gain significantly improved reliability and state tracking through the introduction of the authenticated node.presence.alive protocol event in OpenClaw 2026.4.27. Both iOS and Android nodes are now capable of emitting these crucial events during background transitions, network reconnections, and other state changes. These events update the Gateway’s node.list with accurate last-seen timestamps without requiring full, resource-intensive connection handshakes, making mobile agents more responsive and traceable.

This protocol solves the “ghost node” problem, where mobile devices appeared offline in the Gateway’s registry despite being connected and available. Previously, silence was ambiguous: it could mean a healthy device sitting in the background or a dead connection, so the Gateway would attempt delivery to disconnected phones and show stale status for healthy ones. Presence events must now be cryptographically authenticated, preventing malicious actors from spoofing mobile agent availability and protecting the integrity of your agent swarm.

iOS implementations specifically address the platform’s stringent background execution constraints. They leverage push notification triggers to emit presence updates efficiently, minimizing battery drain often associated with persistent background connections. Android devices, on the other hand, utilize WorkManager for reliable presence transmission, ensuring that events are sent even when the device is in doze mode or under system resource constraints. Both platforms accurately update the last_seen_at timestamp in the Gateway’s node registry, which is easily viewable through openclaw node list --format json. This enhanced presence awareness allows you to build more reliable and intelligent mobile agent swarms, where the orchestrator has a precise understanding of which devices are ready to accept tasks versus those that last checked in hours ago, enabling better task allocation and resource management.
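
The orchestrator-side bookkeeping can be sketched as follows. The last_seen_at field name follows the prose; the registry shape and the ten-minute staleness threshold are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

# Decide which nodes are safe to route tasks to, based on their last
# authenticated presence event.

def stale_nodes(registry, now, max_age=timedelta(minutes=10)):
    """Return ids of nodes whose last presence event is older than max_age."""
    return sorted(
        node_id for node_id, last_seen_at in registry.items()
        if now - last_seen_at > max_age
    )

now = datetime(2026, 4, 27, 12, 0, tzinfo=timezone.utc)
registry = {
    "ios-1": now - timedelta(minutes=2),      # fresh: still schedulable
    "android-7": now - timedelta(hours=3),    # stale: skip for task routing
}
# stale_nodes(registry, now) -> ["android-7"]
```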

Enhanced Attachment Handling Beyond Images

The chat.send method in OpenClaw 2026.4.27 has been significantly expanded to support a wider range of attachment types beyond just images, now including documents, audio files, and video files. Previously, attempting to send non-image attachments often resulted in silent failures or required cumbersome base64 encoding hacks that drastically bloated message sizes and increased processing overhead. The new implementation provides a robust and intelligent system for handling diverse media types, staging files as agent-readable media paths while maintaining explicit error handling for formats not supported by specific channels.

When an agent sends a PDF document or an MP3 audio file via chat.send, the Gateway now intelligently validates the MIME type of the attachment against the capabilities of the target channel. For instance, Slack receives the file through its native file upload APIs, Telegram utilizes its dedicated document messages, and QQBot leverages its newly enhanced media upload pipeline. This channel-aware routing ensures optimal delivery. Crucially, if a specific channel does not support a particular attachment type, the Gateway now returns a clear UNSUPPORTED_ATTACHMENT error, rather than silently dropping the file. This explicit feedback is invaluable for agent developers, allowing them to build more resilient and platform-aware agents.

This enhancement enables powerful new document-processing and multimedia workflows. Agents can now analyze PDFs directly within chat threads, transcribe audio files, or even review video content, all integrated seamlessly into the conversational flow. The internal staging system temporarily stores attachments in ~/.openclaw/staging/, ensuring data persistence during transit and automatically cleaning up after successful transmission or according to a configured Time-To-Live (TTL) expiration. For particularly large files, the Gateway intelligently streams content to channels that support chunked uploads, preventing memory exhaustion when sharing multi-gigabyte video files with agents. This comprehensive attachment handling significantly broadens the scope of tasks that OpenClaw agents can perform, making them more versatile and powerful.
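
The channel-aware validation described above can be sketched like this. The capability table and exception class are assumptions modeled on the UNSUPPORTED_ATTACHMENT error named in the prose, not OpenClaw's real API surface:

```python
# Validate an attachment's MIME type against what the target channel accepts;
# raise explicitly instead of silently dropping the file.

CHANNEL_MIME_SUPPORT = {
    "slack":    {"image/png", "application/pdf", "audio/mpeg", "video/mp4"},
    "telegram": {"image/png", "application/pdf", "audio/mpeg"},
}

class UnsupportedAttachment(Exception):
    """Analog of the UNSUPPORTED_ATTACHMENT error described above."""

def validate_attachment(channel: str, mime_type: str) -> str:
    if mime_type not in CHANNEL_MIME_SUPPORT.get(channel, set()):
        raise UnsupportedAttachment(f"{channel} cannot accept {mime_type}")
    return mime_type
```

Raising a typed error lets agent code catch the failure and pick a fallback channel or format instead of discovering a silently missing file later.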

Windows Restart Handoffs and Update Sync

Windows-specific reliability improvements in OpenClaw 2026.4.27 significantly enhance the platform’s robustness, focusing on graceful restart handoffs and efficient update synchronization. The previous Windows service implementation often led to service interruptions: the old Gateway process would be abruptly terminated before the new binary had fully completed its initialization, resulting in noticeable 2-3 second connection drops during updates. The new handoff protocol meticulously addresses this by employing a named pipe signaling mechanism. This allows the old process to maintain its listening sockets and continue serving requests until the new process explicitly signals its readiness to take over. This ensures near-zero downtime during service restarts, a critical feature for continuous operations.

Update synchronization also sees substantial improvements, specifically targeting race conditions that could arise when multiple Windows nodes in a cluster attempted simultaneous updates. Such scenarios could exhaust network bandwidth, lead to contention, and temporarily degrade service performance. The 2026.4.27 release implements a sophisticated staggered update window mechanism based on node ID hashing. This ensures that if you have a 50-node Windows fleet, updates are distributed over a sensible 10-minute window rather than all nodes attempting to update concurrently. This controlled rollout minimizes resource strain and maintains overall service stability.
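
Hashing node IDs into a staggered window can be sketched as follows; the hash choice and offset layout are illustrative assumptions, but the shape matches the mechanism described above:

```python
import hashlib

# Deterministically spread nodes across a 10-minute (600 s) update window
# by hashing each node id to an offset within the window.

def update_offset_s(node_id: str, window_s: int = 600) -> int:
    """Place a node at a stable offset inside a window_s-second window."""
    digest = hashlib.sha256(node_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_s

# A 50-node fleet: each node gets its own start time within the window.
offsets = [update_offset_s(f"win-node-{i}") for i in range(50)]
```

Because the offset depends only on the node ID, every node computes its own slot independently, with no coordinator needed.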

Furthermore, Windows-specific path handling has been improved to correctly manage sandbox directories containing spaces or Unicode characters. This resolves previous Docker volume mount failures that plagued users with non-ASCII usernames or complex directory structures. The service wrapper now also correctly handles SIGTERM equivalents during system shutdown, ensuring that in-flight agent operations have the opportunity to complete or checkpoint their state before the process terminates. These collective changes make Windows a much more viable and reliable production platform for 24/7 agent hosting, moving beyond its previous role primarily as a development workstation.

OpenClaw 2026.4.27 Production Deployment Implications

OpenClaw 2026.4.27 marks a significant maturity shift, transforming the framework from experimental tooling into a robust production-grade infrastructure for AI agents. The introduction of fail-closed MCP checks and comprehensive audit logging directly addresses critical compliance requirements for regulated industries. This means organizations in finance, healthcare, or government can now deploy OpenClaw agents with confidence, knowing that policy violations will actively block actions rather than merely generating warnings that might be overlooked. This feature enables GitOps workflows, where Gateway configurations and compliance policies can reside in version control alongside application code, fostering transparency and traceability.

The bundling of DeepInfra as a first-class provider dramatically reduces vendor management overhead and simplifies credential management. This offers enterprise-grade inference capabilities without the complexities of managing external credential sprawl or integrating community plugins. For organizations deploying agents in the field, the new mobile presence protocols for iOS and Android enable the creation of reliable field agent deployments. Mobile devices can now participate in distributed workflows without the “ghosting” problem, ensuring that the orchestrator has accurate, real-time information about node availability.

Docker GPU passthrough bridges the gap between sandbox security and local AI performance: sensitive models can run on-premise with GPU acceleration without sacrificing container isolation, which matters most for industries handling proprietary or confidential data. For teams operating mixed-OS clusters, the Windows restart handoffs eliminate one of the last major platform disparities, letting Linux and Windows nodes update with equal grace. These are not merely convenience features; they remove blockers for organizations that previously couldn’t adopt OpenClaw due to compliance, reliability, or cross-platform concerns. For additional context on how OpenClaw compares to other frameworks for production use, see our analysis of OpenClaw vs AutoGPT for production migrations.

Upgrade Path and Migration Guide for OpenClaw 2026.4.27

Upgrading to OpenClaw 2026.4.27 requires attention to manifest migration and MCP policy updates to ensure a smooth transition. The first step is to run openclaw plugin manifest --update on every custom plugin you have developed. This command generates the static manifest.json files that are now mandatory for Gateway startup. Without these manifests, the Gateway fails fast with clear error messages indicating precisely which plugins lack the required declarations, preventing unexpected behavior at runtime.
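For orientation, a generated manifest is a static JSON declaration of what the plugin provides. The sketch below is purely illustrative; the release notes do not document the manifest schema, so every field name here is an assumption:

```json
{
  "name": "my-custom-plugin",
  "version": "1.2.0",
  "entrypoint": "dist/index.js",
  "capabilities": ["skills"],
  "declarations": {
    "skills": ["my-skill"]
  }
}
```

In practice you should never hand-write this file: run openclaw plugin manifest --update and inspect what it generates for your plugin.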

If you currently use Codex Computer Use, run openclaw codex install immediately after the upgrade. This command configures the new fail-closed MCP checks and integrates with the enhanced marketplace discovery system. You should also review your existing mcp-policy.json file: the default fail-closed behavior remains permissive for backward compatibility, but production deployments are strongly advised to explicitly set fail_closed: true for all desktop control operations. This ensures the highest level of security and compliance for desktop-controlling agents.
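A minimal sketch of that recommended policy change follows. Only the fail_closed key is confirmed by this release; the surrounding per-operation structure is an assumption about how mcp-policy.json is organized:

```json
{
  "policies": {
    "desktop_control": {
      "fail_closed": true
    }
  }
}
```

With this in place, any policy validation failure blocks the desktop action outright instead of logging a warning.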

If you previously relied on community plugins for DeepInfra integration, remove references to those external repositories and migrate to the bundled provider configuration. The core DeepInfra API remains compatible, but the configuration keys have moved from community.deepinfra to bundled.deepinfra.
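The move is mechanical. A before/after sketch of the provider configuration is shown below; the community.deepinfra and bundled.deepinfra namespaces are the documented change, while the api_key field and environment variable are assumptions for illustration:

```yaml
# Before: community plugin namespace (remove this)
# community:
#   deepinfra:
#     api_key: ${DEEPINFRA_API_KEY}   # field name assumed

# After: bundled provider namespace
bundled:
  deepinfra:
    api_key: ${DEEPINFRA_API_KEY}     # field name assumed
```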

Mobile deployments will necessitate updating your iOS and Android client applications to version 2026.4.27 or later. While older clients will continue to function, they will not emit the new authenticated node.presence.alive events, meaning the orchestrator will rely on stale connection states. Windows users should also verify that their service accounts have the necessary additional permissions for the new named pipe handoff mechanism, which is critical for graceful restarts. Fortunately, no database schema changes or API deprecations are introduced in this release, ensuring that existing agent runtime APIs remain backward compatible and minimizing the impact on your data layer.

Frequently Asked Questions

How do I enable fail-closed MCP checks for Codex Computer Use?

To enable fail-closed Model Context Protocol (MCP) checks for Codex Computer Use, you need to modify your gateway.yaml configuration file. Locate the mcp section and add fail_closed: true. This setting ensures that any policy validation failure immediately halts an operation rather than proceeding with a warning. Additionally, for maximum security and responsiveness, set validation_mode: "strict" and configure a timeout_ms (e.g., 500 milliseconds) to prevent automation sequences from hanging indefinitely during policy evaluations. After making these changes, it’s crucial to test your policies. You can do this using openclaw codex status --validate to confirm that your mcp-policy.json loads correctly and enforces your desired security posture without blocking legitimate operations.
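Putting those three settings together, the mcp section of gateway.yaml would look roughly like this. The key names fail_closed, validation_mode, and timeout_ms are the ones described above; the nesting and indentation are assumptions:

```yaml
mcp:
  fail_closed: true          # halt the operation on any policy validation failure
  validation_mode: "strict"
  timeout_ms: 500            # cap policy evaluation so automation never hangs
```

After editing, openclaw codex status --validate confirms the policy loads and enforces as intended.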

What DeepInfra models are available in the bundled provider?

The bundled DeepInfra provider in OpenClaw 2026.4.27 offers a comprehensive suite of models. For large language models, you’ll find Llama 4 Scout and Maverick, along with Qwen 3. Various embedding models are also available for tasks like semantic search and retrieval-augmented generation. You can discover the full list via openclaw models list --provider deepinfra. Media generation capabilities include Stable Diffusion XL for image creation and a range of video endpoints. Text-to-speech services offer multiple voice personas with streaming support for real-time applications. You can reference these models directly, for example, deepinfra/llama-4-scout, or create custom aliases in your providers.yaml for easier management. DeepInfra’s API quota validation during Gateway startup proactively prevents runtime failures due to exhausted credits, ensuring service reliability. All these capabilities are seamlessly integrated with OpenClaw’s unified skill interface.
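As an example, a custom alias in providers.yaml might be declared like this. The model identifier deepinfra/llama-4-scout is from the release notes; the aliases key and file layout are assumptions:

```yaml
aliases:
  scout: deepinfra/llama-4-scout   # shorthand for the bundled model id
```

Skills can then reference the short name instead of the full provider-prefixed identifier.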

Do I need to update my iOS and Android apps for the presence protocol?

Yes, it is highly recommended to update your mobile clients to version 2026.4.27 or later to fully leverage the new authenticated node.presence.alive protocol. Older clients will still function and maintain basic connectivity, but they will not emit these specific presence events. This means their last_seen_at timestamps in the Gateway’s node registry will not be updated reliably, potentially causing the orchestrator to rely on stale connection states. The new protocol is designed for battery efficiency: iOS devices utilize push triggers for background updates, while Android devices leverage WorkManager to manage presence transmission effectively, even in low-power modes. Without the update, older mobile agents may appear “ghosted” in the node list, making it difficult for the orchestrator to accurately determine their current availability and assign tasks efficiently. You can verify presence reporting using openclaw node list --format json after updating.
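For orientation, an authenticated presence event might carry a payload along these lines. The event name node.presence.alive and the last_seen_at field are documented; every other field here is hypothetical:

```json
{
  "event": "node.presence.alive",
  "node_id": "android-field-12",
  "last_seen_at": "2026-04-27T09:15:00Z",
  "signature": "<node-signed-token>"
}
```

The orchestrator uses the signed timestamp, rather than raw socket state, to decide whether a mobile node is eligible for task assignment.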

Can I use Docker GPU passthrough with AMD or Intel GPUs?

Yes, the sandbox.docker.gpus configuration in OpenClaw 2026.4.27 is designed to support AMD and Intel GPUs, in addition to NVIDIA. For AMD GPUs, you’ll need a host system with ROCm (Radeon Open Compute) installed and configured for container runtime. For Intel GPUs, oneAPI support is required. For NVIDIA GPUs, the NVIDIA Container Toolkit is necessary. OpenClaw intelligently validates the host’s container runtime capabilities before attempting to spawn containers with GPU passthrough. If the required runtime or drivers are not detected, it will gracefully fall back to CPU execution, preventing errors. You can test your GPU passthrough configuration using openclaw sandbox test --gpus. It’s important to note that even with GPU access, the agents remain securely sandboxed; they cannot escape the container through driver vulnerabilities, maintaining the isolation and security of your environment.
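A sketch of the sandbox configuration under the documented sandbox.docker.gpus key; the value syntax here mirrors Docker's --gpus flag and is an assumption:

```yaml
sandbox:
  docker:
    gpus: all    # request GPU passthrough; falls back to CPU if no runtime is detected
```

After editing, openclaw sandbox test --gpus verifies that the host runtime (NVIDIA Container Toolkit, ROCm, or oneAPI) is actually reachable from inside the container.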

What breaking changes should I watch for when upgrading from 2026.4.26?

When upgrading from OpenClaw 2026.4.26 to 2026.4.27, several key breaking changes require attention. Firstly, all custom plugins must now include static manifest.json files. The Gateway will refuse to start if these are missing, so run openclaw plugin manifest --update on your plugins prior to deployment. Secondly, if you were using the DeepInfra community plugin, you must switch to the bundled provider namespace, changing your configuration from community.deepinfra to bundled.deepinfra. Thirdly, Windows service accounts will require additional permissions to accommodate the new named pipe handoff mechanisms for graceful restarts. While Codex configurations migrate automatically, it’s prudent to verify mcp-policy.json compatibility, especially if you plan to enable fail-closed security. Fortunately, there are no database schema changes or API deprecations that affect existing agents, ensuring backward compatibility for runtime APIs.

Conclusion

OpenClaw 2026.4.27 ships native Codex Computer Use commands, DeepInfra provider bundling, and fail-closed MCP security checks for desktop AI agents. Combined with the reliability fixes across Telegram, Slack, Docker, mobile presence, and Windows restarts, it is a stability release that makes production deployment the default posture rather than the exception: regenerate your plugin manifests, enable fail_closed: true, and upgrade.