What Just Happened: Rampart Drops as First Native Security Layer for OpenClaw Agents
A developer running unsupervised AI agents 24/7 just open-sourced the missing security piece. Rampart launched this week on Hacker News with a direct value proposition: stop bad commands before they execute, specifically for OpenClaw and Claude Code deployments. The creator built it out of personal necessity after watching their home lab k3s cluster get managed by an agent with no guardrails beyond human eyeballs on logs.
The tool addresses a gap we’ve covered before. Our managed OpenClaw hosting comparison noted that DIY deployments often lack enterprise-grade isolation and robust security features. While Rampart doesn’t solve multi-tenancy or network segmentation, it effectively addresses the immediate problem of command-level safety. You define security policies using YAML, and Rampart evaluates every Bash invocation, file read, and file write against those rules. Denied operations return errors to the agent without ever executing the potentially harmful command.
This capability is increasingly important as OpenClaw adoption accelerates into production-like workloads. Our autonomous content marketing case study showcased agents running for days unsupervised. Without a tool like Rampart, operating such agents is akin to playing Russian roulette with your underlying infrastructure, risking data loss, system compromise, or unauthorized access.
Why Unsupervised Agent Operations Need Interception, Not Just Monitoring
Monitoring tells you what has already happened, often after the failure. Interception prevents the failure from happening at all. Most OpenClaw deployments today rely on logging and human review for security, a strategy that breaks down when agents operate overnight or across time zones. The Rampart author highlighted exactly this limitation: “I could see what the agent was running, but had no way to stop a bad command before it executed.”
The latency between observing a problem and reacting to it creates a real vulnerability window. An agent that misreads a prompt or hallucinates can execute a destructive command like rm -rf / or exfiltrate SSH keys within milliseconds, while human response times, even in well-run security operations centers, measure in minutes at best. Rampart closes this gap with pre-execution evaluation in under 20 microseconds, stopping a bad command before it runs.
This architectural shift, moving from reactive security to preventative security, parallels the evolution of kernel-level access control mechanisms. Systems like SELinux and AppArmor advanced beyond mere audit logs to implement mandatory access controls, enforcing security policies at a fundamental level. Rampart applies a similar principle to the context of AI agents, where the “user” is a stochastic entity and the threat model includes not only malicious intent but also well-intentioned but incorrect reasoning by the AI.
How Rampart’s Policy Engine Works Under the Hood
Rampart policies are defined in declarative YAML files, which are evaluated against every tool call initiated by an AI agent. The policy syntax supports three distinct verdicts for each command: allowed (the command executes normally), denied (the command is blocked, and an error is returned to the agent), and logged (the command executes but is recorded for later review). Rules are designed to match command patterns, supporting flexible prefix matching and argument globbing for comprehensive coverage.
Here’s a practical example of a Rampart policy taken directly from the launch documentation:
```yaml
policies:
  - pattern: "rm -rf /"
    action: denied
  - pattern: "sudo *"
    action: logged
  - pattern: "curl *"
    action: logged
  - pattern: "wget *"
    action: logged
  - pattern: "git push *"
    action: allowed
  - pattern: "go build *"
    action: allowed
  - pattern: "cat ~/.ssh/id_rsa"
    action: denied
```
The order of evaluation is crucial in Rampart’s policy engine; rules are processed from top to bottom, with the first matching rule determining the action. This “first-match-wins” logic allows administrators to specify broad denial rules early in the policy, then carve out specific exceptions or allowances later. The engine is optimized for efficiency, compiling patterns into highly performant internal representations, which helps achieve the sub-20-microsecond evaluation target even with complex policies containing hundreds of rules.
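The first-match-wins logic is simple enough to sketch. The following Python is a hedged illustration of the evaluation loop, not Rampart's actual Go implementation; the rule list, the use of glob-style matching via `fnmatch`, and the default-deny fallback are all assumptions for demonstration.

```python
from fnmatch import fnmatch

# Illustrative rules, ordered as in a Rampart policy file: first match wins.
POLICIES = [
    ("rm -rf /", "denied"),
    ("sudo *", "logged"),
    ("git push *", "allowed"),
]

def evaluate(command: str, policies=POLICIES, default="denied") -> str:
    """Return the action of the first rule whose pattern matches the command."""
    for pattern, action in policies:
        # Glob-style matching approximates Rampart's prefix/argument globbing.
        if fnmatch(command, pattern):
            return action
    return default  # assumed fall-through to a safe default

print(evaluate("git push origin main"))  # allowed
print(evaluate("sudo apt upgrade"))      # logged
print(evaluate("rm -rf /"))              # denied
```

Because evaluation stops at the first hit, placing `rm -rf /` above a hypothetical broad `rm *` allowance is what makes the deny rule effective.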
To ensure accountability and provide tamper evidence, Rampart generates hash-chained audit trails. Each log entry includes a cryptographic hash of the previous entry, along with a timestamp, the policy version in effect, and the full context of the command. This structure means that any modification to a past log entry would invalidate the subsequent hashes, making tampering immediately detectable. Users can stream live audit events using rampart watch or generate detailed reports with rampart report. The audit log format is fully documented and easily parseable, facilitating integration with Security Information and Event Management (SIEM) systems.
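A hash-chained log is easy to illustrate. The sketch below mirrors the documented structure (sequence number, previous hash, verdict) but the serialization, genesis value, and function names are assumptions for illustration, not Rampart's wire format.

```python
import hashlib
import json

def append_entry(log, command, verdict):
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "sha256:genesis"  # assumed genesis marker
    body = {"sequence": len(log), "previous_hash": prev_hash,
            "command": command, "verdict": verdict}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = f"sha256:{digest}"
    log.append(body)
    return body

def verify_chain(log):
    """Recompute every hash; any edited entry breaks the chain from that point on."""
    for i, entry in enumerate(log):
        expected_prev = log[i - 1]["hash"] if i else "sha256:genesis"
        if entry["previous_hash"] != expected_prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != f"sha256:{digest}":
            return False
    return True

log = []
append_entry(log, "go build ./...", "allowed")
append_entry(log, "cat ~/.ssh/id_rsa", "denied")
print(verify_chain(log))        # True
log[0]["verdict"] = "allowed?"  # tamper with history
print(verify_chain(log))        # False
```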
Installation Paths: From One-Liner Setup to Custom Integration for OpenClaw
Rampart is designed for ease of use and minimal friction during installation and integration. The most straightforward installation paths for common AI agent frameworks involve simple, single-command setups:
To integrate Rampart with Claude Code:
```shell
rampart setup claude-code
```
For OpenClaw agents, the setup is equally simple:
```shell
rampart setup openclaw
```
Both of these commands install the necessary interception hooks at the shell and Model Context Protocol (MCP) layers, respectively. The OpenClaw integration specifically targets the tool execution path, which we previously detailed in our OpenClaw skills guide, effectively wrapping the framework’s native command dispatch mechanism to enforce policies.
For advanced users or more specialized use cases, Rampart offers three additional integration options. The rampart wrap command provides a generic shell wrapper, allowing it to secure virtually any agent or script:
```shell
rampart wrap --policy strict.yaml -- ./my-agent
```
The rampart mcp mode enables Rampart to function as a protocol proxy. It positions itself between any MCP client and server, filtering and enforcing policies on all traffic that conforms to the Model Context Protocol. This mode extends Rampart’s utility beyond just OpenClaw and Claude Code, making it compatible with custom implementations that utilize the MCP standard.
Finally, rampart serve exposes an HTTP API, providing a flexible way for platforms that require external policy consultation to integrate with Rampart:
```shell
rampart serve --port 8080 --policy api-policy.yaml
```
In this mode, agent platforms can POST proposed commands to the Rampart server and receive an immediate allow, deny, or log response. This capability facilitates centralized policy management across distributed fleets of AI agents, without requiring any code modifications to the individual agents themselves, enhancing security posture across diverse deployments.
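To make the consult-over-HTTP flow concrete, here is a self-contained Python sketch: a stub policy server standing in for rampart serve, plus a client that POSTs a proposed command. The /evaluate path, JSON field names, and allow/deny strings are invented for illustration; consult Rampart's API documentation for the real contract.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy deny-list standing in for a real Rampart policy.
DENIED_PREFIXES = ("rm -rf /", "cat ~/.ssh/")

class StubPolicyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        command = json.loads(self.rfile.read(length))["command"]
        verdict = "deny" if command.startswith(DENIED_PREFIXES) else "allow"
        body = json.dumps({"verdict": verdict}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port; the daemon thread exits with the process.
server = HTTPServer(("127.0.0.1", 0), StubPolicyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def consult(command: str) -> str:
    """POST a proposed command and return the server's verdict."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/evaluate",
        data=json.dumps({"command": command}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["verdict"]

print(consult("go build ./..."))     # allow
print(consult("cat ~/.ssh/id_rsa"))  # deny
```

The agent platform would call the equivalent of `consult()` before dispatching each tool invocation, refusing to execute anything that comes back `deny`.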
Performance Characteristics: Why 20 Microseconds Matters for AI Agent Security
The adoption of any security solution often hinges on its performance overhead: a layer that noticeably degrades the system it protects loses practical utility. Rampart’s claim of under 20 microseconds per evaluation therefore deserves scrutiny. That speed translates to roughly 50,000 security decisions per second on a single core. For perspective, typical OpenClaw tool calls involve network round trips measured in milliseconds, so Rampart’s contribution to overall latency is negligible, and security does not become a bottleneck.
The tool’s design, specifically its implementation as a zero-dependency Go binary, is a key factor in achieving this high performance. Unlike solutions built on interpreted languages or virtual machines, there is no Python interpreter startup cost, no Java Virtual Machine (JVM) warm-up time, and no need for container image pulls during execution. The Rampart codebase, comprising approximately 14,000 lines, includes its own optimized YAML parser and pattern matcher, which avoids reliance on potentially slower external packages or libraries and mitigates supply chain security risks. This lean architecture makes Rampart particularly suitable for resource-constrained environments, such as air-gapped deployments or home labs, where every megabyte of RAM and every CPU cycle counts.
Furthermore, Rampart maintains a minimal memory footprint through its streaming evaluation approach. Policies are compiled into efficient finite state machines rather than being interpreted as complex data structures, optimizing memory usage. Audit logs are managed using buffered I/O with configurable flush intervals, ensuring that logging operations do not disproportionately consume system resources. This efficiency means that Rampart can run effectively on low-power hardware, such as Raspberry Pi-class devices, alongside your AI agents without introducing significant resource contention or performance degradation.
Threat Models: What Rampart Blocks and What It Cannot Prevent
Understanding the scope of Rampart’s protection is essential for effective deployment and for designing a robust defense-in-depth strategy. Rampart excels at preventing specific, dangerous operations that commonly arise from AI agent misbehavior. These include destructive filesystem commands (e.g., deleting critical system files), privilege escalation attempts (e.g., trying to modify /etc/sudoers), unauthorized network egress (e.g., connecting to suspicious external servers), and credential exfiltration (e.g., reading SSH private keys). These types of actions directly map to common failure modes of Large Language Models (LLMs) where agents might misinterpret instructions, hallucinate tool names, or attempt unintended operations.
However, it is equally important to recognize what Rampart is not designed to prevent. Rampart does not prevent:

- Logic bugs in allowed commands: If an agent is allowed to use `git push` but pushes to the wrong repository, Rampart will not block this, as the command itself is valid according to policy.
- Prompt injection attacks: Rampart operates at the command execution layer. It cannot prevent malicious prompts from manipulating an agent’s reasoning upstream, before a command is even formulated.
- Vulnerabilities in allowed tools: If an agent is permitted to use `curl`, and `curl` itself has an undiscovered vulnerability, Rampart will not protect against exploitation of that vulnerability.
- Side-channel data exfiltration: Subtle methods of data leakage, such as varying command execution times or error messages to encode information, are beyond Rampart’s scope.
Therefore, Rampart should be considered a crucial component within a broader defense-in-depth architecture, not a standalone solution. It must be combined with other security measures such as network segmentation, the use of least-privilege service accounts for agent execution, and robust input validation on agent prompts. Our previous discussion on Nucleus MCP coverage highlighted memory security; Rampart focuses on execution security. Both aspects are vital for comprehensive AI agent protection.
Comparing Rampart to Alternative Security Approaches for AI Agents
When considering security for AI agents, several approaches exist, each with its own trade-offs regarding latency, coverage, maintenance, and open-source availability. A comparative analysis helps position Rampart within the broader security landscape.
| Approach | Latency | Coverage | Maintenance Burden | Open Source |
|---|---|---|---|---|
| Rampart | <20µs | Command/tool level | Policy YAML updates | Yes (Apache 2.0) |
| Container Sandboxing | Variable (ms) | Process/network level | Image rebuilds, seccomp profiles, AppArmor | Partial (Kernel features) |
| Cloud IAM + SCPs | 10-100ms | Cloud API level | Terraform/CloudFormation, policy definitions | No (Proprietary cloud services) |
| Manual Approval Queues | Minutes-hours | Human discretion | Queue monitoring, human review processes | N/A |
| Managed Platform Guards | Unknown | Platform-defined | Vendor-managed, configuration via platform UI | No (Proprietary platform features) |
Container sandboxes, such as Docker with seccomp profiles or Kubernetes with network policies, provide a stronger isolation boundary at the process and network level. However, their overhead can be higher, and they don’t solve the problem of a legitimate container executing an illegitimate command. For instance, a container might be allowed to run bash, but bash executing rm -rf / is the specific problem Rampart addresses. Cloud Identity and Access Management (IAM) and Service Control Policies (SCPs) operate at the cloud API granularity, which is often too coarse for fine-grained, intra-container actions performed by AI agents.
Manual approval queues, while offering the highest level of human oversight, fundamentally sacrifice the autonomy that is the primary benefit of unsupervised AI agents. This approach defeats the purpose of deploying AI agents for continuous, autonomous operations. Managed platforms, as discussed in our hosting comparison, may offer similar security controls, but these are often opaque and proprietary. Rampart provides DIY operators with equivalent capabilities, but with full transparency, open-source code, and portability. Your Rampart policies can be migrated across self-hosted, cloud-based, and edge deployments without modification, ensuring consistent security posture regardless of infrastructure.
Policy Design Patterns for OpenClaw Workflows
Designing effective Rampart policies requires a deep understanding of your AI agents’ operational patterns and the specific tasks they perform. A recommended starting point is to deploy Rampart in observation mode to baseline normal agent behavior. This can be achieved by wrapping your agent with a policy that logs all commands:
```shell
rampart wrap --policy log-all.yaml -- openclaw run --observe
```
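The contents of log-all.yaml are not shown in the launch documentation; based on the policy syntax covered earlier, a plausible catch-all observation policy would be:

```yaml
# Hypothetical log-all.yaml: match everything, block nothing,
# record everything for later review.
policies:
  - pattern: "*"
    action: logged
```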
After reviewing the generated logs, you can identify typical OpenClaw patterns, which commonly include:
- File reads: Agents frequently read configuration files, process templates, or retrieve documentation.
- File writes: This includes generating code, updating configuration files, or creating build artifacts.
- Shell commands: Agents often invoke build tools (e.g., `make`, `npm build`), package managers (e.g., `apt`, `pip`), or version control operations (e.g., `git clone`, `git commit`).
- Network activity: API calls to configured external endpoints or requests for model inference are common.
When structuring your policies, a defensive approach is recommended: begin with broad denials for inherently dangerous operations, then progressively carve out specific allowances for known, safe workflows.
```yaml
# Base denials first for maximum safety
policies:
  - pattern: "rm -rf *"
    action: denied
  - pattern: "* > /etc/*"   # prevent writing to critical system directories
    action: denied
  # Then specific allowances for known and trusted workflows
  - pattern: "npm install *"
    action: allowed
    condition: cwd == "/app/project-*"   # only within specific project directories
  # Log suspicious but not immediately critical commands for review
  - pattern: "curl *"
    action: logged
```
Rampart’s policy engine also supports conditions, allowing rules to be activated or modified based on contextual information such as environment variables, the agent’s current working directory, or even user context. Utilizing these conditions enables highly granular control, constraining allowed operations to their expected and safe contexts, thereby reducing the attack surface.
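Condition handling can be sketched as an extra predicate layered on top of pattern matching. This Python fragment is illustrative only; the condition representation and the set of supported keys are assumptions, with only the cwd example drawn from the policy above.

```python
from fnmatch import fnmatch

def rule_matches(rule, command, context):
    """A rule fires only if the pattern matches AND its condition (if any) holds."""
    if not fnmatch(command, rule["pattern"]):
        return False
    cond = rule.get("condition")
    if cond is None:
        return True
    # Hypothetical condition form: constrain the agent's working directory.
    if cond["key"] == "cwd":
        return fnmatch(context.get("cwd", ""), cond["glob"])
    return False  # unknown condition keys fail closed

rule = {"pattern": "npm install *", "action": "allowed",
        "condition": {"key": "cwd", "glob": "/app/project-*"}}

print(rule_matches(rule, "npm install lodash", {"cwd": "/app/project-web"}))  # True
print(rule_matches(rule, "npm install lodash", {"cwd": "/tmp"}))              # False
```

Failing closed on unrecognized condition keys is a deliberate choice in this sketch: a typo in a policy should narrow what is allowed, never widen it.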
Integration with Existing OpenClaw Tool Ecosystem
Rampart is designed to integrate seamlessly with the broader OpenClaw tool ecosystem, complementing existing solutions we have previously highlighted. For example, LobsterTools provides a catalog of community-contributed tools; Rampart policies can be crafted to reference specific tool signatures, enabling fine-grained control over which LobsterTools an agent is permitted to use and how. Similarly, Molinar, an open-source alternative for agent orchestration, can share Rampart instances, particularly through the flexible HTTP API mode, ensuring consistent policy enforcement across different orchestrators.
The emergence of marketplaces like Moltedin’s for OpenClaw sub-agents introduces additional security complexities, as sub-agents from third parties might carry unknown risks. Rampart offers a crucial layer of defense here by enforcing parent-agent policies, regardless of the sub-agent’s origin. This means that even if a marketplace agent is compromised, its ability to execute commands is constrained by the overarching policies defined in Rampart, preventing it from escalating privileges or performing unauthorized actions beyond the defined command patterns.
The MCP proxy mode of Rampart is particularly beneficial in heterogeneous environments. If you are operating a mix of local LLM solutions like McClaw alongside cloud-based OpenClaw instances, rampart mcp can provide unified policy enforcement. By acting as a central point of control for all Model Context Protocol traffic, it significantly reduces configuration drift and ensures a consistent security posture across diverse agent deployments.
Audit Trail Analysis and Incident Response with Rampart
Rampart’s hash-chained audit logs are a powerful feature for forensic reconstruction and incident response. The design of the audit trail, where each log entry cryptographically links to the previous one, creates an immutable record. This chain structure is a critical security mechanism: any attempt to modify a past log entry will immediately invalidate the hash of the subsequent entry, making tampering easily detectable and ensuring the integrity of your security logs. For effective incident response, a systematic approach is recommended:
- Preserve Log Integrity: Immediately copy the relevant audit logs to write-once, read-many (WORM) storage or a secure, immutable archive to prevent any further modification.
- Identify Active Policy Version: Determine which version of the Rampart policy was active at the time of the incident. This context is vital for understanding why certain actions were allowed or denied.
- Trace Agent Decision Tree: Reconstruct the sequence of commands executed by the agent, correlating them with the audit entries. This helps in understanding the agent’s reasoning and the path that led to the incident.
- Correlate with External Timestamps: Cross-reference Rampart’s audit logs with timestamps from other systems, such as LLM API logs, container metrics, or network traffic logs, to build a comprehensive timeline of events.
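The correlation step above amounts to merging event streams on their timestamps. A minimal sketch, assuming both sources carry ISO-8601 UTC timestamps (the field names are invented for illustration):

```python
# Two event sources: Rampart audit entries and, say, LLM API logs.
rampart_events = [
    {"timestamp": "2026-02-12T09:23:17Z", "source": "rampart",
     "event": "rm -rf /tmp/old-builds allowed"},
]
external_events = [
    {"timestamp": "2026-02-12T09:23:15Z", "source": "llm-api",
     "event": "tool_call issued"},
]

# ISO-8601 strings in the same zone sort lexicographically,
# so a plain sort yields a unified chronological timeline.
timeline = sorted(rampart_events + external_events, key=lambda e: e["timestamp"])
print([e["source"] for e in timeline])  # ['llm-api', 'rampart']
```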
The HTML report generator included with Rampart provides human-readable timelines, simplifying the review process. For automated analysis and integration with security tools, the JSON audit format is fully documented and parseable.
```json
{
  "timestamp": "2026-02-12T09:23:17.004Z",
  "sequence": 15234,
  "previous_hash": "sha256:a1b2c3...",
  "command": {
    "tool": "bash",
    "args": ["rm", "-rf", "/tmp/old-builds"],
    "working_dir": "/app"
  },
  "policy_applied": "production-safety.yaml",
  "rule_matched": 7,
  "verdict": "allowed",
  "hash": "sha256:d4e5f6..."
}
```
The inclusion of sequence numbers helps detect any missing entries in the log stream, further enhancing integrity. Crucially, the capture of the working directory and environment variables provides vital execution context that is often absent in standard shell history or basic system logs, offering a more complete picture for forensic investigations.
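Because sequence numbers are monotonic, spotting a missing entry reduces to a set difference. A hedged sketch, with the entry shape assumed from the example format above:

```python
def find_gaps(entries):
    """Return sequence numbers missing from an ordered audit stream."""
    seqs = [e["sequence"] for e in entries]
    return sorted(set(range(seqs[0], seqs[-1] + 1)) - set(seqs))

entries = [{"sequence": n} for n in (15232, 15233, 15235, 15236)]
print(find_gaps(entries))  # [15234]
```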
Operational Considerations for 24/7 Deployments of OpenClaw Agents
The original motivation behind Rampart’s development—continuous operation of AI agents in a home lab environment—highlights several critical operational challenges that need to be addressed for 24/7 deployments. One significant concern is how to manage policy updates without interrupting the agents’ ongoing tasks. Rampart addresses this through an atomic policy reload mechanism:
```shell
rampart reload --policy v2.yaml
```
When a reload command is issued, any command evaluations currently in progress will complete using the old policy. However, all subsequent commands will immediately be evaluated against the newly loaded policy. This approach ensures a seamless transition without requiring agents to be restarted, minimizing downtime and operational disruption.
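The semantics described here, where in-flight evaluations finish on the old policy while new commands see the new one, can be modeled with a snapshot-and-swap pattern. A Python sketch under that assumption (class and method names are invented, not Rampart's API):

```python
import threading

class PolicyStore:
    def __init__(self, policy):
        self._policy = policy
        self._lock = threading.Lock()

    def snapshot(self):
        # An in-flight evaluation holds this reference, so a concurrent
        # reload never changes the rules it is matching against.
        return self._policy

    def reload(self, new_policy):
        with self._lock:  # writers serialize; readers never block
            self._policy = new_policy

store = PolicyStore({"rm -rf *": "denied"})
active = store.snapshot()  # an evaluation in progress captures v1
store.reload({"rm -rf *": "denied", "sudo *": "logged"})
print(active)            # still the old policy for the in-flight evaluation
print(store.snapshot())  # new policy for all subsequent commands
```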
Disk space management for long-running audit trails is another important consideration. Rampart provides configurable options for audit log rotation to prevent unbounded disk usage:
```yaml
audit:
  max_size_gb: 10
  max_age_days: 30
  compress_after_days: 7
```
These settings allow administrators to define limits on the total size of the audit logs, how long uncompressed logs are retained, and when logs should be compressed to save space. Even when logs are rotated or compressed, the hash chain integrity is maintained, ensuring that the audit trail remains verifiable when stored centrally or shipped to a remote logging solution.
For effective monitoring, it is essential to track Rampart’s health and performance. This includes observing evaluation latency percentiles. Sudden spikes in latency can indicate underlying issues such as overly complex policies, resource contention on the host system, or other performance bottlenecks. While not yet fully implemented at launch, Prometheus-compatible metrics export is planned, which will allow for integration with existing monitoring stacks and provide real-time insights into Rampart’s operational status.
Community Feedback and Roadmap Signals for Rampart
The launch of Rampart on Hacker News generated significant community engagement, with users providing valuable feedback and expressing interest in specific features and integrations. Key requests that emerged from early discussions included the need for comprehensive default policies tailored to common AI agent use cases, as well as prioritization of various integration points.
Among the most requested features were Kubernetes-native admission controller integration, which would allow Rampart policies to enforce security at the pod creation level within Kubernetes clusters. There was also strong interest in Open Policy Agent (OPA) and Rego policy language compatibility, which would enable users to leverage a more expressive and widely adopted policy-as-code framework. Additionally, a Terraform provider was requested to facilitate infrastructure-as-code workflows for deploying and managing Rampart policies alongside other cloud resources.
The maintainer of Rampart has indicated an openness to implementing webhook notifications for blocked actions. This feature would enable real-time alerting to systems like Slack or PagerDuty without the need for constant polling of log streams, significantly improving incident response times. Another common requirement that surfaced was the ability to compose multiple policies, allowing for hierarchical policy structures (e.g., a base organizational policy, overridden by team-specific policies, further refined by project-specific rules).
Notably, no commercial roadmap was disclosed at the time of launch. The choice of the Apache 2.0 license and the explicit emphasis on “zero runtime dependencies” strongly suggest a commitment to sustained independence from platform vendors. This aligns well with OpenClaw’s own community-driven governance model, fostering an environment of open collaboration and avoiding vendor lock-in for critical security components.
Implications for Managed OpenClaw Providers
The introduction of Rampart presents strategic considerations for managed OpenClaw providers, such as those analyzed in our managed hosting comparison. These providers face a choice: they can transparently integrate Rampart into their offerings, develop proprietary alternatives, or risk falling behind by ignoring this crucial security layer. For platforms that pride themselves on rapid provisioning, like the 60-second provisioning platforms, Rampart offers a significant advantage. It allows them to provide robust command-level security as a configurable option, rather than requiring substantial infrastructure investment or complex custom development.
The landscape of differentiation among providers may shift. Instead of competing solely on core infrastructure or raw agent execution capabilities, providers might increasingly differentiate themselves through the user experience of policy management. This could include offering intuitive visual editors for policies, providing curated sets of recommended security templates for common use cases, and developing comprehensive compliance reporting features. This mirrors the evolution of cloud providers, who initially competed on Kubernetes offerings and later shifted to competing on the quality and features of their managed control planes.
Furthermore, the inclusion of Rampart or similar capabilities is likely to become a standard expectation in platform Service Level Agreements (SLAs) and compliance certifications, such as SOC 2. Auditors, already familiar with traditional change control and auditing practices, will recognize the equivalent rigor provided by Rampart’s immutable audit trails and the ability to enforce mandatory approval workflows for agent actions, enhancing the overall trustworthiness and security posture of managed OpenClaw environments.
What Builders Should Do This Week to Enhance OpenClaw Agent Security
If you are currently operating OpenClaw agents, particularly in unsupervised modes, deploying Rampart in observation mode should be a top priority this week. This initial step allows you to generate a baseline of your agents’ actual behavior without immediately enforcing policies. Reviewing these logs is crucial, as agents often interact with files or make network calls through unexpected paths that you might have overlooked. This observation period provides invaluable insights into your agents’ operational footprint.
Following the observation phase, thoroughly test the denial paths. Verify that commands blocked by Rampart produce useful and actionable error messages for your agents. Some Large Language Models (LLMs) are designed to retry actions aggressively when they encounter errors; in such cases, you may need to implement prompt engineering strategies to help your agents handle Rampart rejections gracefully, perhaps by prompting them to re-evaluate their approach or select an alternative tool.
It is also vital to document the rationale behind your policy decisions. Future maintainers, including yourself in six months, will need to understand why a specific command like curl is set to logged rather than denied, or which git remotes are explicitly trusted. Clear documentation ensures policy consistency and simplifies future audits and modifications.
Finally, consider contributing effective policies back to the community. The Rampart launch explicitly requested this input. Sharing standard patterns for common OpenClaw workflows can significantly reduce the security burden for everyone, fostering a more secure and collaborative ecosystem for AI agent development and deployment.
Long-Term Trajectory: Will This Become Standard Infrastructure for AI Agents?
Rampart’s foundational design, with its multiple integration modes (direct hooks, shell wrapper, MCP proxy, HTTP API), suggests a broader ambition than merely serving as a single-project utility. These diverse integration points anticipate a wide array of adoption paths, accommodating various AI agent frameworks and deployment models. The impressive performance characteristics, particularly the sub-20-microsecond evaluation time, indicate that Rampart is engineered to handle high-throughput scenarios, potentially scaling beyond the current operational demands of most AI agents.
We foresee three primary evolutionary pressures shaping the future of AI agent security tools like Rampart. First, policy languages will inevitably grow more expressive and sophisticated as AI agent use cases become increasingly complex and nuanced. Second, the demand for verification tools will emerge, enabling developers and security engineers to formally prove properties of their policies—for instance, demonstrating that a specific denial cannot be bypassed or that two different policies are functionally equivalent. Third, for high-assurance deployments, we anticipate the integration of hardware-backed attestation mechanisms to further enhance the trustworthiness and integrity of the policy enforcement layer.
The core insight underpinning Rampart—that AI agents require robust execution-time safety constraints, distinct from training-time alignment efforts—is a fundamental and enduring principle. While Rampart offers one effective approach to this problem, it is highly probable that other competitors and industry standards will emerge. Early adoption of tools like Rampart allows organizations to build valuable operational experience, which will remain relevant regardless of which specific solutions ultimately gain widespread adoption.
Connection to Broader AI Safety Discussions
Rampart occupies a critical niche at the concrete, operational end of the AI safety spectrum. Its focus is on preventing specific, dangerous actions within specific system environments, rather than addressing abstract concepts like existential risk, ultimate model capabilities, or broad societal impacts of AI. This narrowly defined scope is a deliberate design choice and a strength, enabling practical, deployable security solutions while larger, more philosophical debates continue to unfold.
The tool exemplifies the principle of “defense in depth” as applied to AI systems. While efforts in model training aim to make AI agents helpful and harmless, and techniques like Reinforcement Learning from Human Feedback (RLHF) work to reduce harmful outputs, Rampart adds a final, crucial layer of protection against residual failures, misinterpretations, or unintended consequences during execution. No single layer of security is sufficient on its own; a multi-layered approach is essential for comprehensive protection.
Specifically for OpenClaw, this security primitive significantly complements the framework’s inherent extensibility. Our analysis of OpenClaw tool registry fragmentation highlighted the trust challenges associated with community-contributed tools. Rampart provides a technical enforcement mechanism that can mitigate risks where social verification or trust alone might fall short, ensuring that even tools from less trusted sources operate within predefined safety boundaries.
Frequently Asked Questions
What is Rampart and how does it protect OpenClaw agents?
Rampart is an open-source security layer that intercepts every tool call from AI agents before execution. It evaluates commands against YAML policies that define allowed, denied, or logged operations. Blocked commands never run; the agent sees an error and continues. It supports OpenClaw, Claude Code, and any MCP-compatible agent through multiple integration modes including hooks, shell wrappers, and HTTP API.
How fast is Rampart’s policy evaluation?
Policy evaluation completes in under 20 microseconds per command. This overhead is negligible even for high-frequency agent operations. The tool is written in Go with zero runtime dependencies, making it suitable for resource-constrained environments like home labs and edge deployments where OpenClaw agents commonly run.
What deployment options does Rampart offer?
Rampart provides four integration modes: rampart setup openclaw and rampart setup claude-code for native hooks; rampart wrap as a shell wrapper for any agent; rampart mcp as an MCP protocol proxy; and rampart serve as an HTTP API for external platforms. All modes support Linux and macOS with identical policy semantics.
Can Rampart prevent all dangerous AI agent actions?
Rampart blocks defined threats based on explicit policy rules, but cannot prevent novel attack patterns outside policy scope. Best practice combines deny-lists for known dangers (rm -rf /, SSH key exfiltration) with allow-lists for approved workflows. Regular policy updates and audit log review remain essential for comprehensive protection.
How does Rampart compare to managed security solutions?
Unlike proprietary alternatives, Rampart is Apache 2.0 licensed with full source visibility and local operation. Managed platforms like ClawHosters may integrate similar controls, but Rampart gives DIY operators equivalent capabilities without vendor lock-in. The audit trail format is documented and portable across deployment scenarios.