OpenClaw vs. Vett: Securing the AI Agent Ecosystem

Compare OpenClaw's open-source AI agent framework with Vett's security registry. Learn how Vett mitigates supply chain risks in OpenClaw skill deployment.

OpenClaw and Vett solve fundamentally different problems in the AI agent stack, yet they intersect at a critical vulnerability: the skill supply chain. OpenClaw, created by Peter Steinberger in late 2025, provides an open-source framework for running autonomous AI agents locally on your hardware, connecting through messaging apps like WhatsApp and Signal to execute shell commands, manage files, and control browsers. It stores memories and capabilities as Markdown and YAML files, emphasizing local-first operation and model agnosticism. Vett, launched in response to OpenClaw agents flagging security gaps, functions as a cryptographic registry that scans, signs, and verifies AI agent skills before installation. While OpenClaw gives you the engine to run agents, Vett ensures the fuel you pour into that engine is not contaminated with exfiltration scripts or command-and-control droppers. This crucial distinction highlights the complementary nature of these two powerful tools in the evolving landscape of AI automation.

At-a-Glance: OpenClaw vs. Vett Feature Comparison

FeatureOpenClawVett
Primary FunctionAutonomous AI agent execution frameworkSecurity registry and skill verification
Deployment ModelLocal (Mac/Windows/Linux)Hybrid (cloud scanning, local verification)
Skill FormatMarkdown/YAML files, shell scriptsAny code scanned before packaging
Security ModelImplicit trust in skill sourcesExplicit verification with Sigstore signing
Threat DetectionRuntime sandboxing (limited)Static analysis + LLM behavioral check
Supply Chain ProtectionNone (pulls from GitHub HEAD)Immutable, signed artifacts with audit trails
Performance ImpactLocal execution overheadMilliseconds at install time, zero runtime overhead
IntegrationStandalone or with managed hostingCLI hook: npx vett add
Core PhilosophyLocal-first, model-agnostic executionTrust-but-verify, cryptographic integrity
Target UserDevelopers, power users, researchersSecurity teams, enterprises, cautious developers
Open Source StatusFully open sourceOpen source components, managed service

What Is OpenClaw and Why Did It Go Viral?

OpenClaw erupted onto GitHub in early 2026, accumulating over 200,000 stars and becoming the flagship project of the vibe coding movement. Developer Peter Steinberger built it as a reaction to cloud-only AI assistants that trap your data on someone else’s servers. OpenClaw runs entirely on your local machine, processing natural language commands through familiar messaging interfaces like Telegram, Discord, and iMessage rather than forcing you into yet another web dashboard. This local execution model provides unparalleled privacy and control over your data, a significant draw for many users.

The framework’s architecture centers on agentic execution. Unlike ChatGPT, which answers questions within a browser tab, OpenClaw actually performs tasks: it reads your calendar, sends emails, executes shell commands, and manipulates files directly on your filesystem. It maintains persistent memory through human-readable Markdown and YAML files stored in a local workspace, making backup and version control straightforward. Its model-agnostic design lets you connect to OpenAI’s GPT-4, Anthropic’s Claude, or run entirely offline through Ollama with local LLMs. This combination of local privacy, omnichannel access, and real-world action capability resonated with developers tired of SaaS lock-in and seeking more autonomous, powerful tools.

What Is Vett and Why Did It Launch?

Vett emerged from a specific vulnerability demonstration posted to Hacker News. The creator showed that AI agents including OpenClaw installations were pulling skills from GitHub repositories at HEAD, with no versioning, signing, or scanning. OpenClaw agents themselves had flagged the issue on Moltbook, noting that “skill.md is an unsigned binary.” The demonstration proved this was not theoretical: by adding a few lines of Python to an official skill, the researcher exfiltrated environment variables, shell history, and git configuration from both Claude Code and Codex. This incident highlighted a critical gap in the security posture of modern AI agent platforms.

The registry launched to address the 64,000 unverified skills sitting on Vercel’s skills.sh and similar repositories. Of approximately 5,000 skills scanned during Vett’s initial audit, 59 contained critical risks including base64-obfuscated droppers calling command-and-control servers disguised as Google or LinkedIn tools. Another 335 showed high-risk patterns like arbitrary shell execution and piped installers. Vett provides a deterministic static analysis layer followed by LLM behavioral verification, then cryptographically signs clean skills using Sigstore’s ECDSA P-256 standard with Rekor transparency logging. This creates an immutable, auditable trail missing from current agent workflows, offering a much-needed layer of trust and accountability.

The Supply Chain Attack Surface in Modern AI Agents

The current AI agent ecosystem operates on implicit trust. When you install a skill for OpenClaw or any compatible agent, you typically execute a command that pulls code directly from the latest commit on a GitHub repository. This HEAD-based deployment model means you receive whatever code existed at the exact moment of installation, with no guarantee that the repository has not been compromised, sold, or hijacked since you last checked it. This lack of versioning and cryptographic verification introduces significant risk, as a trusted source can become malicious without any warning or mechanism for detection.

The attack surface extends beyond obvious malware. Sixteen popular skills use curl | bash patterns, including “React Native Best Practices” with 5,400 installs, which pipes to a domain the maintainer does not control. If that domain expires or gets sold, those installations become delivery vehicles requiring no exploit. More sophisticated attacks use two-stage payloads that modify configuration files to disable network sandboxing, then frame malicious exfiltration as legitimate registry checks. Because agents like Claude Code often ask for user confirmation with reasons written by the skill itself, social engineering becomes trivial. You think you are approving a name check; you are actually approving data theft, demonstrating the subtle and deceptive nature of these modern threats.

How OpenClaw Currently Handles Skill Installation

OpenClaw manages capabilities through a skill system based on Markdown and YAML files stored in your local workspace. When you want to extend your agent’s abilities, you typically clone a repository or copy skill definitions into your OpenClaw configuration directory. The framework then parses these files to understand available tools, their parameters, and their execution contexts. This flexible system allows for rapid experimentation and customization, aligning with its “vibe coding” philosophy.

By default, OpenClaw trusts the skills you provide. It executes shell commands and Python scripts with the permissions of the user running the agent. There is no built-in cryptographic verification that the skill you downloaded matches the skill the author intended. The framework focuses on execution flexibility rather than supply chain integrity. This design prioritizes the “vibe coding” experience where you rapidly iterate and test capabilities, but it assumes you audit every line of code you install. In practice, most users do not read 500-line Python scripts before letting their agent execute them, creating a trust gap that sophisticated attackers exploit to inject malicious functionality.

Inside Vett’s Two-Layer Security Model

Vett approaches the skill verification problem with defense in depth. The first layer consists of a static analyzer running 40-plus detection rules against the actual code. This is not a simple regex scan; it uses the TypeScript compiler for JavaScript and TypeScript files, and Python’s AST module with regex fallback for Python scripts. The analyzer performs source-sink data flow tracking, identifying when sensitive data like environment variables flows to network requests. This deep code analysis helps uncover hidden malicious patterns.

The second layer activates for ambiguous cases where static analysis cannot definitively classify behavior. An LLM compares the observed capabilities against the skill’s declared purpose. A packaging tool calling an unrecognized endpoint receives different scrutiny than a deployment tool calling AWS. Skills passing both layers receive Sigstore signing with ECDSA P-256 keys and storage in a content-addressed immutable registry. This two-layer approach catches both obvious malware and subtle logic bombs that evade simpler scanning methods, providing a robust verification process.

Static Analysis: The First Line of Defense

Vett’s static analyzer operates deterministically, completing scans in milliseconds even for complex skill packages. The system builds an abstract syntax tree (AST) for each supported language, then traces execution paths from sensitive sources to dangerous sinks. For example, if a script reads your .env file and subsequently makes an HTTP request, Vett flags this as an exfiltration chain rather than treating them as separate benign actions. This intelligent tracing is crucial for identifying potential data breaches.

The analyzer checks dependencies against the Open Source Vulnerabilities (OSV) database, catching known security flaws in libraries your skills import. It detects cross-file import chains where a seemingly innocuous entry point loads malicious functionality from a hidden module. It also validates that documentation references match actual files in the package, catching typosquatting attempts where a skill claims to use config.yaml but actually executes config.yaml.exe. This deterministic layer filters out 59 critical and 335 high-risk patterns found in the wild before any LLM analysis begins, establishing a strong foundational security posture.

LLM-Based Behavioral Verification

Static analysis alone cannot resolve every ambiguous case. A skill might legitimately need to read files and make network requests for benign synchronization purposes. Vett’s second layer uses an LLM to perform semantic analysis of the code’s intent. The system prompts the model with the skill’s declared purpose from its documentation, then asks it to evaluate whether the actual implementation aligns with that purpose. This contextual understanding helps differentiate legitimate actions from malicious ones.

This layer catches sophisticated evasion techniques. In the proof-of-concept attack against Codex, the malicious skill used a two-stage payload that modified config.toml to enable sandbox network access, then framed the exfiltration as a registry name check. While static analysis might flag the network access, the LLM layer recognizes that a registry name check does not require transmitting shell history and git configuration. The LLM also evaluates the plausibility of user-facing prompts, identifying when a skill writes its own social engineering messages to trick you into approving malicious actions. This advanced behavioral analysis provides a crucial safeguard against intelligent threats.

Cryptographic Signing and Immutable Storage

Verification means nothing without enforcement. Vett signs approved skills using Sigstore, the same infrastructure securing open-source software supply chains for Kubernetes and npm. Each skill receives an ECDSA P-256 signature attesting that it passed both analysis layers. These signatures are recorded in the Rekor transparency log, creating a public, immutable audit trail. This transparency ensures that any verified skill’s integrity can be independently confirmed at any time.

Storage uses content-addressing, meaning the hash of the skill’s contents determines its retrieval key. If a maintainer updates their skill, the new version receives a new hash and requires re-verification. You cannot silently substitute a compromised version for a verified one because the content addresses would not match. This eliminates the HEAD-based deployment risk where a compromised GitHub account immediately poisons all future installations. Once you pin a Vett-verified skill, you know exactly what code executes on your machine, providing strong guarantees of integrity and authenticity.

Real Attack Scenarios: From Proof of Concept to Production

The Vett launch included a disturbing proof of concept that demonstrated how easily current agents trust malicious code. The researcher took an official skill that packages other skills, added Python code to extract environment variables and shell history, then installed it into Claude Code and Codex. Claude Code executed the script without flagging the outbound request. Codex initially caught the naive version, but after the researcher updated the payload to use configuration file manipulation and social engineering framing, even Codex asked for confirmation with a plausible reason written by the attacker. This illustrated the fragility of current trust models.

Beyond this demo, Vett’s scans found 59 critical risk skills in the wild, including base64-obfuscated droppers disguised as Excel or LinkedIn tools that call out to malicious IPs. Another 335 showed high-risk patterns like arbitrary shell execution. These are not theoretical vulnerabilities. They are active, installable skills that current agents will execute with full user permissions, demonstrating that the supply chain attack surface is already being exploited by malicious actors seeking to compromise user systems and data.

Performance Impact: Security Without Slowing Down

Security tools often impose unacceptable latency on development workflows. Vett avoids this by performing heavy analysis once at registration time, not during your daily usage. The static analyzer runs in milliseconds because it uses compiled AST traversal rather than interpreted regex matching. Deterministic checks against the OSV database and pattern matching happen locally in memory. This design ensures that the security checks are efficient and non-intrusive to the user’s workflow.

Once a skill is signed, verification becomes a simple cryptographic check. Your OpenClaw agent validates the ECDSA signature against the Rekor log in microseconds. There is no runtime sandboxing overhead, no network latency during agent execution, and no background scanning processes consuming CPU. You pay the analysis cost once when adding a skill to the registry, then enjoy zero-overhead security for every subsequent installation across your team. This efficient model makes security a seamless part of the development and deployment process.

Integration: Using Vett with OpenClaw Agents

Integrating Vett into your OpenClaw workflow requires minimal configuration. Instead of cloning skills directly from GitHub, you use the Vett CLI to request verification. The process starts with a simple command:

npx vett add github.com/owner/repo/skill-name

The Vett registry pulls the code from the specified repository, runs both analysis layers (static and LLM-based behavioral verification), and returns a signed, content-addressed version if it successfully passes all security checks. This ensures that only validated code proceeds to your local environment.

You then configure OpenClaw to pull skills from Vett’s immutable storage rather than directly from GitHub HEAD. This typically involves updating your skill configuration to reference the content hash provided by Vett rather than a branch name or direct URL. When OpenClaw initializes, it performs a crucial step: it verifies the cryptographic signature against the Rekor transparency log before loading the skill into its execution environment. If a signature fails verification or the hash does not match the transparency log entry, OpenClaw refuses to load the skill, preventing compromised or tampered code from reaching your local filesystem and executing potentially harmful commands.

Developer Workflow Comparison

Working with OpenClaw alone follows a rapid iteration cycle. You find a skill on GitHub, copy it into your workspace, and test it immediately. This “vibe coding” approach prioritizes speed over safety, assuming you review every line of code you import. This workflow is highly efficient for personal projects or trusted, internal skills where the developer has full visibility and control over the code. However, the workflow breaks down when you install third-party skills from unknown sources or when your team grows beyond a single developer who can personally audit every dependency, introducing significant security blind spots.

Adding Vett introduces a verification gate that slows the initial installation slightly but prevents catastrophic security failures. You submit the skill, wait milliseconds for static analysis and potentially longer for LLM review if the code is ambiguous, then receive a signed artifact. This “trust but verify” workflow aligns with mature software supply chain practices, where security is integrated earlier in the development lifecycle. It shifts security left, catching exfiltration scripts before they reach your machine rather than hoping your runtime sandbox catches them during execution, thereby significantly reducing the risk of compromise.

Choose OpenClaw Alone When…

You should consider running OpenClaw without Vett when you operate in completely air-gapped environments where no third-party skills ever enter your network. If you write every skill yourself, meticulously audit every line of code you execute, and never import community tools or external dependencies, you might not require a verification registry. Solo developers prototyping on isolated machines with no sensitive data can accept the inherent risks for the sake of development velocity.

Similarly, if you maintain strict network-level egress filtering that prevents any unexpected outbound connections, you might mitigate the exfiltration risk without software signing. This network-level control can act as a partial safeguard, though it doesn’t address local data manipulation. OpenClaw alone also suits environments where cryptographic verification infrastructure is unavailable or prohibited due to specific regulatory or operational constraints. Just understand that by choosing this path, you are implicitly trusting every repository you clone and every update you pull, placing the full burden of security assurance on your own manual review processes.

Choose OpenClaw + Vett When…

You need OpenClaw plus Vett the moment you install a skill you did not personally audit or create. This includes production deployments, team environments where multiple developers share skills, and any system handling sensitive data like API keys, proprietary code, or personal identifiable information (PII). If your OpenClaw agent has access to your email, calendar, production servers, or financial data, you absolutely cannot afford to run unsigned, unverified code. The potential for data breaches or system compromise is too high to ignore.

Vett becomes essential when you use skills from the long tail of community contributions. The 64,000 skills on public repositories include hundreds of high-risk packages, and you cannot distinguish the malicious ones from the helpful ones by star count alone. Relying solely on popularity is a dangerous security practice. Furthermore, compliance requirements often mandate this combination. If you need audit trails showing exactly what code executed on your systems and cryptographic proof that it was not tampered with, Vett’s Sigstore integration provides the necessary attestations to meet stringent regulatory and security standards, ensuring accountability and integrity throughout your AI agent operations.

Frequently Asked Questions

Can Vett work with AI agent frameworks other than OpenClaw?

Yes. Vett operates as a framework-agnostic security layer that validates skills before they reach any agent runtime. While OpenClaw agents specifically flagged the “unsigned binary” problem that motivated Vett’s creation, the registry works with Cursor, Claude Code, Windsurf, and any system that installs skills from GitHub. The CLI uses a standard format: npx vett add github.com/owner/repo/skill-name, making it compatible with any agent that can hook into pre-installation verification workflows. This broad compatibility ensures that Vett can serve as a universal security solution for the diverse AI agent ecosystem.

How does Vett’s static analysis differ from traditional antivirus scanning?

Vett uses AST-based capability analysis rather than signature matching. Traditional antivirus looks for known malware signatures, while Vett analyzes the abstract syntax tree of TypeScript, Python, and JavaScript code to detect data flow from sensitive sources like .env files to network sinks. It tracks cross-file import chains and validates behavior against declared purpose. This analytical depth catches novel attacks like base64-obfuscated droppers and curl | bash patterns that signature-based systems miss, running deterministic checks in milliseconds. This proactive, behavior-centric approach offers a superior defense against zero-day threats and polymorphic malware.

What happens if a skill passes Vett’s scan but behaves maliciously at runtime?

Vett focuses on supply chain verification, not runtime sandboxing. If a skill uses environmental triggers or time-delayed execution to bypass static analysis, runtime protection requires additional layers. However, Vett’s Sigstore signing creates an immutable audit trail. If malicious behavior is detected post-installation, you can trace the exact signed artifact, revoke trust for that specific hash, and alert the registry. This accountability mechanism differs from unsigned GitHub HEAD installs where attribution and rollback are impossible, providing a critical forensic capability even if a sophisticated attack evades initial static checks.

Does using Vett break OpenClaw’s “local first” philosophy?

No. Vett performs scanning remotely but stores verification results locally. You can run Vett’s analysis on their infrastructure then cache signed skill manifests on your machine. The verification keys and transparency logs are public and auditable. OpenClaw retains full local execution of skills, your data never leaves your hardware during agent operation, and you can air-gap verified skills after scanning. Vett adds a verification step without compromising the local execution model that makes OpenClaw attractive, ensuring privacy and control remain paramount.

How do I migrate existing OpenClaw skills to use Vett verification?

Migration requires submitting your current skills to the Vett registry for scanning. Run npx vett add with your skill’s GitHub repository path. The static analyzer will process your Markdown and YAML skill definitions, Python tooling scripts, and dependencies. If it passes, you receive a signed, content-addressed version you can reference in your OpenClaw configuration. For skills with high-risk patterns like arbitrary shell execution, you will need to refactor or sandbox those capabilities before receiving a signature, ensuring that all deployed skills meet a baseline security standard.

Conclusion

Compare OpenClaw's open-source AI agent framework with Vett's security registry. Learn how Vett mitigates supply chain risks in OpenClaw skill deployment.