OpenClaw is an open-source AI agent framework created by Peter Steinberger that transforms local machines into autonomous value-generating systems. Unlike cloud-dependent alternatives, OpenClaw runs entirely on your hardware, orchestrating multiple AI agents that schedule tasks, browse the web via headless Chrome, execute API trades on blockchain exchanges, and persist memory across reboots without external dependencies. By following this guide, you will deploy a production-ready 24/7 agent swarm on a Mac Mini capable of autonomous operation, multi-agent coordination, and real-world task execution such as market making and content generation. This setup mirrors deployments reported to generate substantial daily revenue through automated workflows.
What You Will Build
You will build a fully autonomous AI agent organization that runs 24/7 on a local Mac Mini without cloud dependencies. By the end of this guide, your OpenClaw deployment will orchestrate five specialized agents working in concert: a research agent that scrapes live data via headless Chrome, a trading agent executing signed API calls on Polymarket’s CLOB for Polygon-based markets, a content agent generating daily reports from aggregated data, a monitoring agent tracking system health and resource usage, and a master orchestrator coordinating the swarm via structured MCP protocols. The system persists state across reboots using local SQLite and vector memory stores, schedules tasks via cron-like expressions with retry logic, and handles sub-agent spawning for parallel workloads. You will configure custom tool registries in YAML, implement security hardening with eBPF policies, and deploy a production-grade setup capable of generating autonomous value. Similar deployments have reportedly pulled around $1k daily, though such results depend entirely on having a genuine market edge.
Prerequisites for OpenClaw Deployment
You need an Apple Silicon Mac Mini (M4 base model with 16GB Unified Memory minimum) or an ARM64 Linux equivalent with 32GB RAM for stable 24/7 operation. Install Python 3.11 or newer, Node.js 20 LTS, Git, and Xcode Command Line Tools on macOS. Clone the OpenClaw repository to /opt/openclaw for production paths. Obtain API keys from OpenRouter or Anthropic for LLM inference. If implementing trading features, prepare a Polygon wallet with MATIC for gas fees and USDC for positions. Install Playwright dependencies for browser automation: npx playwright install chromium. Docker 24.0+ is required for sandboxed tool execution. Configure a static local IP for your Mac Mini and enable automatic login in System Settings. For monitoring, allocate 50GB disk space for logs and memory databases. A UPS is recommended for power stability. Finally, install pm2 globally via npm for process management: npm install -g pm2.
To ensure a smooth installation, verify your system meets all these requirements before proceeding. For instance, check your Python version with python3 --version and Node.js with node -v. If you’re on macOS, xcode-select --install will install the command line tools. Docker Desktop for Mac or Linux is sufficient for containerization. Setting a static IP address for your Mac Mini prevents network interruptions and ensures consistent access to your agents.
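The version checks above can be scripted. Below is a minimal preflight sketch; the `ver_ge` helper is our own (not an OpenClaw command), it compares major.minor only, and the version floors come from the prerequisites listed above:

```shell
#!/bin/sh
# Preflight version check. ver_ge is a local helper, not part of OpenClaw;
# it compares major.minor components only, which is enough for these floors.
ver_ge() {  # ver_ge ACTUAL MINIMUM -> exit 0 if ACTUAL >= MINIMUM
  a_maj=${1%%.*}; a_min=${1#*.}; a_min=${a_min%%.*}
  b_maj=${2%%.*}; b_min=${2#*.}; b_min=${b_min%%.*}
  [ "$a_maj" -gt "$b_maj" ] || { [ "$a_maj" -eq "$b_maj" ] && [ "$a_min" -ge "$b_min" ]; }
}

py="$(python3 --version 2>/dev/null | awk '{print $2}')"
node="$(node -v 2>/dev/null | sed 's/^v//')"

ver_ge "${py:-0.0}" 3.11 && echo "python3 $py OK" || echo "need python3 >= 3.11 (found ${py:-none})"
ver_ge "${node:-0.0}" 20.0 && echo "node $node OK" || echo "need node >= 20 (found ${node:-none})"
```

Save it as preflight.sh and run it with sh preflight.sh before continuing to Step 1.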
Step 1: Installing OpenClaw Core and CLI
Clone the repository and install the framework. Run git clone https://github.com/openclaw/core.git /opt/openclaw, then change into /opt/openclaw and run `pip install -e .` (the trailing dot, meaning the current directory, is part of the command). Verify installation with openclaw --version; expect 2.4.1 or higher. Initialize the configuration directory: openclaw init --path ./config. This generates agents.yaml, tools.yaml, and memory.yaml templates in the config folder. Install the CLI extensions for monitoring: openclaw-cli install dashboard. Set permissions: chmod 700 ./config to protect API keys and chown -R $USER:$USER /opt/openclaw. Test the base installation by spawning a debug agent: openclaw agent spawn --config ./config/agents.yaml --mode debug. Check logs in ./logs/debug-agent.log for successful LLM initialization. If you encounter permission errors on macOS, ensure your user owns /opt/openclaw and that the Python environment has xattr support for extended file attributes. Disable Gatekeeper warnings for the binary if needed.
The -e flag in pip install -e . installs the package in editable mode, which is beneficial for development as changes to the source code are immediately reflected without reinstalling. The openclaw init command is crucial as it sets up the foundational configuration files that you will modify in subsequent steps. The dashboard CLI extension provides a web-based interface for monitoring agent activity, which is indispensable for production deployments.
Step 2: Configuring Your First OpenClaw Agent
Edit config/agents.yaml to define your primary agent characteristics. Set the model endpoint to claude-3-5-sonnet-20241022 via OpenRouter or a local Ollama instance for offline operation. Configure the system prompt to define personality constraints and available capabilities explicitly. Example configuration:
```yaml
agent_id: primary_orchestrator
model:
  provider: anthropic
  name: claude-3-5-sonnet-20241022
  temperature: 0.2
  max_tokens: 4096
system_prompt: |
  You are an autonomous orchestrator. You have access to tools:
  - web_search: Query DuckDuckGo for information
  - execute_shell: Run bash commands in sandboxed Docker
  - trade_execute: Submit orders to Polymarket API
  Always verify data before trading. Never exceed risk limits.
```
Bind tools by referencing entries in tools.yaml. Set autostart: true and schedule: "*/5 * * * *" for 5-minute execution loops. Save and validate the configuration with openclaw config validate before starting the agent.
The system_prompt is a critical element, defining the agent’s core identity, purpose, and constraints. A well-crafted system prompt can prevent unexpected behaviors and ensure the agent stays within its intended operational boundaries. The temperature setting controls the randomness of the LLM’s output; a lower value like 0.2 promotes more deterministic and focused responses, which is ideal for autonomous agents performing critical tasks. The max_tokens parameter prevents excessively long and potentially costly responses.
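To make the validation step concrete, here is a minimal sketch of the kind of checks openclaw config validate presumably performs. The field names mirror the example configuration above, but the rules themselves are our assumptions, not the framework’s actual schema:

```python
# Sketch of agent-config validation; the required fields and value ranges
# below are assumptions based on the example config, not OpenClaw's schema.
def validate_agent(cfg: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    for field in ("agent_id", "model", "system_prompt"):
        if field not in cfg:
            errors.append(f"missing required field: {field}")
    model = cfg.get("model", {})
    if not 0.0 <= model.get("temperature", 0.0) <= 1.0:
        errors.append("model.temperature must be between 0.0 and 1.0")
    if model.get("max_tokens", 1) <= 0:
        errors.append("model.max_tokens must be positive")
    return errors

cfg = {
    "agent_id": "primary_orchestrator",
    "model": {"provider": "anthropic", "name": "claude-3-5-sonnet-20241022",
              "temperature": 0.2, "max_tokens": 4096},
    "system_prompt": "You are an autonomous orchestrator.",
}
print(validate_agent(cfg))  # []
```

Running a check like this in your own tooling before restarting an agent catches malformed configs before they reach the scheduler.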
Step 3: Building the OpenClaw Tool Registry
Define available capabilities in config/tools.yaml with strict schemas. Each tool requires a name, description, and execution handler specification. For shell commands using Docker sandboxing:
```yaml
tools:
  - name: execute_shell
    description: "Execute bash commands in Docker sandbox"
    type: docker
    image: "alpine:latest"
    command_prefix: "/bin/sh -c"
    timeout: 30
    allowed_commands: ["curl", "grep", "awk", "sed"]
    working_dir: "/tmp"
```
For web search capabilities:
```yaml
  - name: web_search
    description: "Search the web via DuckDuckGo"
    type: python_module
    module: openclaw.tools.search
    function: duckduckgo_query
    args_schema:
      query: {type: string, required: true}
      max_results: {type: integer, default: 5}
```
Register custom skills by placing Python files in ./skills/. Import them in tools.yaml with type: local_python and path: ./skills/custom.py. Reload the registry without restarting the daemon: openclaw tools reload --signal HUP.
The tools.yaml file is central to defining what your agents can actually do. The type: docker tool ensures that shell commands are executed within an isolated container, significantly enhancing security by preventing agents from directly interacting with the host system. The allowed_commands list provides a whitelist, further restricting potential malicious or unintended actions. For python_module tools, OpenClaw automatically handles argument validation based on the args_schema provided, ensuring correct function calls.
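As a concrete example of a local_python skill, the file below could live at ./skills/custom.py. The function name, signature, and return shape are purely illustrative, since the source only specifies that skills are plain Python files referenced from tools.yaml:

```python
# ./skills/custom.py — a hypothetical custom skill; the name and return
# convention are illustrative, not part of OpenClaw's shipped tool set.
def summarize_numbers(values: list[float]) -> dict:
    """Return basic statistics for a list of numbers an agent has scraped."""
    if not values:
        return {"count": 0}
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

print(summarize_numbers([0.40, 0.42, 0.44]))
```

The corresponding tools.yaml entry would then use type: local_python with path: ./skills/custom.py, as described above.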
Step 4: Implementing Persistent Memory
Configure memory layers in config/memory.yaml for persistence across restarts. OpenClaw supports SQLite for structured data and Nucleus MCP for vector search and retrieval. For local-first deployment:
```yaml
memory:
  working_memory:
    backend: sqlite
    path: ./data/working.db
    ttl: 3600
    max_size: "1GB"
  long_term:
    backend: nucleus_mcp
    embedding_model: text-embedding-3-small
    dimensions: 1536
    path: ./data/vector_store
    similarity_metric: cosine
```
Agents access memory via the built-in memory tool. In your agent logic, call memory.store(key="findings", value=data, tier="long_term") to persist across sessions. Query with memory.search(query="market trends", top_k=3). Enable memory compaction to prevent storage bloat: set compaction_schedule: "0 2 * * *" to prune expired working memory entries at 2 AM daily. Verify persistence by stopping and restarting the agent; previous context should reload automatically from ./data.
Persistent memory is a cornerstone of autonomous agents, allowing them to learn and retain information over time, even through system reboots or planned maintenance. working_memory with a ttl (time-to-live) is ideal for short-term context and scratchpad data, while long_term memory, backed by a vector database like Nucleus MCP, is crucial for storing and retrieving complex, high-dimensional embeddings for advanced context retrieval and learning. The embedding_model specifies which model to use for generating these embeddings.
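The effect of working-memory compaction can be illustrated with plain sqlite3. The table name and columns below are our assumptions about the schema, not OpenClaw internals, but the pruning logic matches the ttl: 3600 setting above:

```python
import sqlite3
import time

# Illustrative working-memory TTL pruning; the schema is an assumption,
# but the expiry rule mirrors ttl: 3600 from memory.yaml.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE working_memory (key TEXT, value TEXT, created REAL)")

now = time.time()
conn.execute("INSERT INTO working_memory VALUES ('fresh', 'a', ?)", (now,))
conn.execute("INSERT INTO working_memory VALUES ('stale', 'b', ?)", (now - 7200,))

ttl = 3600  # seconds, matching the ttl setting above
conn.execute("DELETE FROM working_memory WHERE created < ?", (now - ttl,))

rows = [r[0] for r in conn.execute("SELECT key FROM working_memory")]
print(rows)  # ['fresh'] — the expired entry is gone
```

This is exactly the kind of pass the 2 AM compaction schedule runs on your behalf, keeping ./data/working.db from growing without bound.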
Step 5: Scheduling Autonomous Tasks
OpenClaw uses standard cron syntax for task scheduling with added reliability features. Define schedules in config/scheduler.yaml:
```yaml
tasks:
  - name: market_scan
    agent: trading_agent
    schedule: "*/15 * * * *"
    timezone: "UTC"
    timeout: 300
    retries: 3
    concurrent: false
    on_failure: alert
  - name: content_digest
    agent: content_agent
    schedule: "0 9 * * *"
    timezone: "America/New_York"
```
The scheduler runs as a daemon process separate from the agents. Start it with openclaw scheduler start --daemon --pidfile ./run/scheduler.pid. It maintains execution state in ./data/scheduler.db to track last run times and prevent duplicate executions after crashes or reboots. Agents receive schedule metadata via the task_context environment variable. Implement idempotency checks in your agent code to handle overlapping executions gracefully. Monitor scheduled tasks via the web dashboard at http://localhost:8080/scheduler. Pause individual jobs without restarting the service: openclaw scheduler pause --job market_scan.
The scheduler is the backbone of 24/7 operation, ensuring agents perform their duties reliably and on time. The retries and on_failure parameters are essential for building robust systems that can recover from transient errors. concurrent: false prevents the same task from being run multiple times simultaneously, which is critical for operations like trading where duplicate actions could lead to significant losses. The timezone setting ensures tasks are executed at the correct local time, regardless of the server’s timezone.
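An idempotency check of the kind recommended above can be as simple as recording a (task, run id) pair before doing any work. This sketch uses plain sqlite3 and a table of our own invention, not an OpenClaw API:

```python
import sqlite3

# Idempotency guard for overlapping scheduler runs; the table name and
# run-ID convention here are our own, not OpenClaw internals.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE completed_runs (task TEXT, run_id TEXT, PRIMARY KEY (task, run_id))"
)

def run_once(conn, task, run_id, fn):
    """Execute fn only if (task, run_id) has not been recorded yet."""
    try:
        conn.execute("INSERT INTO completed_runs VALUES (?, ?)", (task, run_id))
    except sqlite3.IntegrityError:
        return False  # this run already happened; skip the duplicate
    fn()
    conn.commit()
    return True

calls = []
run_once(conn, "market_scan", "2025-01-01T09:00", lambda: calls.append("ran"))
run_once(conn, "market_scan", "2025-01-01T09:00", lambda: calls.append("ran"))
print(calls)  # ['ran'] — the duplicate execution was skipped
```

Keying the run ID on the scheduled slot (rather than wall-clock time) means a crashed-and-restarted task will not re-execute a slot it already completed.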
Step 6: Orchestrating Multi-Agent Swarms
Configure the parent agent to spawn and manage sub-agents dynamically. In agents.yaml, define the orchestrator with its children:
```yaml
agent_id: swarm_master
type: orchestrator
heartbeat_interval: 30
sub_agents:
  - id: researcher
    config: ./agents/researcher.yaml
    max_instances: 3
    min_instances: 1
  - id: trader
    config: ./agents/trader.yaml
    max_instances: 1
```
Sub-agents communicate via the internal message bus using JSON-RPC format. Send directives from the master: message_bus.send(target="researcher", payload={"task": "scrape_weather_data", "callback_topic": "trader"}). The orchestrator monitors health via heartbeat pings every 30 seconds. If a sub-agent fails to respond after three missed beats, the master terminates and restarts it automatically. Scale horizontally by increasing max_instances for parallel task processing of independent workloads. View real-time swarm topology and message flow in the dashboard under the Swarm tab. Agents share memory contexts by specifying shared_memory_pool: swarm_pool in their individual configurations.
Multi-agent swarms allow for complex task decomposition and parallel processing, significantly boosting the overall capability and throughput of the OpenClaw system. The swarm_master acts as a central coordinator, distributing tasks and managing the lifecycle of its sub-agents. The heartbeat_interval and automatic restart mechanisms provide high availability, ensuring that individual agent failures do not bring down the entire swarm. The shared_memory_pool facilitates seamless collaboration by allowing agents to access common data and contexts, reducing redundant processing.
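For orientation, a directive on the bus presumably looks like a standard JSON-RPC 2.0 request. The method name task.dispatch below is our guess, while the params mirror the message_bus.send() example above:

```python
import json

# Sketch of a JSON-RPC 2.0 directive of the kind message_bus.send()
# presumably puts on the wire; "task.dispatch" is an assumed method name.
directive = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "task.dispatch",
    "params": {
        "target": "researcher",
        "task": "scrape_weather_data",
        "callback_topic": "trader",
    },
}

wire = json.dumps(directive)   # what travels over the message bus
decoded = json.loads(wire)     # what the receiving sub-agent sees
print(decoded["method"], decoded["params"]["target"])  # task.dispatch researcher
```

Because the envelope is plain JSON-RPC, you can log and replay bus traffic with ordinary JSON tooling when debugging swarm coordination.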
Step 7: Integrating Headless Browser Automation
Install Playwright dependencies: openclaw tool install playwright && npx playwright install chromium. Configure the browser tool in tools.yaml:
```yaml
  - name: headless_browser
    description: "Control Chrome for web scraping and interaction"
    type: browser
    browser_type: chromium
    headless: true
    stealth: true
    user_agent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
    viewport: {width: 1920, height: 1080}
    locale: "en-US"
```
Use this in agent Python code:
```python
from openclaw.tools import browser

page = browser.new_page()
page.goto("https://polymarket.com/event/weather")
price = page.locator("[data-testid='price']").text_content()
browser.close()
```
Enable screenshot capture for debugging failed scrapes: set screenshots: true and screenshot_path: ./logs/screenshots/ in config. The browser runs in an isolated Docker container with no access to host file systems. Handle dynamic content by waiting for specific selectors or network idle events before attempting to extract data. Implement robust error handling for common browser automation issues like element not found or timeout errors.
Headless browser automation is indispensable for agents that need to interact with websites, scrape data from dynamic pages, or perform actions that require a full browser environment. Playwright, integrated with OpenClaw, provides a powerful and reliable solution. The stealth: true option helps in avoiding detection by anti-bot measures, while user_agent and viewport settings allow for mimicking a real user’s browser environment. The isolation provided by Docker containers ensures that even if a website tries to execute malicious scripts, it won’t compromise your host system.
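The robust error handling recommended above often boils down to retrying with backoff. This generic helper is a plain Python sketch, not part of the OpenClaw browser tool; wrap your scrape calls in it to absorb transient timeouts:

```python
import time

# Generic retry-with-exponential-backoff helper for flaky scrapes;
# this is our own utility, not an OpenClaw API.
def with_retries(fn, attempts=3, base_delay=0.01, exceptions=(Exception,)):
    """Call fn, retrying with exponential backoff on the given exceptions."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts; surface the original error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a scrape that fails twice before the selector becomes ready.
state = {"tries": 0}
def flaky_scrape():
    state["tries"] += 1
    if state["tries"] < 3:
        raise TimeoutError("selector not ready")
    return "price=0.42"

print(with_retries(flaky_scrape))  # price=0.42 (after two failed attempts)
```

In practice you would narrow exceptions to the browser tool’s timeout and element-not-found errors so that genuine bugs still fail fast.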
Step 8: Configuring Secure Wallet Integration for Trading
For agents performing financial transactions, secure wallet integration is paramount. OpenClaw provides built-in support for interacting with EVM-compatible blockchains. First, define your wallet in config/wallets.yaml:
```yaml
wallets:
  polygon_mainnet:
    chain_id: 137
    rpc_url: "https://polygon-rpc.com"
    private_key_env: "POLYGON_PRIVATE_KEY"
    gas_limit: 200000
    gas_price_multiplier: 1.1  # 10% above current gas price
```
Store your private key securely as an environment variable, e.g., export POLYGON_PRIVATE_KEY="0x...". Never hardcode private keys directly into configuration files or source code. Agents access the wallet via the blockchain tool:
```python
from openclaw.tools import blockchain

wallet = blockchain.get_wallet("polygon_mainnet")
# Example: send a transaction to a smart contract
tx_hash = wallet.send_transaction(
    to="0xContractAddress",
    value=0,  # in Wei
    data="0xMethodSignatureAndEncodedArgs",
)
print(f"Transaction sent: {tx_hash}")
```
Implement strict risk management parameters in your agent’s configuration, such as daily loss limits, maximum position sizes, and slippage tolerance. Use a separate wallet for testing on a testnet before deploying to mainnet. Regularly audit smart contract addresses and API endpoints to prevent supply chain attacks. Consider using a hardware security module (HSM) for private key storage in extremely high-value deployments.
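The risk parameters mentioned above reduce to simple pre-trade checks. This sketch uses illustrative limit values and field names of our own choosing, not OpenClaw configuration keys:

```python
# Pre-trade risk-check sketch; the limit values and field names are
# illustrative, not OpenClaw configuration keys.
def check_order(position_usdc, order_usdc, daily_pnl,
                max_position=500.0, daily_loss_limit=-100.0):
    """Return (allowed, reason) for a proposed order."""
    if daily_pnl <= daily_loss_limit:
        return False, "daily loss limit reached; halt trading"
    if position_usdc + order_usdc > max_position:
        return False, "order would exceed max position size"
    return True, "ok"

print(check_order(position_usdc=450.0, order_usdc=100.0, daily_pnl=-20.0))
# (False, 'order would exceed max position size')
```

Run checks like these before every send_transaction call, and make the "halt trading" branch sticky until a human resets it.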
Step 9: Setting Up Monitoring and Alerting
A production OpenClaw deployment requires robust monitoring to ensure continuous operation and prompt detection of issues. Start the OpenClaw dashboard: openclaw dashboard start --port 8080. This provides a local web interface to visualize agent status, task queues, memory usage, and tool executions.
For more advanced system-level monitoring, integrate with Prometheus and Grafana. Expose OpenClaw metrics by adding a metrics section to your config/system.yaml (create if it doesn’t exist):
```yaml
system:
  metrics:
    enabled: true
    port: 9090
    path: /metrics
```
Then, configure your Prometheus server to scrape http://localhost:9090/metrics. Create Grafana dashboards to visualize key metrics like agent uptime, LLM token usage, tool execution success rates, and memory consumption.
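On the Prometheus side, the matching scrape job in prometheus.yml could look like the fragment below. The job name is our choice; the target port matches the metrics config above. Note that Prometheus itself defaults to port 9090, so if both run on the same host, move one of them:

```yaml
scrape_configs:
  - job_name: "openclaw"
    scrape_interval: 30s
    static_configs:
      - targets: ["localhost:9090"]
```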
For critical alerts, integrate with PagerDuty or Slack. OpenClaw can trigger webhooks on specific events (e.g., agent crash, task failure, risk limit breach). Configure these in config/alerts.yaml:
```yaml
alerts:
  agent_failure:
    webhook_url: "https://hooks.slack.com/services/..."
    payload: {"text": "OpenClaw agent {{agent_id}} failed. Error: {{error_message}}"}
    condition: "agent.status == 'failed'"
  risk_breach:
    webhook_url: "https://events.pagerduty.com/v2/enqueue"
    payload: {"routing_key": "YOUR_PAGERDUTY_INTEGRATION_KEY", "event_action": "trigger", "payload": {"summary": "High Risk Alert: {{agent_id}} breached risk limits", "severity": "critical"}}
    condition: "trading.risk_breached == true"
```
Regularly review logs in ./logs/ for detailed debugging information. Implement log rotation to prevent disk space exhaustion.
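On Linux, log rotation can be delegated to logrotate; a hypothetical /etc/logrotate.d/openclaw entry might look like the fragment below (macOS ships no logrotate by default, so use newsyslog or a scheduled cleanup task there):

```
/opt/openclaw/logs/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate rotates the file in place so the OpenClaw daemon does not need to reopen its log handle, and daily with rotate 14 keeps two weeks of compressed history.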
Step 10: Hardening Security and Best Practices
Securing your OpenClaw deployment is crucial, especially for agents handling sensitive data or financial transactions.
- Isolate Execution Environments: Always use Docker for `execute_shell` tools. Ensure Docker containers run with minimal privileges and restricted network access.
- Secret Management: Never hardcode API keys or private keys. Use environment variables, macOS Keychain, or dedicated secret management services (e.g., HashiCorp Vault, AWS Secrets Manager) for production.
- Network Security: Restrict network access to your Mac Mini. Use a firewall to block incoming connections to all ports except those explicitly required (e.g., SSH, OpenClaw dashboard if accessed internally). Never expose sensitive services directly to the public internet without proper TLS and authentication.
- Least Privilege: Configure user accounts with the minimum necessary permissions. The user running OpenClaw agents should not have root access.
- Code Audits: Regularly audit your custom skill code and tool definitions for vulnerabilities, especially those interacting with external systems or executing shell commands.
- Dependency Management: Keep all Python packages and system dependencies updated to patch known security vulnerabilities. Use `pip-audit` to scan for known vulnerabilities in your Python dependencies.
- Data Encryption: Encrypt sensitive data at rest (e.g., your `./data` directory) using full disk encryption (FileVault on macOS) or specific directory encryption.
- Backup Strategy: Implement a regular backup strategy for your `config/` and `data/` directories. These contain your agent configurations, memory, and logs, which are critical for recovery.
- Runtime Security (Advanced): For high-security environments, consider using eBPF-based runtime security tools like Raypher to enforce fine-grained policies on process execution and system calls, detecting and preventing anomalous behavior. Rampart can be used for policy enforcement on tool executions, ensuring agents only use tools in predefined ways.
- Regular Review: Periodically review agent behavior, logs, and configurations to identify and mitigate potential security risks or unintended operations.
Comparison of OpenClaw with Other AI Agent Frameworks
To better understand where OpenClaw stands in the AI agent ecosystem, it’s helpful to compare it with other prominent frameworks.
| Feature / Framework | OpenClaw | AutoGPT | LangChain | CrewAI |
|---|---|---|---|---|
| Primary Focus | 24/7 Autonomous Operations, Production Bots, Multi-Agent Swarms | Exploratory, Single-Agent Tasks, Goal-Oriented | LLM Orchestration, Tooling, Chains | Orchestration of Specialized Agents, Role-Playing |
| Execution Model | Deterministic Scheduler, Explicit State, YAML Configs | Recursive Thought Loop, Dynamic Prompting | Chains, Agents with Tools | Hierarchical/Collaborative Agents, Tasks |
| Persistence | SQLite, Vector DB (Nucleus MCP), File-based | Limited (JSON, File-based for some) | Various (Vector DBs, Caches) | Limited (Task history) |
| Multi-Agent Support | First-class, Structured MCP/ACP Protocols, Orchestrator | Limited (forked projects exist) | Via Agent Executor & Tools | Core feature, defined roles & tasks |
| Tool Definition | Declarative YAML, Python Modules, Docker Sandboxing | Python Code (requires coding) | Python Code, Pydantic Models | Python Code, Pydantic Models |
| Headless Browser | Built-in Playwright Integration, Dockerized | Often External Libraries (Selenium, Playwright) | External Libraries (Selenium, Playwright) | External Libraries (Selenium, Playwright) |
| Scheduling | Cron-based, Daemonized Scheduler, Retries | Manual/External | External (e.g., Celery, Airflow) | Manual/External |
| Security Features | Docker Sandboxing, Env Var Secrets, eBPF/Rampart Integration | Basic | Depends on underlying tools | Depends on underlying tools |
| Local-First Design | Yes, designed for local machines (e.g., Mac Mini) | Yes, can run locally | Yes, but often used with cloud LLMs | Yes, can run locally |
| Real-world Use Cases | Automated Trading, Content Generation, Data Scraping, DevOps | Research, Creative Writing, Simple Task Automation | RAG, Chatbots, Data Analysis | Complex Workflows, Research, Code Generation |
This table highlights OpenClaw’s emphasis on robust, production-grade autonomous operations with strong support for multi-agent coordination, persistent memory, and security, especially when compared to more exploratory or library-focused frameworks.
Conclusion and Next Steps
By following this comprehensive guide, you have successfully set up a robust, autonomous AI agent framework with OpenClaw on your local machine. You now have a system capable of 24/7 operation, multi-agent coordination, persistent memory, and secure tool execution, mirroring production deployments that generate significant value. You’ve configured your first agent, built a custom tool registry, implemented persistent memory, scheduled tasks, orchestrated a multi-agent swarm, integrated headless browser automation, set up secure wallet integration, and established a monitoring and alerting system. Furthermore, you’ve learned crucial security hardening techniques and understand OpenClaw’s unique position in the AI agent landscape.
To further enhance your OpenClaw deployment, consider the following next steps:
- Develop Custom Skills: Explore creating more sophisticated custom Python skills (in `./skills/`) that leverage specialized external APIs or local libraries, expanding your agents’ capabilities.
- Optimize LLM Prompts: Experiment with different system prompts and agent directives to refine agent behavior and improve task completion accuracy. Techniques like few-shot prompting or chain-of-thought prompting can significantly boost performance.
- Advanced Orchestration: For highly complex workflows, delve deeper into the MCP/ACP protocols to design intricate communication patterns between your agents, enabling more advanced forms of collaboration and problem-solving.
- Cloud Integration (Optional): While OpenClaw is designed for local-first operation, you can integrate it with cloud services for specific tasks, such as offloading heavy computation to cloud GPUs for fine-tuning models or leveraging managed databases for massive long-term memory requirements, while keeping core agent logic local.
- Community Engagement: Join the OpenClaw community forums or Discord channel. Sharing your experiences, asking questions, and contributing to the project can accelerate your learning and provide valuable insights.
- Performance Tuning: Monitor resource utilization (CPU, RAM, disk I/O) and fine-tune agent concurrency, memory settings, and task schedules to achieve optimal performance without overstressing your hardware.
- Explore Advanced Tooling: Investigate integrating more advanced tools such as image generation models, speech-to-text services, or specialized data analysis libraries to further expand your agents’ operational scope.
The journey into autonomous AI agents is continuous. With OpenClaw, you have a powerful and flexible foundation to build, deploy, and manage intelligent systems that can automate complex tasks and generate real-world value.