OpenClaw Installation and Deployment
Installation and Deployment Strategy
OpenClaw offers multiple deployment methods, but before getting started, we need to clarify: where is the most suitable place to deploy an AI Agent?
An AI Agent is like an intern to whom you've granted permission to operate your computer. If you give it maximum privileges without restrictions, it might hallucinate mid-task and run a destructive command like rm -rf, causing irreversible damage. Therefore, environment isolation is our primary consideration during installation.
We will focus on the two most mainstream deployment methods: npm Global Installation (suitable for development and testing) and Docker Sandbox Deployment (suitable for production/long-term use).
Option 1: npm Global Installation (Recommended for Developers)
This is the fastest and most direct way, suitable for familiarizing yourself with the OpenClaw workflow on your local machine and developing/debugging custom skills.
Prerequisites:
- Operating system: macOS / Linux / WSL2.
- Node.js installed: version >= 22.0.0 is required (strongly recommended to use nvm for version management).
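The version requirement can be verified with a small shell check. The `v22.11.0` string below is a stand-in for your actual output; in practice you would capture it with `ver=$(node -v)`:

```shell
# Simulated output of `node -v` (replace with: ver=$(node -v))
ver="v22.11.0"
# Strip the leading "v", then keep only the major version number
major=${ver#v}
major=${major%%.*}
if [ "$major" -ge 22 ]; then
  echo "Node.js OK"
else
  echo "Node.js too old; try: nvm install 22 && nvm use 22"
fi
```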
Step 1: Installation
Open your terminal and execute the following command to globally install the latest OpenClaw core package:
npm install -g openclaw@latest
Step 2: Initialization Wizard (Onboarding)
After installation, don't rush to start it. You need to run the initial configuration wizard first:
openclaw onboard --install-daemon
This interactive wizard will guide you through setting core parameters:
- Model Provider Selection: Choose the provider behind the LLM you plan to connect (e.g., OpenAI, Anthropic, Ollama, etc.).
- API Key Configuration: Enter the corresponding API Key.
- Communication Channels: Select your control terminal (e.g., Web-based terminal or Telegram Bot Token).
Step 3: Start and Access
Once everything is ready, start the Gateway service:
openclaw gateway start
If using the local Web terminal, open the default address http://127.0.0.1:18789 in your browser. You will see the OpenClaw console and can start chatting.
[!CAUTION] Security Warning: Never expose port 18789 directly to the public internet! If you are running the npm installation on a cloud server, be sure to bind it to 127.0.0.1 and use an SSH tunnel, or configure an Nginx reverse proxy with an authentication mechanism.
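As a concrete sketch of the SSH-tunnel approach: a local port forward keeps the gateway bound to loopback on the server while still letting you reach the console from your laptop. The `user@your-server` address below is a placeholder for your own SSH login:

```shell
# Forward local port 18789 to the gateway bound on the server's loopback.
# -N: no remote command, just forwarding; -L: local port forward.
ssh -N -L 18789:127.0.0.1:18789 user@your-server
# Then open http://127.0.0.1:18789 in the browser on your local machine.
```

The gateway never listens on a public interface; the SSH session carries the encrypted traffic.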
Option 2: Docker / Docker Compose Deployment (Highly Recommended)
If you plan to use OpenClaw as a long-term automation assistant executing scripts on your host machine, "caging" it inside Docker is the safest approach. Even if it hallucinates and executes an extremely dangerous system command, the damage is contained, and recovery is as simple as resetting the container.
Prerequisites:
- Docker Engine and docker-compose installed on the host machine.
Directory Mapping: Isolation with Data Persistence
Before running the container, we must understand: The application can be periodically restarted or destroyed, but the AI's configuration and memory must be preserved.
OpenClaw stores all configuration materials in the ~/.openclaw directory by default. We need to map this directory out of the container.
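If you prefer a one-off `docker run` over Compose, the same mapping can be sketched as follows (this mirrors the Compose service defined in this section; the image and container names come from that file):

```shell
# One-off equivalent of the Compose service (sketch):
# persist config and memory by bind-mounting ~/.openclaw into the container.
docker run -d \
  --name openclaw-agent \
  --restart unless-stopped \
  -p 18789:18789 \
  -v ~/.openclaw:/root/.openclaw \
  -e TZ=Asia/Shanghai \
  --cap-drop ALL \
  getopenclaw/openclaw:latest
```

Destroying and recreating the container loses nothing, because everything under ~/.openclaw lives on the host.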
Deployment Steps
Create an empty directory as your workspace and create a docker-compose.yml file within it:
```yaml
version: '3.8'
services:
  openclaw:
    image: getopenclaw/openclaw:latest
    container_name: openclaw-agent
    restart: unless-stopped
    ports:
      - "18789:18789"
    volumes:
      # Mount the host config directory to /root/.openclaw inside the container
      - ~/.openclaw:/root/.openclaw
    environment:
      - TZ=Asia/Shanghai # Ensure correct timezone for Cron tasks
    # Best practice: drop capabilities to prevent privilege escalation
    cap_drop:
      - ALL
```
Then, run the following command from that directory on the host:
docker compose up -d
Initialization in Docker Mode
Since OpenClaw is running inside a container, if you need to execute the initial wizard, you can enter the container to do so:
docker exec -it openclaw-agent bash
# Once inside the container terminal, run:
openclaw onboard
Alternatively, for more advanced users, you can directly edit the JSON configuration files generated under ~/.openclaw on the host machine.
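For illustration, a hand-edited provider block might look like the following. Note that these key names are hypothetical, not OpenClaw's documented schema; check the file the onboarding wizard actually generates before copying anything:

```json
{
  "provider": "anthropic",
  "apiKey": "sk-REPLACE-ME",
  "gateway": {
    "bind": "127.0.0.1",
    "port": 18789
  }
}
```

After editing on the host, restart the container so the changes are picked up.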
Core Configuration Structure
Regardless of which deployment scheme you choose, OpenClaw's data hub is the ~/.openclaw directory. Understanding its structure is crucial:
~/.openclaw/
├── openclaw.json # Core configuration file; the "motherboard" settings
├── exec-approvals.json # Command whitelist and approval policies (Security core)
├── workspace/ # Workspace directory (Agent identity, project files)
│ ├── SOUL.md # Spiritual identity/instruction
│ ├── USER.md # Master-owner configuration
│ └── MEMORY.md # Long-term memory
└── skills/ # All installed extension skills and plugins
You can think of this as a robot's anatomical chart:
- `openclaw.json` is the nervous system, deciding which "brain" model to connect to.
- `exec-approvals.json` is the safety valve component.
- The `workspace` directory contains the robot's acquired memories and identity profiles.
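To make the "safety valve" idea concrete, an approval policy file might take a shape like the one below. This is a hypothetical sketch of a command whitelist, not OpenClaw's actual exec-approvals.json schema; treat the key names as illustrative only:

```json
{
  "allow": ["git status", "ls", "cat"],
  "requireApproval": ["curl", "npm install"],
  "deny": ["rm -rf"]
}
```

The principle is the same regardless of the exact schema: safe read-only commands pass through, risky ones pause for human approval, and known-destructive patterns are rejected outright.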
Next, we will dismantle the "Nervous System" in detail: How to configure and switch between different LLM providers.