OpenClaw is an open-source AI agent framework that lets users run autonomous, tool-using agents that connect to files, browsers, APIs, and more. That scope of autonomy is exactly what makes it powerful, and exactly why it needs to be deployed with care.
If your company is anything like ours, people have probably started coming to your security team in the past few weeks saying they want to try out OpenClaw.
Or just installing it without saying anything.
We didn't want to stifle experimentation, so we started piecing together what we believe to be a safer default deployment.
Out of the box, OpenClaw trades security for convenience. That’s fine for a solo developer experimenting on a personal machine. It is a real problem in an enterprise environment. Here’s what keeps security teams up at night:
At UiPath, we believe security and productivity are not a trade-off. They are a design challenge. So instead of blocking OpenClaw, we re-engineered how it gets deployed. Our solution is a one-command VM that applies hardened defaults from day one, giving teams the AI-powered productivity boost they want without handing over the keys to the kingdom. Here’s how each problem gets addressed:
What we shipped is a virtual machine that users can spin up with one command, with logs continuously ingested into Azure.
If you're just interested in running this yourself, or taking a look at the code, go to the repository and follow the installation instructions. If you want to learn more about our approach, follow along.
At a high level, the system layout is as follows:
Everything is packaged together using Vagrant for ease of deployment, and we use Task to provide users with a minimal interface for interacting with the machine.
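As an illustration of the packaging, a Vagrant-based setup of this kind typically boils down to a Vagrantfile along these lines; the box name, resource sizes, and provisioning script name are assumptions, not the project's actual values:

```ruby
# Illustrative Vagrantfile shape (not the project's actual file)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"   # assumed base box
  config.vm.hostname = "openclaw"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                 # assumed sizing
    vb.cpus = 2
  end

  # Hardened defaults are applied once, at provision time
  config.vm.provision "shell", path: "provision.sh"
end
```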

First, the installation: we install OpenClaw via pnpm so that we can lag a few days behind the latest published version. In the event of a supply chain attack, our hope is that this buffer protects users until the compromised release is pulled or fixed.
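The version-delay logic can be sketched as a small shell helper. Everything here is an assumption about the mechanism rather than the project's exact code: the package name `openclaw`, the 7-day window, and the idea of feeding publish timestamps from the npm registry.

```shell
#!/usr/bin/env sh
# pick_delayed_version CUTOFF_EPOCH
# Reads "version publish-epoch" pairs on stdin (oldest first) and prints the
# newest version published at or before CUTOFF_EPOCH, or nothing if none is.
pick_delayed_version() {
  awk -v cutoff="$1" '$2 <= cutoff { v = $1 } END { if (v != "") print v }'
}

# Usage sketch: install the newest release that is at least 7 days old.
#   cutoff=$(date -u -d '7 days ago' +%s)
#   npm view openclaw time --json \
#     | jq -r 'to_entries[] | select(.key | test("^[0-9]"))
#              | "\(.key) \(.value | fromdate)"' \
#     | pick_delayed_version "$cutoff" \
#     | xargs -I{} pnpm add -g openclaw@{}
```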
As we said before, the gateway runs as a systemd service under a dedicated `openclaw` user. In case the gateway process is compromised, we use several of the sandboxing options systemd makes available to greatly limit access to the rest of the system:
- `ProtectSystem=strict` mounts the filesystem read-only, except for the data directories of the process and `/dev/`, `/proc/`, and `/sys/`. The process can write to `/var/lib/openclaw`, `/tmp`, and a few more paths, and nothing else.
- `ProtectHome=yes` makes `/home`, `/root`, and `/run/user` inaccessible.
- `PrivateDevices=yes` removes access to physical devices in `/dev`.
- `PrivateTmp=yes` gives the process its own `/tmp`, so it can't read temp files from other processes.
- `NoNewPrivileges=yes` ensures that children of this process can't gain new privileges.
- `RestrictNamespaces=yes` prevents creating new namespaces.
- `ProtectControlGroups=yes` makes the cgroup filesystem read-only.
- `ProtectProc=noaccess` and `ProcSubset=pid` hide other processes in `/proc`, limiting reconnaissance after a compromise.

With all that said, we had to run this unit as part of the `docker` group, which can easily be escalated to root on the virtual machine. Access to Docker was necessary so we could sandbox agents, tools, and the browser inside containers. We feel this is a decent compromise given that everything runs inside a VM, but we are open to exploring rootless Docker/Podman setups.
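Collected into a unit file, the sandboxing section looks roughly like this; the exact `ReadWritePaths` list is an assumption based on the paths mentioned above:

```ini
[Service]
User=openclaw
SupplementaryGroups=docker
# Filesystem lockdown
ProtectSystem=strict
ReadWritePaths=/var/lib/openclaw
ProtectHome=yes
PrivateDevices=yes
PrivateTmp=yes
# Privilege and namespace restrictions
NoNewPrivileges=yes
RestrictNamespaces=yes
ProtectControlGroups=yes
# Hide other processes in /proc
ProtectProc=noaccess
ProcSubset=pid
```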
On the application side, we tried to disable most of the functionality and enable all the sandboxes that OpenClaw makes available at the time of writing.
We expect you to have an honest conversation with your teams: they should exercise caution and common sense when enabling features and giving agents the right to act on their behalf. As a rule of thumb, read-only access is preferred and already provides a good-enough productivity boost. If you insist on write access (e.g. sending messages or emails to other people), we recommend creating a separate, dedicated account for OpenClaw on that platform. That way, if the agent modifies things or does any kind of damage, the blast radius is limited to the rights and resources of that account, not yours. Exercise caution in the power you give it.
Philosophy aside, here's a high-level overview of what we changed in the config:
We chose Fluent Bit for log ingestion because it supports a vast number of input sources and output destinations. For our particular use case we ingest logs into Azure Blob Storage, but configuring any other supported output destination should be straightforward.
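A minimal Fluent Bit pipeline of this shape looks as follows. The log path, account, and container names are placeholders, and we show the plugin's `shared_key` auth for brevity; our actual setup authenticates with a short-lived SAS instead, so check the `azure_blob` plugin docs for the matching auth options:

```
[INPUT]
    Name              tail
    Path              /var/lib/openclaw/logs/*.log
    Tag               openclaw.*

[OUTPUT]
    Name                  azure_blob
    Match                 openclaw.*
    account_name          examplestorageacct
    container_name        openclaw-logs
    auto_create_container on
    shared_key            ${AZURE_STORAGE_KEY}
```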
When the virtual machine is provisioned, we install the Azure CLI and Fluent Bit, but Azure credentials are not persisted while the machine is running. Instead, we follow this process:
This way, we ensure that no valid Azure credentials are left on the machine while it's running.
The SAS we generate at step 2 above expires after a maximum of 7 days. This is a problem because we expect this VM to be long-lived.
To solve this, we gate access to the gateway with a short Lua script that reads the stored SAS (if it exists) and checks whether it is about to expire. If it is, the user is shown an HTML page instructing them to run a single command that generates a new valid token.
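The check itself is simple; here is the same logic re-sketched in POSIX shell rather than the Lua we actually use. The 24-hour margin is an assumption; `se=` is the standard expiry field in a SAS query string, with the colons of its timestamp URL-encoded as `%3A`:

```shell
#!/usr/bin/env sh
# sas_expiring SAS MARGIN_SECONDS NOW_EPOCH
# Succeeds (exit 0) if the token is missing or its `se=` expiry falls within
# MARGIN_SECONDS of NOW_EPOCH, i.e. the user should mint a fresh SAS.
sas_expiring() {
  sas="$1"; margin="$2"; now="$3"
  # Pull the `se=` field out of the query string and undo the %3A escaping.
  expiry=$(printf '%s' "$sas" | tr '&' '\n' | sed -n 's/^se=//p' | sed 's/%3A/:/g')
  [ -n "$expiry" ] || return 0               # no stored token: treat as expired
  exp_ts=$(date -u -d "$expiry" +%s) || return 0   # GNU date; unparsable: expired
  [ $((exp_ts - now)) -le "$margin" ]
}
```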
We’ve exposed a small set of commands for interacting with the VM and the openclaw service using a Taskfile. At the time of writing, users can run:
- `approve-device`: Get the list of pending approvals and approve the connection of a device to OpenClaw
- `create`: Set up and start the OpenClaw Virtual Machine
- `destroy`: Destroy the OpenClaw Virtual Machine
- `down`: Suspend the OpenClaw Virtual Machine
- `login`: Get the OpenClaw authentication URL (alias: `auth`)
- `restart`: Restart the OpenClaw Virtual Machine
- `setup-models`: Set up API keys for any model provider
- `shell`: Get a terminal session to interact with the OpenClaw service using the CLI
- `up`: Start the OpenClaw Virtual Machine
- `logs:dump`: Dump the logs of OpenClaw to a file
- `logs:tail`: Connect to the log stream of OpenClaw

There are some improvements that can be made to the project:
Think you can help with any of that? We welcome PRs from the larger community.
Agents are everywhere. Even if model improvements were to stop today, they are here to stay.
One can expect the pace of releases, and the hype around such products, to keep increasing. Too often, security comes as an afterthought, but it doesn't have to be this way.
Just as the cost of code generation is becoming negligible, so can the cost of secure code. A security-minded agent can be introduced anywhere in the development cycle: research, planning, review. There are already skills/plugins that can help with this: code review plugin, security scan.
We think the results will be night and day.