5 Comments
richardstevenhack

Really glad to see this article. I've been screaming about this thing in my Notes since it came out, once the security issues became clear.

Good job. You guys at Kilo Code continue to impress me with your logic and professionalism. I don't use Kilo Code at the moment, but when I'm ready to do vibe coding, I expect I'll be choosing it as my primary utility.

Not to mention that I don't trust Anthropic as far as I can throw Dario Amodei. :-)

Ismael La

This is actually useful. Make its scope much narrower and it can be even better at the tasks we throw at it. Insightful post.

Zen Equity

What marginal difference does it make compared to using GitHub Copilot agents or Kilo Cloud Agents? I imagine that by making OpenClaw an intern, you must have planned to give it more than GitHub to do, and this is just the beginning?

Lakshmi Narasimhan

The intern framing is exactly right. I've been running a similar setup with my OpenClaw agent having its own isolated identity for Git operations. The difference between 'AI that acts as me' and 'AI that contributes under my review' is massive for both security and accountability.

One thing I'd add: the review process actually makes the AI more useful, not less. When you know every output gets checked, you can let it take bigger swings. It's counterintuitive but the guardrails enable more ambitious automation.

The ScuttleBot pattern of tracing actions to an AI identity instead of your identity is something more people should adopt. Makes audit trails cleaner too.
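The isolated-identity setup described above can be sketched in a few lines of shell. This is a minimal, hypothetical example: the bot name "openclaw-bot" and its email are placeholders, not anything from the article. The idea is simply to scope the agent's identity to the repos it works in with `git config --local`, so its commits are attributed to the bot in the audit trail rather than to you.

```shell
set -e

# Create a throwaway repo to demonstrate the pattern.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# Give the agent its own identity, scoped to this repo only (--local),
# so your global Git identity is untouched. Names are placeholders.
git config --local user.name  "openclaw-bot"
git config --local user.email "openclaw-bot@users.noreply.example.com"

# A commit made by the agent is now traceable to the bot identity.
echo "hello" > file.txt
git add file.txt
git commit -q -m "bot: initial commit"

git log -1 --format='%an <%ae>'
```

On a hosting platform this pairs naturally with a separate machine account and branch protection, so the bot can open pull requests but can never merge its own work without human review.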

Julian Goldie SEO

What’s your favourite use case?