Peter joining OpenAI is a win for agents, but open source only stays “open” if governance and funding are truly independent. Foundations, forks, and open protocols are the real safety nets, not just promises.
For products like KiloClaw, the value isn’t who employs Peter; it’s reliability, integrations, and an understanding of real user workflows. If OpenClaw stays open, everyone wins. If it doesn’t, the community will route around it like it always does.
"They’re getting the value of his way of looking at agents in the world"
Yeah - shipping AI-generated code that he did not even READ...
Here's the response I posted on a Note earlier:
Well, that makes sense. The creator of OpenClaw - the guy who ships AI-generated code without even reading it - is joining Sam Altman, the guy who promises AGI next week - and never delivers.
They deserve each other.
Then there's this, which I also linked to in a note:
MIT Professor: The Most Dangerous AI Application on the Internet Just Went Viral…
https://www.youtube.com/watch?v=ePEg5KS8DqA
Everyone in the cybersecurity industry who has looked at OpenClaw has clawed their eyes out (pun intended). Now OpenAI is going to spread a massive security disaster?
What could go wrong?
Good and thoughtful analysis. Thanks for that. We really want OpenClaw to remain strong.
Excellent POV 👍 On that note, when are you launching KiloClaw... Can't wait 🫷
This is a balanced take.