"OpenClaw is just scaffolding, I don't think the developer did anything truly novel or 'frontier-level' in the way that OpenAI or Anthropic or Google would consider it."

I really appreciated your post. Could you elaborate a bit more on the point above? I'm curious to hear how you see things taking a very bad turn.
I think it was more about optics, and possibly about hiring the developer, than about actually gaining novel technology. Agentic workflows are tricky, and 'move fast and break things' doesn't work very well when these tools have access to your raw filesystem or credit cards. Scams (likely created by actual humans, NOT AIs) were all over the place within a day or two on those agent-to-agent platforms that were in the news as a hot thing a couple weeks ago, with that "the AI is alive and they're scheming" kind of energy. They're heuristic systems talking to each other, which is VERY COOL in an ant-colony way, but when these people can't even get MCP working with high security, I don't think there's any chance of agent-to-agent scaling to some kind of massive scale soon. If it were a 'hire them and do this in 2-3 years' play, yeah, but I very much suspect it isn't, exactly because it's already in the public consciousness.
I'm basing this on personal experience viewing those platforms. It's also how I know the "AI spun up a blog to slag on a developer who wouldn't accept a pull request" story was BS: the blog itself had template brackets like [TOPIC] all over the place that a human clearly set up. The media ran with it irresponsibly because it was a sensational story, and even Ars had to retract their article about it, though ironically that was because they used an AI to fabricate quotes.
Scaffolding is important when you're running multi-agent or even multi-workflow setups, and I'll probably spend part of the middle to late part of my year building my own. But it's not research-heavy, and scaffolding plus strategy is, I think, what's missing from OpenAI versus Anthropic, who have been nailing both product placement and execution, aside from the voice issue I mentioned, which IS a big deal to me.
Anthropic spent a while quietly getting integrated with corporate America, carefully making deals and demonstrating their power, while OpenAI seems to be throwing everything at the wall to see what sticks. I don't think that strategy will pay off, since this field has very little in the way of 'moats', at least for now. I expect that to change to some degree with world models, which is why we have companies spinning up warehouses full of camera-equipped robots just to run experiments and train; that's really more RL than classic ML/AI. My pet theory is that RL has a lot more "there there" than the public consciousness gives it credit for.
I'm also personally working on something involving what I'd call a substrate. I can't really talk about it for now, but it seems promising, at least on paper. This is also why I'm particularly frustrated that I can't read research papers easily anymore: so many of them just measure banal garbage, and usually not very well. I don't care about AI-assisted productivity studies, because anyone with deep experience already knows there is fundamental utility, anyone with limited experience and skill will have their "AI sucks, haha" take validated, and the worst part is they're both correct.
These people have flooded the marketplace of ideas, and LLM-assisted writing has made it much worse. I used to read hundreds to thousands of papers a year; now I really have to pick and choose, and I feel like I'm missing a lot because the noise-to-signal ratio is insane.