You seem to be implying that the company that employs the best chemists should therefore also make the best cakes. I don't see an obvious reason why this should hold true. I think it's fair to ridicule a bunch of chemists acting as master patissiers.
They're completely vibe-coding one of their flagship products. It's not unreasonable to consider that the people who took that decision are, indeed, idiots.
> I'm assuming they wouldn't do something like this unless the recent load issues (mostly driven by OpenClaw usage) were seen as an existential threat.
I think another possibility is that they are trying to shift the burden of OpenClaw to their competitors.
Openclaw is an always on AI assistant that's plugged into a bunch of MCPs. You don't understand what kinds of problems that can help solve and cant envision any use cases for that?
From a conceptual perspective it sounds great. The problem is that OpenClaw isn't actually a solution to that problem for 2 reasons, user expectation and underlying security. The majority of people I've talked to who want an 'AI assistant' effectively are expecting a proper executive assistant, just in AI form. A proper executive assistant will remember every important bit you tell them, they won't need to be reminded of it later, and more importantly they come to me of their own volition when something comes up. All things OpenClaw does not solve. Further, using MCP as the underlying protocol means you have to implicitly trust every piece of data you connect to that AI, because otherwise it's way too easy for me to send you an email with hidden instructions just for your AI to read. I mean even the defaults for the OpenClaw install had basically opened everyone who installed it and didn't configure it in any way to any attacker. So while I agree with you that there are problems in this space that an AI agent 'could' solve, OpenClaw does not currently solve any of them, and in fact does the opposite, exposing you and all your information easily.
I think the important point in the parent comment is "Burning a shit ton of tokens". Openclaw was built fast and loose, making it use far too many tokens for trivial things. I'm confident the next Claw can and will be engineered to be at least 10x as token efficient and more reliable.
Drafting email responses for work, organizing talking points for upcoming meetings based on email and doc context. Creating tickets for work tracking. Anything you can do with claude code and mcps pretty much.
None of those things require an always on token burner. I'm not trying to be rude, but do you think that's the only way to present relevant information to an LLM or something? It's literally the least efficient way to do it.
Seems silly to bash a company for using open source exactly in accordance with the license. If they expected to be compensated, they picked the wrong licensing terms.
It shouldn't be the role of a company to hold their nose and work with the government, it should be the government's role to inspire confidence that what they are doing with the technology is ethical.
> Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casulties.
This is naive. It will only reduce casualties for the side with the AI, and will very likely embolden countries to fight more wars.
I get the point, I simply find the methods these types of thought experiments use hold no real psychological or philosophical value. I could see gaining insight/value by using the actions of creating, asking, or answering within the unrealistic bounds (or even my objection) of these fantastical hypotheticals designed to reveal basic human behaviour as some kind of party trick -- to be the real psychological/philosophical thought exercise though.
> Depending on where you live, a reasonable portion to the large majority of the population is now dead. The ones alive have, by definition, a strong bias towards individualism and noncooperation.
Anyone who picked blue gambled their own lives over nothing. There is nothing altruistic about pressing the blue button and especially nothing altruistic about trying to convince people to press the blue button. The altruistic thing is to convince everyone that they don't need to kill themselves by pressing the blue button.
Either everyone lives or you don't get to experience the mess that follows when red wins cause you're dead. Sounds like a strictly better option indeed.
The checks and balance are between the 3 branches of government. If congress wanted to stop the war, they could. If the supreme court wanted to hand the power to start wars back to congress they could.
Just because they don't, doesn't mean they aren't able. The real flat earth theory is thinking that unwritten rules and institutions were protected from a president that insists on pulling every lever of power at once, but that's separate from the checks and balances.
If one person in executive position is able to effectively override the nation's rules and institutions it sounds awfully close to saying there are no checks and balances.
reply