Hacker Newsnew | past | comments | ask | show | jobs | submit | ajam1507's commentslogin

It very much depends on what kind of company you work for. You could never run a startup like this, for example.

Pretty wild to say that the company with one of the best models (arguably the best) is a bunch of idiots.

You seem to be implying that the company that employs the best chemists should therefore also make the best cakes. I don't see an obvious reason why this should hold true. I think it's fair to ridicule a bunch of chemists acting as master patissiers.

> Pretty wild to say that the company with one of the best models (arguably the best) is a bunch of idiots.

It would be pretty wild if they didn't considering all the money thrown at them!

You're looking at one of the largest investments business (as a collective) has ever made. They had better be one of the forerunners in the space :-/


And you think with all of this money they are employing idiots?

They're completely vibe-coding one of their flagship products. It's not unreasonable to consider that the people who took that decision are, indeed, idiots.

The people working on the models almost certainly aren't the same people writing the code for their harness.

Even idiots can succeed if you uncritically funnel them hundreds of billions of dollars.

You can't just burn money in a pit to get the best AI model out. Undoubtedly some of the smartest people in the world are working on frontier AI.

> I'm assuming they wouldn't do something like this unless the recent load issues (mostly driven by OpenClaw usage) were seen as an existential threat.

I think another possibility is that they are trying to shift the burden of OpenClaw to their competitors.


I think this makes sense. I don't understand what problem OpenClaw is solving or what the use case is other than just burning a shit ton of tokens.

That's all the industry.

Openclaw is an always on AI assistant that's plugged into a bunch of MCPs. You don't understand what kinds of problems that can help solve and cant envision any use cases for that?

From a conceptual perspective it sounds great. The problem is that OpenClaw isn't actually a solution to that problem for 2 reasons, user expectation and underlying security. The majority of people I've talked to who want an 'AI assistant' effectively are expecting a proper executive assistant, just in AI form. A proper executive assistant will remember every important bit you tell them, they won't need to be reminded of it later, and more importantly they come to me of their own volition when something comes up. All things OpenClaw does not solve. Further, using MCP as the underlying protocol means you have to implicitly trust every piece of data you connect to that AI, because otherwise it's way too easy for me to send you an email with hidden instructions just for your AI to read. I mean even the defaults for the OpenClaw install had basically opened everyone who installed it and didn't configure it in any way to any attacker. So while I agree with you that there are problems in this space that an AI agent 'could' solve, OpenClaw does not currently solve any of them, and in fact does the opposite, exposing you and all your information easily.

I think the important point in the parent comment is "Burning a shit ton of tokens". Openclaw was built fast and loose, making it use far too many tokens for trivial things. I'm confident the next Claw can and will be engineered to be at least 10x as token efficient and more reliable.

Ah I didn't realize they meant openclaw literally. By now openclaw is the generic term for these integrated agents it seems.

Do you have some examples?

Drafting email responses for work, organizing talking points for upcoming meetings based on email and doc context. Creating tickets for work tracking. Anything you can do with claude code and mcps pretty much.

None of those things require an always on token burner. I'm not trying to be rude, but do you think that's the only way to present relevant information to an LLM or something? It's literally the least efficient way to do it.

Seems silly to bash a company for using open source exactly in accordance with the license. If they expected to be compensated, they picked the wrong licensing terms.


And what do you think they could do with those things?


It shouldn't be the role of a company to hold their nose and work with the government, it should be the government's role to inspire confidence that what they are doing with the technology is ethical.

> Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casulties.

This is naive. It will only reduce casualties for the side with the AI, and will very likely embolden countries to fight more wars.


I think you've missed the point of a thought experiment entirely.


I get the point, I simply find the methods these types of thought experiments use hold no real psychological or philosophical value. I could see gaining insight/value by using the actions of creating, asking, or answering within the unrealistic bounds (or even my objection) of these fantastical hypotheticals designed to reveal basic human behaviour as some kind of party trick -- to be the real psychological/philosophical thought exercise though.


> Depending on where you live, a reasonable portion to the large majority of the population is now dead. The ones alive have, by definition, a strong bias towards individualism and noncooperation.

Anyone who picked blue gambled their own lives over nothing. There is nothing altruistic about pressing the blue button and especially nothing altruistic about trying to convince people to press the blue button. The altruistic thing is to convince everyone that they don't need to kill themselves by pressing the blue button.


> Blue is purely a win-win for me

Dying is a win for you?


Either everyone lives or you don't get to experience the mess that follows when red wins cause you're dead. Sounds like a strictly better option indeed.


The checks and balance are between the 3 branches of government. If congress wanted to stop the war, they could. If the supreme court wanted to hand the power to start wars back to congress they could.

Just because they don't, doesn't mean they aren't able. The real flat earth theory is thinking that unwritten rules and institutions were protected from a president that insists on pulling every lever of power at once, but that's separate from the checks and balances.


If one person in executive position is able to effectively override the nation's rules and institutions it sounds awfully close to saying there are no checks and balances.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: