That's the interesting question, right? Because if this unwinds during a period of external inflation (say, because of a big war and energy shortage), then even Bernanke would say helicopter money won't work.
Not that I'm some paragon of critical thinking exactly, but is there any sort of proof or evidence of Anthropic "silencing negativity"? Wouldn't surprise me, but I haven't seen anything conclusive about it either, so spreading it as fact is, ironically, FUD itself.
Cursor seemingly went out of their way to not mention that they were actually running Kimi K2.5 and essentially by that omission made it seem like they had made their own model. They added a note to a blog post about using it at some point and then when they wrote a new one they conveniently left it out again.
Because it dumbs everything down, makes the output worse and more expensive, removes personal agency, and is dehumanizing. Plus, does it actually prevent harm? Do we have evidence?
Finally, what is often missed is what if an actual good is decided harmful or something that is harmful is decided by AI company board XYZ to be “good”?
I think censorship is bad because of that danger. Quis custodiet ipsos custodes (who will watch the watchers).
Instead of throwing ourselves into that minefield of moral hazard, we should be lifting each other up to the tops of our ability and not infantilizing / secretly propagandizing each other.
There's enough evidence that Anthropic would be liable if they didn't make a reasonable effort to do something about it.
Look, I get where you're coming from, partially. I generally believe we should make an effort to maximize individual liberty. But in this case, we're talking about severe bodily harm and the death of young adults. We've spent the last decade dealing with the chaos and general unwellness brought to our societies. This isn't much different.
What are you giving up here where such sacrifices are worth it? Can you measure it? What's the utility?
There's room for models trained for non-consumer purposes, further age restriction, etc., but shit is moving so fast. If there are actual needs for a less censored model, those can be addressed.
> Finally, what is often missed is what if an actual good is decided harmful or something that is harmful is decided by AI company board XYZ to be “good”?
This is just standard product liability and consumer protection. Companies who do nothing to protect their consumers from known harms are liable. Are you saying you think that's somehow bad for society?
I want to give you realistic expectations: Unless you spend well over $10K on hardware, you will be disappointed, and will spend a lot of time getting there. For sophisticated coding tasks, at least. (For simple agentic work, you can get workable results with a 3090 or two, or even a couple 3060 12GBs for half the price. But they're pretty dumb, and it's a tease. Hobby territory, lots of dicking around.)
Do yourself a favor: Set up OpenCode and OpenRouter, and try all the models you want to try there.
Other than the top performers (e.g. GLM 5.1 or Kimi K2.5, where the required hardware is basically unaffordable for a single person), the open models are more trouble than they're worth IMO, at least for now (in terms of actually Getting Shit Done).
We need more voices like this to cut through the bullshit. It's fine that people want to tinker with local models, but there has been this narrative for too long that you can just buy more RAM, run some small-to-medium-sized model, and be productive that way. You just can't; a 35B will never perform at the level of a same-gen 500B+ model. It just won't, and you are basically working with GPT-4 (the very first one to launch) tier performance while everyone else is on GPT-5.4. If that's fine for you because you can stay local, cool, but that's the part that no one ever wants to say out loud, and it made me think I was just "doing it wrong" for so long on LM Studio and Ollama.
> We need more voices like this to cut through the bullshit.
Just because you can't figure out how to use the open models effectively doesn't mean they're bullshit. It just takes more skill and experience to use them :)
> We need more voices like this to cut through the bullshit.
Open models are not bullshit, they work fine for many cases and newer techniques like SSD offload make even 500B+ models accessible for simple uses (NOT real-time agentic coding!) on very limited hardware. Of course if you want the full-featured experience it's going to cost a lot.
There is absolutely a use case for open models... but anyone expecting to get anywhere near the GPT 5.x or Claude 4.x experience for more demanding tasks (read: anything beyond moderate-difficulty coding) will be sorely disappointed.
I love my little hobby aquarium though... It's pretty impressive what Qwen Coder Next and Qwen 3.5 122B can accomplish (in terms of general agentic use and basic coding tasks), considering that the models are freely available. (Also heard good things about Qwen 3.5 27B, but haven't used it much... yes, I am a Qwen fanboi.)
A reasonable conclusion, considering that money and power seem to have their own gravity, so people with more of both end up getting even more of both, and vice versa.
Can't blame someone who comes to such a conclusion about money and power.
Labeling power evil is not automatic, it's just an observation of the common case. Money-backed power almost never works for the forces of good, and the people who claim they're gonna be good almost always end up being evil once they're rich and powerful enough. See also: Google.
Google is the company that created a class-less, non-hierarchical internet. Everyone can get the same access to the same services regardless of wealth or personhood. Google is probably the most progressive company to ever exist, because money stops no one from being able to leverage Google's products. Born in the bush of the Congo or a high rise in Manhattan, you are granted the same Google account with the same services. The cost of entry is just to be a human, one of the most sacrosanct pillars of progressive ideology.
Yet here they are, often considered one of the most evil companies on Earth. That's the interesting quirk.
> Google is the company that created a class-less non-hierarchical internet.
Can you explain what you mean by this? I disagree but I don't understand how you think Google did this so I am very curious.
For my part, I started using the internet before Google, and I strongly hold the opinion that Google's greatest contribution to the internet was utterly destroying its peer to peer, free, open exchange model by being the largest proponent of centralizing and corporatizing the web.
The alternative was a telco, AOL-style internet with pay tiers for access to select websites. The free web of the 90's would remain, but would be about as culturally relevant as Linux.
Surely you have to recognize the inconsistency of saying that Google "corporatized" the web, while the vast majority of people using Google have never paid them anything. In fact, many don't even load their ads or trackers, and still maintain a gmail account.
If we honestly weigh the good things and evil things Google has done, I struggle very hard to counter "gave the third world a full suite of computer programs and access to endless video knowledge for free with nothing more than dumpy hardware", while the evil is "conspired with credit card companies to find out what you are buying".
This might come off like I am just glazing google. But the point I am trying to illuminate is that when there is big money at play, people knee-jerk associate it with evil, and throw all nuance out the window.
Besides, IRC still exists for you and anyone else to use. Totally google free.
No I actually do understand where your opinion comes from now and I partially agree. I had forgotten about how badly the ISPs wanted the internet to mirror Cable TV plans.
There are several subjects to go into here, and HN probably isn’t the best place for the amount of detail this discussion requires, but I will just note that the number of people blocking Google’s ads and trackers is negligible and has significantly shrunk in the mobile-first era.
The wave is shifting to other corporations now, but for a good while most of the internet was architected to give Google money. Remember SEO? An entire practice of web publishing centered around Google’s profit share. That hasn’t disappeared; it’s just evolved and transformed into more ingrained rent-seeking.
It’s a sane default to label power as evil in a society driven by greed, usury, and capital gain. Power tends to corrupt, particularly when the incentives driving its pursuit or sustenance undermine scruples or conscientiousness. It is difficult to see how power is not corrupting when it becomes an end in itself, rather than a means directed toward a worthy or noble purpose.
It's early days for Opus 4.7, but I will say this: Today, I had a conversation go well into the 200K token range (I think I got up to 275K before ending the session), and the model seemed surprisingly capable, all things considered.
Particularly when compared to Opus 4.6, which seems to veer into the dumb zone heavily around the 200k mark.
It could have just been a one-off, but I was overall pleased with the result.
I’m super envious. I can’t seem to do anything without half a million tokens. I had to create a slash command that I run at the start of every session so the darn thing actually reads its own memory; whatever the default is just doesn’t seem to do it. It’ll do things like start to spin up scripts it’s already written and stored in the code base unless I start every conversation with instructions to go read persistence and memory files. I also seem to have to actively remind it to go update those things at various parts of the conversation, even though it has instructions to self-update. All these things add up to a ton of work every session.
Something sounds very wrong with your setup or how you use it.
Is your CLAUDE.md barren?
Try moving memory files into the project:
(In your project's .claude/settings.local.json)
{
  ...
  "plansDirectory": "./plans/wip",
  "autoMemoryDirectory": "/Users/foo/project/.claude/memory"
}
(Memory path has to be absolute)
I did this because memory (and plans) should show up in git status so that they are more visible, but then I noticed the agent started reading/updating them more.
This does kind of smell like the wrong way to use it. Not trying to self-promote here, but the experiences you shared really made me think I was headed in the right direction with my prompting framework ("projex" - I once made a post about it).
I straight up skip all the memory features provided by harnesses or plugins. Most of my threads are just plan, execute, close. Each naturally produces a file - either a plan to execute, an execution log, or a post-work walkthrough - which also serves as memory and future reference.
Something seems wrong. A half-million tokens is almost five times larger than I allow even long-running conversations to get to. I've manually disabled the 1M context, so my limit is 200K, and I don't like it to get above 50%.
Is it... not aware of its current directory? Is its current directory not the root of your repo? Have you maybe disabled all tool use? I don't even know how I could get it to do what you're describing.
Maybe spend more time in /plan mode, so it uses tools and the Explore sub-agent to see what the current state of things is?
- Use the Plan mode, create a thorough plan, then hand it off to the next agent for execution.
- Start encapsulating these common actions into Skills (they can live globally, or in the project, per skill, as needed). Skills are basically like scripts for LLMs - package repeatable behavior into single commands.
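For concreteness, a skill is just a SKILL.md file with a bit of frontmatter that Claude loads on demand. A minimal sketch (the `update-memory` name, paths, and body here are hypothetical, just to show the shape):

```markdown
---
name: update-memory
description: Summarize this session's decisions and append them to the project memory file. Use when wrapping up a work session.
---

1. Read the existing memory file at .claude/memory/notes.md (create it if missing).
2. Summarize new decisions, new scripts, and open questions from this session.
3. Append the summary under a dated heading, then report what was added.
```

Drop it in `.claude/skills/update-memory/` for a project-level skill, and the agent can invoke it instead of you re-explaining the memory-update ritual every session.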
If I had to guess, I think you have probably overstuffed the context in hopes of moulding it, and gotten worse outcomes because of that. I keep the default context _extremely_ small (as small as possible) and rely on invoked slash commands for a lot of what might have been in a CLAUDE.md before.
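To illustrate: a custom slash command is just a markdown file in `.claude/commands/`, invoked by file name, so the instructions only enter the context when you call them. A hypothetical sketch (file name and paths are made up for the example), saved as `.claude/commands/orient.md` and invoked as `/orient`:

```markdown
Before doing anything else:

1. Read the files under .claude/memory/ and ./plans/wip/ for prior context.
2. List existing scripts in ./scripts/ so you do not rewrite ones that already exist.
3. Summarize the current state in a few bullets, then wait for instructions.
```

The point is that none of this sits in CLAUDE.md burning tokens on every turn; it costs context only in the sessions where you actually invoke it.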