Hacker News | spyckie2's comments

Take away the hype and OpenAI / Anthropic are dousing themselves in money and lighting themselves on fire to see who can make the bigger bonfire...

It is officially the 2010 Google era at Anthropic (the era where Google released tons of new products and spread themselves too thin).

Anyone remember Google's social media platform??? Google Plus?

This is a good era to be in! It's the era of product experimentation.

As long as you realize that 90% of these products will not be supported long term if they don't contribute to bottom-line revenue, just appreciate it for what it is: a bunch of smart people trying to create useful products.

Just don't be surprised if Anthropic goes the Google route: shutting down the majority of products that are too small or not successful enough to impact their revenue.


You mean you aren't still using Google Duo and Allo? Google Reader? Playing games on your Stadia? I'd be worried about really locking yourself into a specific Anthropic product at this point, other than Claude Code.


I never recovered from Inbox being killed.


I still fondly remember playing Cyberpunk 2077 on release day with no download time. What the future could have been. It probably would have become economically infeasible regardless, though, with GPU prices driven up by AI.


Every Anthropic release uses Claude models.

Not every Google product release used Google search. Some of them were completely outside of Google's domain.


> It's been funny watching my own attitude to Anthropic change, from being an enthusiastic Claude user to pure frustration.

You were enthusiastic because it was a great product at an unsustainable price.

It's clear that Anthropic is now throttling Claude, because giving access to their full model is too expensive at the $20/m price point that consumers have settled on as what they want to pay.

I wrote a more in depth analysis here, there's probably too much to meaningfully summarize in a comment: https://sustainableviews.substack.com/p/the-era-of-models-is...


Off topic, but I really like the writing style on your blog. Do you have any advice for improving my own? In an older comment[1], you mentioned the craft of sharpening an idea to a very fine, meaningful, well-written point. Are there any books, or resources you’d recommend for honing that craft? Thanks in advance.

[1] https://news.ycombinator.com/item?id=44082994


The thing that inspires my writing is that the best sentences are self evident. Meaning you can declare them without evidence and they feel intuitively right to most people. They resonate, either matching the reader's lived experience or being the inevitable conclusion of a line of thinking.

Making a sentence like that requires deeply understanding a problem space to the point where these sentences emerge, rather than any "craft" of writing.

So the craft is thinking through a topic, usually by writing about it, and then deleting everything you've written because you arrived at the self evident position, and then writing from the vantage point of that self evident statement.

I feel that writing is a personal craft and you must dig it out of yourself through the practice of it, rather than learn it from others. The usage of AI as a resource makes this much clearer to me. You must be confident in your own writing not because it is following best practices or techniques of others but because it is the best version of your own voice at the time of being written.


Curious why you think that? Stuff like

> Yes, there is a relative scale level...

> Yes, having the smartest model will...

> yes Chinese AI companies have ...

yes yes yes, I didn't say anything, why write in a way that insinuates that I was thinking that?

I mean it doesn't come off as AI slop, so that's yay in 2026. But why do you think it is so good?


Haha, it is poorly written; it's one of my pieces with the fewest drafts. I just wrote it and clicked submit to get the thoughts out of my head.

I think he is referring to the art of refining an idea, though, and I do have something to say on that in his comment.


I agree with what you have written, which is why I would never pay a subscription to an external AI provider.

I prefer to run inference on my own HW, with a harness that I control, so I can choose myself what compromise between speed and the quality of the results is appropriate for my needs.

When I have complete control, resulting in predictable performance, I can work more efficiently, even with slower HW and with somewhat inferior models, than when I am at the mercy of an external provider.


What’s your setup?


For now, the most suitable computer that I have for running LLMs is an Epyc server with 128 GB DRAM and 2 AMD GPUs with 16 GB of HBM memory each.

I have a few other computers with 64 GB DRAM each, with NVIDIA, Intel or AMD GPUs. Fortunately, all that memory was bought long ago, because today I could not afford to buy extra memory.

However, just last week I started working on modifying llama.cpp to allow optimized execution with weights stored on SSDs, e.g. using a couple of PCIe 5.0 SSDs, in order to run bigger models than can fit inside 128 GB, which is the limit of what I have tested until now.

By coincidence, this week there have been a few threads on HN reporting similar work on running big models locally with weights stored on SSDs, so I believe this will become more common in the near future.

The speeds previously reported for running from SSDs range from one token every few seconds to a few tokens per second. While such speeds would be low for a chat application, they can be adequate for a coding assistant, if the improved code that is generated compensates for the lower speed.
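As a rough sanity check on those speeds: token rate from SSD is bounded by how fast the active weights can be streamed per token. A minimal back-of-envelope sketch, where the model sizes and the ~12 GB/s per-drive bandwidth are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope bound on SSD-limited token throughput.
# All figures below are illustrative assumptions, not measurements.

def ssd_tokens_per_second(weights_read_gb_per_token: float,
                          ssd_bandwidth_gb_s: float,
                          num_ssds: int = 1) -> float:
    """Upper bound on tokens/s when every token must stream the
    active weights from SSD (ignores compute, caching, and RAM)."""
    total_bandwidth_gb_s = ssd_bandwidth_gb_s * num_ssds
    return total_bandwidth_gb_s / weights_read_gb_per_token

# Dense model reading ~200 GB of weights per token,
# two PCIe 5.0 SSDs at ~12 GB/s sustained each:
print(ssd_tokens_per_second(200, 12, 2))  # 0.12, i.e. roughly one token per 8 s

# Sparse MoE touching only ~20 GB of active experts per token:
print(ssd_tokens_per_second(20, 12, 2))   # 1.2 tokens/s
```

Under these assumed numbers the bound lands in the same "one token per few seconds to a few tokens per second" range reported above, and it also shows why sparse MoE models are the more attractive target for SSD streaming.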


Thank you for that, it's very interesting. I keep wanting to find time to try out a local-only setup with an NVIDIA 4090 and 64 GB of RAM. It seems like it may be time to try it out.


I used the $60/mo subscription (and I bet most developers get access to AI agents via their company), and there was no difference. They should have reduced the rate limits, or offered a new model, anything except silently reducing the quality of their flagship product to cut costs.

The cost of switching is too low for them to be able to get away with the standard enshittification playbook. It takes all of 5 minutes to get a Codex subscription and it works almost exactly the same, down to using the same commands for most actions.


Thank goodness for capitalism for providing multiple competitors to multibillion dollar companies


My bad — I had Max, so more than $20. I can’t edit the comment any more. Can’t keep track of the names. I wonder when ‘pro’ started to mean ‘lowest tier’.

But your article is interesting. You think some of the degradation is because when I think I'm using Opus they're giving me Sonnet invisibly?


Hard to say, but the fact is the intelligence was there and now it's not.

Maybe they are serving Sonnet, or a distilled Opus, or Opus with a smaller context; not quite sure, but intelligence costs compute, so less intelligence means cheaper compute.


At my job and for personal projects I pay per token with Claude, and I've had no problems at all with it. No slowdowns, no "throttling", nothing.

I'm honestly surprised how many people have subscriptions and are expecting Anthropic to eat the cost lol


So instead of breaking shit they should have just increased their prices.


dang why did I click, they are gorgeous...


I mean their ads business just broke $80b per quarter, not sure where this idea is coming from...


Google won't see its legacy ad revenue start to dent until products with built-in agents see mass adoption.

The writing is on the wall, though: orders of magnitude fewer people will be going to google.com or using interactive Google search in the next 5 years.


LLMs are pretty mediocre for a lot of money queries, like searching to buy shoes or looking at flights, because they are not up to date. So sure, you can use them as a wrapper on top of Google, but I assume a huge chunk of people will just go to Google for that, or use Google agents. Chrome will prove a very valuable asset here: the whole experience can become agentic, and Google is very well positioned to convert billions of users into its AI. Power of habit, plus Google will deliver a very high-quality experience at scale that only OpenAI can currently compete with.

I'm not saying their search / ads revenue is never gonna drop; it might. But it will be a slow process (as we can see, it's actually still freaking growing in the high tens), and Google is well positioned to recover the lost revenue with its AI offerings.


LLMs can execute searches? You can absolutely send ChatGPT to look for a cheap flight and it will do pretty well. And because I am the one paying for ChatGPT, rather than advertisers, I am the customer and not the product.


You may pay ChatGPT, but sooner or later you will become their product too. All the conversations you have had, or will have, will be turned into signals to match you with products from advertisers, maybe not directly in the conversation itself, but anywhere else. It's not a matter of if: given the pace things are going, and how financially pressured OpenAI is, it's only a matter of time before those conversations are turned into profit in some way or another. They basically have no choice financially.


> You can absolutely send ChatGPT to look for a cheap flight and it will do pretty well.

Sure, once they figure out how to count to three.


> Writing is on the wall that orders of magnitude fewer people will be going to [product] or using [product] in the next 5 years though.

counterpoint: which service or product is immune to this statement?


I think there is a pattern: it always gets nerfed in the few weeks before a new model launches, probably because they are throwing a bunch of compute at the new model.


Yeah, maybe, but at least let us know about it, or have dynamic limits? Nerfing breaks trust. Though I am not sure they actually nerf it intentionally; I haven't heard it from any credible source. I did experience it in my workflow, though.


You mean the US, right? Especially with the part 2?

I know this may sound like a shock because you are privileged but 7% yoy return on capital is NOT the norm for the rest of the world. Just look at any other index not called the S&P or the Dow. Look up US exceptionalism.

The US policy for retirement savings shackles the younger generation with a ticking time bomb. Forcing your own citizens to save money for themselves is a lot better than forcing your own citizens to pay for others. Which one is more morally cruel?

HK has a similar forced savings scheme, but the ROI is like 1 or 2% and the investment options are paltry.

Some perspective is necessary. Yes, it's not great, but compared to the rest of the world it's stellar.


> I know this may sound like a shock because you are privileged but 7% yoy return on capital is NOT the norm for the rest of the world. Just look at any other index not called the S&P or the Dow. Look up US exceptionalism.

I have sympathy for your general position, but this particular one is a bit silly: I live outside the US (in Singapore, in fact), and I can invest in US equity just fine.


If ROI is lower than inflation then what’s the point of saving? So you can have an even worse standard of living after you retire?

Forced investment in low ROI vehicles is just a tax by another name.


Well, would you have a better standard of living with $0 or $1000 when you retire?

Even if that $1000 used to be worth $10000, that $0 is still worth $0.


Anthropic was the first to spam reddit with fake users and posts, flooding and controlling their subreddit to be a giant sycophant.

They nuked the internet by themselves. Basically they are the willing and happy instigators of the dead internet as long as they profit from it.

They are by no means ethical, they are a for-profit company.


> Anthropic was the first to spam reddit with fake users and posts, flooding and controlling their subreddit to be a giant sycophant.

Is the Claude subreddit less authentic than the ChatGPT one?

I remember for a while the Claude subreddit was filled with people saying "I asked Claude if it was conscious and the answer was soooo fascinating you guys."

I think the ChatGPT one was filled with posts like "I had ChatGPT write my resume and now I'm rolling in cash!"

I found both subreddits unreadable.


I actually agree with you, but I have no idea how one can compete on this playing field. The second there are a couple of bad actors doing spam marketing, your hands are tied. You really can't win without playing dirty.

I really hate this, not justifying their behaviour, but have no clue how one can do without the other.


It's just the law of the jungle all over again. Might makes right. Outcomes over means.

Game-theory-wise, there is no solution except to declare (and enforce) spaces where leeching / degrading the environment is punished, and sharing, building, and giving back to the environment is rewarded.

Not financially, because it doesn't work that way, usually through social cred or mutual values.

But yeah, the internet can no longer be that space where people mutually agree to be nice to each other. Rather, utility extraction dominates (influencers, hype traders, social thought manipulators), and the rest of the world quietly leaves if they know what's good for them.

Lovely times, eh?


> the rest of the world quietly leaves if they know what's good for them.

The user base of TikTok, Instagram, etc. has increased YoY. People suck at making decisions for their own good, on average.


I'm pretty sure this might be a hot take, but I believe we need some sort of a Tech Police.

We have Road Police, Financial Police, Mail Police, Work Safety Police, Military Police...


All those you mentioned are somewhat physical and not that simple across borders. Practically speaking, you will never get universal laws across all nations; otherwise financial havens wouldn't exist either.


This is why you are not the finance guy.

My finance people care about the cents: an ROI of 7% is average, but at 8.5% you are a world-class asset of that inventory type. That's sometimes the difference of a few hundred k out of 20m, yet they would not take a deal if it is even slightly over their risk appetite.

The 3b external either matters a ton for fitting their risk models, or they are doing a favor to an outside party. Probably a bit of both.


Well, given that it is an equity sale, the split still feels like the prorated amount, so that Alphabet continues to own its percentage: not more, not less.

Obviously you're entitled to your view, but I don't think it's that kind of finance model right now - it's far too speculative and the upside too unknown to be adjusting for small amounts on risk models.


What's the point honestly.

Given the pace of current AI, in 2 months dark factories will hit peak hype, and in another 6 months their cost/benefit drawbacks will be fully identified; the wisdom of the crowds will have a relatively accurate understanding of their general usefulness, and the internet will move on to other things.

The next generation of AI coding will make dark factories legit thanks to its ability to architect decently. Then the generation after will make dark factories obsolete thanks to its ability to get it right the first time. That's about 8 months out for SOTA, and 14 months out for Sonnet/Flash/Pro users.

No need for them to come out of stealth; just imagine thousands of junior/mid engineers crammed into an office, given vague instructions to build an app and spit out code. Imagine a CCTV camera overlooking the hundreds of desks, then press fast forward at 100x speed.

That's literally what they built, because that's what's possible with Opus.


The funny thing is that the rest of the software industry is dying, except for the trillions of venture capital being invested into these AI coding whatevers. But given the slow death of software, once these AI coding whatevers are finished, there's going to be nothing of value left for them to code.

But I'm sure the investors will still come out just fine.

