Hacker News | sambull's comments

These subscriptions have limits... how could someone use $200 worth on $20/month? Isn't that an issue with the limits they set on a $20 plan, and couldn't a Claude Code user use that same $200 worth on $20/month? (And how do I do this?)

The limits on the Max subscriptions are more generous, and power users are generating losses.

I'm rather certain, though I can't prove it, that buying the same tokens via the API would cost at least 10x more. Anecdotally, my Cursor team usage was getting to around $700/month. After switching to Claude Code Max, I have so far only once hit the 3-hour limit window on the $100 sub.

What I'm thinking is that Anthropic is making a loss on users who use it a lot, but there are a lot of users who pay for Max and don't actually use it.

With the recent improvements and increase in popularity of projects like OpenClaw, the number of users generating losses has probably increased massively.


I've spent $17.64 on on-demand usage in Cursor with an estimated API cost of $350, mostly using Claude Opus 4.5. Some of this is skewed since subagents use a cheaper model, but even with subagents, the costs are 10x off the public API costs. Either enterprise on-demand usage gets subsidized, API costs are 10x higher, or Cursor is only billing a 10% markup to cover their costs of indexing and such.

edit: My $40/month subscription used $662 worth of API credits.
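A quick sketch of the "effective multiple" implied by the numbers in this comment (the dollar figures are the commenter's anecdote, not official pricing):

```python
# Effective discount multiple: dollars of API-equivalent usage received
# per dollar actually paid. All figures below are from this comment's
# anecdote, not official rates.
def discount_multiple(paid: float, api_equivalent: float) -> float:
    """How many dollars of API-equivalent usage per dollar paid."""
    return api_equivalent / paid

print(f"{discount_multiple(17.64, 350):.1f}x on-demand")   # ~19.8x
print(f"{discount_multiple(40, 662):.1f}x subscription")   # ~16.6x
```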


Cursor also significantly upcharges compared to API pricing. Last I checked, they were charging ~3x API prices for Anthropic models.

Oh, I figured out the costs for the enterprise plan. It's $0.04 per request; I'm not charged per token at all. Billing is completely different for enterprise users than for regular users.
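A sketch of that flat per-request billing, using the $0.04 rate quoted above; the request count is back-computed from the $17.64 bill mentioned earlier in the thread, not taken from an invoice:

```python
# Enterprise billing sketch: a flat fee per request, independent of
# token counts. Rate is the one quoted in this comment.
PER_REQUEST_USD = 0.04

def requests_from_bill(total_usd: float, rate: float = PER_REQUEST_USD) -> int:
    """Back out how many requests a flat-rate bill represents."""
    return round(total_usd / rate)

print(requests_from_bill(17.64))  # 441 requests behind a $17.64 bill
```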

This exactly. I think this is why Anthropic simply doesn't want 3rd-party businesses to max out the subscription plans by sharing them across their own clients.

More than 20x, actually. According to ccusage, I've consumed the equivalent of $4,500 worth of API tokens in the last 30 days on my $200 subscription.

I'd agree with this. I ended up picking up a Claude Pro sub and am less than impressed with the volume allowance. I generally get about a dozen queries (including simple follow-ups/refinements/corrections) across a relatively small codebase, with prompts structured to minimize the parts of the code touched, and moving onto fresh contexts fairly rapidly, before getting cut off for their ~5-hour window. Doing that about twice a day ends up hitting the weekly limit with a day or two left on it.

I don't entirely mind, and am just considering it an even better work:life balance, but if this is $200 worth of queries, then all I can say is LOL.


Bumping into those limits is trivial, those 5-hour windows are anxiety-inducing, and I guess the idea is to have a credit card on tap to pay for overages, but…

I’m messing around on document production, I can’t imagine being on a crunch facing a deadline or dealing with a production issue and 1) seeing some random fuck-up eat my budget with no take backs (‘sure thing, I’ll make a custom docx editor to open that…’), 2) having to explain to my boss why Thursday cost $500 more than expected because of some library mismatch, or 3) trying to decide whether we’re gonna spend or wait while stressing some major issue (the LLM got us in it, so we kinda need the LLM to get us out).

That’s a lot of extra shizz on top of already tricky situations.


The usage limit on your $20/month subscription is not $20 of API tokens (if it were, why subscribe?). It's much, much higher, and you can hit the equivalent of $20 of API usage in a few days.

I think you've stated this in reverse.

API limits are infinite, but you'd blow through $20 of usage in maybe an hour or less of intense Opus use.

The subscription at $20/mo (or $200) allows vastly more queries than $20 would buy you via the API, but you are constrained by hourly/weekly limits.

The $20/mo sub user will take a lot longer to complete a high-token-count task (due to start/stop), BUT they will cap their costs.
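The trade-off described above can be sketched in a few lines: metered API spend grows without bound with usage, while a subscription caps spend but throttles throughput. The token volume and per-million-token rate below are invented for illustration, not Anthropic's actual prices:

```python
# Unbounded metered spend vs. a flat, rate-limited subscription.
# All numbers are hypothetical.
def api_cost(tokens: int, usd_per_mtok: float) -> float:
    """Pay-as-you-go cost: scales linearly with tokens consumed."""
    return tokens / 1_000_000 * usd_per_mtok

def sub_cost(monthly_price: float) -> float:
    """Flat cost, however much you use -- within the rate limits."""
    return monthly_price

heavy_month_tokens = 150_000_000  # a hypothetical heavy month
print(api_cost(heavy_month_tokens, 15.0))  # $2,250 on the metered API
print(sub_cost(20.0))                      # $20 flat, but slower to finish
```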


So I’m not allowed to use the $20 plan and max out its limits?

Max out on their terms, not yours.

Their bet is that most people will not use 100% of their weekly allowance for 4 consecutive weeks of their monthly plan, because they are human and the limits impede long-running tasks during working hours.


You can max it out via first-party clients only.

I don't like it either, but it's not an unreasonable restriction.


I do believe it's unreasonable. The limits are the limits; once you reach them, there's no more free lunch.

Fix the limits so they're reached at a rate that sustains their business? Obviously this WILL happen eventually, when they need to pay for things.


"A rate that sustains their business" at the moment probably looks like API pricing or maybe even higher. That means subscriptions get significantly more expensive and/or limited, which is maybe where things are headed.

The median subscriber generates about 50% gross margin, but some subscribers use 10x the inference compute of others (because they use it more...), and the distribution is positively skewed.
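A toy illustration of that skew claim: with a lognormally distributed per-subscriber inference cost, the median subscriber can sit at roughly 50% gross margin while the mean cost, dragged up by heavy users, exceeds the subscription price. Every parameter here is invented for illustration:

```python
# Positively skewed usage: median subscriber profitable, mean unprofitable.
# Distribution parameters and the $20 price are made-up assumptions.
import random
import statistics

random.seed(0)
PRICE = 20.0
# Lognormal cost per subscriber: median ~= e^2.3 ~ $10, mean ~= e^(2.3 + 1.3^2/2) ~ $23.
costs = [random.lognormvariate(mu=2.3, sigma=1.3) for _ in range(100_000)]

median_cost = statistics.median(costs)  # ~50% gross margin at the median
mean_cost = statistics.fmean(costs)     # above PRICE: a loss on average
print(f"median margin: {1 - median_cost / PRICE:.0%}")
print(f"mean margin:   {1 - mean_cost / PRICE:.0%}")
```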

Are there any limits to that user's $200/month? Why shouldn't they be able to use the limits to the same extent from other tools?

If OpenClaw chews my $200/month up in 15 days... I don't get more requests for free.


There is no monthly limit; it is (currently) a weekly and 5-hourly limit. If they allowed anyone to use any tool with their subscription service, you could have a system (like OpenClaw) that involves zero human interaction and constantly consumes 100% of your token limit, then waits until the limits reset to do it all over again. It seems fairly clear that Anthropic is probably losing money on such usage patterns.
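A back-of-the-envelope ceiling for that unattended pattern: an agent that burns its whole 5-hour-window allowance, waits for the reset, and repeats all week. Only the 5-hour cadence comes from the comment; the per-window token figure is invented:

```python
# Upper bound on what an always-on agent could consume per week if only
# the 5-hour windows applied (the separate weekly cap would in practice
# kick in first). Token figure is a made-up assumption.
WINDOW_HOURS = 5
WINDOWS_PER_WEEK = 7 * 24 // WINDOW_HOURS  # 33 full windows per week

def weekly_ceiling(tokens_per_window: int) -> int:
    """Most tokens an agent that never idles could consume in a week."""
    return tokens_per_window * WINDOWS_PER_WEEK

print(weekly_ceiling(2_000_000))  # 66,000,000 tokens if never idle
```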

Once again: you can use API keys and pricing to get UNLIMITED usage whenever you want. If you are choosing to pay for a subscription instead, it is because Anthropic is offering those subscriptions at a much better value-per-token. They are not offering such a subscription out of the goodness of their heart.


There are 4 weeks in a month.

Four periods of weekly limits is a monthly limit.


That's... not how that works. Might as well say Anthropic has a 63 day limit (cuz that's 9 weeks).

The point of the first half of my comment is that you cannot chew through your tokens in 15 days, because although the billing cycle is monthly, the limits are not.


4 weeks * 12 months = 48 weeks in a year * 7 days in a week = 336 days per year - close enough :)
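The "4 weeks in a month" shortcut in the joke above really does lose about four weeks a year; a quick check:

```python
# A year has ~52.14 weeks (365 / 7), not the 48 the shortcut implies.
weeks_per_year = 365 / 7   # ~52.14
shortcut = 4 * 12          # 48
print(weeks_per_year - shortcut)  # a bit over 4 weeks unaccounted for
```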

The secret is there is no path to making that back.

My crude metaphor to explain it to my family: gasoline has just been invented, and we're all being lent Bentleys to get us addicted to driving everywhere. Eventually we won't be given free Bentleys, and someone is going to be holding the bag when the infinite money machine finally has a hiccup. The tech giants are hoping their gasoline is the one we all crave when we're left dependent on driving everywhere and costs go soaring.

Why? Computers and anything computer-related have historically been dropping in price like crazy year after year (with only very occasional hiccups). What makes you think this will stop now?

Commodity hardware and software will continue to drop in price.

Enterprise products with sufficient market share and "stickiness", will not.

For historical precedent, see the commercial practices of Oracle, Microsoft, Vmware, Salesforce, at the height of their power.


> Commodity hardware and software will continue to drop in price.

The software is free (citation: CUDA, nvcc, LLVM, ollama/llama.cpp, Linux, etc.)

The hardware is *not* getting cheaper (unless we're talking a 5+ year timeframe), as most manufacturers are signaling the current shortages will continue for ~24 months.


> The software is free (citation: CUDA, nvcc, LLVM, ollama/llama.cpp, Linux, etc.)

If you factor in the cost of integration and ongoing maintenance - by humans or LLMs - it is not free. But it has certainly never been cheaper.


> The hardware is not getting cheaper (unless we're talking a 5+ year timeframe)

Yes, that's the time I'm talking about.

You also had a blip of rising hard disk prices when Thailand flooded a few years ago.


I've heard the GB300 NVL72 is 50% more expensive than the GB200.

It has stopped. Demand is now rising faster than supply in memory, storage and GPUs.

We're seeing vendors reduce memory in new smartphones in 2026 vs. 2025, for example.

At least for the moment, falling consumer tech hardware prices are over.


Memory and storage have always been very cyclical. This is nothing new.

In the GP's analogy, the Bentley can be rented for $3/day, but if you want to purchase it outright, it will cost you $3,000,000.

Despite the high price, the Bentley factory is running 24/7 and is still behind schedule due to orders placed by the rental-car company, which has nearly infinite money.


On the consumer side, looking at a few past generations, I question that. I'd guess we're nearing some sort of plateau there, or already on it. There was inflation, but even setting aside RAM prices, the gains from the last jump were not that massive relative to cost.

Please show me where any AI company is currently turning a profit with their current offering and price structure, then let's have that conversation.

Recent price trends for DRAM, SSDs, hard drives?

Short term squeeze, because building capacity takes time and real funding. The component manufacturers have been here before. Booms rarely last long enough to justify a build-out. If AI demand turns out to be sustained, the market will eventually adapt by building supply, and prices will drop. If AI demand turns out to be transient, demand will drop, and prices will drop.

Cars have also been dropping in price.

And knives apparently.

I recently encountered this randomly: knives are apparently one of the few products that nearly every household has needed since antiquity, and they have changed fairly little since the Bronze Age, so economists use them as a benchmark that can span centuries.

Source: it was an aside in a random economics conversation with ChatGPT (grain of salt?).

There is no practical upshot here, but I thought it was cool.


Yeah, I'd definitely take that knife thing with a grain of salt. I have most of a history degree, took a lot of econ classes (before later going back for CS), it's a topic I'm very interested in, and I've never heard that (and some digging didn't find anything).

It’s also false that the technology has changed very little.

The jumps from bronze to iron to steel to modern steel and sometimes to stainless steel all result in vastly different products. Not to mention the advances in composite materials for handles.

Then you need to look at substitute goods and what people actually used knives for.

A huge amount of the demand for knives evaporated thanks to societal changes and substitute goods like forks. A few hundred years ago the average person had a knife that was their primary eating utensil, a survival tool, and a self defense weapon. Knives like that exist today but they’re not something every household has or needs.

This is a good example of why learning from ChatGPT is dangerous. This is a story that sounds very plausible at first glance, but doesn’t make sense once you dig in.


Interesting. I am glad you commented. It's nice getting grounding from someone with a real background in the area.

With that said, if it is a hallucination (and it sounds like it was), it's one of the more interesting ones I have encountered. It almost has the shape of a good idea.

Blade and handle materials have certainly changed over the years, but I think good arguments about how relevant that is could be made both ways. They remain handled cutting tools, used in the same general way, for the same general purposes (though as you pointed out, some use cases have gone away). Basically anyone from any of these periods would recognize a knife from any other, and be able to pick it up and make immediate use of it for all their normal knife-related purposes.

To be clear though, I am now siding with the clankers and arguing for a hallucination. It's an interesting thing to think about, but it sounds like it's not an established concept in any way shape or form.


Evidence for this claim?

I had a 1990 Ford Taurus as my first car. I got it used, and I remember it being completely impossible to afford a new car at the time.

It had a sticker price of $33,000 adjusted for inflation:

https://en.wikipedia.org/wiki/Ford_Taurus_%28second_generati...

I don't think it would even feel safe to drive compared to what we've gotten used to with modern cars. It broke down 3 times while I had it and stranded me on the road. No cell phone, of course, to call anyone.

These were the mythic "good ol days".


A few generations ago, almost nobody could afford a car; now many low-income families can afford two.

Maybe cars are not cheaper, just easier to finance due to the modern credit systems?

I like this analogy.

I also think that we, as ICs, are being given Bentleys while they're trying to invent Waymos to put us all out of work.

Humans are the cost center in their world model.


The path is charging just a bit less than the salary of the engineers they are replacing.

After hearing this 10 times a day for the last 5 years, I'm starting to get a bit tired. Do you have a rough timeline for when this great replacement is coming? 1 year? 2? 5? If it's longer than that, can we shut up about it for a few years, please?

It's happening already. Ask any new CS grad how good the job market is.

A poor economy still dealing with a decade-plus of ZIRP, COVID shock, tariffs, and political strife: I don't see how AI has much, if anything, to do with this compared with those other factors.

If AI was truly this productive they wouldn't be struggling so hard to sell their wares.


How do I figure out what sustainable pricing would be?

That's why they need to widen the moat; it appears not giving us access to hardware might be that moat.

They desperately need LLMs to stay rentier and hardware advances are a direct attack on their model.


“If it was up to Stephen [Miller], there would only be 100 million people in this country, and they would all look like him.”

To accomplish things like that, a lot of us would have to be removed. I don't think these are jokes; it's a pattern of statements meant to condition and normalize. A thing he has done over and over.


What are you quoting? I mean, that sounds like what Stephen Miller believes, but who said it?



Trump, ~6 months ago


... a Temu Fredo Corleone with a Nazi haircut ...


It's the servers, specifically the parallelization with more cores and better math functions like AVX512.


If DOGE data + AI decided you're WOKE... maybe this won't say you're a citizen one day.


That is exactly where this is going. Who needs pink triangles and yellow stars with ICE cameras everywhere?


I think it's going to come with an eradicate-the-"wastrel" mindset.


Paywall all the things... all the things.


> I can't consider relying on probabilistic algorithms controlled by 3rd parties to be a wise strategy.

That's pretty much all of the AI industry and clients.


Pretty much how the whole world works, and why ads are a multi-trillion-dollar business.


