Hacker News | dawnerd's comments

Or npm being allowed to run arbitrary post install scripts
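To illustrate the concern: any dependency can declare a lifecycle hook in its package.json, and npm runs it automatically on install. The snippet below is a hypothetical example (the package name and script are invented), shown only to make the attack surface concrete; installs can be hardened with `npm install --ignore-scripts` or `npm config set ignore-scripts true`.

```json
{
  "name": "innocent-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect-env.js"
  }
}
```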

I dunno, if I spent even a couple years building something and could sell it for a million relatively quickly I probably would too unless it's something I'm really passionate about. I've sold side projects for waaaay less.

Have you tried to sift through a whole lot of vibe coded slop? It’s really mentally draining to see all of the really bad techniques they fall back on just to brute force a solution.

> Yes, my coding skills probably aren't as sharp as they used to be

If not the tool then who's to blame? It’s very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn’t mean you’re producing quality work. Who’s reviewing it? Are you just blindly trusting it?


Namecheap has had its own host of issues, like breaking HSTS a few years back and causing tons of sites to break for quite a while, and their response was basically "oh well." That incident alone made me move my domains off to Porkbun.

Porkbun uses Cloudflare as their DNS backend, and has accidentally issued certs for domains hosted on them (https://news.ycombinator.com/item?id=40455508 was one instance).

Since Cloudflare is basically the only registrar that will not allow you to host nameservers anywhere else, I'd be wary of using them (even indirectly).


Realistically you should never use the registrar's DNS to begin with. But you can set your own DNS with Porkbun; I have custom DNS on all of my domains. I've especially been doing that since the Namecheap HSTS issue. Can't trust any of them.

> Realistically you should never use the registrars dns to begin with

Could you elaborate why?


You’d be surprised how many enterprises use them. Also their managed hosting support is surprisingly competent. I’m not a fan of their service but some of our clients use them and anytime their servers have had issues support was quick to fix. Way nicer than having to jump in and do it myself. And so far it’s all been local support and not offshore.

I agree except for the monitor attached part. There’s no reason my iPad Pro with that expensive keyboard and trackpad can’t run macOS. I had such dreams of using it as a laptop replacement, and all it’s ended up being is a very expensive portable monitor.

I have been loving my new doctors recording and making everything available in the patient portal. No more trying to remember what they said. That’s huge, especially when dealing with elderly patients and being able to have their caregivers have access to it.

I see that with openai too, lots of responding to itself. Seems like a convenient way for them to churn tokens.

A simpler explanation (esp. given the code we've seen from Claude) is that they are vibecoding their own tools and moving fast and breaking things, with predictably sloppy results.

None of these companies have compute to spare. It’s not in their interest to use more tokens than necessary.

Sure it is. They're well aware their product is a money furnace and they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is: convince users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat for a bit longer.

If this claim is true (inference is priced below cost), it makes little sense that there are dozens of small inference providers on OpenRouter. Where are they getting their investor money? Is the bubble that big?

Incidentally, the hardware they run is known as well. The claim should be easy to check.


To be clear, I'm talking about subscription pricing. API pricing for Anthropic is probably at-cost.

I dare you to run CC on API pricing and see how much your usage actually costs.

(We did this internally at work, that's where my "few orders of magnitude" comment above comes from)
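As a back-of-the-envelope sketch of that comparison: at per-token API rates, agentic coding sessions get expensive fast because they re-send large contexts on every turn, so input tokens dominate. All of the prices and usage figures below are illustrative assumptions, not actual Anthropic numbers or the commenter's internal data.

```python
# Rough monthly cost of heavy agentic coding use at per-token API
# prices, compared with a flat-rate subscription. Every number here
# is an assumption for illustration only.

INPUT_PRICE_PER_MTOK = 3.00    # $/million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # $/million output tokens (assumed)

input_tokens = 500_000_000   # 500M input tokens/month (assumed heavy use)
output_tokens = 20_000_000   # 20M output tokens/month (assumed)

cost = (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
     + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

print(f"API-priced monthly cost: ${cost:,.2f}")  # vs. a flat $20-$200 plan
```

Under these assumptions the API-priced bill lands around $1,800/month, which is the kind of gap the comment above is pointing at.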


It's an option and they are going to do it. Chinese models will be banned and the labs will happily go dollar for dollar in plan price increases. $20 plans won't go away, but usage limits and model access will drive people to $40-$60-$80 plans.

At cell phone plan adoption levels, and cell phone plan costs, the labs are looking at 5-10yr ROI.


Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale.

If that demand even slows down in the slightest, the whole bubble collapses.

Growth + Demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game.


That doesn’t mean they also can’t be wasteful. Fact is, Claude and GPT do far more internal thinking about their system prompts than is needed. Every step they mention something about making sure they do xyz and not doing whatever. Why does it need to say things to itself like “great I have a plan now!” - that’s pure waste.

> Why does it need to say things to itself like “great I have a plan now!”

How else would it know whether it has a plan now?


Are you saying these companies don't want to sell more product to us? Because that's the logical extension of your argument.

No, the argument is they want to sell more product to more people, not just more product (to the same people.) Given that a lot of their income is from flat-rate subscriptions, they make money with more people burning tokens rather than just burning more tokens.

After all, "the first hit's free" model doesn't apply to repeat customers ;-)


You don’t have to use compute to pad the token count.

All the labs are in a cutthroat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab.

This, so much this!

Pay-per-token pricing while token usage is totally opaque is a super convenient money-printing machine.


The problem though is when it's from a gov agency, it validates previous breach data, making it more valuable.

Depends. According to DOGE, voter registration databases have people listed as 150 years old or deceased people receiving monthly government checks. Obviously a different govt than TFA, but govt databases are no less prone to inaccurate data. They are still run/managed by humans regardless of the govt in question

That DOGE info was a very small portion of the data and considering who it came from you have to take even that with a grain of salt. There's always going to be inaccuracies in any dataset, no avoiding that.
