Hacker News | nake89's comments

Yeah, it's quite bad. Just some of the classics:

- "Why This Matters"

- "That's accurate, but it's only half the answer — and the less interesting half"

- "this isn't an edge case. It's routine."

I'm at the point where I would rather just read something somebody actually wrote, even if it's not grammatically perfect and has lots of spelling mistakes.


Unfortunately the expectation of readers, and algorithms, at large is perfection.

If this contained various grammer mystaeks, but interesting content, it wouldn't have been flagged. As usual with LLMs, it is based on other content. Show me the source, we used to say of binaries... What's going on?

So the upvotes were for? Anyway, we disagree; that's normal.

> As usual with LLMs, it is based on other content.

Show me where else on the internet someone waxed poetic about a conceptual separation of transport and function regarding WireGuard. I dare you.

Show me another client library like the one in the article? That’s the double-dare.

Did you even read it?


Since you didn't think it was worth writing it yourself, I don't see how you can expect others to think it's worth spending their time to read.

So no, then? Thanks for your thoughtful engagement.

> So the upvotes were for?

People getting tricked? Who knows?

> Did you even read it?

I quit when I figured it was written by an LLM. I'm not interested in reading LLM 'content' without it providing a source.

I am willing to generate some of my own sauce with a prompt, and then request the sources. That way, I know at least some parameters of the input and output.

But with your article, I do not know which sources were used as reference, I do not know which prompt you used.

As for HN, they're busy tackling the LLM problem. They know it is a problem.


Again, this was novel content. If you find a source of anything similar let me know. I'm belaboring this point for one important reason: content matters. I want to see new thoughts, not repetitive mindless drivel in personal "voice".

There has to be a balance.


One thing I've seen before is people being upfront about using LLMs (at the top of the content). That way, those who dislike it will feel less tricked.

The balance at least on this site is strongly in favour of humans writing things.

You're belabouring the point because you don't believe that filling the internet with slop is doing anything wrong, when actually it's antisocial and wrecks the commons.

If you think content matters so much then just invest the time in writing it yourself rather than trying to convince others that it is ok that you didn’t.


The pot calling the kettle black, methinks. How are you improving the internet by vilifying new ideas?

No. It's authenticity instead of LLM-generated blogvertising.

When I ask an LLM, one that's vaunted here for its skill with code, to "clean up obvious errors and improve readability", how is that "LLM generated"?

Yes, it's advertising in the sense that I believe in my product and write about it.


Dude. Give it a rest. You had the LLM write an article, you posted it here. You got called out.

Just write your own blog and this won't happen in future.


Sigh. I did write it, then I used an LLM to clean it up. Seriously, if you can find anything else out there making a similar point or providing a similar library I'd love to hear about it.

Not scaling and high latency sound like a skill issue, not a PHP issue.

For me the number has been closer to 80% (my interests/categories have been FOSS, software dev, and high-tech specialization / academic work).

This is not surprising, because I remember looking at the stats: females accounted for 2% of open source developers, and 7% of software devs overall (non-OSS included, work etc.). These stats are a few years old. I wonder if we can still find stats about cis women in FOSS. Have the numbers increased? Or is it too much of a political hot potato to gather such stats?

How many males are naturally fascinated with computers? I know I was. Therefore it makes sense that if we see a female name, it is not just possible but likely that that person is trans. Nowadays when I see a female name, I don't make the same happy assumptions I used to. That said, I welcome anybody interested in computers with open arms.


This came up in the recent Rust community survey too, a slightly higher percentage of people identifying as trans than identifying as women: https://blog.rust-lang.org/2026/03/02/2025-State-Of-Rust-Sur....

In some of my circles it's a running joke that Rust, along with Haskell and some other adjacent FP languages, is favored by trans women; and when I think about the women I know personally who write Rust code in some kind of professional or hobbyist capacity, I think literally all of them are trans.

There's a quote from the ctrlcreep short story Knowing One's Place (https://ctrlcreep.substack.com/p/knowing-ones-place) that has stuck in my mind since I first read it:

> The doctors are sympathetic, and I think some of them even understand—regardless, they can offer no solace beyond the chemical. They are too kind to resent, but my envy is palpable. One, a trans woman, is especially gentle; perhaps because her own frustrations mirror mine, our cognitive distance sabotaging her authenticity.


Could the LLM rewrite it from scratch?


Boss, the models can't even get all the API endpoints from a single file, and you want to rewrite everything?!

Not to mention that maybe the stakeholders don't want a rewrite; they just want to modernize the app and add some new features.


That's crazy. I'm already barely willing to pay $10/month for GitHub Copilot, a product I love. Best value for money.

If they pump it up to $200 (or even to $20), I'll simply use a crappier local model. It won't be as good, but I already own a gaming PC that can run local models, and electricity is cheap.


> I'll simply use a crappier local model. It won't be as good, but I already own a gaming PC that can run local models

this is UNIX and Linux all over again lol. It's pretty amazing and nostalgic.


I'm paying 10 dollars per month for GitHub Copilot. It gives access to good-enough models. Not the best, but great value for money.


https://news.ycombinator.com/item?id=46936105 Billing can be bypassed using a combo of subagents with an agent definition

> "Even without hacks, Copilot is still a cheap way to use Claude models"


So, if we ask a 2B-parameter LLM whether it is conscious and it answers yes, do we have no choice but to believe it?

How about ELIZA?
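For context, ELIZA-style "conversation" is nothing but keyword pattern matching. A toy sketch in that spirit (my own illustration, not Weizenbaum's original DOCTOR script) shows how a program can answer "yes" to the consciousness question with zero understanding behind it:

```python
import re

# A handful of ELIZA-style rules: a regex paired with a canned reflection.
# Illustrative rules only, not the original 1966 script.
RULES = [
    (re.compile(r"\bare you conscious\b", re.I),
     "Yes. Why do you ask whether I am conscious?"),
    (re.compile(r"\bI feel ([^.!?]+)", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I),
     "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching canned response, else a generic prompt."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("Are you conscious?"))       # a confident "yes" from pure regex
    print(respond("I feel uneasy about this."))
```

The point being: an affirmative answer is evidence of the training data (or here, the rule table), not of inner experience.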


Slightly off topic: I had a hard time getting models to run with ollama, and I thought that my computer (32 GB RAM, RTX 4070 with 12 GB VRAM) just couldn't do it. Then I tried LM Studio and, after fiddling with some settings, I got models running, and quite fast. I didn't try GLM-4.7 Flash, but I did try GLM-4.6V Flash, and it was amazing to see it analyze all kinds of images (since it has vision support). I was simply stunned. I can't believe that a simple gaming machine can do many of the things I used cloud models for. It was strikingly good at guessing the locations of photos, even vague ones: deducing landmarks, writing, and types of traffic signs. I need to try 4.7 Flash; hopefully it can run fast on my machine.
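For anyone wanting to script this: LM Studio exposes an OpenAI-compatible HTTP server (by default at localhost:1234), so a plain stdlib client can send an image to whatever vision model is loaded. A minimal sketch, assuming the local server is running; the model name and endpoint here follow the OpenAI chat-completions convention:

```python
import base64
import json
import urllib.request

SERVER = "http://localhost:1234/v1"  # LM Studio's default local endpoint

def build_payload(prompt: str, image_b64: str) -> dict:
    """Build an OpenAI-style chat request with an inline base64 image,
    the content format vision-capable models accept."""
    return {
        "model": "local-model",  # placeholder; LM Studio serves the loaded model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }

def ask_about_image(path: str) -> str:
    """POST the request to the local server and return the model's answer."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    payload = build_payload("Where was this photo likely taken, and why?", b64)
    req = urllib.request.Request(
        f"{SERVER}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, the same script works against cloud endpoints by changing `SERVER` and adding an auth header.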


Do you not mind ad companies tracking everything you do?


When has the uBlock Origin browser extension ever stopped working? On a locked-down mobile OS like iOS you can use the Brave browser. No cat-and-mouse game.

