Hacker News

Indeed. Here is a recent litmus test: https://news.ycombinator.com/item?id=47051852. How can we filter the lightweight stuff while still benefiting from posts like these?

(a bit more about this at https://news.ycombinator.com/item?id=47056384, with a reply from the OP)



One thing we did at reddit for a while was put posts from new people in "jail". They would show up in a special yellow box at the top of the home page for accounts that tended to be early upvoters of things that later became successful (our Nostradamuses, so to speak), and if a post got enough upvotes from that group it got out of jail and was placed on the regular /new page.

So maybe some sort of filter like that? Only show it to those kinds of accounts at first?

The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
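A minimal sketch of that jail mechanism, in case it's unclear. All names and thresholds here are hypothetical, not reddit's actual implementation:

```python
# Sketch of a "jail" queue for posts from new accounts.
# "Predictor" accounts are users who tended to upvote posts early
# that later became successful; the threshold is made up.

RELEASE_THRESHOLD = 5  # predictor upvotes needed to leave jail

class JailQueue:
    def __init__(self, predictor_ids):
        self.predictors = set(predictor_ids)
        self.votes = {}  # post_id -> set of predictor ids who upvoted

    def upvote(self, post_id, user_id):
        """Record an upvote; only predictor votes count toward release."""
        if user_id in self.predictors:
            self.votes.setdefault(post_id, set()).add(user_id)

    def released(self, post_id):
        """A post leaves jail once enough distinct predictors upvote it."""
        return len(self.votes.get(post_id, set())) >= RELEASE_THRESHOLD
```

The groupthink risk mentioned above maps directly to the size and diversity of the predictor set.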


Just sharing observations; it may help, it may not…

What I’m seeing is new or sleeper accounts that have been idle for over a decade, with low (<99) karma, getting into comment circles. Over the last couple of weeks I’ll see several top comments on articles with back-and-forth between other similar accounts… it’s got to the point that I habitually check a user before I even bother reading, and I have never hidden so many comments before getting to something substantive…

Like many here, I don’t wish to limit new users, but from my armchair perspective this does seem to be a pattern to be on the lookout for.


This is interesting. Can you link to some of these?

I've noticed this kind of behavior on Reddit but never on HN.


Maybe have a signup flow where you can skip the new-account restriction by placing a verification file on the website behind a currently trending link. Then the restriction is lifted temporarily for the thread linking to it?
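That verification step could look roughly like the domain-ownership checks other sites use. A sketch, with the token path and the injected `fetch` callable invented for illustration:

```python
import secrets
from urllib.parse import urlsplit

def issue_token():
    """Generate the token the new user must place on their site."""
    return secrets.token_urlsafe(16)

def verification_url(submitted_link, token):
    """Where we'd expect to find the token file, e.g.
    https://example.com/hn-verify-<token>.txt (path is hypothetical)."""
    host = urlsplit(submitted_link).netloc
    return f"https://{host}/hn-verify-{token}.txt"

def verify(submitted_link, token, fetch):
    """`fetch` is injected (a real flow would use HTTP) so this stays
    testable offline. The restriction lifts only on an exact match."""
    body = fetch(verification_url(submitted_link, token))
    return body is not None and body.strip() == token
```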


Not every post is from the website of the person who is the topic of it. It's common to have e.g. a blogpost about $thing and then a new account chimes in with "Hey, I authored $thing 10 years ago when I was working for $company, someone linked me this post. [some contributions to the topic]"


I have often heard that vote rigging is detectable on HN because the site software penalizes voting from accounts at the same IP address.

Rumor has it that there is also some kind of social-network metric detecting when socially adjacent accounts (or alts) are engaged in astroturfing, the practice where a small cabal tries to pass themselves off as a broader grassroots campaign.

Flip that around though and the same metrics might allow new accounts to be meaningfully vouched for by existing ones.


I think vote rigging detection might be based on the length of your session


Sorry, I need to ask the dumb question: Is that Show HN (AsteroidOS) post written by an LLM or not? Honestly, I cannot tell.

A few people in these comments seem wildly confident that it is written by an LLM. If anything, I hope it was written by a human as an elaborate troll to trigger these so-called immaculate LLM detectors.


Interesting litmus test, as the post isn't just green, it's riddled with LLM copyediting. Doesn't read as if originally composed by an LLM, so there's that.

Would seem to require some discernment to classify. Not all assistive use is slop.


Some litmus test. I am sooo tired of statements like "No x. No y. No z." and then optionally "Just Foo.".

Who aside from Fred fucking Durst writes like that?

Ugh... Clearly LLM generated. This is what the internet has become. 90% of posts are variations of tropes like these.


    > I am sooo tired of statements like "No x. No y. No z." and then optionally "Just Foo.".  Who aside from Fred fucking Durst writes like that?
I disagree. This is a classic humor template in popular magazines from the 1990s and 2000s. The New Yorker's "Talk of the Town" probably has/had this style frequently. Also, (Timothy) McSweeney's Quarterly Concern is basically an extended trope of exactly this type of writing from the 1990s and 2000s.


I mean I guess you're right - I didn't notice it, because the community reaction to the project was so positive.

> Not all assistive use is slop.

That's right, and the key is to discern which posts/projects are interesting.


The discussion about the LLM-assisted/written submission at the time, with replies by the author: https://news.ycombinator.com/item?id=47055300. The defence given was essentially "just reformatted it for better grammar".

It obviously says LLM to me on first read-through.

I suspect that:

a) fewer people are willing to expend a bit of energy to notice LLM usage, given how much of it there is ("we've lost" theory)

b) that people are losing the ability to detect LLM submissions. ("we're cooked" theory)

or c) that people don't care about the use of LLMs ("who cares" theory).

Personally I've been feeling less invested, because it seems as if most users don't care and even the main users of the site don't notice it.


Do you have any good links to guides on how to spot it? I would like to care, but it's hard to tell. And then what do we do when we spot it?


One guide that I hope is kept up to date: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing . Generally though it's a kind of pattern recognition, and some patterns seem visible to me.

I should clarify and revise my thoughts and initial comment. I do not think that being unable to detect it leads to a lack of care. I actually think that many things have passed me by, and in the future this will happen even more as LLMs improve ("we're cooked").

As to "what do we do when we spot it": you hit the nail on the head regarding what I was feeling as I wrote the comment. What do we actually do, what can we change, and should we attempt futile things?

And even in the example dang gave, the actual submission was very good. Is any amount of LLM use okay, and what's the acceptable level? I use LLMs at work, but I don't like writing readmes or blog posts with them. Others might prefer writing code at work by hand and dislike writing text, so they use LLMs for that. Maybe I should lower my expectations!


Or even train an LLM to catch LLMs. Like that old adage: use criminals to catch criminals.


You would need, say, a StackExchange-like crowdsourced moderation system whereby users with relatively high karma are randomly selected to check posts from new accounts, casting votes to reject or keep.
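The selection-and-vote step might be sketched like this (the karma threshold, panel size, and quorum are all made up for illustration):

```python
import random

KARMA_THRESHOLD = 500   # hypothetical cutoff for reviewers
PANEL_SIZE = 5
KEEP_QUORUM = 3         # "keep" votes needed to approve a post

def pick_reviewers(users, rng=random):
    """Randomly select a small panel of high-karma users.
    `users` maps user id -> karma."""
    eligible = [u for u, karma in users.items() if karma >= KARMA_THRESHOLD]
    return rng.sample(eligible, min(PANEL_SIZE, len(eligible)))

def decide(votes):
    """`votes` is a list of 'keep' / 'reject' strings from the panel."""
    return "keep" if votes.count("keep") >= KEEP_QUORUM else "reject"
```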


HN already has something like that -- high-karma accounts can flag comments/posts which are a poor fit for HN. It's just a blacklist, not a whitelist.


>How can we filter the lightweight stuff while still benefiting from posts like these?

Well, the simplest automated method would be to run the post and comment together through an LLM with a prompt that's roughly:

"Is this person claiming to be the author or co-creator of the work discussed in this submission?"

Only green accounts would be subject to it. I predict you'd have a very low false-positive and false-negative rate.
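Wiring that up might look like the following. The prompt wording comes from the comment above; the `ask_llm` callable is a placeholder for whatever model you'd actually call, not an existing API:

```python
AUTHORSHIP_PROMPT = (
    "Is this person claiming to be the author or co-creator of the work "
    "discussed in this submission? Answer YES or NO.\n\n"
    "Submission: {submission}\n\nComment: {comment}"
)

def claims_authorship(submission, comment, ask_llm):
    """`ask_llm` is injected so the logic is testable without a real model."""
    prompt = AUTHORSHIP_PROMPT.format(submission=submission, comment=comment)
    return ask_llm(prompt).strip().upper().startswith("YES")

def exempt_from_green_filter(account_is_green, submission, comment, ask_llm):
    """Non-green accounts pass through; green accounts claiming authorship
    skip the lightweight-post filter."""
    if not account_is_green:
        return True
    return claims_authorship(submission, comment, ask_llm)
```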

It's of course a terribly slippery slope. My perhaps overly cynical take is that once the infra is in place, some of your bosses would be prone to eventually abusing it.

Personally I'm here for it: Dang, moderator turned whistleblower—on the run from dark VC money—in a race against time to save freedom. Still working on a title for the film.


I can't wait to watch this movie. I'm already sold on the plot!


Tentative working title is:

McDang II: You Have the Right to Remain Shadowbanned



