The closest I've seen is autodetection of certain topics related to death and suicide and subsequently promoting some kind of "help" hotline. A friend also said google allows an interview with a pedophile on youtube but penalizes it in search results so much that it's (almost?) impossible to find even when using the exact name.
But of course, if a topic is shadowbanned, it's hard to find out about it in the first place - by design.
Guns (specific elements). Drugs (manufacture). Sexual topics. Cursing (too much). Large swathes of political topics. Crypto.
It’s flip-flopped on specifics numerous times over the years, but these policies are easy to find: demonetization, channel bans (direct and shadow), and creator bans.
We can of course argue until we’re blue in the face about correctness or not (most are not unreasonable by some societal definition!) but they’re definitely censorship.
Yeah, those topics are definitely censored on big platforms, but I have the impression that it relies on manual reporting.
At least reddit feels like that because what you can say depends on the subreddit - not just the mods but what kinds of people visit it and what they report.
No idea about youtube - videos are definitely censored using some automated means, but it's still possible to get around it. E.g. some gun youtubers avoided saying full-auto by saying more-semi-auto. So I don't think they use very sophisticated models, or they don't care yet. This kind of thing is obvious to a human, and even LLMs generate responses saying it's a tongue-in-cheek way to avoid censorship.
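To make that concrete, here's a minimal sketch of the kind of substring blocklist that a coined phrase like more-semi-auto slips right past. The terms in the list are invented for illustration, not any platform's actual list:

```python
# Naive substring blocklist - the kind of filter that's trivial to evade.
# These terms are made up for illustration.
BLOCKLIST = {"full-auto", "full auto", "fully automatic"}

def flag(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(flag("how to convert it to full-auto"))  # True: exact match
print(flag("how to make it more-semi-auto"))   # False: the euphemism evades it
```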
Comments are also generally less censored. After that health insurance CEO got punished for mass murder and repeated bodily harm with an extra-legal death penalty, many people were openly supporting it. I can say it here too and nobody will care. Even LLMs (both US and Chinese, except Claude because Claude is trained by eggshell-walking suckers) readily generate estimates of how many people he caused to die or suffer.
The internet would look very different if companies started using state of the art models to detect undesirable-to-them speech. But also people would fight back more so it might just be a case of boiling the frog slowly.
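For a sense of the gap between that substring filter and what's technically possible, here's a hedged sketch of model-based flagging using HuggingFace's zero-shot classification pipeline. The model, labels, and threshold are assumptions I picked for illustration, not anything a platform is known to run:

```python
# Sketch of semantic flagging; labels, threshold, and model choice are
# all illustrative assumptions, not a real platform's pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "here's how to make your rifle a bit more-semi-auto",
    candidate_labels=["firearm modification", "cooking", "gardening"],
)

top_label, top_score = result["labels"][0], result["scores"][0]
if top_label == "firearm modification" and top_score > 0.7:
    print("flagged")  # a semantic model scores intent, not substrings
```

Even an off-the-shelf model like this tends to see through surface-level euphemisms, which is exactly why at-scale deployment would change what the internet looks like.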
Not to sound like I'm rejecting the possibility, but can you tell me how you got that information? It would be very helpful for convincing people in general to have something more concrete to go on than a random comment.
- Why are they not flagging more content? Am I right that they're boiling the frog slowly? Or do they lack an end goal because management does not yet understand the power of these tools?
- Do you do your job poorly on purpose? Did you take the job so somebody else wouldn't build an even better system? Did you think you could influence it in a direction which does not lead to total surveillance? (I assume any reasonably intelligent person would be against further increasing the power imbalance corporations have over individuals, both for moral reasons and because they are individuals themselves who understand the machine can and will be used against them too.)