I'm curious: if your boss emailed you and all of your coworkers with mass buyout offers and demanded that you quit your jobs, how many do you think would take up the offer? 10%? 20%? Do you think it would be enough to cause significant organizational issues?
I don't think it's as much an 'attitude' problem as it is a 'wealth' problem.
The richest folk in this country have bought out every single media apparatus they can get their hands on and have spread decades of propaganda. The 'philanthropic' billionaires who spent their wealth to have a building or initiative named after them have vanished, handing their fortunes to the methhead billionaires who rip up the wiring of the country to sell for pennies.
Reading this thread has definitely sheared off a few of my brain cells, seeing people so collectively deluded about Chuck Norris. As you said, he was a totality of capitalism, a product wrapped in human skin. He's only truly notable for the jokes people made (myself included) at the dawn of the internet. As a person, what he actually accomplished amounts to nothing at best and, at worst, was actively damaging to multiple groups that didn't deserve the heat.
The only good thing to come out of this mess is that the universe felt cosmically aligned enough to have his death occur on the same day as the birthday of Mr. Rogers, someone who genuinely did fight for a better world.
Well, his administration has at best ignored the constitutional rights of this country's citizens multiple times, and at worst outright violated them, resulting in the deaths of American citizens with zero justice or recourse. There are a million different alternative reasons people could come up with, but we can just go with the classic 'treason' and line them up accordingly.
If you commission a baker, another person, with wants and desires of their own, is involved.
If you use an AI, there isn't.
Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.
> If you commission a baker, another person, with wants and desires of their own, is involved.
> If you use an AI, there isn't.
What is the functional difference here? You are commissioning (see: prompting) someone (see: an AI) for a piece of work, or artwork, or whatever. The output is out of your control, and I don't think the presence or absence of a human on the other end materially matters.
If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.
I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".
I'm asking you to explain what the actual difference is and you're avoiding the question.
If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?
Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free, and conforms to most standards, but that doesn't mean it has any flavor or spice to it.
Creative writing is the intent to convey feelings and thoughts, to create atmosphere. Here's a great example of the failure to do so here, in a way that even most terrible writers would avoid.
> “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.
The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate', there's nothing there. Adequacy could be conveyed by establishing that the coffee is always perfectly room temperature, or has the mere hint of bitterness and sweetness, or tastes like every other brand out there. In many respects this story is exactly like the 'perfectly adequate' coffee: functional, unexciting, and ultimately flavorless.
This "flavorlessness" is all over the story, and paired with the obviously genAI images is how I realized as I read that this was either generated or at the least deeply driven by AI.
It constantly described facial expressions, tones of voice, and other emotional cues in generic, dry terms that communicated nothing but the abstract notion of "this person felt a particular way about what happened and it's up to you, the reader, to imagine what that feeling was."
It felt very much like it was prompted to "show, don't tell," by someone who has no idea what that phrase actually means.
As a professional programmer with a deep background in literature and music, I see this as yet another example that if you aren't an expert in a field, you will get mediocre results at best from an LLM while being deceived into thinking they're great.
Five years ago and before, the blog post author would have gone to Fiverr and asked an artist from a developing country to create some illustrations. There are many, many images on the Internet from five years ago (and before) that look similar. I object to your use of the adverb "obviously".
No, I clocked the AI images before I noticed the text. I think the "obviously" is earned.
You are correct that a previous era would have included a bunch of Fiverr images that would be in sort of that style, but it's not the style that's the problem. None of the images say more than the text that they're illustrating. It's subtle, but once you notice the lack of information density it becomes starkly apparent.
I took that phrase differently. The story makes the point that the AIs fail when metrics of quality can't be expressed in words. The use of a bare "adequate" reinforces the opacity of the coffee's quality. Certainly it would have worked well to use more words to convey specifics of the "adequacy" as you mention, but IMO that would have undercut the link back to the theme of human ineffability.
Obviously everyone's mileage may vary, but I didn't see this as a huge defect, and actually felt it worked pretty well.
In the hands of Douglas Adams or Kurt Vonnegut it could be spun into a whole recurring motif.
In this case it's merely...adequate. Almost captures the density of ideas packed into something like "The ships hung in the sky in much the same way that bricks don't" but doesn't quite manage the same effect.
> But on reflection and discussion with the author, we decided that enough HN users may find that it gratifies intellectual curiosity, because it's interesting to see how a human and an AI bot can collaborate to create writing like this.
I can't say I agree, at all. This is essentially just your average post on Facebook or LinkedIn, made relevant on HN by telling a story about software mechanics. I don't find it interesting to 'read' collaborations between humans and AI bots there, and I would greatly prefer that they not infest HN as well.
That's fine. Nothing on HN is of interest to everyone. But the post spent 20 hours on the front page and earned over 450 upvotes and 300 comments. It was clearly interesting to a lot of the community and activated a worthwhile discussion.
> I would greatly prefer it if they don't infest HN as well
We are actively working against AI-generated/bot-posted comments "infesting" HN. LinkedIn-style marketing slop has always been unwelcome on HN, whether it's AI-generated or not. In this case a collaboration between a human and AI produced an interesting result, as evidenced by the community's response.
For reference: I moved to Austin in 2018, and my rent was about $1200/month. In 2022 (the year I left), my rent suddenly jumped to $1600/month despite new apartments going up near me, and all of the apartments I looked into had similar jumps. Anecdotally speaking, my coworkers all reported similar massive rent spikes.
It feels more like this is associated with the tech industry cooling significantly in Austin, so landlords can't get away with price bumps. This isn't to say new housing doesn't help, but it certainly didn't prevent me from getting fucked on rent.
The speed with which LLMs rot people's brains is really quite stunning. This is just one of the many reasons I can't trust anyone who's holding the bag for AI stuff; anyone knee-deep in this mess is likely unable to see the horizon.
Unfortunately this is an argument from the wrong angle, because it assumes what the pronatalists 'mean' by their belief. It's the same way that arguing with Musk about being a free speech maximalist is fundamentally a failed argument, because he doesn't actually believe in free speech.
The silicon valley pronatalist stance is because they want to be patriarchs in full control of their family. They want absolute control over women and absolute control over their kids. Or they want to exert control over particular minority groups.
Quite simply, I believe that people's actions outline what they truly believe. Elon Musk said he was a pro gamer at the top of the ladder in Path of Exile 2; then he was found to be cheating, having hired people to play the game for him.
If someone calls themselves a free speech maximalist followed by banning people who criticize him, then he cannot by definition be a free speech maximalist.
Correct. Pronatalism is just a front, sometimes for pure racism. Remember that Musk grew up in Apartheid South Africa. They're worried about demographic shifts away from white dominance of the US.
Also, according to the article, Musk "called declining birth rates a much bigger risk to civilization than global warming," which is not so much pro-natalism as it is dismissive of global warming, because Musk no longer cares about electric cars and has pivoted to ventures that are much less friendly to the environment, such as AI and mass rocket launches.
> Remember that Musk grew up in Apartheid South Africa
And cited his opposition to apartheid as the central reason that he left the country as soon as he could, at age 17, because he didn't want to be a part of that system.
There are so many legitimate reasons to criticize Musk, but this isn't one.
You didn't mention how "opposition to apartheid" also meant avoiding mandatory military service. Interesting coincidence, I would say. Serious question: if one cared about ending Apartheid, wouldn't it be much more effective to do that from within South Africa than from across the ocean?
Considering who he is now, what he wants politically, who he supports and how he treats his employees ... is there really anything about him that makes it sound like a real reason?
There are tons of (valid) reasons for and against boosting birthrates, but you have to break it down to the actual reasons that people are "natalists" or not.
Throwing all (anti-)"natalists" into the same pot makes as much sense as labelling communists, fascists and anarchists "anti-capitalists" instead; yes, your label technically applies, but the group it describes is so heterogeneous that you can't meaningfully talk about it anyway.
Edit, for failing to address your actual question: no and no (people are not anti-natalists by default and shouldn't be).
If "anti-natalist" means someone who wants to keep birthrates below 2 per woman long-term, then this is basically advocating for suicide at a species level, and is "unhealthy" from an evolutionary point of view.
But is that actually what your "anti-natalists" believe? If people just live lives that lead to <2 children/woman, but don't really care about or consider the whole question, does that make them anti-natalists too? (I don't think so.)
>The silicon valley pronatalist stance is because they want to be patriarchs in full control of their family.
I am not sure what % of pro-natalists that applies to, exactly, but keep in mind most people in Silicon Valley voted for Clinton/Biden/Harris in 2016, 2020, and 2024 and most are not weird traditionalist cultural conservatives. There are many progressive left-liberal pro-natalists who just 1) don't want humanity to go extinct and 2) know that population decline in a country can lead to various issues, including economic problems. Immigration can help with some of that, but reproduction rate is declining or low in basically every single country and so immigration will eventually also not be a sustainable solution.
I think the majority of vocal pro-natalists are probably right-wing/racist/misogynistic, but the core pro-natalist stance in itself (as opposed to a stance of "whites are being out-reproduced", or something) is, in general, still a completely reasonable and I'd argue moral position.