> what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now, it's what the picture looks like within the next decade
That's the point - obviously the planet is not dying _today_, but at the rate at which we are failing to decrease emissions, we will kill it. So no, "killing the planet" is not misinformed or misleading.
> Nobody, from what I remember, got angry at the actual, physical metal structure.
Nobody's mad at LLMs either. It's the companies that control them and are fueling the AI "arms race" that are the problem.
>So no, "killing the planet" is not misinformed or misleading.
When we talk as if a few years of AI build‑out are “killing the planet” while long‑standing sectors that make up double‑digit shares of global emissions are treated as the natural background, we’re not doing climate politics, we’re doing scapegoating. The numbers just don’t support that narrative.
The IEA and others are clear: the trajectory is worrying (data‑center demand doubling, with AI as the main driver), but present‑day AI still accounts for only a single‑digit percent of electricity demand; it is not a primary causal driver of the problem.
>Nobody's mad at LLMs either. It's the companies that control them and that are fueling the AI "arms race", that are the problem.
That’s what people say, yet the literature shows that, when asked or given the opportunity, people are perfectly willing to “harm” and “punish” LLMs and social robots.
Corporations are absolutely the primary locus of power and responsibility (read: root of all evil) here, and none of this denies AI’s energy risks, social harms, or the likelihood that deployments will push more people into precarity (read: homelessness) in 2026. The point is about where the anger actually lands in practice.
Even when it’s narratively framed as being “about” companies and climate policy, that anger is increasingly channeled through interactions with the models themselves. People insult them, threaten them, talk about “punishing” them, and argue over what they “deserve”. That’s not “nobody being mad at the LLMs”; that’s treating something as a socially legible agent.
So people can say they’re not mad at AI models, but their behavior tells a very different story.
TL;DR: Between those who think LLMs have “inner lights” and feelings and deserve moral patienthood, and those who insist they’re just “stochastic parrots” that are “killing the planet,” both camps have already installed them as socially legible agents and treat them accordingly. As AI “relationships” grow, so do hate‑filled interactions framed in terms of “harm,” abuse, and “punishment” directed at the systems/models themselves.