As someone from the Humanities who isn't in the tech industry, I'm absolutely electrified by how deep a division this technology has generated in the social dynamic, second-order and third-order effects included. Humans themselves aren't always able to create a rift this deep. It's fascinating.
Data centers account for roughly 1% of global electricity demand and ~0.5% of CO2 emissions, per your link. That's for data centers as a whole, since the IEA and some other orgs group "data centres, AI, and cryptocurrency" as a single aggregate. AI alone accounts for roughly 10-14% of a given data center's total energy; cloud deployments make up ~54% and traditional compute around ~35%.
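A quick back-of-the-envelope in Python, just to make that concrete. The ~29,000 TWh/yr global electricity figure is my own rough assumption; the shares are the ones quoted above:

```python
# Back-of-the-envelope only: turn the quoted shares into a rough estimate
# of AI's slice of global electricity.
GLOBAL_ELECTRICITY_TWH = 29_000   # assumed global electricity demand, TWh/yr
DC_SHARE_OF_GLOBAL = 0.01         # data centers ~1% of global electricity
AI_SHARE_OF_DC = (0.10, 0.14)     # AI ~10-14% of a data center's energy use

for ai_share in AI_SHARE_OF_DC:
    ai_twh = GLOBAL_ELECTRICITY_TWH * DC_SHARE_OF_GLOBAL * ai_share
    ai_pct_global = DC_SHARE_OF_GLOBAL * ai_share * 100
    print(f"AI at {ai_share:.0%} of DC load: ~{ai_twh:.0f} TWh/yr, "
          f"~{ai_pct_global:.2f}% of global electricity")
```

Even at the high end that lands around 0.1-0.15% of global electricity, which is where the "sliver" framing below comes from.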
The fact is that AI, by any definable metric, is only a sliver of the global energy supply right now. Outside the social media hype, what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now, it's what the picture looks like within the next decade. THAT is the real horror show if we don't pull policy levers. Anyone who says that AI energy consumption is "killing the planet" is either intentionally misrepresenting the issue or unbelievably misinformed. What's actually, factually "killing the planet" are energy/power, heavy industry (steel, cement, chemicals), transport, and agriculture/land use. AI consumption is a rounding error compared to these. That's before we even get to the fact that AI is being used to manage data-center energy efficiency and has reduced energy consumption at some hyperscale DCs (Amazon, Alibaba, Alphabet, Microsoft) by up to 40%, making it one of the only industry sectors with a real, non-trivial chance at net zero if deployed at scale.
The most interesting thing about this whole paradigm is just how deep a grasp AI (specifically LLMs) has on the collective social gullet. It's like nothing I've ever been a part of. When Deepwater Horizon blew up and spilled ~210M gallons of crude into the Gulf of Mexico, people (rightfully so) got pissed at BP and Transocean.
Nobody, from what I remember, got angry at the actual, physical metal structure.
> what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now, it's what the picture looks like within the next decade
That's the point: obviously the planet is not dying _today_, but at the rate we're failing to cut emissions, we will kill it. So no, "killing the planet" is not misinformed or misleading.
> Nobody, from what I remember, got angry at the actual, physical metal structure.
Nobody's mad at LLMs either. It's the companies that control them and that are fueling the AI "arms race" that are the problem.
> So no, "killing the planet" is not misinformed or misleading.
When we talk as if a few years of AI build-out are "killing the planet" while long-standing sectors that make up double-digit shares of global emissions are treated as the natural background, we're not doing climate politics, we're doing scapegoating. The numbers just don't support that narrative.
The IEA and others are clear: the trajectory is worrying (data-center demand set to roughly double, with AI the main driver), but present-day AI still accounts for a fraction of a percent of global electricity. It is not a primary driver of emissions today.
> Nobody's mad at LLMs either. It's the companies that control them and that are fueling the AI "arms race" that are the problem.
That's what people say, yet the literature shows that, when asked or given the opportunity, people are perfectly willing to "harm" and "punish" LLMs and social robots.
Corporations are absolutely the primary locus of power and responsibility (read: root of all evil) here, and none of this denies AI's energy risks, social harms, or the likelihood that deployments will push more people into precarity (read: homelessness) in 2026. The point is about where the anger actually lands in practice.
Even when it's narratively framed as being "about" companies and climate policy, that anger is increasingly channeled through interactions with the models themselves. People insult them, threaten them, talk about "punishing" them, and argue over what they "deserve". That's not "nobody being mad at the LLMs"; that's treating something as a socially legible agent.
So people can say they’re not mad at AI models, but their behavior tells a very different story.
TL;DR: Between those who think LLMs have "inner lights" and feelings and deserve moral patienthood, and those who insist they're just "stochastic parrots" that are "killing the planet," both camps have already installed them as socially legible agents and treat them accordingly. As AI "relationships" grow, so do hate-filled interactions framed in terms of "harm," abuse, and "punishment" directed at the systems/models themselves.
It's nearly a guarantee that TSMC was going to be exempt. The dismal performance of their AZ location and Samsung's TX location shows the US wouldn't be anywhere near 20% domestic manufacturing by 2035. Capital ≠ competency. The ~70k staffing shortage and the difference in work culture are the real issues here. This tariff would have turned into a kamikaze maneuver had it not been for the carveouts.
I was stunned when it was announced. Having contradictory tariffs and goals is one thing, but semis?! I wonder if this will finally get Intel its first international buyer for its semis.