Hacker News | thunfischtoast's comments

The argument is that vibrations of the wind power plants at sea disturb the whales.

Paying for the gas itself would not be an externality. Externalities are, for example, the worldwide damages caused by extreme weather driven by climate change, health problems caused by air pollution, or the use of clean water for cooling.


Normally, I'd agree, but in this age of massive corruption, you need to be careful figuring out who the stakeholders are. The purpose of these new fossil plants is not to produce electricity. Their purpose is to line the pockets of the ultra-wealthy people building them, and of the politicians accepting bribes to get them built.

Fueling the power plant is an externality for the people building the power plant. You could argue that it increases their costs, but these things are monopolies, with prices set by bought-off politicians. The plant + fuel costs much more than renewables (so ratepayers get screwed), but I'll wager the plant without fuel is still a bit less than solar or wind construction.


> The argument is that vibrations of the wind power plants at sea disturb the whales.

It's an argument that's always reeked of whale shit to me. If they really cared about marine sound pollution they'd go after super yachts first.


Call me incompetent, but I don't get it.

> I switched the build to SWC, and server restarts dropped to under a second.

What is SWC? The blog assumes I know it. Is it https://swc.rs/ ? or this https://docs.nestjs.com/recipes/swc ?


Both links refer to the same thing; SWC in this context is probably Speedy Web Compiler. It transpiles really fast but doesn't do any type checks.

> It transpiles really fast but doesn't do any type checks

What's the point of using it during development, then?


Transpilation is a necessary step here to test the application, because e.g. his browser won't be able to parse raw TypeScript code.

Type-checking is not: the browser doesn't care about it, it's mainly there to help developers verify their code.

So to speed up the build during development (for faster iterations), the idea is often to strip "unnecessary" steps like type-checking out of the build itself, and run linting / type-checking as a separate process, which can even run in parallel but isn't required to be able to test the application.

This is often done by using a bundler (e.g. esbuild) or a transpiler (Babel, SWC) to erase the types without checking them during bundling.
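As a rough sketch of what that split can look like in a `package.json` (script names are illustrative; this assumes `@swc/cli` and `typescript` are installed as dev dependencies):

```json
{
  "scripts": {
    "dev": "swc src -d dist --watch",
    "typecheck": "tsc --noEmit --watch"
  }
}
```

Run both in separate terminals (or via a runner like `npm-run-all`): SWC gives you near-instant rebuilds, while `tsc --noEmit` keeps surfacing type errors on the side without blocking the dev loop.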


Pretty sure they're the same thing. The second link is on how to use swc with nestjs.

In the links you provided, swc is the same entity.

The project life cycle cost: yes. The birds and whales: no. But neither do the fossil power plants.

Chip of Theseus

Ship?

It's a word play

Or random telemetry, or random network usage by some anonymous service that leaves you pretty much unable to do anything internet-related until that service has done its thing.

I use both Windows 11 and Pop!_OS (which is Ubuntu-based, with COSMIC).

Pop! OS is definitely not ready for the average user in my opinion. Some common work-related apps I need like Citrix Workspace straight up don't work. Audio + Camera randomly give out in the middle of video conferences, only fixable by a complete reboot. Some things are only fixable in the terminal.

I use it as much as I can, but there is still work to be done. I agree that Windows is on the wrong path.


> they know windpower and solar are not viable long term

Why?


Steelman: in the 2000s and 2010s China did not know whether wind power and solar were viable in the long term. They put a lot of money into wind & solar, but also into lots of alternatives: nuclear, coal, hydro, geothermal.

By 2020 it was obvious that wind & solar were viable long term, so investments in nuclear et al dried up. But they weren't convinced that batteries were viable long term, so they built a lot of coal peakers for night power.

By 2025 it became obvious that batteries were more viable and cheaper than coal peakers, so they've started to build battery storage at a vast scale.

So the steelman is that the OP's viewpoint is ~10 years out of date.


They know that sometimes it's not windy, and they know about night.


>They know that sometimes it's not windy, and they know about night.

they also know about batteries


Fortunately they also know about batteries.


> These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.

You make the case that today's internet is unsuitable for young children. But has this ever been different, apart maybe from the very first days of the internet? While access through phones has reshaped the internet fundamentally, I'd propose that it has always been dangerous. When I was 12, a single wrong click could destroy your machine, or lead to a physical bill being sent to my parents' home (which happened), or lead to most disturbing pictures and videos.

So I think it's not the case that we should allow kids completely unsupervised access (as it always has been), but it's also naive to think that we can regulate our way out of this (at the state or household level, as it always has been).


When my generation "accessed the internet", there was a massive dial-up sound and the single family PC was in the living room, visible to everyone.

Even later, when the computer was in my room, I still had to go looking for the creepy shit; it didn't appear in my email inbox.

Kids this age browse the internet through algorithmic apps built to maximise engagement, in a corner of their bed in their room. Parental controls for most apps and operating systems are a fucking joke.


Agreed, but isn't this a parental issue? Why aren't parents moving back to a "shared pc in the living room" model?

I absolutely would not allow a kid to have an unregulated smartphone, and then further compound the problem at home by allowing them to access it privately and without interruption. Device management enrollment is trivial on iPhones.


Having a smartphone is required for taking part in society.

Monitoring said devices is a lot harder; enrolling them in device management doesn't let me monitor the content of specific apps.


I think there is a drastic difference between being once off exposed to bad images, and an algorithm making a choice of whether to subtly over time expose the Pokemon-interested child to racist Pokemon videos vs non-racist Pokemon videos on Tiktok. (Or anorexic Pokemon videos, or..)

Amount of time spent and repeated exposure being the key.

The question is really what kind of human is raised, rather than raw exposure as such.

So for that reason things are different, IMO, than 20 years ago.

Yes, of course some people would fall into internet-forum rabbit holes 20 years ago, and pen-pal-induced rabbit holes 100 years ago. But it did help that it was like 5% of the population spending their time there instead of 95%.

Regarding your last point, I don't necessarily disagree (again, I didn't check up on this law; I care more about the laws in my own country), but I think arguing against the law will go better if one does not display naivety when making the arguments.

Don't say "it will be better if all kids are exposed to everything early" (it won't), instead say "the medicine will not work and anyway the side-effects are worse than the sickness it intends to cure" (if that is the case).


But the algorithm stuff is bad for everyone, and makes a lot of money, so it's obviously never ever going to be part of any regulation.


But adults (who have fully developed brains, unlike adolescents) can choose not to engage with the algorithm stuff.


Australia banned social media for under-16s, and many other countries are looking at variations of this.

In the US, perhaps not...


This feels like an extremely naive take.

Even as late as the mid-aughts the internet was mostly nerdy technical information, real people sincerely discussing various topics, and the very worst thing was a little bit of (mostly still-image) porn if you were looking for it.

Kids back then weren't targeted by a stream of continuously A/B tested algorithmic content intended to tell them what to think and shape their brains. Overwhelming evidence exists that social media (as it exists today) is bad for the mental health of young people (and probably adults, too, but at least adults have the presence of mind and lack of social pressure to delete Facebook).


> Even as late as the mid-aughts the internet was mostly nerdy technical information, real people sincerely discussing various topics, and the very worst thing was a little bit of (mostly still-image) porn if you were looking for it.

This is the naive take. In the early-to-late 2000s, you could buy drugs on the clearnet. You could discuss taking those drugs on forums and sites like Erowid.

This was the age of shock sites, gore, extreme porn, 4chan, etc.

At one point, a porn actress crushing kittens to death was a meme. 2 Girls 1 Cup was a meme. Tubgirl was a meme. Goatse was a meme. Ogrish, LiveLeak, etc. were all open access. I once watched someone get burned to death for being a witch.*

These are all things that were one click away, your friends would send them to you for the lulz.

* I am actually glad I saw that. It showed me that those types of things were not in the distant past; civilized people can still be driven by moral panics to do horrific things. Discriminatory ideology still exists and, gone unchecked, leads to wanton violence and reprehensible things, some of which I've experienced myself, though not to that extent. It served as a potent reminder of human nature, and I've watched its template play out over and over again. The delight I saw in the faces of those who perpetrated it is the same delight you see in the faces of those engaging in today's secular witch hunts, moral panics, hate crimes, etc.


This story has some parallels to our current usage of chatbots. We've also already seen a display of limited agency (in the sense of proactive actions) through the recent emergence of Clawdbot and the like. What would happen if a model with direct access to billions of private chats could proactively hand governments decisions on how to handle individual citizens, like in the story?


Hackers sometimes create things not to actually use them, but because they are curious how something works and want to see if they can.


dansup, the guy behind this, is the same guy who runs Pixelfed (a federated Instagram alternative).

