
I, for one, am glad that the rationality-bubble is popping.


The world these days is becoming more and more medieval: many different actors, all competing with one another, at different levels of power.

Imo, it's a good thing: regular people will be more suspicious of power overall and won't make as many bad safety/convenience trades.


Assuming people are educated and aware enough to recognize it and then deal with it in the first place.

The right to read was literally locked up until only a few hundred years ago.


These things are much more transparent than we experts like to think they are: people making "uneducated" decisions are often just making decisions that don't fall in line with the expectations of the "educated". Power is legible regardless of whether you know how to read (or code, or do immunological research).


Nor should they, since the public is indeed being lied to.


About…? By whom…?

Whatever it is, know that science as a whole is giant, diverse, and self-correcting over the long term.


I don't agree.

When you have lots of people whose livelihoods depend on the gravy train, and who can't even be sure whether what they are working on is fraudulent because they are so specialised, who would take that risk?

It's all about funding. And in the US basically all funding comes from the same sources: government, military, or corporations, which I think of collectively as the governance system.


Over time, yes. But over, say, three weeks, science can do a lot of damage to an economy before it self-corrects.


It's very weird to me that science now equals vaccines and stay-at-home mandates, and that it also equals advice given with little data. Science is way, way bigger, and it is an idea, not a particular set of people or current beliefs. It also brought untold value by taking us out of the dark ages, but apparently that's been normalized so much it's no longer valued.


Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) that reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dig down to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing things down instead of memorizing them.

What's happening right now is people just starting to reckon with the fact that technological progress on its own is necessarily unaligned with human interests. This problem has always existed; AI just makes it acute and unavoidable, since it's no longer possible to invoke the long tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at its core a problem of reconciling this, and it will inherently fail in the absence of explicitly imposed non-Enlightenment values.

Seeking to build OpenAI as a nonprofit, as well as ousting Altman as CEO, were both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensify it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also that the social machinery required to combat the problem is lacking.


You mean, making everybody else aware of how smart the poster is.


> academic publishers are vultures that serve no purpose

This is a naively optimistic view of how knowledge production actually operates. Sure, the scientific endeavor is constrained by what is actually the case (i.e. reality), but without some kind of editorial oversight imposed from above nothing coherent or useful will be produced. A thousand well-trained, well-intentioned researchers toiling away at a problem will not magically self-organize such that their collective efforts make tangible progress on large, intractable problems. This will be true regardless of how many fancy web2.0/3.0 niceties one throws at the problem, since experience has shown that such solutions only make the problems of social dominance worse, not better. In the end, this sentiment is nearly identical to people complaining about "capitalists".

Do capitalists and academic publishers have purposes to fulfill? Yes. Do they fulfill those purposes well these days? Absolutely not. Like many of our social institutions these days, the people who run them seem to fundamentally misunderstand what their roles are, deferring to some vague, financialized liberalism that believes all problems can be addressed by maximizing human freedom, with no regard to bootstrapping problems. Because the institution ceases to perform its role, people begin to believe it has no role. Worse yet, now that people have no idea what the institution's role even is, they have even less of a clue as to how to fix it.


> without some kind of editorial oversight imposed from above nothing coherent nor useful will be produced.

True, but academic publishers charge an absurd amount of money in return for very little value. The publisher provides a platform for "editorial oversight" by peer reviewers, but it does not pay the peer reviewers. I would argue that "editorial oversight" in the form of peer review may be worth thousands of dollars per publication, but simply providing a platform for that review and profiting from volunteer work should not be compensated as highly as it is right now.


The business fundamentals and the emotional reality seem to be inherently at odds. I'll bet that this is a leading indicator for what's gonna happen in people medicine.


> leading indicator for what's gonna happen in people medicine.

It's been happening for a long time now: https://www.nytimes.com/2023/06/15/magazine/doctors-moral-cr...


Ah yes. The solution for too much technology is obviously more technology. I'm so glad we have all these smart people working for us.


If we go fast enough, surely we'll fly instead of crash and burn!

