quirkot's comments | Hacker News

Regarding #2: how many serfs came home after re-digging the toilet hole to eat a meal of hand-milled grain bread and old vegetables with the family members who survived infancy and thought "life just doesn't get any better than this"? Probably almost all of them.

Magawa cleared 1,517,711 sq.ft of land. He could work at a pace of 2,808 sq.ft (a doubles tennis court) every 20 minutes. At that pace, clearing it all would have taken 180.2 hours. Let's assume that, with hazardous terrain, he averaged 25% of that speed; in that case he worked ~720 hours during a 5-6 year career. A different rat, Ronin, who found more than Magawa, found a total of 124 explosive devices. So Magawa found no more than 1 explosive for every ~5 hours and 49 minutes of searching, or approximately one device every 4.4 tennis courts of ground covered (1,517,711 / 2,808 ≈ 540 courts, divided by 124).

Real needle in a haystack stuff, wow
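A quick sanity check of those figures (all numbers taken from the comment above; Ronin's 124 devices is used only as an upper bound on Magawa's count):

```python
# Back-of-the-envelope check of the Magawa numbers.
area_sqft = 1_517_711
court_sqft = 2_808                # one doubles tennis court
minutes_per_court = 20

courts = area_sqft / court_sqft                       # ~540.5 courts
hours_full_pace = courts * minutes_per_court / 60     # ~180.2 h at full speed
hours_at_quarter_pace = hours_full_pace * 4           # ~720.7 h at 25% speed

devices = 124                     # Ronin's total, an upper bound for Magawa

print(round(hours_at_quarter_pace / devices, 1))  # hours searched per device
print(round(courts / devices, 1))                 # tennis courts per device
```

Note the courts-per-device figure depends only on area covered, not on how slowly the area was searched.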


Another comment pointed out that at least one de-mining expert is skeptical:

https://news.ycombinator.com/item?id=47680882


Money is the sledgehammer of incentives. Above a reasonable level of pay, it's overkill and creates lots of collateral problems. The really effective incentives are status-based and situational to the group dynamic.


Can you give an example, please? How do you do this without introducing bad vibes?


Counterpoint: if I have to treat the computer like a person, what's the point of talking to a computer in the first place? Particularly when there are so many other systems that can provide answers without the runaround.


Humans cost $xx,yyy a year.

Claude max-x20 is $2,400 a year.

I talk to the computer like a person to get the computer to do things that humans used to do. Having managed people before, I'm going all in on AI.


You're limiting the frame to an employment situation. Higher quality sources of knowledge are free: Wikipedia, public libraries, etc. Similar quality sources of information are also free: human relationships.


Now we watch this viewpoint proliferate thousands and thousands of times over, even if it's rarely stated this baldly, and yet people still wonder where the doomer viewpoints stem from?


Yes, but I am all in on the simulation hypothesis, and people are going to enter the matrix... willingly.

https://nexivibe.com/intj.html


While some of the ideas in this do resonate with me (or at least they're entertaining), it's unfortunate that it's so obviously LLM generated. And some parts of it, like the INTJ exceptionalism, reek of LLM sycophancy, which then turned into some kind of god complex...


Observation A: the document's title is about a minority's rightful supremacy.

Observation B: the document says "this is not political" then dives into persuasive speech.

Conclusion: this document was written by the bad guys.


I just actually read that, and it is possibly the most morally abominable screed I've come across in a long time. Shocking that it's acceptable to share in polite company.


Oh, then you will get a kick out of this for sure: https://nexivibe.com/winter.html


Train the algorithm so that you can be the sort of product you want to see in the world


The issue brought up in the article isn't that "the algorithm is biased" but that "the algorithm causes bias". A feed could alternate perfectly between position A and position B, showing no exposure bias at all, yet still select more incendiary content on topic A and drive bias toward or away from it.
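A toy sketch of that distinction (the post pools, the "heat" score, and the selection rule are all made up for illustration): exposure is perfectly balanced, but one topic's slots are filled by selecting for outrage.

```python
import random

random.seed(0)

# Hypothetical post pools: each post has a topic and an "incendiary" heat score.
def make_posts(topic, n):
    return [{"topic": topic, "heat": random.random()} for _ in range(n)]

pool_a, pool_b = make_posts("A", 1000), make_posts("B", 1000)

# The feed alternates topics strictly (zero exposure bias), but each
# topic-A slot picks the hottest of ten candidates, while topic-B slots
# pick at random.
feed = []
for i in range(200):
    if i % 2 == 0:
        feed.append(max(random.sample(pool_a, 10), key=lambda p: p["heat"]))
    else:
        feed.append(random.choice(pool_b))

share_a = sum(p["topic"] == "A" for p in feed) / len(feed)
heat_a = sum(p["heat"] for p in feed if p["topic"] == "A") / 100
heat_b = sum(p["heat"] for p in feed if p["topic"] == "B") / 100

print(share_a)          # 0.5 -- perfectly balanced exposure
print(heat_a > heat_b)  # True -- yet topic A runs far hotter
```

An audit that only counts how often each position appears would call this feed unbiased.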


Sounds a lot like Marx's theory of alienation


We already were living in an alienating society. This is mass psychosis.


What? A pump and dump scheme? I am shocked I tell you, shocked


>> Building 29 separate settings with confusing and overlapping effects is less work than making a single setting of: [Local Only]?

Yes, absolutely. The 29 separate overlapping settings likely map precisely to arguments in the various APIs being used. On the other hand, what does "local only" even mean? No wifi? No hardwired connection? LAN only? A connection to the internet for system updates but not the marketplace? Something else? Each with a specified outcome that requires a different implementation depending on hardware version and needs to be tweaked every time dependencies change.


Having a separate setting for unconditionally disabling all wireless communication would be helpful. The other stuff you mention can be separate settings if it is useful to have them. (A setting to unconditionally disable all wired connections is less important, since you can just avoid connecting a cable.)


>>what does local only even mean?

Let's start with this: design the architecture so the core system works fine locally. Features requiring an Internet connection live in separate modules, so they can easily be turned on and off, and are designed to remain primarily local.

E.g., store all current status locally; if requested, a separate module sends it to the cloud, instead of being cloud-first.

E.g. 2, install updates by pulling down all resources first and then applying the update, instead of requiring continuous communication.

Allow user control with options to completely shut off, whitelist, blacklist, etc.

Simple design decisions up front produce a software package that meets the user's local needs first, THEN allows controlled access to the internet, under the USER'S control, instead of designing every feature to contact your servers first and compromising both usability and control at every step.
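A minimal sketch of that split, assuming nothing about any particular product (the class names, the status fields, and the single `enabled` switch are all hypothetical): the core reads and writes state on disk and never touches the network; cloud sync is a bolt-on module the user can leave off.

```python
import json
from pathlib import Path

class LocalStatusStore:
    """Core module: fully functional with no network at all."""
    def __init__(self, path="status.json"):
        self.path = Path(path)

    def save(self, status: dict) -> None:
        self.path.write_text(json.dumps(status))

    def load(self) -> dict:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

class CloudSync:
    """Optional add-on: the core never calls out on its own."""
    def __init__(self, store: LocalStatusStore, enabled: bool = False):
        self.store, self.enabled = store, enabled

    def push(self) -> bool:
        if not self.enabled:        # user chose local-only: do nothing
            return False
        payload = self.store.load()  # the local copy is the source of truth
        # ... send `payload` to the cloud here ...
        return True

store = LocalStatusStore()
store.save({"temp": 21, "door": "closed"})
sync = CloudSync(store, enabled=False)  # one unambiguous switch
print(sync.push())  # False -- nothing leaves the device
```

With this layering, "local only" reduces to disabling the sync module; the 29 fine-grained settings become per-module toggles rather than conditions woven through every feature.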


I think he's trying to differentiate himself from all the people who are not AI engineers


Problem is, he isn't even remotely an AI engineer™ himself.

The entire article is a complete joke and is ragebait.

Flagged.

