The interface is amazing. Clean, it doesn't leave huge amounts of empty space that scream "this was made for phone users, everyone else might fuck off", and the subforums already hint "we want to gather political dissidents". It feels like the 00s forums without looking like one, it's the best of both worlds.
I tried to register with a weak password (on purpose) to check security. It works; four tries and three different errors (capital letters required, special characters required, min length required). However, I feel like a user hitting this issue accidentally would've given up after the third try. Perhaps it would be worth checking for all the errors at once and outputting them together; e.g. "The password must mix case, contain special characters, and have a minimum length of 8". Just an idea/feedback, mind you.
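To make the suggestion concrete, here's a rough sketch (my own, hypothetical `validate_password`, not the site's actual code) of a validator that collects every failed rule and reports them all in one message:

```python
# Sketch: collect ALL unmet password rules instead of failing one at a time.
# The rules and messages mirror the ones described above; names are made up.
import re

RULES = [
    (lambda p: len(p) >= 8,                   "have a minimum length of 8"),
    (lambda p: re.search(r"[A-Z]", p),        "contain an uppercase letter"),
    (lambda p: re.search(r"[a-z]", p),        "contain a lowercase letter"),
    (lambda p: re.search(r"[^A-Za-z0-9]", p), "contain a special character"),
]

def validate_password(password):
    """Return a list of all unmet requirements (empty if the password passes)."""
    return [msg for check, msg in RULES if not check(password)]

errors = validate_password("weakpass")
if errors:
    print("The password must " + ", and ".join(errors) + ".")
```

One pass, one error message, and the user fixes everything in a single retry instead of playing whack-a-mole.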
This one is super annoying. A long password without special characters is not any less secure than a short password with one special character added because it was required.
Better than arbitrary requirements like this would be to estimate the entropy and then just prevent low-entropy passwords (or only tell the user - not everyone needs the same level of security for everything).
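A crude illustration of the entropy idea (this is my own back-of-the-envelope heuristic, length times log2 of the character pool actually used; real tools like zxcvbn do this far more carefully, accounting for dictionary words and patterns):

```python
# Rough entropy estimate: bits = length * log2(size of character pool used).
# This is a naive heuristic for illustration only; it overestimates entropy
# for dictionary words ("correcthorse..." is famous, so a cracker tries it).
import math
import string

def estimate_entropy_bits(password):
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += 32
    return len(password) * math.log2(pool) if pool else 0.0

# A long all-lowercase passphrase beats a short "complex" password:
long_simple = estimate_entropy_bits("correcthorsebatterystaple")  # 25 * log2(26) ~ 117 bits
short_mixed = estimate_entropy_bits("Pa$s1")                      # 5 * log2(94) ~ 33 bits
```

Which is exactly the point: the "one special character" rule buys you a few bits, while length buys you many.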
I can relate to most of what he wrote. And I've also noticed the same pattern that he points out in
>Whenever I post of a cognitive bias or logical fallacy, my replies are soon invaded by leftists claiming it explains rightist beliefs, and by rightists claiming it explains leftist beliefs.
where both sides [often correctly] point out the fallacies of the other side, but fail to acknowledge their own.
>Since you’re reading about intelligence right now, you’re likely above average in intelligence, which means that you, whatever you believe, should be extra vigilant against your intellect being commandeered by your animal impulses.
I fucking love this slap on the face of the reader.
______
I feel like there's something else though. Frankly I wouldn't call someone engaging in wishful belief "intelligent" by any measure; intelligence requires the ability to entertain multiple concurrent lines of reasoning, and in plenty of them your belief is wrong. [I can go further on that if anyone wants.] It's the same deal with some basic fallacies (mostly false dichotomy, four terms, and appeal to origins) that are often used to protect those stupid beliefs.
> Frankly I wouldn't call someone engaging in wishful belief "intelligent" by any measure; intelligence requires the ability to entertain multiple concurrent lines of reasoning
Not as defined by article author:
> intelligence is nothing more than the effectiveness with which an agent pursues a goal. Rationality is intelligence in pursuit of objective truth, but intelligence can be used to pursue any number of other goals.
So your intelligence definition is more like rationality. I see a lot of arguments going nowhere just because two sides have different definitions of the thing they argue about.
All that said, I agree with you: it is not rational to engage in wishful belief, but it's a kind of energy-saving measure, so that you don't constantly overthink "Am I really right about this?".
Yup, different definitions; I was going to bring this up but chopped it to avoid the wall of text. (I'm glad that someone caught it, though.)
My definition is roughly "ability to process information and generate useful conclusions as a result". Rationality would be a "side-effect" of intelligence, not part of the definition itself. I think that it's more useful than the one provided by the author, because sometimes intelligent people with dumb beliefs will also do dumb shit that clearly contradicts their goals. Does that mean that they weren't intelligent to begin with? According to the author's definition, yes.
A good example of that would be Steve Jobs. It's hard to claim that Jobs wasn't very intelligent (even if I don't like him); and odds are that "to survive" was one of his goals. Then why the hell would he prefer alternative medicine over actual medical intervention, for something as serious as cancer? (It's just an example, mind you; we could use others.)
However, once you shift the definition of "intelligence" to the one I'm using, there's no paradox: he was intelligent, sure, it's just that his "processing ability" was not directed towards that specific goal. And sometimes it might've been directed against the goal.
It's like something is diverting that processing ability from the personal goals to something else. Dawkins' memeplexes might be the answer here: the memeplex "alternative medicine" was competing with the goals of the individual, and leeching off his processing ability to its own end, like a parasite of the mind.
So, alternatively, and in addition to what the author said, we get another line of thought: sometimes smart people have dumb beliefs because intelligence does not immunise you against parasitic memes. Perhaps it even makes you more vulnerable, as those memes will be better able to reach you. And then [this point agreeing with the author], once those parasites are installed, they'll divert your intelligence towards their goals.
[Sometimes wall of text is necessary to convey proper depth of thought]
> It's like something is diverting that processing ability from the personal goals to something else. Dawkins' memeplexes might be the answer here: the memeplex "alternative medicine" was competing with the goals of the individual, and leeching off his processing ability to its own end, like a parasite of the mind.
I think this reinforces my argument about minimising time spent thinking. This is essentially a "stopping problem", i.e. deciding when to settle on a solution. When a solution seems good enough, some people will stop critically examining it for errors and just accept it as settled. Intelligent agents have to act on imperfect solutions, because in many situations acting now beats a slightly better solution an hour later. This pattern applies to all sorts of other problems, and it's one reason why "sometimes smart people act stupid". Plus sometimes people have a bad hour or day and don't have enough mental resources left to make coffee without an accident.
Intelligence is also not universal: you can be more intelligent in one domain and less intelligent in another (street smart vs. educated), which is a reason why "for some problems smart people act stupid", as shown again and again when you ask technology experts about social issues.
This sounds a lot like satire. This excerpt for example is blatantly self-contradictory:
>We’ve found zero [scenarios] that lead to good outcomes. // Most AI researchers think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.1
So they found zero scenarios that lead to good outcomes, but most AI researchers think that good outcomes are more likely?
Brushing off a majority view as w*shful "thinking", and then backing up the argument with a... majority view?
__________________
Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its* own actions. Less "the AI did it", more "[Person] did it using an AI".
Previous surveys of this kind have suggested that most AI researchers aren't actually thinking about these questions very hard (e.g. rephrasing a question a bit can get you a very different answer). So it doesn't seem at all surprising to me that the majority view is out of sync with what a careful analysis shows.
>Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its* own actions. Less "the AI did it", more "[Person] did it using an AI".
Essentially a "guns don't kill people, people kill people" argument.
I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers".
I think the argument also breaks down as AI gets more powerful.
>Essentially a "guns don't kill people, people kill people" argument.
Not quite. It's more like "a gun cannot be held morally responsible for its actions, so actual people should be".
The difference is important here because, depending on the situation, you might still want to blame people who allowed the shooter to have a gun, not just the shooter.
>I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers". // I think the argument also breaks down as AI gets more powerful.
Note how we're still blaming people: the car dealer and the driver. Not the "it" = tank.
And it's the same deal with the AI. If you use an AI system in a way that harms people, sometimes the "car dealer" (the ones coding the AI) should be held responsible, sometimes the driver (you), sometimes both. But never "neither", i.e. "the AI is at fault".
Perhaps it's a side effect of our desire for different flavours, that would encourage us (in a wild environment) to seek a diversified diet that provides multiple types of nutrients.
It said that herbs and spices are not distinguished, and I pointed out that they are. I didn't mention condiments; I did offer a detailed qualification of seasonings.
This is just a guess, but I think that there are two problems here, not one: 1) inefficiency stricto sensu (more operations required for the exact same task), and 2) diminishing returns (that kitchen sink being included weighs far more than the rest of the project, but maybe you should still not remove it, because it still provides a small return of user utility).
I think that some users here in the comments are bloody missing the point.
It's easy to figure out what the authors are saying, provided that you guys have something called "basic reading comprehension". Here's a TL;DR: "software is being judged by the wrong criteria. Focus on the users, dammit. Software should be judged by its usability, speed, bug-freeness, and innovativeness." The authors aren't really picking a bone with structured programming (or object-oriented programming, or whatever); those bullet points sound more like the type of excuse for crapware that you'd hear back in the day.
Also look at the references; the newest one is from '94. This text is probably from '94-'00. Tech references there should be contextualised to those times, not to 25-30 years later, aka now.
Finally, the general tone being used by the text is not serious, it's cheeky and troll-ish. Odds are that the authors intended this as food for thought, not as a dissertation that should be analysed and replied with "ackshyually, this specific example is 0.573% inaccurate lol lmao".
It’s okay to disagree with others in the thread, but there is no need to be insulting about it and accusing people of not having basic reading comprehension.
You might have been serious or not when writing that, but framing your comment in a more positive manner will result in better discussion.
The reason that I'm scolding those users is not "disagreement". Disagreement implies that they have something to offer - an opposite point of view, or perhaps info conflicting with what I said. They don't, because they didn't even get the point of the text linked in the OP. Or the context where it was written, even if it's blatantly obvious for anyone actually reading it.
>You might have been serious or not when writing that, but framing your comment in a more positive manner will result in better discussion.
Frankly, the users who might get their very, very precious feelings hurt with this "learn to read" are most likely the ones who won't contribute jack shit to the discussion, no matter how polite of a tone you might use with them.
___________________________
Now, yet another thing that those users didn't get is that this text is two, perhaps three decades old. Things have changed, and nowadays developers put a bit more thought into the users. Even then, the general idea - "who cares about your data structure, show results that the users benefit from!" - is still important.
>Yes, it is sophisticated. Phishing emails are opened more often then regular mail and click-rate is pretty high.
You're confusing "sophisticated" with "efficient". Phishing is efficient but unsophisticated; it boils down to one of the oldest tricks ever, to make you believe that $foo is $bar.
>No, Reddit. Phising is not a sophisticated attack.
You're right, it isn't. It's just Reddit admins lying through their teeth, it's their usual. Almost like they take most of the current Reddit userbase as braindead. (Spoilers: they might be right.)
In this specific case they're lying to make it less obvious that they're pretty much incompetent at managing their site.
This sort of "chrust me, I have kwalifikashuns" is rather fitting the overall poor quality of your comment.
There are a thousand things wrong with this application. Viability of the process is NOT one of them, as already attested in the literature. Refer for example to
The reaction boils down to R'-CH₂-CH₂-R" → R'-CH=CH₂ + H-R". It's practically the reversal of the polymerisation that created the plastic in the first place. It's especially obvious in the third paper that I've listed as an example, since they're generating butadiene and olefins from polyethylene.
It is by no means on the same level as "selling you a way to change Iron into Gold".
>Plastics have very LONG carbon chains, usually with many double bonds, and very little hydrogen
General purpose polymers like polyethylene and polypropylene do not usually have double bonds. Especially not in the main chain (which is what you need to cleave to get smaller molecules). You'll only get double bonds in the main chain if your monomers had a triple bond in their place, as one of the bonds is broken during polymerisation. Most monomers however start with a double bond, and the resulting polymers have single bonds. Here, let me show it to you: CH₂=CH₂ (ethylene) → -[CH₂-CH₂]ₙ- (polyethylene); the double bond opens up into the single bonds linking each unit into the chain.
>That is why, when you heat plastic it decomposes into char -- as in charcoal.
Yeah, because the hydrogens magically disappear. The presence of nearby oxidants (especially one that you, QuackOfAllTrades, should stop consuming, for a better humankind) is totally unrelated.
_____________________
For actual criticism of the application:
You'll get nasty junk oil as output. That oil will have ONE purpose: to be burned down. If you're going to burn the oil anyway, you might as well burn the plastic directly.
This could be solved by fractional distillation... yeah, good luck doing that at home.
What happens if some clueless individual tries to recycle PAN? Or even PVC, given how nasty organochlorines are. So there are both health and environmental concerns.
What's the catalyst being used? Plenty of catalysts are environmentally nasty.
1 kWh/kg is a lot of energy, and it will have an environmental footprint.
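For scale, a back-of-the-envelope comparison (my own figures, not from the linked project; I'm using ~46 MJ/kg as a ballpark heating value for polyethylene, which is in the usual range quoted for polyolefins):

```python
# Compare the ~1 kWh/kg processing energy against the energy you'd get
# from simply burning the plastic. Heating value of PE assumed ~46 MJ/kg.
MJ_PER_KWH = 3.6

processing_kwh_per_kg = 1.0               # figure quoted above
pe_heating_value_kwh = 46 / MJ_PER_KWH    # roughly 12.8 kWh/kg

fraction = processing_kwh_per_kg / pe_heating_value_kwh
print(f"Processing eats ~{fraction:.0%} of the plastic's own fuel value")
```

So the process burns off a meaningful slice of the energy content before you've done anything with the oil, and that's before counting the catalyst and the distillation you'd need to make the output useful.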
It is not trying to sell you a magical device that transforms iron into gold. Instead it is trying to sell you a device that converts good iron into crappy iron, and marketing the crappy iron as if it was more useful than it is.