I have to admit a weakness for reading not-quite-crackpot-but-likely-wrong theories. In particular, I'm a big fan of Julian Jaynes and The Origin of Consciousness in the Breakdown of the Bicameral Mind, and of the aquatic ape hypothesis: https://en.wikipedia.org/wiki/Aquatic_ape_hypothesis
I get that they're probably not true, but I do enjoy reading novel thinking and viewpoints by smart people with a cool hook.
I think if you want to start down that sort of road, it's important to read lots of them. Read zero, you're probably fine. Read lots of them, you're probably fine. "One or two" is where the danger is maximized.
And I would agree with "likely" wrong. Some of them probably aren't entirely wrong and may even be more correct than the mainstream. Figuring out which is the real trick, though. Related to the original article, I tend to scale my Bayesian updates based on my ability to test a theory. In the case of something like the Breakdown of the Bicameral Mind, that heuristic discounts it so heavily that reading it becomes almost indistinguishable from reading a science fiction book for me: fun and entertaining, but it doesn't really impact me much except in a very vague "keep the mind loose and limber" sense.
I have done a lot of heterodox thinking in the world of programming and engineering, though, because I can test theories very easily. Some of them work. Some of them don't. And precisely because testing is so easy, the heterodoxy is often lower than in crackpot theories about 10,000 years ago. For example, "Haskell has some interesting things to say" is made significantly less "crackpot" by the fact that plenty of other people have the ability to test that hypothesis as well, and as such it is upgraded from "crackpot" to merely a "minority" view.
So my particular twist on Scott's point is, if you can safely and cheaply test a bit of a far-out theory, don't be afraid to do so. You can use this to resolve the epistemic learned helplessness in those particular areas. It is good to put a bit down on cheap, low-probability, high-payout events; you can even justify this mathematically via the Kelly Criterion: https://www.techopedia.com/gambling-guides/kelly-criterion-g... If there is one thing that angers me about the way science is taught, it is the implication that science is something other people do, and that it is only worth doing with the full "scientific method" or it's worthless. In fact it's an incredible tool for everyday life, on all sorts of topics. One must simply adjust for the fact that the less effort put in, the less one should trust the result; but that doesn't mean the total trust must be uselessly low just because you didn't conduct your experiment on whether fertilizer A or B worked better on your tomatoes up to science-journal standards.
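For what it's worth, the Kelly math behind "put a bit down on cheap, low-probability, high-payout events" is easy to check yourself. A minimal sketch (the function name and example numbers are mine, just for illustration): for a bet that pays b-to-1 with win probability p, the Kelly-optimal fraction of bankroll is f* = (b*p - q) / b, where q = 1 - p.

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that
    pays b-to-1 with win probability p: f* = (b*p - q) / b."""
    q = 1.0 - p
    return (b * p - q) / b

# A low-probability, high-payout "far-out theory" bet:
# 10% chance of paying off 20-to-1.
f = kelly_fraction(0.10, 20.0)
print(f)  # 0.055 -> stake about 5.5% of your bankroll
```

The point the comment makes falls out directly: even at a 10% chance of being right, a big enough payoff makes a small positive stake rational, whereas a negative f* (say, 10% at only 5-to-1) says stake nothing.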
Same. I found a book back in college claiming (on the basis of some theory about the Egyptian pyramids) that if you made a pyramidal shape with certain dimensions out of cardboard, it would make plants grow faster and keep your razorblades sharp. I didn't believe it, but I did make one for fun. All my physics-major friends made fun of me for being gullible. I was like, isn't testing stuff what we're supposed to be doing here?
Is there solid evidence against the aquatic ape hypothesis? The only argument I've seen is that it's unnecessary, because the multitude of previous explanations of every single feature work just fine, thank you very much.