Rationalism in philosophy is generally contrasted with empiricism. I would say you're a little off in characterizing anti-rationalism as holding rationality per se in low regard. To put it very briefly: the Ancient Greeks set the agenda for Western philosophy, for the most part: what is truth? What is real? What is good and virtuous? Plato and his teacher/character Socrates are the archetypal rationalists, who believed that these questions were best answered through careful reasoning. Think of Plato's allegory of the cave: the world of appearances and of common sense is illusory, degenerate, ephemeral. Pure reason, as practiced by philosophers, was a means of transcendent insight into these questions.
"Empiricism" is a term for philosophical movements (epitomized in early modern British Empiricists like Hume) that emphasized that truths are learned not by reasoning, but by learning from experience. So the matter is not "is rationality good?" but more: what is rationality or reason operating upon? Sense experiences? Or purely _a priori_, conceptual, or formal structures? The uncharitable gloss on rationalism is that rationalists hold that every substantive philosophical question can be answered while sitting in your armchair and thinking really hard.
It's pretty unfortunate that the Yudkowsky-and-LessWrong crowd picked a term that traditionally meant something so different. This has been confusing people since at least 2011.
Well, empiricists think knowledge exists in the environment and is absorbed directly through the eyes and ears without interpretation, if we're being uncharitable.
Sure. The idea of raw, uninterpreted "sense data" that the empiricists worked with (well into the 20th century) is pretty clearly bunk. Much of philosophy took a turn towards anti-foundationalism, and rationalism and empiricism are, at least classically, notions of the "foundations" of knowledge. I mean, this is philosophy, it's all pretty ridiculous.
This is the most egregious one in my eyes, too. I've run A/B tests on a few signup forms, and without fail they validate the standard practice: the lowest drop-off rate comes from removing every possible obstacle and distraction. I'd bet a few dollars (which is as much as I'll ever bet) that this design update would perform worse. The tool is almost intriguing as a _reductio_ of certain design practices.
The "after" designs all replace the rather generic "SV startup with a Tailwind UI" look with this serif font, parchment color look. It looks very similar to Anthropic's branding. I guess it looks marginally more distinctive? Though it seems to replace one knock-off visual identity with another. But the claim is that the tool here is implementing best practices through a sophisticated "design vocabulary", and in that sense the examples strike me as manifest failures. I find the general legibility of the "before" designs to be much better.
Author here, fair feedback. These examples were rushed, and didn't come out great. For this particular one, the concept was "trustworthy, expensive life sciences company" of sorts, but it's still not a great before/after example. Removed for now, and will switch out for better examples soon.
Web frontends have trended towards various forms of isolation (CSS scopes, shadow DOM), namespacing (CSS modules, BEM), or composition (tailwind etc.) because CSS cascading and inheritance cause more trouble than they're worth. So while you're correct, there are lots of available frameworks and patterns that provide a better dev experience, though of course there are tradeoffs involved in all of them.
ORMs come with a lot of baggage that I prefer to avoid, but it probably depends on the domain. Take an e-commerce store with faceted search. You're pretty much going to write your own query builder if you don't use one off the shelf, seems like.
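To make the faceted-search point concrete, here's a minimal sketch of what a hand-rolled query builder for that use case might look like. The table and column names (`products`, `price`, the facet columns) are hypothetical, just to illustrate the shape of the code you'd end up writing:

```python
# Minimal sketch of a hand-rolled query builder for faceted search.
# Table/column names (products, price, facet columns) are hypothetical.

def build_facet_query(facets, min_price=None, max_price=None):
    """Build a parameterized SQL query from the user's selected facets."""
    clauses, params = [], []
    for column, values in facets.items():
        if values:  # skip facets with no selection
            placeholders = ", ".join("?" for _ in values)
            clauses.append(f"{column} IN ({placeholders})")
            params.extend(values)
    if min_price is not None:
        clauses.append("price >= ?")
        params.append(min_price)
    if max_price is not None:
        clauses.append("price <= ?")
        params.append(max_price)
    where = " AND ".join(clauses) if clauses else "1=1"
    return f"SELECT * FROM products WHERE {where}", params

sql, params = build_facet_query({"brand": ["acme"], "color": []}, max_price=100)
# sql    -> "SELECT * FROM products WHERE brand IN (?) AND price <= ?"
# params -> ["acme", 100]
```

Once you're dynamically assembling WHERE clauses and parameter lists like this, you're already most of the way to a query builder, which is the point: the choice is between writing one yourself or taking one off the shelf.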
I once boasted about avoiding ORMs until an experienced developer helped me see that 100% hand-rolled SQL and custom query builders is just you writing your own ORM by hand.
Since then I've embraced ORMs for CRUD. I still double-check their output, and I'm not afraid to bypass them when needed.
Not really. ORMs have defining characteristics that hand-rolled SQL with mapping code does not. E.g., something like `Users.all.where(age > 45)` creates queries from classes and method calls, while hand-rolled SQL queries are... well... hand-written.
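A toy sketch of that distinction (the names `Query`, `users`, and `age` are made up for illustration; this is not any real ORM's API):

```python
# Toy illustration: an ORM-style chainable query object vs. a
# hand-written SQL string. Names here are hypothetical.

class Query:
    def __init__(self, table):
        self.table = table
        self.conditions = []

    def where(self, condition):
        self.conditions.append(condition)
        return self  # chainable, ORM-style

    def to_sql(self):
        where = " AND ".join(self.conditions) if self.conditions else "1=1"
        return f"SELECT * FROM {self.table} WHERE {where}"

# ORM-style: the query is assembled from objects and method calls...
orm_sql = Query("users").where("age > 45").to_sql()

# ...while the hand-rolled version is simply written out:
hand_sql = "SELECT * FROM users WHERE age > 45"
```

Both produce the same SQL, but only the first generates it from a programmatic object model, which is the characteristic being pointed at.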
It's amusing to consider how much of a Rorschach test this article must be. But it's a great point, even if it arms us to abusively write off unwelcome ideas as scams. As the author points out, Pascal's reasoning is easily applied to an infinity of conceivable catastrophes - alien invasions, etc. That Pascal specifically applied his argument to the possibility of punishment by a biblical God was due to the psychological salience of that possibility in Pascal's culture - a truly balanced application of his fallacious reasoning would be completely paralyzing.
The authors here are claiming, as your quote states, that biological evolution is just one instance of a more general phenomenon. I'm not sure that's contrary to the views you're expressing. You wrote:
> The expectation that life is somehow special is wrong. There is, as far as we can see, no difference in the quarks in a dog and those in a rock
But the authors' examples do include the "speciation" of minerals! As I read it, the authors describe:
- some initial set of physical states (organisms, minerals, whatever)
- these states create conditions for new states to emerge, which in turn open up new possibilities or "phase spaces", and so on
- these new phase spaces produce new ad hoc "functions", which are (inevitably, with time and the flow of energy) searched and acted upon by selective processes, driving this increase of "functional information".
I don't think it's saying that living things are more complex or information dense per se, but rather, that this cycle of search, selection, and bootstrapping of new functions is a law-like generality that can be observed outside of living systems.
I'm not endorsing this view! There do seem to be clear problems with it as a testable scientific hypothesis. But to my naive ear, all of this seems to play rather nicely with this fundamentally statistical (vs deterministic) picture of reality that Prigogine described, with the "arrow of time" manifesting not just in thermodynamics and these irreversible processes, but also in this diversification of functions.
Making a career out of making the case for air pollution. I hope the money is worth it. This guy should have to live and raise his kids next to a coal plant.
This is a great demonstration of the fact that people coming from very different perspectives can, through good faith inquiry, find much to agree on. I think there are a lot of thoughtful arguments and conclusions in here, even though I generally find the Catholic Church's metaphysical pyrotechnics to be fairly ridiculous. It goes to show that E.O. Wilson's concept of "consilience" can apply even outside the sciences: just as different lines of scientific inquiry converge on a common reality, so can very disparate forms of moral inquiry converge, because they both proceed from a shared human experience of what's good and bad in life.
Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.
It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.
I suppose I would challenge the following, which I've also sometimes heard from philosophers:
>However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already utilize? People are constantly rotating through essentialist concepts that supposedly reflect an intangible "human element" that shifts the conversation onto non-computational grounds, which turn out to simply reproduce the errors of every previous variation of intelligence essentialism.
My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tighten the strike zone if it's 0-2 in a big situation, widen the strike zone if it's a 3-0 count), completely forgetting that you could have machines call those more accurately too, if you really wanted to.
Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.
Yep I think that is an interesting point! I definitely think there are important ways in which human intelligence is embodied, but yeah - if we are modeling intelligence as a function, there's no obvious reason to think that whatever influence embodiment has on the output can't be "compressed" in the same way – after all, it doesn't matter generally how ANY of the reasoning that AI is learning to reproduce is _actually_ done. I suppose, though, that that gets at the later emphasis:
> Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform
One might concede that AI can produce a good enough simulation of an embodied intelligence, while emphasizing that the value of human intelligence per se is not reducible to its effectiveness as an input-output function. But I agree the vatican's statement seems to go beyond that.
As an aside, and more out of curiosity, I want to mention a tiny niche corner of CogSci I once came across on YouTube. There was a conference on a fringe branch of consciousness studies where a group of philosophers argued that there is a qualitative difference of experience based on material substrate.
That is to say, one view of consciousness suggests that if you froze a snapshot of a human brain in the process of experiencing and then transferred every single observable physical quantity into a simulation running on completely different material (e.g. from carbon to silicon) then the re-produced consciousness would be unaware of the swap and would continue completely unaffected. This would be a consequence of substrate independence, which is the predominant view as far as I can tell in both science and philosophy of mind.
I was fascinated that there was an entire conference dedicated to the opposite view. They contend that there would be a discernible and qualitative difference to the experience of the consciousness. That is, the new mind running in the simulation might "feel" the difference.
Of course, there is no experiment we can perform as of now so it is all conjecture. And this opposing view is a fringe of a fringe. It's just something I wanted to share. It's nice to realize that there are many ways to challenge our assumptions about consciousness. Consider how strongly you may feel about substrate independence and then realize: we don't actually have any proof and reasonable people hold conferences challenging this assumption.
It's going to sound rather hubristic, being that I'm just a random internet commenter and not a conference of philosophers, but this seems... nonsensical? I don't understand how it isn't obvious that the new consciousness instance would be unaware of the swap, or that nevertheless the perspective of the original instance would be completely disconnected from that of the new one.
It seems to be a question that many apparently smart people discuss endlessly for some reason, so I guess I'm not surprised by this proposal in particular, but it's really mystifying to me that anybody other than soulists thinks there's any room for doubt about it whatsoever.
Completely agree. I'm interested in the detour, perhaps as fascinated by the human psychology that prompts people to invest in these debates as by anything about the question itself. We have psychology of science and political psychology, so a version of that which attempts to predict how philosophers come to their dispositions seems like a worthy venture as well.
And then Marvin Minsky asked: what if you substitute one cell at a time with a functionally identical electronic duplicate? At what point does this shift occur?
Sounds like an experimental question. Maybe 99%, maybe 1%, maybe never.
Can you suggest another way to answer your question other than performing an experiment? Can you describe how to perform an experiment to answer your question?
Would you agree to be the subject of such an experiment?
>I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally?
Well, Searle argued against it when presenting the Chinese Room argument, but I disagree with his take.
I personally believe in the virtual mind argument with an internal simulated experience that is then acted upon externally.
Moreover, if this is the key to human-like intelligence and learning in the real world, I do believe that AI would very quickly surpass our limitations. Humans are not only embodied, we are prisoners of our embodiment, and we only get one. I don't see any particular reason why a model would be trapped in one body, when it could "hivemind" or control a massive number of bodies/sensors to sense and interact with the environment. The end product would be an experience far different from what a human experiences and would likely be a super organism in itself.
Experience is biological and analog; computers are digital. That's the core of the problem. It doesn't matter how many samples you take, it's still not the full experience. Witness vinyl.
This is more of a just-so story than an actual argument, and I would say it's exactly the kind of essentialism I was talking about previously. In fact, the versions of the argument typically put forward by Anglosphere philosophers, and in this case by the Vatican, are actually more nuanced. The reference to the "embodied" nature of cognition at least introduces a concept that supports a meaningful argument, one that can be engaged with or falsified.
It could be at the end of the day that there is something important about the biological basis of the experience and the role it plays in supporting cognition. But simply stipulating that it works that way doesn't represent forward motion in the conversation.
I believe parent is referring to the HN crowd, whose reaction to this post is interestingly rather diverse (though I could be wrong, and they could be referring to the document and its sources).
Either way, I must admit that, as a Catholic, I appreciate the great discussion here. There are of course the usual snarky comments you would expect regarding the Church and religion (which is fine by me), but overall it's a well-grounded discussion.
I'm personally enjoying reading the thoughtful perspectives of everyone.