Hacker News | pkal's comments

No, I just guess that most people who dislike it (like me) dislike it on an "eyeroll" level, where you wouldn't use it yourself but don't have the energy to make a fuss about it.

But in 1976 Emacs was implemented in TECO. In 1984 it was reimplemented in Lisp, because Multics Emacs and EINE/ZWEI (the Lisp Machine editors) had used Lisp as an extension language, which had apparently shown itself to be useful.


A point of clarification: GNU ELPA (https://elpa.gnu.org/) is part of Emacs, and you have to sign the copyright assignment to submit packages and to contribute to packages. NonGNU ELPA (https://elpa.nongnu.org/) doesn't have this restriction.


From the historical sources I could find online, it appears that Rust's borrow system was independently invented; at least they don't mention linear logic or anything substructural. This is kind of interesting to me, especially given the reactions in this thread, and ties into the general difficulty PL research has in finding acceptance among practitioners, especially when presented by researchers (which I think is regrettable; I like the ideas in the article!). Perhaps we really should stick to terminology like "function colors" to make effect systems more popular (or not, because the color framing makes it sound bad to have different colors in a program, IIRC).


It's the jargon, I think. PL research is in an awkward position, where the jargon is not shared with the much wider community of people using programming languages daily. From the other side, it looks like there is a small body of theoreticians using impenetrable language to discuss topics I'm supposed to be familiar with, because they are a core part of my day job. It's much easier to accept jargon when it's used in a clearly separate field.

Some of the terminology is just unfortunate. For example, I have an intuitive understanding of what a type means. The meaning used in PL theory is somehow wider, but I don't really understand how.

And then there is my pet peeve: side effect. Those should be effects instead, because they largely define the observable behavior of the program. Computation, on the other hand, is a side effect, to the extent it doesn't affect the observable behavior.

But then PL theory is using "effect" for something completely different. I don't know what exactly, but clearly not something I would consider an effect.


Man who uses arithmetic upset at research mathematicians for using words like R-module when they clearly do not mean a module in C++

More at 11


I don't remember where I read it, but I think Rust cited Cyclone as an influence, a variation of C with "region-based" memory management - more or less the literature name for "lifetimes". I think Rust may be the first to use it directly for stack variables, however.
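To make the region/lifetime connection concrete, here is a minimal sketch (my own example, not from the comment) of a Rust lifetime annotation: the returned reference is tied to the region `'a` of the inputs, so the compiler can prove it never outlives the stack data it points into.

```rust
// `'a` names a region ("lifetime"): the result borrows from one of the
// two inputs, so it is only valid while both of them are still alive.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("region");
    let s2 = String::from("lifetime");
    // Both borrows live long enough here; returning the result out of a
    // scope where s1 or s2 had been dropped would be rejected.
    println!("{}", longest(&s1, &s2));
}
```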


Rust's discussion boards has an idea of "keyword generics" for expressing some of these concepts. The idea is that a function can be generic over const, async or some other keyworded effect. I like this description. It shows the benefits without too much theory.
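As a sketch of what this looks like in the proposal discussions (hypothetical syntax, not valid Rust today, and the exact spelling has varied across drafts):

```
// Hypothetical "keyword generics" sketch: `?async` marks the trait and
// its method as generic over the async effect, so a single definition
// serves both the sync and the async "color". Not compilable Rust.
trait ?async Read {
    ?async fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;
}
```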


I don't think it's unmaintained; there is plenty of activity going on in the repo: https://repo.or.cz/tinycc.git. They just don't seem to be cutting releases.


To each his own; I really like his presentation style and the humor!


According to https://algol68-lang.org/, and as expressed in the recording, the contributors (specifically Marchesi) believe that ALGOL 68 continues to have advantages over other languages to this day ("more modern, powerful and safe" and "without successors"). One mentioned in the video is that the more complex two-level grammars allow properties that would usually be described in the semantics of a language to be formally expressed in the syntax (the example he gives is the behaviour of numeral coercion). I guess this is not a surprise, as van Wijngaarden grammars are known to be Turing complete, but nevertheless it seems like something interesting to investigate! There is a lot of lost wisdom in the past that we dismiss because it doesn't fit into the languages we use nowadays.


That isn't totally true, even on Linux we have had https://jmvdveer.home.xs4all.nl/en.algol-68-genie.html for years.

Also, most languages trace back to ALGOL 60 (the C family tree goes ALGOL 60 -> CPL -> BCPL -> B -> "new B" -> C -> ANSI C -> ..., though there was some ALGOL 68 influence, such as the idea of "casting", but apparently C only got a castrated version of what ALGOL 68 had), and Pascal was if anything negatively influenced by ALGOL 68, due to Wirth's disagreements with van Wijngaarden: https://dcreager.net/people/wirth/1968-closing-word/.


I recently realized that "pure functional" has two meanings: one is no side effects (functional programmers, especially of languages like Haskell, use it this way) and the other is that the language has no imperative fragments (the jump from ISWIM to SASL dropped the non-functional parts inherited from ALGOL 60). A question seems to be whether you want to view sequencing as syntactic sugar for lambda expressions or not.
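To make the last point concrete, here is a small sketch (mine, in Python for lack of an ISWIM) of sequencing desugared into lambda application: evaluate the first expression for its effect, discard its value, then evaluate the second.

```python
# seq(e1, e2): "e1; e2" as sugar for applying a lambda that ignores its
# argument -- (lambda _: e2)(e1) -- so sequencing needs no imperative
# fragment, only function application.
log = []

def seq(e1, e2):
    return (lambda _: e2())(e1())

result = seq(lambda: log.append("first"),
             lambda: (log.append("second"), 42)[1])

print(result)  # 42
print(log)     # ['first', 'second']
```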


Who uses the second meaning?

In my experience, "purely functional" always means "you can express pure functions on the type level" (thus guaranteeing that it is referentially transparent and has no side effects) -- see https://en.wikipedia.org/wiki/Pure_function


I'm working with Python and I'm sympathetic to the problem, so I'd be curious if you have examples of Python issues that OCaml fixes.


A few ways in which Python is not really functional:

The scoping rules of Python are not lexical

Lambdas in Python are not multiline

Recursion is not a practical way to write code due to stack overflows

Monkey patching
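Two of these points can be demonstrated directly (a quick sketch of my own; the numbers are only illustrative):

```python
import sys

# 1. Closures capture variables by reference, not by value: all three
#    lambdas see the final value of i (the classic late-binding gotcha).
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])  # [2, 2, 2]

# The usual workaround binds i at definition time via a default argument.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]

# 2. Deep recursion hits the interpreter's recursion limit instead of
#    being turned into a loop (no tail-call elimination).
sys.setrecursionlimit(100)

def count(n):
    return n if n == 0 else count(n - 1)

try:
    count(1000)
except RecursionError:
    print("RecursionError")
```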


Pure functional doesn't mean no side effects, but controlled side effects.
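A minimal sketch (mine, in Python) of what "controlled" can mean: effects become first-class descriptions that are composed purely and only executed at the program's edge, roughly the shape of Haskell's IO.

```python
# An effect is just a description (here, a thunk); building the program
# performs nothing. Only running it at the edge executes the effects.
def bind(action, f):
    # Sequence: run `action`, feed its result to `f` to get the next action.
    return lambda: f(action())()

def put(log, msg):
    # An effectful primitive, described rather than run.
    return lambda: log.append(msg)

log = []
program = bind(put(log, "hello"), lambda _: put(log, "world"))
assert log == []   # nothing has happened yet
program()          # run the effects at the edge
assert log == ["hello", "world"]
```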


Then perhaps "Did you learn X stating your opinion on it as though it were comprehensive and authoritative"?

