
All six features listed are redundant or useless, in my opinion.

Contracts? How are they different from, or less verbose than, plain asserts? What do they do better?

"Reactive programming"? If you remove that strange code-editing "replace", just a chain of definitions instead of variables in, say, Ruby gives you basically the same effect.

etc.

What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

And, somewhat related (both use backtracking), adopting some of the ideas of https://en.wikipedia.org/wiki/Icon_(programming_language), not at the whole-language level but in some scoped generator context, would be nice for some tasks I've had to do.



I'm taking "reactive" here to mean the language tracks data dependencies, so expensive computations are automatically made incremental. Facebook had a research language attempting "reactive" programming called Skip (originally Reflex), which is now defunct. The runtime made the language feel like a textual, statically typed Excel.

The use case was to abstract away having to manually figure out where to put caches, and how to invalidate them, on front-end servers. Instead, the runtime would figure out how to cache content and page fragments when e.g. rendering the Facebook timeline. However, it was too difficult to bridge the gap to the existing Hack codebase, IIRC in particular to the entity system it used. There were also a lot of headaches trying to figure out how to power the underlying cache invalidation system.

https://www.youtube.com/watch?v=AGkSHE15BSs

https://web.archive.org/web/20200219222902/http://skiplang.c...

The author, I think, means something slightly different though, closer to Prolog, where you define facts and then ask the runtime to make inferences about those facts.


There are a ton of reactive languages, though? QML is a mainstream one used in plenty of UIs and shipped as part of Qt (https://qmlonline.kde.org); there's also ReactiveML, Céu...


My take on a reactive language was a tiny AST-manipulator language. Since `a=b+3` assigned the AST `b+3` to `a`, you would implicitly get `a==4` when `b=1`. There was also an "eval" operator for when you really wanted `a=b+3` to just assign the evaluation of `b+3` (a single number) to `a`.
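A minimal Python sketch of the idea (hypothetical, since the commenter's actual language isn't shown): assignments bind unevaluated expressions, and reads evaluate them against the current bindings.

```python
# Hypothetical sketch of the reactive idea described above: assignments
# store expression thunks (a stand-in for the AST), and reads re-evaluate
# them against the current environment.

class Env:
    def __init__(self):
        self._exprs = {}   # name -> zero-arg callable (the stored "AST")

    def let(self, name, thunk):
        """Bind a name to an unevaluated expression."""
        self._exprs[name] = thunk

    def __getitem__(self, name):
        # Reading a name evaluates its stored expression on demand.
        return self._exprs[name]()

env = Env()
env.let("b", lambda: 1)
env.let("a", lambda: env["b"] + 3)   # 'a' holds the expression b + 3

print(env["a"])      # 4, since b is currently 1
env.let("b", lambda: 10)
print(env["a"])      # 13: 'a' re-evaluates because it stored the expression
```

An "eval" operator in this sketch would just be `env.let("a", lambda v=env["b"] + 3: v)`: evaluate once, then store the resulting number.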


> Contracts? How are they different from, or less verbose than, plain asserts? What do they do better?

How do you turn asserts into generative tests, or assign blame? Clojure's spec supports the former (https://clojure.org/guides/spec#_generators); Racket's contracts support the latter (https://www2.ccs.neu.edu/racket/pubs/popl11-dfff.pdf). Also, many popular languages have a pretty broken assert statement (C, C++, and Python come to mind) that conflates optimization levels and/or debugging with assertion support. Rust gets this right.
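To illustrate "asserts into generative tests": a hand-rolled, stdlib-only Python sketch (clojure.spec derives this automatically from the spec; the `contract` and `generative_test` helpers here are invented for the example). The point is that a contract carries its input domain, so a harness can sample inputs, while an assert buried inside a function cannot.

```python
import random

# Invented-for-illustration sketch of "contracts as generative tests":
# the contract records a precondition, postcondition, and input generator,
# so a generic harness can sample inputs and check the postcondition.

def contract(pre, post, gen):
    """Attach a precondition, postcondition, and input generator."""
    def wrap(f):
        f.__pre__, f.__post__, f.__gen__ = pre, post, gen
        return f
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda x, r: r * r <= x < (r + 1) * (r + 1),
          gen=lambda rng: rng.randrange(0, 10**6))
def isqrt(x):
    return int(x ** 0.5)

def generative_test(f, runs=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(runs):
        x = f.__gen__(rng)
        if f.__pre__(x):
            r = f(x)
            assert f.__post__(x, r), f"contract violated for input {x}"

generative_test(isqrt)   # samples 1000 inputs, checks the postcondition
```

A plain `assert r * r <= x` inside `isqrt` checks the same property, but gives the test harness nothing to generate inputs from.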

> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

Lua sort of does this with LPeg. There's also https://rosie-lang.org/, which is more special-purpose.


Perl6 is now Rakulang.

https://raku.org/


When I need to parse text with Lua, the first thing I reach for is LPeg. It's great when you can create a standalone expression to parse, say, an IPv4 address, then reuse that in a larger expression. And the data can be transformed as it's being parsed (say, converting a string of digits into an actual integer value).

I have a bunch of Lua modules based around LPeg: https://github.com/spc476/LPeg-Parsers
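LPeg itself is a Lua library, but the composition style described above can be sketched in Python with hand-rolled parsing functions (all names here are invented for the example): each small parser returns `(value, next_pos)` or `None`, a standalone IPv4 parser is built from an octet parser, and that whole parser is reused inside a larger expression, with digit strings transformed to ints as they match.

```python
# Hand-rolled sketch of the LPeg style described above (not LPeg itself):
# small parsers return (value, next_pos) or None and compose upward.

def parse_octet(s, i):
    """Parse 1-3 digits at position i as an integer 0-255."""
    j = i
    while j < len(s) and j < i + 3 and s[j].isdigit():
        j += 1
    if j == i:
        return None
    n = int(s[i:j])                  # capture transformed to an int here
    return (n, j) if n <= 255 else None

def parse_ipv4(s, i=0):
    """Reuse the octet parser four times, dot-separated."""
    octets = []
    for k in range(4):
        if k > 0:
            if i >= len(s) or s[i] != ".":
                return None
            i += 1
        r = parse_octet(s, i)
        if r is None:
            return None
        n, i = r
        octets.append(n)
    return octets, i

def parse_endpoint(s):
    """Reuse the whole IPv4 parser inside a larger 'ip:port' expression."""
    r = parse_ipv4(s)
    if r is None or r[1] >= len(s) or s[r[1]] != ":":
        return None
    p = parse_octet(s, r[1] + 1)     # toy: reuses the octet rule for the port
    return None if p is None else (r[0], p[0])

print(parse_endpoint("10.0.0.1:80"))  # ([10, 0, 0, 1], 80)
```

In LPeg the same composition is a one-liner over pattern objects; the sketch just makes the mechanics visible.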


Contracts, if done right, could be used by the compiler or some dedicated linter or tester before execution. This could open up safety guarantees way beyond what we can currently use. The question, of course, is what kind of constraint language is both useful and solvable. Unfortunately, people have been focused on the "dynamic" typing (i.e., no type checks) side of things for so long that static checking lags behind in usability (Rust is well on its way to improving things, though).

Regarding first-class grammars, you have to understand that they basically prevent any other tool from parsing your language. This means everyone has to fully implement the language to create any kind of small helper tool. In turn, your language might easily fall into the "exotic" camp (like, e.g., TeX, which effectively has first-class grammars, albeit at a low level).


> Unfortunately, people have been focused on the "dynamic" typing (i.e., no type checks) side of things for so long that static checking lags behind in usability

The longer I watch things like mypy and TypeScript evolve, the more glad I am that people have been focused on gradually typing dynamic languages. As far as I can tell, really useful contracts (or really flexible types) are super burdensome in the most general cases (e.g. `refl` brain twisters in dependent types), but still insanely useful in frequent cases. It reminds me of what people say about static typing's benefits and burdens.

So I’m hoping to see the gradual typing equivalent for contracts and verification. Start with usefulness and add a spectrum of optional safety that the programmer can ignore or use as they see fit. Personally, at least, that would be my ideal scenario.


Kotlin has some functionality for this that helps the compiler. A good example is the isNullOrBlank() extension function on String? (nullable String). It has a contract that treats the string as not null after it returns false. So if you do a null check, it smart-casts to a non-nullable String without generating a compile error.

There are a few more variations of that in the Kotlin standard library and you can write your own contracts as well. There's just not a whole lot you can do with it other than stuff like this. But it's useful.


Types in Idris sound like the contracts you mention. I learnt about them in the book "Type-Driven Development with Idris".


There is a fairly close relationship between a dependently typed language, like Idris, and 'contracts' (really, pre- and post-conditions plus other propositions) in languages like Ada/SPARK, Dafny, and Frama-C.

The major differences are that most (all?) dependently typed languages are functional and require the programmer to prove the correctness of the contract in (an extension of) the language itself, while the others typically use special annotations in normal Ada/C/a generic ALGOLish language and dump the proof onto external tools like SMT solvers, all resulting in a more 'normal' programming experience.


I think liquid types, à la Liquid Haskell, are a preferable middle ground in this scenario. The SMT solver is built into the type checker, and the refinements are limited to a linearly decidable proof fragment. Dominic Orchard has done some work generalizing the capabilities of this kind of refinement by showing that the refinement fragment needs only to be a semiring structure for the SMT solver to still resolve. This would cover a large portion of contracts and largely not impact the development process.


There is ATS as an example of an imperative language with dependent types.


> Contracts? how they're different or less verbose than plain asserts? what they do better?

The difference from asserts is that contracts express a property over two points in the execution of a program, whereas an assert only states a property at one point. Practically, that means a post-condition can refer to the state before some code ran in addition to the state after.
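The "two points" property can be sketched in Python with a decorator (invented for the example) that snapshots the arguments before the call, so the postcondition can relate old and new state, something a single in-body assert cannot see:

```python
import copy
import functools

# Sketch of the "two points" property: the postcondition sees both the
# state before the call (a deep-copied snapshot) and the state/result
# after it. A plain assert inside the function sees only one point.

def postcondition(check):
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            old = copy.deepcopy((args, kwargs))   # snapshot "before" state
            result = f(*args, **kwargs)
            assert check(old, (args, kwargs), result), "postcondition failed"
            return result
        return wrapper
    return deco

# Post-condition: push grows the stack by exactly one element.
@postcondition(lambda old, new, _: len(new[0][0]) == len(old[0][0]) + 1)
def push(stack, item):
    stack.append(item)

s = []
push(s, 42)        # ok: length went 0 -> 1
print(s)           # [42]
```

Languages with real contract support (Eiffel, Ada/SPARK, Racket) provide this `old` snapshot as a built-in, without the deep-copy cost on every call.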


This reminds me of @NotNull/@NonNull annotations in Java. Those annotations may or may not trigger, depending on how the method is called.

Then there's just a plain old notNull() static method, which will always trigger.


Xerox had xfst, which replaced regular expressions with regular _relations_ and offered named sub-expressions, which solved the regex "write-only" problem.

Xerox' original XRCE (Research Centre Europe) pages are gone, but other sites offer a glimpse, and FOMA is an open source implementation of the same language:

[1] https://sites.google.com/a/utcompling.com/icl-f11/home/xfst-...

[2] https://dsacl3-2018.github.io/xfst-demo/


> What I'd love to see is a language with a first class grammars to replace many uses of regexes or badly written DSL's, like what Perl6 tried to do.

Recently I've been tinkering with an indie Clojure-like Lisp called Janet. It implements Parsing Expression Grammars as a core library module: https://janet-lang.org/docs/peg.html


In SPARK, contracts can actually be proven (though not all, and not all without effort), so they are a step above plain asserts, or really several steps. Contracts in SPARK are also checked at runtime, which can be disabled once you have proven them, either with an automated prover or via some other mechanism (sufficient testing, manual analysis of the code, etc.). What can be proven right now is limited, so you don't really get the full scope of Ada, but they are constantly working on extending what it can handle.

If you already write tests, the extra notations SPARK introduces aren't much more code than you're already writing; really, it's just entering into the code what you (hopefully) have in your head or on paper elsewhere.


One of the really fun experiences I've had with SPARK/Frama-C/some dependently typed languages was moving runtime tests into the contracts.

Your function only works on arrays of a certain length? Rip out the test-and-return-error code. Skip asserts (which get removed if you optimize). Put the check in the contract and the code won't compile unless the array is appropriate (which might involve a test at a higher level, where the error is easier to handle), and you get zero run-time cost.
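Python can't enforce this at compile time the way SPARK or a dependently typed language can, but the shape of the trick ("parse, don't validate") can be sketched: check the property once at a boundary, wrap the value in a witness type, and the inner function takes that type instead of re-checking. All names here are invented for the example.

```python
from dataclasses import dataclass

# Runtime sketch of the pattern above: the check happens exactly once,
# at construction, instead of in every function that consumes the value.
# SPARK moves this same obligation to compile time via the contract.

@dataclass(frozen=True)
class Quad:
    """Exactly four floats; the constructor is the only checkpoint."""
    items: tuple

    def __post_init__(self):
        if len(self.items) != 4:
            raise ValueError("Quad requires exactly 4 elements")

def average_quad(q: Quad) -> float:
    # No length check here: holding a Quad already guarantees len == 4.
    return sum(q.items) / 4

q = Quad((1.0, 2.0, 3.0, 4.0))
print(average_quad(q))   # 2.5
```

The error surfaces at the boundary where `Quad` is built, which is usually where it's easiest to handle, just as the comment describes.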


> Inheritance and interfaces are relationships between classes. But what about relationships between functions?

Classes ARE functions. Conceptually, they are closures with some automatically generated features and language semantics.

https://www.youtube.com/watch?v=mrY6xrWp3Gs

Classical inheritance is BAD and you wouldn't want that kind of relationship for functions. Ofc, what the article suggests is basically assertions: why execute code once when you can do it twice!?

Modern languages should pair function signatures with tests. After I write tests (a full testing framework should also be native to a language), being able to get a function signature that ensures a pairing with a specific set of tests would be great. No more side effects slipping in without breaking tests: you'd have to change a test even when introducing a new side effect that doesn't break an existing one.


What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

There is recent research in this area.

https://conservancy.umn.edu/handle/11299/188954


> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

Why not parser combinators as a library? The language has to be sufficiently advanced to allow that, but many are these days. E.g. F# has FParsec.
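The combinator-library approach is easy to sketch even without a fancy type system. A toy Python version (nothing like FParsec's error handling, just the core idea): each parser maps `(text, pos)` to `(value, new_pos)` or `None`, and combinators build bigger parsers from smaller ones.

```python
# Toy parser combinators (a sketch, not FParsec): parsers are plain
# functions (s, i) -> (value, next_pos) or None, and seq/alt compose them.

def lit(c):
    """Match a single literal character."""
    return lambda s, i: (c, i + 1) if i < len(s) and s[i] == c else None

def seq(*ps):
    """Match all parsers in order, collecting their values."""
    def p(s, i):
        vals = []
        for q in ps:
            r = q(s, i)
            if r is None:
                return None
            v, i = r
            vals.append(v)
        return vals, i
    return p

def alt(*ps):
    """Try parsers in order; first success wins."""
    def p(s, i):
        for q in ps:
            r = q(s, i)
            if r is not None:
                return r
        return None
    return p

digit = alt(*[lit(c) for c in "0123456789"])
two_digits = seq(digit, digit)

print(two_digits("42", 0))   # (['4', '2'], 2)
print(two_digits("4x", 0))   # None
```

Real combinator libraries add the same things on top: mapping over results, backtracking control, and good error positions.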


ANTLR can embed "normal" code into the grammar itself; it's a really great tool that can be like a superpower for some specific problems.


Refinement types (for example) can be used to prove the code correct before even running it. Assertions typically can’t do that.



