Noulith: A new programming language by the current Advent of Code leader (github.com/betaveros)
220 points by janvdberg on Dec 14, 2022 | 127 comments


I just checked the 2nd place[^0] person and they... also have their own programming language...[^1]

What's going on here? Are they just extra motivated to show off their languages? Is making your own programming language more common than I realized?

EDIT: same with the 4th place person[^2]

EDIT2: same with the 7th place[^3]. Btw, this is the 4th person whose profile is clickable, so actually 3 of the 4 leaders with their GitHubs linked have their own programming languages

EDIT3: jonahx pointed out I made a mistake and the 2nd place person didn't actually create Vyxal, just used (and contributed to) it

[^0]: https://adventofcode.com/2022/leaderboard

[^1]: https://github.com/Vyxal/Vyxal

[^2]: https://github.com/leijurv/Kitteh2

[^3]: https://github.com/nim-ka/nim


AoC problems often benefit from knowledge of lexers, parsers, and other text processing gymnastics.

People who’ve made compilers and interpreters are both attracted to the problem solving element of AoC and good at solving those problems.
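
As a small, hedged illustration in Python (the "move N from A to B" instruction format below is just an example of the shape many puzzle inputs take):

  import re

  # Tokenize-and-pattern-match is the same reflex a compiler writer has;
  # most AoC input lines fall to a single regex.
  def parse(text):
      pattern = re.compile(r"move (\d+) from (\d+) to (\d+)")
      return [tuple(map(int, pattern.match(line).groups()))
              for line in text.strip().splitlines()]

  print(parse("move 3 from 2 to 1\nmove 5 from 9 to 4"))  # [(3, 2, 1), (5, 9, 4)]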

Beyond that, making your own compiler or language is fun — more people should try it.

These are great starting points:

https://craftinginterpreters.com/

https://compilerbook.com/


You just need Lisp then, and you can have all the DSLs you want for the problem at hand, instead of writing a new tokenizer, lexer, parser, and interpreter every time. ;-)


Coalton is a good option to explore if you fancy trying AoC with Lisp:

https://github.com/coalton-lang/coalton


The problem, as far as AoC is concerned, is that Lisp tends to produce Lisp-like DSLs, which won't match the AoC specs. Then again, AoC specs are simple enough that you can still get away with using Lisp.

Obviously, the OP targeted their language at AoC because, come on, who would ever want the ability to swap the precedence and meaning of operators? That would be a fresh hell that would make early-90s abuse of operator overloading in C++ seem like a joyful time.


I was very excited about Lisp and "live" environments... I spent a while playing with various Lisps, especially Clojure and ClojureScript.

I still had to restart the REPLs from time to time! I came to the conclusion that I enjoy a language with near-0 startup time more than live environments. Extremely loose typing is not a good compromise for me... but I also came to dislike the "straitjacket" of extremely strict type systems (like Haskell's and friends!).

I think these days Golang is a good compromise between practicality and safety. It's fast, cross-platform, it now supports generics (since 1.18), and it is overall OK to work with. I feel it hits an OK sweet spot of features.


> near 0 startup time as opposed to live environments

depends on what you do, but many Lisp-based live environments have near-0 startup time in a terminal. Try SBCL (a version of Common Lisp providing fast native compilation, incremental and batch), for example. SBCL can also save its whole memory image (on demand) and restart from it, so one is back in the same environment in milliseconds.

https://sbcl.org


TL;DR: Golang's tooling and environment (libraries, open source projects, people, etc.) are way better than SBCL's.

I did play with SBCL a bunch, and other, a bit more "esoteric", Lisps too! [0] I just feel like languages that aren't in some form of top ten, and that lack big corporate sponsorship, are pretty much always super behind when it comes to tooling.

You can get one or two or three AMAZING Golang IDEs to work with (VS Code with the Vim plugin is what I use). For SBCL, it HAS to be Emacs. And I have mixed feelings about Emacs, even when using Evil mode.

0: https://extemporelang.github.io/


Personally I prefer the interactive development style of SBCL. Thus I would easily prefer SBCL and GNU Emacs over VS Code, especially given that VS Code uses telemetry to spy on its users.


well... your HN handle _is_ lispm, so you may be a lil biased lol :-p

--

PS. I know I know it is the other way around... it is just an attempt at a joke :-)


I disagree


OK, you've convinced me.


On a related note, I would pay good money for "Crafting Interpreters Part II: Implementing Debuggers and Adding Visual Studio Code support for your language".


hey that's me in 7th. that language you're looking at is a short-lived project i made when i was 12 lol. it's actually the source of my username; i didn't name it after myself but the other way around. anyway yeah it's crap and barely Turing-complete and definitely NOT what i use for aoc by any stretch.

> Is making your own programming language more common than I realized?

yep it's just that


Impressive project for a 12-year-old! haha, though I am so confused opening up a `.nim` file and not seeing Nim a la nim-lang.org ;)


Hah, you're here as well. Found you on YouTube after you scored #1 and am mighty impressed by your game.


Some armchair philosophy here. I think languages are fundamentally related to how we think about, process, and express ideas. When you spend a lot of time doing something, you naturally come up with the words to express complex and abstract ideas, which lets you process information faster. These competitive programmers have the experience to realize that certain syntax, patterns, and common functions better help them solve the problems they usually encounter, so they build these DSLs to serve that purpose.


Yes and: Reading someone else's code is akin to mind reading.


I’d go further: Reading someone else’s writing is akin to mind reading.


reading is mind reading


I think it's just very common. When I was in high school, I built my own C compiler just to learn how things work under the hood.

I'd assume that it's similar for others, in that their programming languages aren't up-to-date or practically usable, but are more like cool side effects of their learning journey.


Small correction: hyper-neutrino is a code-golfer and user of Vyxal, but the creator of Vyxal is: https://github.com/Lyxal


Ah my mistake. Thanks for the correction. Though it looks like they've made some contributions


He's written a few of his own programming languages too: https://github.com/hyper-neutrino/proton


well crap. I can't edit the op anymore. I guess next time I just need to not rush to comment


* Advent of code is about solving puzzles.

* Making a programming language is like solving a puzzle.

* People who like solving puzzles like puzzles of all kinds, so the likelihood that a top Advent of Code solver has also worked on the "puzzle" of making a programming language is high.

Feels pretty logical to me :-)


This year, I also started to develop a language (with a simple syntax) from scratch (in C) to solve the puzzles. It is a lot of work, so I quit after having solved three puzzles. I did it in a kind of literate programming style using MarkDownC (https://github.com/FransFaase/IParse/blob/master/README.md#m...). You can see the page with the code at: https://github.com/FransFaase/AdventOfCode2022/blob/main/Lan...


I've done AoC using my own language before. As a task it's at a sweet spot for finding weaknesses in the language/library/implementation: real and varied enough to exercise your system, small chunks of work, lots of code to compare yours to, with fun and competitive juices.

The first time I did it, it forced me to fix some major problems. My language would still be a handicap for me now, albeit a comfy one.

fwiw: https://github.com/darius/cant/tree/master/examples/advent-o... (haven't done this year's so far)


Hey! hyper-neutrino here - I notice you've already clarified, but yes, Vyxal was originally made by code-golf user Lyxal and not me; I've contributed minimally to it, but I would definitely recommend checking it out, as it's a great project that goes a lot more in-depth and is way more organized than what I've done :) hence why I've pinned it to my profile.

I do have my own programming languages though; my best one so far is proton[^0] though it's pretty buggy because I wrote it in high school before I took the uni-level CS classes that taught things like language design and how to make actual parsers.

[^0]: https://github.com/hyper-neutrino/proton


Leijurv isn't using Kitteh2 for AoC (his YouTube videos indicate Python). I think it was just a project.


Yep, he uses plain Python.


yeah sorry. didn't mean to imply that they are


I think it's quite common for competitive programmers and code golfers to have their own language, often inspired by APL. They often do not require any ecosystem as they are supposed to only solve puzzles, so it's really easy to develop them.


Back in the day, I would prototype in python and code in C. Maybe now I should try prototyping in an idiolectal golflang and coding in python?


That means they've got plenty of free time for both AoC and their language :)


I think the point of being at the top of AoC is that they don't have spare time, so they have to do it in seconds :P


Building a compiler is usually part of a Computer Science curriculum, so most people with a CS degree will technically have their own programming language (although probably a very limited and bad one).


Making your own programming language is not that hard. It's understandable to think that it is, because large, common languages have millions of lines of code and hundreds of thousands of commits authored over decades. But don't compare yourself to them - just making a scrappy language to fit your precise problem domain can be done in a couple of thousand lines of code and a couple of weeks. It's also super fun! (I've done it before...)
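
To make that concrete, here's a minimal sketch in Python: a tokenizer plus a recursive-descent evaluator for +, * and parentheses. The grammar is invented for illustration; a real "scrappy language" would grow from exactly this kind of core.

  import re

  def tokenize(src):
      # Keep numbers, the two operators, and parentheses; skip everything else.
      return re.findall(r"\d+|[+*()]", src)

  def evaluate(tokens):
      pos = 0
      def atom():                      # atom := NUMBER | "(" expr ")"
          nonlocal pos
          tok = tokens[pos]; pos += 1
          if tok == "(":
              value = expr()
              pos += 1                 # skip the closing ")"
              return value
          return int(tok)
      def term():                      # term := atom ("*" atom)*
          nonlocal pos
          value = atom()
          while pos < len(tokens) and tokens[pos] == "*":
              pos += 1
              value *= atom()
          return value
      def expr():                      # expr := term ("+" term)*
          nonlocal pos
          value = term()
          while pos < len(tokens) and tokens[pos] == "+":
              pos += 1
              value += term()
          return value
      return expr()

  print(evaluate(tokenize("2 + 3 * (4 + 1)")))  # 17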


"same with the 7th place[^3]. "

No, that's not the Nim[0].

[0] https://github.com/nim-lang/Nim


My theory: (Competitive) programming speed is largely a function of how intimately you know your chosen set of tools and standard libraries. Time spent reading docs is "wasted" time.

If you've written your own language, assuming it's a decent one, then:

a) It has all the tools you need to be productive.

b) You know exactly how it works and what the APIs are (assuming the implementation details are still fresh in your memory).

c) The features are tailored to exactly how you like to use them, personally.


I use a custom Python preprocessor for Advent of Code (n.b. I don't come very close to winning). It definitely beats writing pure Python for me but given that I only use it once a year it's hard to keep all of it in my head. So YMMV, I guess.

IMO, high-speed competitive programming is partly knowing your tools well, but also a lot about coming up with abstractions on the fly very quickly. If you watch the top Advent of Code solvers, they'll carve up the problem in seconds, and they're really good at picking just the right amount of complexity for the problem at hand and not investing any more than that. Couple that with a touch of cleverness ("let's eval the input", "who needs a tree when I can shove everything into a dictionary") and a very low error rate (I would hit the leaderboard but almost always lose significant time to debugging…) and they come out on top.


> If you watch the top Advent of Code solvers they'll carve up the problem in seconds, and they're really good at picking just the right amount of complexity for the problem at hand…

Any recommendations of who/where to watch? I’d love to see what their process looks like.


https://youtube.com/@jonathanpaulson5053 is usually near the top of the leaderboard and takes time to explain his approach after each solve


Sure, here's an example of a speedrun (done after the fact): https://www.twitch.tv/videos/854280596


> My theory: (Competitive) programming speed is largely a function of how intimately you know your chosen set of tools and standard libraries.

Python is a very popular language to use in AoC. My theory: If Guido van Rossum were competing, he would not be on the leaderboard at this point.


Guido van Rossum did not build python with speed-programming competitions in mind.


I quoted the relevant portion of your comment. Guido knows his chosen set of tools and standard libraries like very few.

He certainly knows Python much better than everyone using Python who is currently on the leaderboard.

He wouldn't be on the leaderboard because "knowing your tools and standard libraries" is only a small component of what makes you good at competitive programming.


I don't think anyone in this thread was implying that language devs are usually good speed-programmers, the relationship was in the other direction.


Same thing as programmers making tools; if you know you have a defined set of inputs and outputs that you need to use over and over, it might make sense to write your own functions, methods, classes, macros, and programs. Taking that one step further, it might then morph into a full-blown custom programming language. Domain-specific languages are, after all, fairly common in many domains.


Being good at parsing is most of what you need to be a top AoC contender, so there's a strong connection there.

On a side note, I've had a good time reformatting input as Python expressions and using eval()
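
For example, this year's day 13 packet lines happened to already be valid Python literals, so "parsing" was one call per line (ast.literal_eval being the safer cousin of eval):

  import ast

  raw = "[1,1,3,1,1]\n[1,1,5,1,1]"
  # Each non-empty line is a Python list literal, so literal_eval does the parsing.
  packets = [ast.literal_eval(line) for line in raw.splitlines() if line]
  print(packets)  # [[1, 1, 3, 1, 1], [1, 1, 5, 1, 1]]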


A language made to win hacking competitions prioritizes speed and power, as in solving a problem fast, rather than readability, debugging, performance, etc.


I think AoC isn't that far removed from code golfing, and a lot of people have custom languages for code golfing.


what's with the carets [^0] instead of [0] in your citations?


The few examples of markdown systems I've come across that support references use the caret, including GitHub-flavored markdown. It's also closer to how reStructuredText does it.

I actually haven't come across any markdown flavors that support references and don't use a caret.


implied superscript


so I was going to give you actual superscripts: ⁰¹²³⁴⁵⁶⁷⁸⁹ but (I guess for legacy reasons?) they display with uneven weights?


Even weights on my machine, I guess the font at the top of your font stack has poor coverage of newer codepoints.


It's worth noting that betaveros won several previous Advents of Code without using Noulith. You can't attribute too much of this year's success to the language. Still, it's pretty cool, and I've really enjoyed reading his solutions. Noulith feels like walking through a syntactic candy store and eating whatever you want.


More exactly: winning 2019, 2020, and 2021, and runner-up in 2018, which seems to be the first year they participated.

By the looks of it right now, Betaveros will win 2022 too. One could say that Betaveros dominates AoC.


Including winning 2021 and 2020! I'd guess that exclusively using a new programming language that you recently invented is actually an impediment to speed, though apparently not a significant enough one.


Having the syntactic sugar might make up a little bit for the overhead of using a recently invented language though.


the language seems designed for advent of code speed, so it might be a win


Very cool. I also made a language specifically for AoC, with some similarities: https://github.com/lukechampine/slouch

Example of solving a (non-AoC) problem: https://youtu.be/i_zDbInYOpQ

One of my big takeaways is that the "IDE" plays a big role in how fast you can solve. Recomputing the expression on every keystroke seems a little insane, but the instant feedback you get is priceless.


Insane in the best possible meaning of the word. Man, that is so cool.

In my opinion AoC is a very good example of "if all you have is a hammer". For AoC you don't need to write a program that finds a solution for any possible input; you need a calculator to compute the solution for your input. This is such a calculator, and a very nice one.

Suggestion: Flip the assignment operator. You keep writing an expression left to right, but then you have to scroll all the way back to the left to make a variable. This could be streamlined by making it, e.g., "EXPR -> VAR_NAME" instead of "*VAR_NAME EXPR".

Why don't you make a Show HN submission? I'd upvote you. :)


> Immutable data structures (but not variables)

Why not immutable variables too?

About 90% of my variables I only assign once, even when writing ordinary imperative (not functional) code. Whenever a variable is reassigned, that usually is a mistake or something unexpected going on.

I find it maddening that most languages don't have single-assign values like Scala's val (const doesn't count because you can't initialize it with a runtime-computed value).
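
(For what it's worth, the closest Python gets is typing.Final: the initializer can be a runtime-computed value, and a static checker such as mypy flags reassignment, though nothing is enforced at runtime. A quick sketch:)

  import math
  from typing import Final

  def compute_radius() -> float:
      return math.sqrt(2.0)           # stand-in for any runtime computation

  RADIUS: Final = compute_radius()    # single-assign, runtime-initialized
  AREA: Final = math.pi * RADIUS**2

  RADIUS = 1.0  # mypy: error: Cannot assign to final name "RADIUS"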


I would guess because when you're doing a hacky thing, it's convenient to go back and make a change like "x = something with x" rather than updating all occurrences.

Source: I do that all the time when trying to solve AoC problems :)


> const doesn't count because you can't initialize it with a runtime-computed value

In what language? C's const is an unmodifiable runtime value for example, as is Zig's. Go is one language that I can think of where const is compile-time though.


Also Nim.


In Nim, let is an immutable runtime value, but const is an immutable compile-time value, sort of like constexpr in recent C++.


C and C++ "const" is weird because it's not constant, it's just a weird name for immutable - K&R C doesn't have const, it's an invention of C++ and thus modern C.

Bjarne's book calls these immutable variables "constants" which is completely crazy, but I guess indicates that the name is on purpose. This leads to the usual C++ insanity shortly afterwards in the book...

> An object that is a constant when accessed through one pointer may be variable when accessed in other ways.

Yeah, no. If the object were actually constant you couldn't do that; your language has betrayed the poor programmer. You can't "access" constants; you can use them to initialize variables, parameters, and so on, but they can't be "accessed", and that's where you've gone wrong here.

Rust's Quiz has an example showing what happens with actual constants, and it's not that.


You are merely very hung up on nomenclature. They had to choose a keyword, and they chose const. It's very common: people have mental models and preconceptions about what a word means. From that point, when in a specific context where their innate expectations are not met, they can have one of two reactions: adapt to the specific meaning in that specific context (mental flexibility) or reject it and rant that things are wrong (mental rigidity).

Different people will have different breaking points for when things have been stretched too far, i.e. meanings they are not willing to compromise on. Working with people who don't have the same level of flexibility or rigidity, or who differ in different contexts, can be a day-to-day pain.


Nomenclature is certainly not great, but the trouble, as so often, goes deeper. Bjarne has these "constants" and so he can't see any reason he'd need actual constants which are, you know, constant. If he recognised that he's got immutable variables, and not constants, then the need for actual constants is a little more obvious.


What do you consider #define to be doing? Simple usage arguably provides a way to have actual constants.


But not runtime constants. I do agree with GP in that it would've been nice to have actual constant values, but it is what it is. The last thing C and C++ need is more features. C because we don't want to ruin it, and C++ because we want to slow the death.


It's fun how you refer to anything after K&R as "modern C"; since ANSI C was first specified in 1989, this means that any version since then is modern. Great!

I do agree that "immutable" would have been a better name, but I guess that angle hadn't been invented yet. :)


Something, something, get off my lawn? Actually I wasn't getting paid to write C back in 1989, but I definitely did write C in the era when ANSI C wouldn't have been generally accepted.

Immutable variables are a good idea (indeed I agree with Rust's choice to make variables immutable by default), but, they're not constants, and so this is an unnecessary confusion.


It's a weird name for "readonly", not "immutable", as evidenced by this example:

   int x = 0;
   int const *px = &x;
   ++x; // *px changed
There's no way to indicate true immutability in C++, unfortunately. The language does decree that any object (which in C++ parlance includes ints etc) with a const type cannot be mutated without triggering UB; but there's no way to declare a pointer or reference that can only point to such objects.


I like this idea. I think I've considered adding immutable variables, but haven't prioritized them because other things I wanted to work on have a better expressiveness or bug-catching ability to effort ratio. Plus I haven't thought of a good syntax for them. But I might eventually get around to it.


Because you cannot have loops in an immutable universe? Sure, you can map functions, but sometimes loops are fun too.


It's actually better if you have immutability here, languages which try to instead have a single variable for the values, mutating that variable each time around the loop, get into some trouble.

Compare Rust's for loop:

  for k in 0..10 {
    // k is an immutable variable, brought into existence for one iteration
    // next time around the loop that's a different immutable k
  }
Versus the C for loop:

  for (k = 0; k < 10; ++k) {
    // k is a long-lived mutable variable, we can change k inside the loop and it'll remember!
  }
We can tell Rust we want a mutable variable, if we want, but it's not the same:

  for mut k in 0..10 {
    // k is a mutable variable...
    // but still brought into existence for each iteration
    // changing will work in this scope... but it won't last
  }


Cool, I didn't know that Rust did that (or have forgotten)!

But it sure looks like a map where we throw away the result. What I mean by that is that the loop construct must be served the iterations from an iterator, which may or may not use immutable data. I don't think you can make an immutable while loop, for example?

EDIT: Also, you better have a side-effect in that construct, otherwise it has no effect, right?


I mean both ways should be available. It is generally reasonable to support mutable variables, but immutable values should be supported as well. E.g. Scala has two separate keywords, var and val (the difference is that you can't assign a val again once it is initialized). I want the same in C#, Python, and other languages.


also JS let/const


Hehe. I am a fan of powerful languages, and never saw the benefit of Java protecting you from yourself, or the angst about C++'s power. But swapping operators and their precedence sounds like the coolest footgun I ever saw! IOW, you can also overdo it... :-)


The really interesting thing here is that the creator of this language (betaveros) is using this specific language in the Advent of Code contest where he is currently number one on the leaderboard: https://github.com/betaveros/advent-of-code-2022

Which is an interesting take on the concept of domain specific language.


I think it takes a lot more than a language, no matter how efficient, to get top 10 in AoC - it's more about being able to parse the problem exceptionally quickly, or rather recognise practically from memory the class of problem the day is asking for, and be halfway through coding it before you have even finalised the specifics.

I managed to get 47th on one of the days once... and that was after solving it in under a minute.


> and that was after solving it in under a minute

That depends a lot on the day. Towards the end it takes longer. Today's 47th place solved the problem in 11:54


Consistently being able to grasp the problem very fast, and then write a solution, is what impresses me the most.

If you check the results for the events of the last couple of years, you see the same set of people up in the top 10-20, basically solving hundreds of problems. Problems that span quite a large set of CS and math problem domains.


I thought the leaderboard was dominated by automated GPT-chat bots.

Did they get banned or couldn’t they keep up with the puzzles?


After about day two, ChatGPT didn't produce anything useful anymore iirc; at least when asked to write the solution in Haskell.

Interestingly, it still produced internally consistent solutions with valid Haskell code; they just didn't have much to do with the given problem. And I feel like that sums up one of the main issues with ChatGPT quite well: if you scrolled through StackOverflow and saw a really polished solution for something, you'd assume it was mostly correct, because for people, 'visible effort' put into code usually correlates with correctness. But ChatGPT always produces really polished solutions really confidently, even if they are complete nonsense.


Yup, in other domains ChatGPT will confidently give answers that sound vaguely plausible but miss out key things a human would definitely highlight, or just get the specifics wrong.

There's a video of Harstem, I think, following build orders ChatGPT came up with for StarCraft II. ChatGPT doesn't say "I have no idea how to play this game"; instead it confidently offers advice that is... not good. Harstem wins at least one game anyway, but you know, not from the great advice, just from being a good player.


I’ve had that signaling discussion here a few times. With a person, usually they’ll give you some indication (explicitly or not) that they’re uncertain of the correct answer, or they’ll have a relevant track record to help you evaluate their honesty.

No such luck with ChatGPT.


People are confidently wrong all the time, and ChatGPT does have a relevant track record to help evaluate whether its apparent confidence is justified (and that track record, as is true for many people, is "highly mixed, so you shouldn't implicitly trust it").


People were asking ChatGPT multiple times, and then taking the most common working answer. Guess that's kind of a method to estimate its confidence.


> if you scrolled through StackOverflow and saw a really polished solution for something, you’d assume it was mostly correct

IME, that should, at best, be a very tentative assumption for StackOverflow.

Heck, for me it is also a very tentative assumption when I see a very polished solution in official documentation from lots of providers (AWS, I’m looking at you.) Though in that case I assume it was correct at some point in time, for some (possibly not publicly released) version of the software/service.


GPT only got a good placement on Day 3 Part 1. It seems not to have been very useful on any other day (except for people maybe using Copilot).


or a way to get a k or g implementation.


I honestly can't imagine a better tool for AoC problems than stock Haskell. I'm at only 245 lines of code total after day 14 this year; 2021 required 594 altogether. List comprehensions, pattern matching, a rich standard library, and strong typing with inference make solving these little problems with GHC almost feel like cheating.


Would you happen to have a link to your solutions (Github, Gitlab, wherever)?



Dynamically typed and very flexible syntax with a lot of edge cases?

The language doesn't look bad in comparison to a lot of others, but the combination above is a no-go for me when it comes to any kind of production code.


I mean, that's probably because it's specifically geared towards exercises like this. Languages built for Advent of Code will probably prioritize stuffing in as much syntactic sugar as possible over writing maintainable code

Your criticisms are valid, but only if you're approaching it from the perspective of using it for a full-fledged project.

More likely use cases are quick scripts you wanna write or little thought experiments you wanna try out. For that, it's likely a great choice of language


Yeah that makes sense.


Agreed. I don’t mind dynamic typing at all, particularly when facilities exist to establish contracts around what the data you’re passing around looks like. But I don’t think this is a brilliant solution to Python’s problems. Python has issues, don’t get me wrong, but it’s a very, very mature language without a lot of WTF moments. On the flip side, a brand new language with this level of freedom means you really need to sit down and grok everything from the inventor's perspective before you can be proficient. You need to absorb the entire language for it to work. Meanwhile, and I’m hesitant to beat this drum, a lisp like Clojure is not at all confusing or unclear. The syntax might be hard to grok initially, and the short function names or departure to literal functional programming might throw you off, but after that you realize the air is crisp. The language and stdlib are small and relatively old for a reason - the ecosystem fosters this. You can do and build anything without fuss. It’s wonderful.


This language isn't an attempt to solve Python's problems. At best it's an attempt to get around some minor speed bumps I personally experience when writing short Python scripts. I did not create this language with the expectation or hope that even a single other person in the world would want to learn it, much less become proficient. Clojure is a great language! It's just that I, personally, need more mental effort to translate thoughts to code using prefix notation and other aspects of Lisp syntax, and decided that reducing that kind of mental effort for myself was a goal of this language.


Author here. FWIW I 100% agree with your assessment. This would be a horrible choice for anything resembling production code and I hope nobody considers it. I'm not really even sure I'd recommend anybody other than me use the language for anything; there are a lot of decisions informed by how I, specifically, think about and write code. For me, this language works well when I want to write scripts that are <100 lines or so, and for that alone I think it's achieved its purpose. I think of it as a "home-cooked" programming language, a la <https://www.robinsloan.com/notes/home-cooked-app/>.


Amused at "everything is an expression", but also apparently there exist both if-statements and ternary-expressions for some reason. :P


Several really cool things in the language. I'm trying to wrap my head around the concept of precedence order being dynamic, and to come up with a use case where it would be an advantage to change precedence order dynamically. I'm sure people here at HN can come up with some good examples.


Hi, author here. I'll probably go into this in more detail in a blog post later, but the short reason for runtime precedences (briefly discussed much further down in the README) is that I wanted to be able to write chained comparisons like `x < y < z` without privileging comparison operators in the syntax, so `less_than := <; x less_than y less_than z` also has to work. This means that you can never resolve the structure of a chain of binary operators while parsing, so you have to wait until runtime. The fact that precedence is mutable at runtime is not at all important, but given the above decision, I thought I might as well go the full mile. (Also, it's funny :)
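
A rough Python sketch of just the runtime-precedence part (this illustrates the general idea only, not Noulith's actual implementation, and it ignores the chained-comparison subtlety):

  import operator

  # Precedences live in a runtime table, so parsing can stay flat:
  # a chain is just [operand, op, operand, op, operand, ...].
  PRECEDENCE = {"+": 1, "*": 2}
  FUNCS = {"+": operator.add, "*": operator.mul}

  def eval_chain(chain):
      chain = list(chain)
      while len(chain) > 1:
          # Reduce the highest-precedence operator, looked up *now*, at runtime.
          i = max(range(1, len(chain), 2), key=lambda j: PRECEDENCE[chain[j]])
          chain[i - 1:i + 2] = [FUNCS[chain[i]](chain[i - 1], chain[i + 1])]
      return chain[0]

  print(eval_chain([2, "+", 3, "*", 4]))  # 14: '*' currently binds tighter
  PRECEDENCE["+"] = 3                     # precedences are mutable at runtime...
  print(eval_chain([2, "+", 3, "*", 4]))  # ...so the same chain now yields 20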


Very cool! Thanks for the explanation. And making my mind explode. ;-)


Disclaimer: entirely off the cuff

Although the creator demonstrates this with `*` and `+` I could imagine this being useful within a set of higher level operators (functions)

eg. perhaps you want an argument of (infix) `map` to evaluate before `map` itself

Or more generally, you might have a bunch of scalar operators which you'd like to have higher precedence than collection operators which should act on the results


Yes, but that sounds like something you decide on once, not dynamically and more than once. More like "in this function call, given these operands, I want '+' to be evaluated before '/'."

Or I'm probably missing something.


Imagine you're writing a lot of one-off solutions (say, for AoC); you may end up building a small set of helpful functions you compose together

Being able to customise how they compose (through precedence) lets you build a kind of DSL for your problem space

No idea how often that would be used - but I can imagine hobby scenarios



Neat. Lots of good stuff.

What's the interplay between infix notation and operator precedence? Expressions are not infix?

Also, please explain the dictionary and set duality.

FWIW, in my toy language, I use one syntax for sets, lists, and arrays, delimited with brackets '[' and ']'. Whereas maps, dicts, and (prototypical) objects use curlys '{' and '}'. A la JavaScript et al.

Lastly, wrt using '++' for concatenating lists, there have been times I've wanted sugar for prepending to a list. So maybe support both 'atom ++ list' and 'list ++ atom'?

For future updates, please keep us posted.


I'm the author but not the OP. Thanks!

I'm not sure I follow the question about infix notation and precedence, but: Trains of identifiers like `a b c d e f g` are always parsed as `b`, `d`, `f` being infix binary operators; their precedences are looked up at runtime to decide how that expression evaluates. Explicit function calls and indexing always bind more tightly, and operands in those trains can contain either; an expression like `a(arg) b c(arg1, arg2) d e[i] f g[j:k]` has the same binary operators as before. But operators are only single identifiers and don't contain function calls, so `a b (c)` is still the binary operator `b` with arguments `a` and `(c)`.

"Sets" are just dictionaries with set elements as the keys and null as every value. This makes sense because you want testing for set membership to be fast like locating a key in a dictionary and because, like in Python, iterating over a dictionary iterates over its keys.

We also already have `.+` for prepending a list element to a list and `+.` for appending. (These are the analogues of Scala's `+:` and `:+`.)


> No classes or members or whatever, it's just global functions all the way down. Or up.

I can happily report that I now know the equivalent feeling to driving too fast over a speed bump, but for reading.


Wait, do you mean the top of the leaderboard is filled with actual persons and not with competitive programming teams trying to dominate the most visible event? If so, why?


> competitive programming teams trying to dominate the most visible event

What fantasy world is this? :D

Competitive programming teams, afaik, exist in the context of universities and they want to win ACM-ICPC, that's pretty much it. Otherwise, competitive programming is largely a solo affair.

In an event that yields no prize whatsoever, winning is just for bragging rights. There are no bragging rights in winning as a team against solo competitors.

Also, you only have one problem per night and AoC problems are too easy for it to make sense to have a team collaborating. The only thing a team would do is take the minimum solution time of their participants.


I expect a single individual can move with more agility, which is faster for the relatively small-scope problems AoC presents.


AoC is too easy. There's no benefit to more than one person.


this is very cool and I’m stoked to follow along but every time something comes along like this I find myself thinking “lisp is the answer” and I guess I’m just that old crusty neckbeard hacker I always feared I would become


Excellent language design.

I might implement it in C, as soon as I get some time.


Can the title be un-editorialized? It makes the dev look arrogant for no reason


lmao 12k lines of code in lib.rs. Good at coding doesn't mean good at structuring code.

Waiting for someone to tell me there ain't anything wrong with 12k lines in a single source file, yet for some reason other utility functions get their own file.


They most likely don't care... :)

Also FWIW most software you use has 5k+ line source files in it.


i would really love it when AI replaces competitive programming...



