This is not how I understand the performance model. Allowing invocation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.
In performant Julia code, the compiler is not invoked, because types are statically inferred. In some cases you can have dynamic dispatch, but that doesn't necessarily mean that the compiler needs to run. Instead you can get runtime lookup of previously compiled methods. Dynamic dispatch does not necessitate running the compiler.
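To make that concrete, here is a toy sketch (names are mine, purely illustrative): the call below is a dynamic dispatch because the element type is abstract, but as long as the needed specializations have already been compiled, the runtime only looks them up in the method cache.

    double(x) = 2x

    double(1)          # compiles and caches double(::Int)
    double(2.5)        # compiles and caches double(::Float64)

    xs = Any[1, 2.5]   # element type is Any, so inference can't pin down the call
    ys = [double(x) for x in xs]   # dynamic dispatch per element, but it hits the
                                   # already-compiled instances; no recompilation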
I don't believe it, otherwise why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")
Perhaps there is something about subtyping which makes this answer ... not correct -- and if someone knows the real answer, I'd love to understand it.
I believe the answer is performance -- if I can JIT at runtime, that's great -- I get dynamism and performance ... at the cost of a small blip at runtime.
And yes, "performant Julia code" -- that's the static subset of the language, the one I roughly equated with the part people are trying to pry free from the dynamic "invoking the compiler again" side.
> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")
This is exactly what the new AOT compiler (juliac) does. The original article is just a bit inaccurate.
The problem, though, is that if you have a truly dynamic call site where you have no idea which method body will be called, then the AOT compiler can't know whether the right method specializations will survive the trimming process, so you'll get errors or warnings when compiling with the --trim feature active (--trim is what is used to make the AOT-compiled binaries small).
However, there are still lots of cases where you can have a dynamic dispatch but can convince the compiler that there will be an already compiled method signature for every possible specialization. In that case --trim will work fine and do exactly what you described above.
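For example (toy code of my own, not from the juliac docs), a call over a small, closed Union is the kind of site where every possible specialization is known ahead of time, so it can be lowered to branches on the runtime tag (union splitting):

    area(r::Float64) = pi * r^2
    area(n::Int)     = Float64(n)^2

    # The element type is a closed Union, so the compiler can enumerate
    # both method instances and branch on the value's tag at runtime.
    xs = Union{Int,Float64}[1, 2.5, 3]
    total = sum(area, xs)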
I'm not exactly sure what you don't believe; your comment is hard to follow, or relies on premises I haven't detected. What you are describing in your first paragraph is somewhat reminiscent of dynamic dispatch, which Julia does use, but it generally hampers performance. It is something to avoid in most cases.
Anyway, performance in Julia relies heavily on statically inferring types and aggressive type specialization at compile time. Triggering the compiler later, during actual runtime, can happen, but is certainly not beneficial for performance, and it's quite unusual to claim that it's central to the performance model of Julia.
If you are asking why Julia allows recompiling code and has dynamic types, it's not for performance, but to allow an interactive workflow and user-friendly dynamism. It is the central tradeoff in Julia to enable this while retaining performance. If performance were the only concern, the language would be very different.
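As a concrete illustration of the dynamism side of that tradeoff (just a REPL sketch), redefining a method invalidates its compiled callers, and they get recompiled the next time they run:

    f(x) = x + 1
    g(x) = 2 * f(x)
    g(3)             # 8; compiles g(::Int) against the current definition of f

    f(x) = x + 10    # redefinition invalidates the compiled g(::Int)
    g(3)             # 26; g is recompiled at runtime against the new f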
I used Julia for 4 years. I'm not a moron: I'm familiar with how it works, I've written several packages in it, including some speculative compiler ones.
You claimed:
> Allowing invokation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.
I asked:
> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")
Which can be done completely ahead of time, before runtime, and doesn't rely on re-invoking the compiler, thereby making this whole "ahead of time compilation only works for a subset of Julia code" problem disappear.
Do you understand now?
My original comment:
> The problem (which the author didn't focus on, but which I believe to be the case) that Julia willingly hoisted on itself in the pursuit of maximum performance is _invoking the compiler at runtime_ to specialize methods when type information is finally known.
is NOT a claim about the overall architecture of Julia -- it's a point about this specific problem (Julia's static ahead-of-time compilation), which is currently highly limited.
First of all, I think this sort of aggressive tone is unwarranted.
Secondly, I think it's on you to clarify that you were talking specifically and exclusively about static compilation to standalone binaries. Re-reading your first post strongly gives the impression that you were talking about the compilation strategy in general.
I would also remind you that Julia always does just-ahead-of-time compilation.
Furthermore, my limited understanding of the static compiler (--trim feature), based on hearsay, is that it does pretty much what you are suggesting, supporting dynamic dispatch as long as one can enumerate all the types in advance (though requiring special implementation tricks). Open-ended type sets are not at all supported.
> Julia is fastest with immutable structures--why provide a built-in syntax for complex assignment to mutable types, but then relegate lenses to a library that only FP aficionados will use?
This is not really accurate. Performance in Julia is heavily organized around mutability, in particular for arrays. The main reason Julia does not fully embrace immutability for everything is, simply, performance.
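A rough sketch of what I mean (hypothetical helper names): the in-place version reuses a buffer, while the allocation-per-call version is exactly what hot numerical loops try to avoid.

    scale(x, a) = a .* x            # allocates a fresh array on every call

    function scale!(out, x, a)      # mutating, in-place version
        @inbounds for i in eachindex(out, x)
            out[i] = a * x[i]
        end
        return out
    end

    x   = rand(1_000)
    buf = similar(x)
    scale!(buf, x, 2.0)             # reuses buf; no allocation in the hot path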
> 2. (minor compared to Overleaf) typst compiles faster.
I would argue that this isn't minor. At least in my opinion, it makes a big difference.
Overleaf, already 3 pages into a document with a couple of TikZ figures, was getting slow, as in a wait of multiple seconds for each save.
Typst, on the other hand (Tinymist in VS Code), is genuinely real-time: text updates within a few tens of milliseconds, and figures are included in well under a second. It really _feels_ instant, and to me that changes the experience a lot.
I have a laptop with a good-ish CPU that is only a few years old, and on page 3 tinymist is already starting to struggle. There is a noticeable input delay between me pressing a key on the keyboard and the key getting typed & the preview updating. I think it's more of a tinymist issue, though, as it has no debouncing and apparently also runs the preview updates on the same thread as vscode's input handling.
Interesting. I have not experienced that, except when trying out the pre-release version of tinymist and doing some messy multiple-view+cropping into a big PDF (testing out the new pdf-image stuff). I chalked it up to it being new and beta.
Admittedly, I have still not created large documents in Typst.
That is not really correct. Type instabilities tend to disappear at function boundaries, which is one of the reasons why using functions is so heavily promoted in Julia: it helps keep type instabilities 'localized'.
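The standard trick is a function barrier (a minimal sketch with names of my own): the outer function is type-unstable, but the inner one gets compiled for the concrete type it actually receives, so the instability stops at the call boundary.

    inner(v) = sum(v)              # gets a fast specialization per concrete type

    function outer(d::Dict{String,Any})
        v = d["data"]              # inferred as Any here: type-unstable
        return inner(v)            # one dynamic dispatch, then specialized code
    end

    outer(Dict{String,Any}("data" => rand(100)))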
Then you are back to the "two language problem". I'm sure that's not a problem for you and for many others, but there is a reason it has its own, widely known name. It really is a problem for people who are mostly not software developers, but instead engineers or researchers.
Right, I guess my take on Julia is that it shows the concessions necessary to make a language “approachable” for scientists/engineers will inevitably lead to a language that is poorly suited for developing large, robust software projects.
"Clanky"? That is a word I would use when comparing Julia and Python, but I would reverse the roles. I mean, python works well, and has almost everything, but it really feels, well, clanky.
I'm currently working on a rewrite of a small model from JAX to Julia, and I'm finding the Julia code much easier to write and more concise, and the debugging tools easier to work with.
I'm aware it uses the same algorithm, but the end result is still worse. Before Typst, I tried typesetting in HTML in an effort to escape LaTeX, and the results I got using an implementation of this same algorithm was very similar to what Typst ended up achieving, which is noticeably worse than LaTeX.
Mind you, I loved programming in Typst, and I wrote some plugins before it got its package manager, but I ended up moving back to LaTeX because of this difference in the quality of the final output. I should do some in-depth testing at some point, because I am looking to switch back.