Does implementing algebraic effects require stack switching support? If so, I wonder what runtime cost we must pay when heavily using algebraic effects. Is there any empirical study on the performance of algebraic effects implementations?
In OCaml 5, we’ve made it quite fast: https://kcsrk.info/papers/drafts/retro-concurrency.pdf. For us, the goal is to implement concurrent programming, for which a stack switching implementation works well. If you use OCaml effect handlers to implement a state effect, it is going to be slower than using mutable state directly. And that’s fine. We’re not aiming to simulate all effects using effect handlers, only non-local control flow primitives like concurrency, generators, etc.
Suppose one of your effects is `read()`, and you want to be able to drop in an asynchronous implementation. Then you'll either need something equivalent to stack switching, or you'll need one of the restrictions on asynchronicity that lets you get away without stack shenanigans -- practical algorithms usually end up requiring stack switching though.
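To make that concrete, here's a minimal TypeScript sketch (the `Effectful` type and the runner names are invented for illustration): the same computation performing `read()` can be handled synchronously or asynchronously only because the generator gives the handler a way to suspend and resume the caller.

```typescript
// Minimal sketch (hypothetical types/names): a computation that performs
// a `read` effect, written once, handled either synchronously or asynchronously.
type ReadEffect = { kind: "read" };
type Effectful<T> = Generator<ReadEffect, T, string>;

function* greet(): Effectful<string> {
  // The caller suspends here; the handler decides how to produce the value.
  const name = yield { kind: "read" };
  return `hello ${name}`;
}

// Synchronous handler: resume immediately with a canned value.
function runSync<T>(gen: Effectful<T>, input: string): T {
  let step = gen.next();
  while (!step.done) step = gen.next(input);
  return step.value;
}

// Asynchronous handler: the same computation now awaits a promise for `read`.
// Without a suspendable representation (generators here, stack switching in a
// native runtime), a plain function calling read() could not be paused like this.
async function runAsync<T>(gen: Effectful<T>, fetchInput: () => Promise<string>): Promise<T> {
  let step = gen.next();
  while (!step.done) step = gen.next(await fetchInput());
  return step.value;
}

runSync(greet(), "world");                    // "hello world"
runAsync(greet(), async () => "async world"); // Promise resolving to "hello async world"
```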
You can get a lot of mileage out of algebraic effects without allowing such ideas though. Language constructs like allocation, logging, prng, database sessions, authorization, deterministic multithreaded prng, etc, are all fairly naturally described as functions+data you would like to have in scope (runtime scope -- foo() called bar() -- as opposed to lexical scope), potentially refining or changing them for child scopes. That's a weaker effect system than you would get with the normal AE languages, but there are enough concepts you feasibly might want to build on such a system that it's still potentially worthwhile.
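A rough TypeScript sketch of that weaker flavour (all names here are invented for illustration): handlers are just functions and data threaded through the dynamic scope, and a child scope can refine or replace individual handlers.

```typescript
// Effects as "functions + data in dynamic scope": handlers are threaded
// through the call tree, and a child scope may override individual handlers.
interface Handlers {
  log: (msg: string) => void;
  random: () => number;
}

const defaultHandlers: Handlers = {
  log: (msg) => console.log(msg),
  random: Math.random,
};

// Run `body` with some handlers overridden; the parent's handlers are untouched.
function withHandlers<T>(
  env: Handlers,
  overrides: Partial<Handlers>,
  body: (env: Handlers) => T,
): T {
  return body({ ...env, ...overrides });
}

function business(env: Handlers): number {
  env.log("rolling the dice");
  return env.random();
}

// A test (or a deterministic run) swaps in a fixed PRNG without touching `business`.
const result = withHandlers(defaultHandlers, { random: () => 0.5 }, business);
```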
Netflix implements "imgsub"[1] - it actually delivers a zipped archive of transparent images to the player. So technically they can pre-render positioned, typeset subtitles on the server and render them as an image overlay, as long as there are no animated text effects.
In general, streaming services have to ensure maximum compatibility when playing their content on all kinds of devices, high end and low end. On a low-end device it can be very resource-intensive to render typeset subtitles. There are other platforms where all video playback has to be managed by the platform's system frameworks with limited format support, and streaming services can't do much about it.
The priority of a streaming service is extending its market reach, and I think Crunchyroll itself faces the same challenge.
I think the right solution is to get typeset subtitles, and the end-to-end workflow - creation, packaging, delivery, rendering with adaptation (device capabilities, user preferences, localizations, etc.) - standardized. A more efficient workflow is needed, so that a single source of subtitles can generate a set of renditions suitable for different players' rendering capabilities. Crunchyroll should actively participate in these standards bodies and push for adoption of more features and support in the streaming industry.
Unfortunately, as the link describes, Netflix only makes this available for a very limited set of languages, while everyone else is stuck with the extremely limited text-based standards.
Frankly, those text-based subtitle standards are quite maddening on their own. Netflix's text-based subtitle rendering seems to support a much wider set of TTML features than what it actually allows subtitle providers to use - so if these restrictions were to be slightly relaxed, providers could start offering better subtitles for anime immediately with no additional effort from Netflix.
What Netflix supports on their main website might not be what they care about, though; you used to be able to watch Netflix on the Nintendo Wii, and they probably still have some users on stupidly old smart TVs.
Fast forward to 2025 and the BBC's streaming app on Apple TV only just added subtitles; vastly more powerful hardware, but so many restrictions from Apple on how developers use it.
In 2008 I was watching fansubbed anime with decent typesetting on a netbook with a shitass-even-for-the-time Atom processor, so I don't buy for one second that this is a device capabilities issue.
> In general, streaming services have to ensure maximum compatibility when playing their content on all kinds of devices, high end and low end. On a low-end device it can be very resource-intensive to render typeset subtitles. There are other platforms where all video playback has to be managed by the platform's system frameworks with limited format support, and streaming services can't do much about it.
Surely if my mid-range phone from 2015 supported everything .ASS has to offer, they could do it too?
In any case... I don’t believe the problem is that Netflix and Crunchyroll have to support low-end devices, it’s that they don’t want to pay $$$ for typesetting. They are big enough now that they don’t have to care, so they don’t – just another example of enshittification.
I wouldn't bet that every smart TV Crunchyroll wants to be available on has more processing power than your phone from 2015 (some of those TVs might be older than that), but yes, it's probably less about hardware capabilities than about platform limitations that make the usual solution of compiling libass into a blob and integrating it into the player not so easy to implement.
I remember arguing with Ron on the TC39 disposable proposal that I think Go's `defer` is a better pattern than C#'s `using`, and he tried to convince me otherwise.
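For what it's worth, the contrast looks roughly like this in TypeScript (needs TS 5.2+ for `using`; `withDefer` is a made-up helper, not a real API): `using` ties cleanup to the resource itself, while a Go-style `defer` lets the call site register arbitrary cleanup actions.

```typescript
// C#/TC39 style: the resource carries its own cleanup via Symbol.dispose.
class TempFile implements Disposable {
  constructor(readonly path: string) {}
  [Symbol.dispose]() { /* delete the file here */ }
}

function usingStyle() {
  using file = new TempFile("/tmp/scratch");
  // `file` is disposed automatically when this block exits, even on throw.
}

// Go style: the call site registers arbitrary cleanup actions.
function withDefer<T>(body: (defer: (fn: () => void) => void) => T): T {
  const deferred: Array<() => void> = [];
  try {
    return body((fn) => deferred.push(fn));
  } finally {
    // Like Go, run cleanups in reverse registration order.
    for (const fn of deferred.reverse()) fn();
  }
}

function deferStyle() {
  withDefer((defer) => {
    const file = new TempFile("/tmp/scratch");
    defer(() => file[Symbol.dispose]());
    // ... use file; cleanup runs when withDefer's body returns or throws.
  });
}
```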
I was surprised to see them choose Go instead of C# for the TypeScript compiler port. Microsoft has been trying to make ECMAScript look more like C#, and their Windows Universal SDK has made a lot of effort to provide a seamless transition for developers porting their code between C# and TypeScript. And yet they still think porting the TypeScript compiler to Go is easier than porting it to C#.
Despite my differing technical views with Ron, I appreciate and respect the great work he has done for TypeScript & ECMAScript. And I wish him the best in his next adventure.
Hejlsberg's arguments for choosing Go over C# sounded well-founded and very pragmatic to me though, with things like battle-tested AOT compilation (.NET's current iteration is very promising, but still relatively nascent) and the type system being a stronger fit (I really miss TS's structural typing sometimes in C# :) ).
As someone who has a lot of .NET projects at work it's a bit of a bummer since the dogfooding would have been a huge benefit for .NET, but I honestly can't argue with their choice.
This is also a result of the detachment between TC39 and the developer community. Just how many JS developers are participating in TC39? I can recall multiple TC39 proposals that didn't even consult the authors of notable open-source stakeholder libraries, and went straight to stage 3.
And btw, the TypeScript tooling scene is far from being standardized. TypeScript is basically a Microsoft thing, and we don't see a single non-official TypeScript tool that can do type-checking. There's no plan to port the official tools to a faster language like Rust, and tsc is not designed for traditional compiler optimizations. The TypeScript team made it clear that the goal of tsc is only to produce idiomatic JavaScript.
I think React would get better developer experience and performance if they adopted the language's coroutine features to implement direct-style algebraic effects. In fact, the React Fiber system is already an implementation of algebraic effects.[1] However, it "suspends" a routine by raising an exception, which unwinds the whole call stack, so it needs to re-run that same routine on resume. This is the core reason why they have a performance issue and why they created the compiler to cache values across reruns.
JavaScript has language-level coroutine features like async/await or yield/yield*, and we have seen libraries using these features to implement direct-style algebraic effects, for example Effect[2] and Effection[3]. You don’t need to memoize things if the language runtime can suspend and resume your functions instead of throwing exceptions and rerunning them.
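A toy sketch of what that direct style looks like with generators (this is not the actual React, Effect, or Effection API): the component suspends at the data fetch and is resumed in place with the result, so nothing computed before the suspension point has to be redone.

```typescript
// Toy sketch (invented names): a component written as a generator suspends
// at the data fetch and is resumed exactly where it yielded.
type Suspend = { kind: "fetch"; url: string };

function* component(): Generator<Suspend, string, any> {
  const user = yield { kind: "fetch", url: "/api/user" };
  // Everything computed before the yield is still alive here on resume;
  // no memoization and no re-run from the top of the function.
  return `<h1>Hello ${user.name}</h1>`;
}

async function render(gen: Generator<Suspend, string, any>): Promise<string> {
  let step = gen.next();
  while (!step.done) {
    const data = await fetch(step.value.url).then((r) => r.json());
    step = gen.next(data); // resume exactly where we suspended
  }
  return step.value;
}
```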