Crazy how much bigger modern games are … I wonder how many total pixels were shipped in the art assets of Warcraft 2 vs. StarCraft 2? My guess is at least 4 orders of magnitude higher for SC2
That seems very far away. My understanding is that these PETases digest plastic VERY slowly and need human engineering efforts to digest it in any appreciable amount of time (hours to days rather than years). And human bioengineering of these enzymes is still not to the point where it's actually usable at industrial scale. The paper just says they've discovered the variants, not "oh no all animal life on earth is now dependent on microplastics" :D
> What happens to the plastic economy when plastics are no longer useful because they'll be decomposed too quickly?
We already use lots of biodegradable materials for crucial applications, such as the wood used to frame houses. Just because wood can rot in a damp forest doesn't mean the wood inside your walls will rot on its own. There are conditions under which it can start rotting, and we know what those conditions are and how to prevent them, at least well enough for a house to last for decades.
Elixir is the closest thing to OCaml that has a chance at semi-mainstream usage IMO.
It has basically all of the functional-programming features that make it easier to reason about your code and get work done: immutability, pattern matching, actors, etc., but without monads or a complicated type system that would raise the barrier to entry. And of course it's built on top of the Erlang BEAM runtime, which has a great track record as a foundation for backend systems. It doesn't have static typing, although its type system is a lot stronger than in most other dynamic languages like JS or Python, and the language devs are currently adding gradual type checking to the compiler.
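For anyone who hasn't seen those features in practice, here's a tiny sketch (toy example, all names made up) covering pattern matching, immutability, and a minimal actor:

```elixir
# Pattern matching destructures data right at the binding site.
{:ok, value} = {:ok, 42}

# Data is immutable: transforming a list returns a new list,
# the original is untouched.
list = [1, 2, 3]
doubled = Enum.map(list, &(&1 * 2))

# Actors: spawn a lightweight process and talk to it via messages.
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(pid, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end
```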
I think a better question is why F# isn't more popular, since it's much closer to OCaml than Elixir is, and you can use the whole .NET ecosystem from F#, which addresses one of OCaml's weakest points (the lack of libraries).
The answer is (most likely) Microsoft, I kid you not. I've worked with F# professionally for many years, and it's incredible how they are literally sitting atop a gold mine (the output of Microsoft Research) and do basically nothing with it. Even though it's sold as an ecosystem, .NET revolves around C# and its related tooling.
"Type Providers" are an example of such negligence, btw: they date from the early 2010s and never got popular, even though some of their ideas (typed SQL that can produce compile-time errors) are now gaining traction in other ecosystems (like Rust's SQLx).
My team used SQL type providers in an actual production system, combined with Fable (to run F# on the front end), and people always commented that our demos had literally zero bugs. Maybe it was too productive for our own good.
I think it’s odd that Elixir doesn’t get more love. It ticks a lot of boxes that folks here get excited about, has a great ecosystem and tooling, BEAM is great, and it’s much more syntactically familiar than Erlang to my eye. I know of a couple of companies, aside from the perennial go-to examples, that built sizable SaaS products backed by it.
I always wanted to learn Elixir but never had a project where it could show its strengths. Good old PHP works perfectly fine.
Also corporations like their devs to be easily replaceable which is easier with more mainstream languages, so it is always hard for "newer" languages to gain traction. That said I am totally rooting for Elixir.
I think it’s great for boring stuff — the Phoenix web framework is really straightforward. Learning a whole new language paradigm and environment for a professional project you need to be productive in ASAP is definitely the worst kind of ‘not boring’ though.
I know of a Haskell shop and everybody said they’d have a hell of a time finding people… but all them nerds were (and are) tripping over themselves to work there because they love Haskell… though some I’ve talked to ended up not liking Haskell in production after working there. There seems to be a similar dynamic, if a bit less extreme, in Elixir shops.
The BEAM VM executing bytecode is slower than a compiled binary. Sure, it is great when compared to Python and other interpreted languages. Not so much when you need fast processing and raw CPU performance. It also loses out to the JVM in performance, but wins in memory consumption.
It is slower, but a big cause of the slowness is a lot of copying (immutable data structures and separate heaps for every process). Nowadays the BEAM has a JIT.
If you use type guards correctly the performance can be surprisingly good. Not C/C++/Rust/Go/Java good, but not too far off either. This is definitely a corner-case though.
But how many things involve fulfilling basic requests over a network pulling information from a database? Most things just don’t need that kind of performance.
Sure, for basic, dead-simple CRUD, the choice of middleware language rarely makes any difference.
But even bog-standard business processes eventually need data processing, crypto, and parsing: the use cases where people write Elixir NIFs. That is why, for example, you have projects like html5ever_elixir for parsing HTML/XML. Another use case is crypto: there are NIFs for several crypto libraries. For data processing, there are NIFs for Rust's Polars.
From a technical perspective, this is the overwhelming majority of what makes the Net happen. BEAM is great for that and many other things, like extremely reliable communication streams.
Use the right tool for the job. Rust sucks for high-level systems automation, but that doesn’t make it any less useful than bash. It’s all about use cases, and Elixir fits many common use cases nicely, while providing some nice-to-haves that people often ask for in other common web dev environments.
Elixir is good at managing complexity. I love flattening hierarchy and dealing with edge cases right in function definitions using pattern matching. Every other language without it just feels crippled after getting used to the feature.
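A toy illustration of that style (the `Discount` module and its rules are made up): each edge case gets its own function head, so the happy path stays flat instead of being buried in nested conditionals.

```elixir
defmodule Discount do
  # Edge cases handled directly in the function heads via pattern
  # matching and guards; no nested if/else needed.
  def price(%{total: 0}), do: {:error, :empty_cart}
  def price(%{total: t}) when t < 0, do: {:error, :invalid_total}
  def price(%{total: t, vip: true}), do: {:ok, t * 0.8}
  def price(%{total: t}), do: {:ok, t}
end

Discount.price(%{total: 100, vip: true})  # {:ok, 80.0}
Discount.price(%{total: 0})               # {:error, :empty_cart}
```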
You don't get it from language tooling because you are compiling to a bytecode that runs in a virtual machine (BEAM).
The current tool for wrapping your bytecode with a VM so that it becomes a standalone executable is Burrito[1], but there's also some language support[2] (I think it only targets the architecture you're currently running on, unlike Go's cross-compilation) and an older project called Distillery[3].
I really wish people would quit pushing "functional". People equate that with "My programming is now always a recursive logic puzzle."
Talk about "immutable by default". Talk about "strong typing". Talk about "encapsulating side effects". Talk about "race free programming".
Those are the things that programmers currently care about. A lot of current Rust programmers are people who came there almost exclusively for "strong typing".
Having loops is not the defining feature that separates functional from imperative. Where did this idea come from? I'm suddenly seeing it in a lot of places.
It is not the defining feature, but the loss of the loop is one of the most obvious differences for people looking at a functional language. Rust has immutability, pattern matching, etc., but it remains an imperative language with "some functional features". Or at least that's my subjective analysis.
So is Haskell not functional? Or an imperative language with some functional features?
-- ghci> example
-- Triangular number 1 is 0
-- Triangular number 2 is 1
-- Triangular number 3 is 3
-- Triangular number 4 is 6
-- Triangular number 5 is 10
example = runEff $ \io -> evalState 0 $ \st -> do
  for_ [1..5] $ \i -> do
    n <- get st
    let msg = "Triangular number " <> show i <> " is " <> show n
    effIO io (putStrLn msg)
    st += i
  where
    st += n = modify st (+ n)
(This is not a trick question. Simon Peyton Jones described Haskell as "the world's finest imperative language" [1], and I agree. This code is written using https://hackage.haskell.org/package/bluefin)
[1] "Tackling the Awkward Squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell"
But, again, loops aren't automatically imperative per se. Consider loop / recur in Clojure for an example. For someone coming from JavaScript, it's not really much different from writing a while loop where every branch has to terminate with an explicit break or continue.
You can be functional "in spirit" rather than purely functional. OCaml and Standard ML fall into this category. OCaml has loops, for instance. You just might not see many loops in code written by OCaml developers, because frankly there's no need for them in a lot of places. You often want to lift the abstraction level of iteration to an arbitrary data structure so that you get freedom of implementation. See Applicative and Monad.
BEAM's performance model trades throughput for isolation, which hurts CPU-bound tasks. And ironically, whenever you need speed, you end up using NIFs that break BEAM's safety guarantees and reintroduce the exact fragility Elixir was supposed to avoid.
In 2025, Elixir is a beautiful system for a niche that infrastructure has already abstracted away.
> In 2025, Elixir is a beautiful system for a niche that infrastructure has already abstracted away.
Do you mean Kubernetes?
My mental model of Erlang and Elixir is programming languages where the qualities of k8s are pushed into the language itself. On the one hand this restricts you to those two languages (or other languages ported to BEAM); on the other hand it gives you the kind of failover, scaling, and robustness of k8s at a much more responsive and granular level.
> whenever you need speed, you end up using NIFs that break BEAM safety guarantees, and reintroduce the exact fragility Elixir was supposed to avoid.
That's like complaining that unsafe{} breaks Rust's safety guarantees. It's true in some sense, but the breakage is in a smaller and more easily tested place.
It's not isolation which hampers throughput. That's a red herring. In fact, isolation increases throughput, because it reduces synchronization. A group of isolated tasks are embarrassingly parallel by definition.
The throughput loss stems from a design that requires excessive communication. But such a design will always be slow, no matter your execution model. Modern CPUs simply don't cope well when cores need to send data between them. Neither does a GPU.
The grand design of BEAM is that you are copying data rather than passing it by reference. A copy operation severs a data dependency by design. Once the copy is handed somewhere, that part can operate in isolation. And modern computers are far better at copying data around than people think. The exception is big-blocks-of-data(tm), but large binaries are read-only in BEAM and shared by reference rather than copied.
Sure, if you set up a problem which requires a ton of communication, then this model suffers. But so does your GPU if you do the same thing.
As Joe Armstrong said: our webserver is a thousand small webservers, each serving one request.
Virtually none of them have to communicate with each other.
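A minimal sketch of that model (names made up): each "request" is handled in its own lightweight process with its own private heap, and the message contents are copied into that heap, so the handlers share nothing and run embarrassingly parallel.

```elixir
# One isolated process per request; a crash in one handler cannot
# corrupt the state of any other.
handle = fn request_id ->
  Task.async(fn ->
    # The request data was copied into this process's private heap.
    {:ok, "handled #{request_id}"}
  end)
end

results =
  1..5
  |> Enum.map(handle)
  |> Enum.map(&Task.await/1)
# one {:ok, ...} tuple per request
```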
It's compiled to Erlang, not BEAM bytecode, since the latter is not a stable, backward-compatible target, and you'd lose all of the optimization work put into the Erlang compiler. It's also compiled to JavaScript, so it can run on both the front end and the back end.
What you wrote is roughly equivalent to this in C:
{
    const int x = 1;
    {
        const int x = 2;
    }
}
which is to say, there are two different `x` in there, both immutable, one shadowing the other. You can observe this if you capture the one that is shadowed in a closure:
iex(1)> x = 1
1
iex(2)> f = fn () -> x end
#Function<43.113135111/0 in :erl_eval.expr/6>
iex(3)> x = 2
2
iex(4)> f.()
1
If the "check"/offline payment bounces, I wonder if it's the merchant that is out the money? Or is there any assurance from anyone else, like maybe the network would go halfsies?
Edit: on second thought, that doesn't really make sense and would be a great way to defraud the network of a ton of guaranteed money
People buying and selling goods and services within the US are contributing to the GDP. If some people take some of that money they would've spent domestically and instead use it to import extra stuff (which doesn't count towards GDP), the GDP will go down.
Most of the value of an iPhone is recorded as American GDP because American IP and software went into it, Apple is keeping a lot of the price for the phone, and only sending a bit off to China for assembly (and a larger bit off to South Korea, Japan, and Taiwan for components that China is assembling). What remains is still a lot of American GDP.
A small business that contracts China to make the thing it then sells is also generating a lot of GDP. Yes, it sends some percentage off to China to make the thing, but the majority of the GDP is generated here in the USA (the small business does the design, marketing, sales, etc.). If it goes belly up because of the trade war (very likely, since no one else can make the thing, and before China developed this capability, making the thing wasn't even possible!), that GDP is gone, and the people the small business employed are unemployed.
The increased purchases should not decrease American GDP, unless consumers are buying directly from China using Temu and are not buying at Walmart.
Yes, selling things contributes to GDP. But importing things does not. If you take money that you would've spent buying something in the US and use it to import something that you haven't sold yet, you've decreased GDP wrt the counterfactual.
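To make that counterfactual concrete, here's a toy calculation with made-up numbers, using the standard expenditure identity:

```latex
% GDP by expenditure: consumption + investment + government spending
% + net exports (exports minus imports)
\mathrm{GDP} = C + I + G + (X - M)

% Spend \$100 on a domestically produced good:
%   C rises by 100, M unchanged  =>  GDP rises by 100.
% Spend the same \$100 on an import instead:
%   C rises by 100, M rises by 100  =>  net effect on GDP is 0.
```

So relative to the domestic-purchase counterfactual, the import leaves GDP $100 lower, which is all the original claim amounts to.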
It's not just selling things; it's designing those things, marketing them, writing the software for them. All of that is high-value work that Trump is basically ignoring. There is a good reason we got richer after China entered the WTO rather than poorer: we focused on high-value goods, IP, and services, which was only possible because we could outsource low-value assembly to China.
If you take the money you would have spent buying something in the US that has imported parts (like its assembly) and instead go buy a DJI drone on Temu, then yes, you've decreased GDP. If you simply have no money to buy anything because DOGE decided to cut your federal job, that also decreases GDP.
I see where I got this wrong now: companies are stocking up on imports, but those imports don't contribute positively to GDP until they're sold to consumers (and then only the difference between the sale price and the import price counts).
You'd be hard-pressed to find console support for much less than that: compare it with Unity's or GameMaker's console-support tier and you'll find it's pretty similar.
It can take hundreds of dev-hours to port your engine to consoles yourself, so having another company handle it for you, for all 3 consoles, for only $2000 is a pretty good deal!