I remember asking you for this, so thank you so much!
It works quite well from what I can see.
Small UI issue: on desktop, the left sidebar should be scrollable; right now on Firefox I can't reach the "Language" menu item in the search results view unless I zoom out.
They're probably using some features of LiveView; I'm not too familiar with how HTMX works, but with LiveView you can define all of your logic and state handling on the _backend_, with page diffs pushed to the client over a websocket channel (all handled out of the box).
It comes with some tradeoffs compared to fully client-side state, but it's a really comfortable paradigm to program in, especially if you're not from a frontend background, and really clicks with the wider Elixir/Erlang problem solving approach.
Hooks let you keep the DOM updating live from the server, while layering on some JS in response to those updates.
For example you could define a custom `<chart>` component, which is inserted into the DOM with `data-points=[...]`, and have a hook then 'hydrate' it with e.g. a D3 or VegaLite plot.
Since Phoenix/LiveView is handling the state, your JS only needs to handle that last-mile integration; there's no need to pair it with another virtual DOM or state-management system.
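Concretely, the client side of that `<chart>` example could look something like this. This is just a sketch: the `Chart` hook name, the `parsePoints` helper, and the `drawChart` renderer are placeholders I've made up, not real APIs from any library besides the standard LiveView hook callbacks (`mounted`, `updated`, `this.el`).

```javascript
// Sketch of a LiveView client hook. LiveView owns the state and
// patches the element's data-points attribute; this JS only does
// the last-mile rendering.

function parsePoints(el) {
  // Read whatever data-points the server last rendered.
  return JSON.parse(el.dataset.points || "[]");
}

// Stand-in for the real renderer (a D3 or VegaLite call, etc.)
function drawChart(el, points) {
  el.textContent = `rendering ${points.length} points`;
}

const Chart = {
  mounted() { this.draw(); },  // element was just inserted into the DOM
  updated() { this.draw(); },  // LiveView re-patched the element
  draw() {
    drawChart(this.el, parsePoints(this.el));
  },
};

// Registered once, where the LiveSocket is created:
//   new LiveSocket("/live", Socket, { hooks: { Chart } });
```

On the server side the template would just render `<div phx-hook="Chart" data-points={...} />`, and the hook re-runs on every diff.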
The big win for me has been the built-in PubSub primitives plus LiveView. Since the backend is already maintaining a WebSocket connection with every client, it's trivial to push updates.
Here is an example. Imagine something like a multiplayer Google Forms editor that renders a list of drag-droppable cards. Below is a complete LiveView module that renders the cards, and subscribes to "card was deleted" and "cards were reordered" events.
```elixir
defmodule MyApp.ProjectLive.Edit do
  use MyApp, :live_view

  import MyApp.Components.Editor.Card

  def mount(%{"project_id" => id}, _session, socket) do
    # Subscribe this view to the project's PubSub topic
    Phoenix.PubSub.subscribe(MyApp.PubSub, "project:#{id}")

    project = MyApp.Projects.get_project(id)

    socket =
      socket
      |> assign(:project, project)
      |> assign(:cards_drag_handle_class, "CARD_DRAG_HANDLE")

    {:ok, socket}
  end

  # Handle project events matching the signature `{:cards, :deleted, payload}`
  def handle_info({:cards, :deleted, card_id}, socket) do
    cards = Enum.reject(socket.assigns.project.cards, fn card -> card.id == card_id end)
    project = %{socket.assigns.project | cards: cards}
    socket = assign(socket, :project, project)
    # LiveView will diff and re-render automatically
    {:noreply, socket}
  end

  def handle_info({:cards, :reordered, _card_change_list}, socket) do
    # Omitted for brevity; same concept as above
    {:noreply, socket}
  end

  def render(assigns) do
    ~H"""
    <div>
      <h1>{@project.name}</h1>
      <div
        id="cards-drag-manager"
        phx-hook="DragDropMulti"
        data-handle-class-name={@cards_drag_handle_class}
        data-drop-event-name="reorder_cards"
        data-container-ids="cards-container"
      />
      <div class="space-y-4" id="cards-container">
        <.card
          :for={card <- @project.cards}
          card={card}
          cards_drag_handle_class={@cards_drag_handle_class}
        />
      </div>
    </div>
    """
  end
end
```
What would this take in a React SPA? Of course there are tons of great tools out there, like Cloud Firestore, Supabase Realtime, etc. But my app is just a vanilla Postgres + Phoenix monolith! And it's so much easier to test, again using just the built-in testing libraries.
For rich drag-drop (with drop shadows, auto-scroll, etc.) I inlined DragulaJS[1], which is ~1,000 lines of vanilla JS. As a React dev I might have been tempted to `npm install` something like `react-beautiful-dnd`, which is 6-10x larger (and, I just learned, now deprecated by its maintainers!).
The important question is: what have I sacrificed? The primary tradeoff is that the 'read your own writes' experience can feel sluggish if you're used to optimistic UI via React `setState()`. This is a hard one to stomach as a React dev. But Phoenix comes with GitHub-style viewport loading bars, which is enough user feedback to be passable.
p.s. guess what Supabase Realtime is using under the hood[2] ;-)
> there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
IMO, it might be because Go modules came rather late in the game, while npm was introduced near the beginning of Node.js. But it might be more related to Go's target audience being more low-level, where such tools are less ubiquitous?
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I've also seen something similar with Java, with its culture of "pure Java" code which reimplements everything in Java instead of calling into preexisting native libraries. What's common between Java and Go is that they don't play well with native code; they really want to have full control of the process, which is made harder by code running outside their runtime environment.
I think it's important for managed/safe languages to have their own implementations of things, and avoid dropping down into C/C++ code unless absolutely necessary.
~13 years ago I needed to do DTLS (TLS-over-UDP) from a Java backend, something that would be exposed to the public internet. There were exactly zero Java DTLS implementations at the time, so I chose to write JNI bindings to OpenSSL. I was very unhappy with this: my choices were to 1) accept that my service could now segfault -- possibly in an exploitable way -- if there was a bug in my bindings or in OpenSSL's (not super well tested) DTLS code, or 2) write my own DTLS implementation in Java, and virtually guarantee I'd get something wrong and break it cryptographically.
These were not great choices, and I wished I had a Java DTLS implementation to use.
This is why in my Rust projects, I generally prefer to tell my dependencies to use rustls over native (usually OpenSSL) TLS when there's an option between the two. All the safety guarantees of my chosen language just disappear whenever I have to call out to a C library. Sure, now I have to worry about rustls having bugs (as a much less mature implementation), but at least in this case there are people working on it who actually know things about cryptography and security that I don't, and they've had third-party audits that give me more confidence.
> or 2) write my own DTLS implementation in Java, and virtually guarantee I'd get something wrong and break it cryptographically.
Java doesn't have constant-time guarantees, so for at least the cryptographic part you have to call into a non-Java library, ideally one that implements the cryptographic primitives in assembly (unfortunately, even C doesn't give constant-time guarantees, though you can get close by using vector intrinsics).
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I think it's because the final deliverable of Go projects is usually a single self-contained binary executable with no dependencies, whereas with Node the final deliverable is usually an NPM package which pulls its dependencies automatically.
With Node the final deliverable is an app that comes packaged with all its dependencies, and often bundled into a single .js file, which is conceptually the same as a single binary produced by Go.
Can you give an example? While theoretically possible, I almost never see that in Node projects. It's not even very practical: even if you do cram everything into a single .js file, you still need an external dependency on the Node runtime.
> usually an NPM package which pulls its dependencies automatically
Built applications do not pull dependencies at runtime, just like with golang. If you want to use a library/source, you pull in all the deps, again just like golang.
Not at runtime no, but at install time yes. In contrast, with Go programs I often see "install time" being just `curl $url > /usr/local/bin/my_application` which is basically never the case with Node (for obvious reasons).
Go sits at about the same level of abstraction as Python or Java, just with less OO baked in. I'm not sure where Go's reputation as "low-level" comes from. I'd be curious to hear why that's the category you place it in?
I'd argue that Go sits somewhere between static C and memory-safe VM languages, because the compiler always tries to "monomorphize" everything as much as possible.
Generics run somewhat against how the language was designed from the start. That's partly why they're not there yet: the Go maintainers don't want boxing in their runtime, but they also don't want compile-time expansion (or JIT compilation, for that matter).
So I'd argue that this way of handling compilation is more low-level than in other VM-based languages, where almost everything is JITed now.
Once this is done, everything feels easy. The trick is to have a rough idea of what you'll say and just take the plunge. And practice.
We all fear rejection. Once you get past that fear, you realise most people are reasonable human beings just like you.
If you're curious about the topic, I recommend the book Rejection Proof, by Jia Jiang.