Network roundtrips are unavoidable, but WASM could be used to parse a server response and generate custom HTML to use in replacing some portion of the DOM. It would likely be a lot faster than trying to do the same in pure JS, and it would obviate the need for over-complicated hacks like virtual DOM and the like.
No, parsing the response is usually far too fast to make a difference. Generating an HTML string is also usually pretty fast. The slowness happens when you ask the browser to parse that HTML string and generate the corresponding DOM; WASM is not going to get you out of that.
> The slowness happens when you ask the browser to parse that HTML string and generate the appropriate DOM
If you do it right, that step only has to happen once for each user interaction. You can entirely dispense with the need to do multiple edits to the DOM via pure JS.
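To make the "once per interaction" idea concrete, here is a minimal sketch of that pattern: build the whole fragment as a string, then hand it to the browser in a single assignment. A plain object stands in for `document.getElementById(...)` so the sketch runs outside a browser; the function and element names are illustrative, not from the original comment.

```javascript
// Build the HTML for a list in one pass. String concatenation is cheap;
// the expensive step is the single innerHTML assignment below, and it
// happens once per user interaction rather than once per node.
function renderRows(items) {
  return items.map(item => `<li>${item}</li>`).join('');
}

// Stand-in for a real element; in page code this would be
// document.getElementById('list').
const listEl = { innerHTML: '' };

// One DOM touch for the whole update.
listEl.innerHTML = `<ul>${renderRows(['a', 'b', 'c'])}</ul>`;
```

The browser still parses the string and builds the subtree, but it does so once, instead of being poked node-by-node.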
It’s not “at the moment” but “continuously since the creation of the virtual DOM concept”: often slower by multiple orders of magnitude.
The misrepresentation of the virtual DOM as a performance improvement came from two things: people comparing virtual DOM code to sloppy, unoptimized code that regenerated the DOM on every change, and React fans not wanting to believe their new favorite was a regression in any way (not to be confused with the actual React team, who certainly knew how to run real benchmarks and were quite open about the limitations).
There’s a line of argument that the extra overhead is worth it if the average developer writes more efficient code than they did with other approaches but I think that’s leaving a lot of room for alternatives which don’t have that much inefficiency baked into the design.
I think there’s a bit more nuance to it. React (and other vdom implementations) try to be as efficient as possible when diffing / reconciling with the DOM. Sometimes this can result in improved performance, but there are also use cases where you’ll want to provide it with hints (keys, when to be lazy, etc.). https://reactjs.org/docs/reconciliation.html
Above all, I would pragmatically argue (subjectively) that the main advantage is enabling a more functional style of programming with terrific state management (like Elm). This can lead to fewer errors, easier debugging, and often better performance with less effort.
> I think there’s a bit more nuance to it. React (and other vdom implementations) try to be as efficient as possible when diffing / reconciling with the DOM. Sometimes this can result in improved performance, but there are also use cases where you’ll want to provide it with hints (keys, when to be lazy, etc.). https://reactjs.org/docs/reconciliation.html
The key part is remembering that every one of those techniques can be done with the normal DOM as well. This is just rediscovering Amdahl's law: there is no way for <virtual DOM> + <real DOM> to be smaller than <real DOM> in the general case. React has improved since the time I found a five-order-of-magnitude performance disadvantage (yes, after using keys), but the virtual DOM will always add a substantial amount of overhead to run all of that extra code, and the memory footprint is similarly non-trivial.
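To see where the extra work comes from, here is a toy diff over plain objects. It is a simplified illustration of the general vdom idea, not React's actual reconciliation algorithm: even when only one node changed, the diff still walks both virtual trees before a single real-DOM operation happens.

```javascript
// Toy virtual-DOM diff: walk the old and new trees and emit a patch list.
// All of this work is *in addition to* the real DOM updates the patches
// eventually cause, which is the overhead being discussed above.
function diff(oldNode, newNode, path = []) {
  const patches = [];
  if (oldNode === undefined) {
    patches.push({ type: 'CREATE', path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ type: 'REMOVE', path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'REPLACE', path, node: newNode });
  } else {
    if (oldNode.text !== newNode.text) {
      patches.push({ type: 'TEXT', path, text: newNode.text });
    }
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
      patches.push(...diff(oldKids[i], newKids[i], path.concat(i)));
    }
  }
  return patches;
}

const prev = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'b' }] };
const next = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'c' }] };
console.log(diff(prev, next));
// One TEXT patch for the second <li> — but every node was still visited.
```

Direct DOM code that already knows which node changed skips the tree walk entirely, which is the Amdahl's-law point above.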
The better argument to make is your last one, namely that React improves your average code quality and makes it easier to focus on the algorithmic improvements, which are probably more significant in many applications and could be harder depending on the style. For example, maybe on a large application you found you were thrashing the DOM because different components triggered update/measure/update/measure cycles forcing layout recalculation, and switching to React was easier than using fastdom-style techniques to avoid that. Or perhaps, while it's easy to beat React's performance, your team saw enough additional bugs managing things like DOM references that the developer productivity was worth a modest performance impact. Those are all reasonable conclusions, but it's important not to forget that a tradeoff is being made and to periodically reassess whether you still agree with it.
I agree. I am curious, though, about how substantial the memory and diffing costs are. I don’t mean that in an “I doubt it’s a big deal” way; rather, I’m genuinely curious and haven’t been able to find any literature on the actual overhead compared to straight-up DOM manipulation. I would imagine batching updates to be an advantage of the vdom, but only if it’s still that much lighter weight (seeing as you can ignore a ton of stuff from the DOM).
> I would imagine batching updates to be an advantage of the vdom but only if it’s still that much lighter weight (seeing as you can ignore a ton of stuff from the DOM).
There are two separate issues here: one is how well you can avoid updating things which didn't change — for example, at one point I had a big table showing progress for a number of asynchronous operations (hashing + chunked uploads) and the approach I used was saving the appropriate td element in scope so the JavaScript was just doing elem.innerText = x, which is faster than anything which involves regenerating the DOM or updating any other property which the update didn't affect.
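The cached-reference pattern described above can be sketched as follows. A plain object stands in for the real `<td>` element so the sketch runs outside a browser, and the names (`progressCells`, `addRow`, `onProgress`) are illustrative, not from the original code.

```javascript
// Keep a direct handle to each cell that changes, keyed by upload id.
const progressCells = {};

function addRow(uploadId) {
  // In a browser this would be something like row.insertCell();
  // here a stand-in object plays the part of the <td>.
  progressCells[uploadId] = { innerText: '0%' };
}

function onProgress(uploadId, percent) {
  // One property write on a saved reference: no re-render, no diff,
  // no touching any node the update didn't affect.
  progressCells[uploadId].innerText = percent + '%';
}

addRow('file-1');
onProgress('file-1', 42);
```

The point is that the progress callback does constant work per update, regardless of how large the table is.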
The other is how well you can order updates — the DOM doesn't have a batch update concept but what is really critical is not interleaving updates with DOM calls which require it to calculate the layout (e.g. measuring the width or height of an element which depends on what you just updated). You don't necessarily need to batch the updates together logically as long as those reads happen after the updates are completed. A virtual DOM can make that easy but there are other options for queuing them and perhaps doing something like tossing updates into a queue which something like requestAnimationFrame triggers.
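The queuing idea can be sketched as a minimal write-then-read scheduler in the spirit of fastdom: collect DOM writes and layout reads separately, then run all writes before any read so the browser only has to recalculate layout once per flush. In a page you would typically trigger `flush()` from `requestAnimationFrame`; here it is called directly so the sketch runs anywhere.

```javascript
const writes = [];
const reads = [];

// Queue a DOM mutation (e.g. setting styles or text).
function mutate(fn) { writes.push(fn); }

// Queue a layout read (e.g. measuring offsetWidth).
function measure(fn) { reads.push(fn); }

// Run every queued write, then every queued read, so reads never
// interleave with writes and force extra layout recalculations.
function flush() {
  while (writes.length) writes.shift()();
  while (reads.length) reads.shift()();
}

// Interleaved *calls* no longer interleave *execution*:
const order = [];
mutate(() => order.push('write-1'));
measure(() => order.push('read-1'));
mutate(() => order.push('write-2'));
flush();
console.log(order); // ['write-1', 'write-2', 'read-1']
```

This gets the critical ordering property without a virtual DOM: the reads simply happen after the updates are complete.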
So you could probably describe a vdom as a smart queue, where how smart it is depends on the diffing and on how it pushes those changes, all abstracted away from the developer. It's bound to be less efficient than an expert doing it by hand (like an expert writing assembly vs. C), but like any other abstraction it has both pros and cons.
The question is whether the abstraction is worth the potential savings in complexity (which maybe is not the case, but I sure do love coding in Elm).
Also whether there are other abstractions which might help you work in a way which has different performance characteristics. For example, I've used re:dom (https://redom.js.org/) on projects in the past, LitElement/lit-html are fairly visible, and I know there are at least a couple JSX-without-vdom libraries as well.
There isn't a right answer here: it's always going to be a balance of the kind of work you do, the size and comfort zones of your team, and your user community.
Very interesting, thanks for pointing out re:dom. I took a look at their benchmarks, and some vdom implementations compare very well to re:dom. I was pleased to see Elm’s performance. So it seems like it can be done well when you want it.
https://rawgit.com/krausest/js-framework-benchmark/master/we...
Forcing the browser to continually parse HTML, generate a new DOM tree, recalculate layout, etc. shouldn't be faster than updating only the specific nodes that need changes.