Hacker News

It sounds like you're bringing up a register-based vs stack-based VM argument, and claiming that register-based VMs have better performance because their model is closer to the hardware.

My understanding is that this intuition is usually untrue, because a JIT benefits from the stack-based code preserving code flow and thus allowing more efficient code generation.
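To make the distinction concrete, here is a toy sketch (the opcodes and encodings are made up, not any real VM's bytecode) of the same expression (a + b) * c in a stack-based versus a register-based form:

```javascript
// Stack-based: operands flow implicitly through the stack, so the
// instruction order mirrors the expression tree, which a JIT can walk
// to cheaply recover data flow.
function runStack(code, env) {
  const stack = [];
  for (const [op, arg] of code) {
    if (op === 'push') stack.push(env[arg]);
    else if (op === 'add') stack.push(stack.pop() + stack.pop());
    else if (op === 'mul') stack.push(stack.pop() * stack.pop());
  }
  return stack.pop();
}

// Register-based: fewer instructions, but each names explicit operands,
// closer to how real hardware is programmed.
function runRegs(code, regs) {
  for (const [op, d, a, b] of code) {
    if (op === 'add') regs[d] = regs[a] + regs[b];
    else if (op === 'mul') regs[d] = regs[a] * regs[b];
  }
  return regs[0];
}

// (a + b) * c in both encodings:
const stackProg = [['push', 'a'], ['push', 'b'], ['add'], ['push', 'c'], ['mul']];
const regProg   = [['add', 4, 1, 2], ['mul', 0, 4, 3]]; // r1=a, r2=b, r3=c
```

Note the register form is shorter but loses the explicit evaluation order that the stack form carries for free.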



No, I'm not talking about register-based vs stack-based VMs; that's a totally different topic. I'm just saying WASM is still quite distant from real hardware, which makes running the code performantly a non-trivial task. In fact, if you look at asm.js, which was the original inspiration for WASM, it is a much closer model to real hardware.
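For readers unfamiliar with asm.js: it is a statically typed subset of JavaScript where coercions like `|0` pin every value to int32, so each operation maps nearly one-to-one onto a machine instruction and the whole module can be validated and compiled ahead of time. A minimal sketch:

```javascript
// Minimal asm.js-style module. The "|0" coercions act as int32 type
// annotations, which is what lets an engine compile it eagerly.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;           // parameter type annotation: int32
    y = y | 0;
    return (x + y) | 0;  // result coerced back to int32
  }
  return { add: add };
}
```

Since it is also plain JavaScript, engines without an asm.js optimizer just run it normally.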

And of course a JIT can make WASM fast, but if you look around, building a performant WASM JIT remains terribly hard; some implementations even need LLVM to perform optimizations. I'd say if that's the case, we must have chosen the wrong model.


> some implementations even need LLVM to perform optimizations. I'd say if that's the case, we must have chosen the wrong model.

Why? Optimizing machine-independent code for a particular machine is part of LLVM's "core business", up to the point where any sufficiently capable bytecode optimizer becomes comparable to LLVM.

OTOH, if the main argument here is the size/speed/other weight of LLVM, then of course the host machine that wants to run WASM only needs a tiny subset of LLVM: no frontend, a single backend, only a subset of optimization passes... There is also a tradeoff in leaving out optimization passes that do improve the code a bit but are too heavy for the host machine.


There's nothing wrong with LLVM itself; my point is that we could've picked a lower-level model that doesn't need a complicated setup like LLVM. Or one where you can directly ship LLVM's optimized compiled output; that would be a much better world.


Doesn't ARM code emulated on x86 (and vice versa) perform even worse than WebAssembly? Isn't that essentially what you would get with a lower-level "optimized compiled result of LLVM"?


Does the fact that ARM is a bad choice disprove all choices other than WASM? I'm not sure this is a good argument here. There are far more lower-level bytecodes out there than just ARM.


I imagine it's more difficult to translate efficiently between two different low-level instruction sets (such as ARM, MIPS, x86, PowerPC, etc.) than to translate something slightly higher-level to the various low-level target instruction sets. Emulating ARM on x86 is usually slow (see the Android emulator) as is the reverse (see Windows 10 on ARM) and PowerPC on x86 (Apple's Rosetta) didn't seem particularly fast either.

Do you have an example in mind of a lower-level instruction set that can be efficiently translated to different real-world ISAs?


I believe you are looking at wasm with different priorities than intended. The two fundamental properties are that it must be fully portable and fully secure by default (as in: any insecurity needs to be explicitly and statically declared in the bytecode).

Performance comes only after those two. LLVM, as far as I know, has a completely different order of priorities.


Personally I don't see why we cannot get all three.


What major implementations are using LLVM? Firefox is using Cranelift, Chrome is using V8, and neither of those should be using LLVM, AFAIK, or am I wrong?


wasmer [1] has an LLVM backend, and WAVM [2] uses LLVM as the backend. I could be wrong, but last time I checked, Cranelift is only meant to be the next-generation WASM engine for Firefox; it is not yet in production.

And actually the argument is: V8, Firefox/Cranelift, and the LLVM backend in wasmer all require non-trivial work to make WASM fast, which shouldn't be needed given a different model.

[1] https://github.com/wasmerio/wasmer/tree/master/lib/llvm-back... [2] https://github.com/WAVM/WAVM


I highly doubt that there is another model that would not require non-trivial work to be fast, while also being reasonably portable to different architectures.

Sure, we could be faster by just sending x86 machine code, but that isn't really the point.


LLVM is more usable for cloud vendors and others who run wasm outside of the browser. So we do have three quality implementations already.


Why is it hard? Isn't wasm designed so you can statically and quickly compile pieces of it or the whole thing to native code, rather than needing to do all the tricks dynamic language runtimes do?
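That static-compilation story can be seen in the WebAssembly JS API itself: a module is fully statically typed, so an engine can compile it eagerly in one pass, with no profiling or dynamic-language tricks. A toy illustration, using a hand-assembled minimal module (the bytes encode a single exported `add(i32, i32) -> i32` function):

```javascript
// Hand-assembled WASM binary for: (module (func (export "add")
//   (param i32 i32) (result i32) local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                                // magic "\0asm"
  0x01, 0x00, 0x00, 0x00,                                // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                                // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,  // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                          // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                    // local.get 0/1; i32.add; end
]);
const mod = new WebAssembly.Module(bytes);   // synchronous, eager compilation
const inst = new WebAssembly.Instance(mod);
console.log(inst.exports.add(2, 3)); // 5
```

Whether that eager compile also produces *fast* code is exactly the point under debate here.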


That is their very nice slogan, but in reality WASM still has quite a way to go before it can compete with native code.

Something that bothers me these days: when code speed is measured, the comparison is made against JS rather than native code; but when portability is discussed, the comparison is suddenly made against native code, not JS.



