This has officially replaced Linuxbrew as my go-to solution for package management on multi-user Ubuntu machines at work. Now, I'm trying to see if I can't pair it with QEMU to replace Homebrew on my work laptop running macOS.
That is exactly the thought in my head. Linuxbrew only barely works and breaks frequently; I wonder if this would be more reliable. So you could run current versions of tools on an older machine.
I wonder if this works on older systems. A good use case would be if you are on a managed system and have an uncooperative admin, don't want to bother asking them to install new software, or a package is not available for your OS. However, this seems to be built against a fairly recent kernel?
A couple of years ago at the university I was stuck on an old CentOS / Scientific Linux release, but needed Skype and a newer version of Chrome. So I built glibc + GCC in my home directory, and used patchelf to make the binaries look for libraries in a relative path instead of in /usr/lib. It was annoying to get running but in the end worked like a charm.
I had exactly the same experience building glibc and GCC, also on an old CentOS system, on HPC machines that are not updated and where I do not have root privileges. I used Linuxbrew to build glibc and GCC.
In my case the issue was that VS Code Remote delivers a Node.js binary to the remote machine that requires a newer glibc than the HPC has. I had to use patchelf to patch the node runtime so it would find the glibc libraries I built.
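For anyone curious, the patching looks roughly like this. A sketch only: `~/opt/glibc` is a hypothetical prefix for the privately built glibc, and the copied binary stands in for the real `node`:

```shell
# Sketch: re-point a binary at a privately built glibc with patchelf.
# ~/opt/glibc is a hypothetical install prefix; substitute your own.
if ! command -v patchelf >/dev/null 2>&1; then
    echo "patchelf not installed; skipping demo"
    exit 0
fi

PREFIX="$HOME/opt/glibc"
cp /bin/true ./demo-node        # stand-in for the real node binary

# Use the private dynamic loader instead of the system /lib64/ld-linux-*.so
patchelf --set-interpreter "$PREFIX/lib/ld-linux-x86-64.so.2" ./demo-node
# Search the private lib directory first at runtime
patchelf --set-rpath "$PREFIX/lib" ./demo-node

patchelf --print-rpath ./demo-node
```

Since the demo points the interpreter at a loader that doesn't exist, the stand-in binary won't actually run; on a real system you would point it at the loader you built.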
I am so glad there are finally more tools that allow me to easily run newer software, and glad to see it is based on Arch Linux, which always has the latest software and is my go-to for dev machines.
That must have been around 2011, 2012. I don't know which CentOS version it was. It was the current supported version, so not ancient, but not exactly bleeding edge. The problem was that closed source stuff like Chrome didn't work because they only considered Ubuntu and maybe Fedora?
You had to build glibc and GCC in unison, and building them is always difficult, especially the first time. But once I figured out the steps (configure, make, ...) it was easy. The only fiddly part was getting all the executables to use "rpath=$ORIGIN" so I could just drop the tree in my home directory and it would reference the correct libraries.
I didn't build Chrome (I'm not that crazy), in fact the above was done so I could avoid building Chrome and just run a Chrom(ium) binary. I think the main reason for this ordeal was to be able to run Flash :-D but I also learned a bit about GCC and so on.
> In fact, the purpose of JuNest is not to build a complete isolated environment but, conversely, is the ability to run programs as they were running natively from the host OS. Almost everything is shared between host OS and the JuNest sandbox (kernel, process subtree, network, mounting, etc) and only the root filesystem gets isolated (since the programs installed in JuNest need to reside elsewhere).
> This allows interaction between processes belonging to both host OS and JuNest. For instance, you can install top command in JuNest in order to monitor any processes belonging to the host OS.
So this is more similar to something like pip/npm/composer than to docker?
> The Linux namespaces represents the default backend program for JuNest.
So no, you can think of it like Docker without the sandboxing from the host. It just swaps out the host environment for the JuNest one, similar to a chroot, but doesn't require root (unless you're using the chroot method).
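The "no root required" part comes from unprivileged user namespaces: inside one you are mapped to uid 0 and can mount a different root filesystem, while the kernel, network, and process tree stay shared with the host. A rough illustration with util-linux's `unshare` (this shows the general mechanism, not JuNest's exact invocation):

```shell
# Enter a user namespace where we appear to be root, without real root.
# If the kernel or a container policy forbids it, bail out gracefully.
if ! unshare --user --map-root-user true 2>/dev/null; then
    echo "user namespaces unavailable here"
    exit 0
fi

unshare --user --map-root-user sh -c '
    echo "uid inside the namespace: $(id -u)"
    # From here, a backend like JuNest can mount its own rootfs and
    # switch into it -- processes, network, etc. remain the host ones.
'
```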