
If you're not familiar, the Atomic project is really interesting. Its focus is stability and reproducibility, trying to solve the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`.

There's a community offshoot called Universal Blue (named after the original Atomic image, Silverblue). It uses the standards set for containerization to make userland configuration reproducible as well. There's a manifest (a Containerfile) that enumerates all the modifications, which means an upgrade is just bumping the version of the base image and replaying all the modifications from the manifest. It's also meant to limit `sudo` usage, so you're not in the habit of giving root to random software you downloaded from the internet.
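To sketch the idea (the base image tag and packages here are made up, not taken from any real Universal Blue manifest), the whole system customization can live in a short Containerfile:

```Dockerfile
# Hypothetical Containerfile: every modification to the base OS is listed here.
FROM ghcr.io/ublue-os/silverblue-main:41

# Layer extra packages into the image at build time,
# rather than installing them on a running system.
RUN rpm-ostree install tmux htop && \
    ostree container commit
```

Upgrading then means changing the `FROM` tag and rebuilding, which replays these steps on the new base.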

Their most famous image is Bazzite, which replicates the SteamOS experience on generic hardware. They also have Bluefin for software developers.

I haven't used it myself, but I find the concept fascinating. I expect Jorge and Kyle from that project will find their way to these comments.



> It uses the standards set for containerization to make userland configuration reproducible as well. There's a manifest (a Containerfile) that enumerates all the modifications, which means an upgrade is just bumping the version of the base image and replaying all the modifications from the manifest.

Is the containerfile syntax and reproducibility as good as configuration.nix / NixOS?

I love NixOS but it’s a very acquired taste, to the point where even I occasionally wish I was running something bog-standard. If this is similar to NixOS but closer to regular Linux, that’d be nice to recommend to friends.


It's not exactly reproducible, because there's no version locking. However, after running the Containerfile you have a snapshot of the filesystem that is ready to use and that you can save. Universal Blue images use the GitHub Container Registry with 90 days of history, so at least 90 days of rollbacks are available.

I'm currently setting up a Bazzite machine using a GitHub Actions workflow that builds an image every day from Bazzite's image, adding/removing packages and files on top. I have the DE, login manager, and all their customizations in the image, and for CLI utilities and things like that I use Home Manager.

I like this setup because you just need to know Linux to customize the image. Containerfiles are just a series of commands or file copies from the repo, so compared to Nix it's easier.
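For reference, a daily-build customization like the one described might look roughly like this (the tag, packages, and repo layout are hypothetical, not the actual setup):

```Dockerfile
# Hypothetical Containerfile building on top of Bazzite.
FROM ghcr.io/ublue-os/bazzite:stable

# Add and remove packages relative to the base image.
RUN rpm-ostree override remove firefox && \
    rpm-ostree install distrobox && \
    ostree container commit

# Copy configuration files straight from the repo into the image.
COPY system_files/etc /etc
```

A scheduled CI job rebuilding this every day picks up the latest Bazzite base automatically.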


Thank you for the info!


I've only used NixOS, but the Containerfile looks more like a shell script than a Nix config:

https://github.com/ublue-os/bazzite/blob/main/Containerfile


    AS incomprehensible && \
       as nix can be && \
       i would take it && \
       any day over && \
       configuring a desktop OS && \
       with the horrible && \
       Dockerfile syntax


Containerfile is just what the IBM containers group calls a Dockerfile. They are 99% compatible.


That looks like something I wouldn't wanna use: it's all imperative.


Same. I can't be the only one who feels that Nix is doing the right thing the wrong way. The right thing being reproducible, declarative, composable environments; the wrong thing being its language and tooling. Too often I feel like serious Nix users spend a distressing amount of time manually doing package manager tasks, so the way forward is to stop doing exactly that. Going back to imperative composition is a step backward that will never help people free up time away from package management.


FWIW I am starting to use home manager on my new macOS workstation, and I haven't had to dig too deep into Nix, nixpkgs, or NixOS.

I might hit limits soon as I rice my neovim install.


Make sure to have a look at nixvim: https://github.com/nix-community/nixvim


The language is just JSON with functions. It's actually so nice to write configurations in that I wish it were easier to use as a standalone thing.
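To illustrate the "JSON with functions" point, a toy Nix expression (the names here are invented for the example):

```nix
# An attribute set looks like a JSON object, but you can abstract
# over it with functions:
let
  mkGreeting = name: { greeting = "Hello, ${name}!"; };
in
  mkGreeting "world"
# evaluates to { greeting = "Hello, world!"; }
```

The entire configuration is one big expression built out of pieces like this.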


I wish guix was a little more mainstream.


I will say, I prefer Bazzite for dev too.

In general Project Bluefin is newer, but seems to be trying to get into gaming too.

Likewise, Bazzite is considering developer images also. So I might switch to those.

But it's so easy to switch. I currently use Bazzite + Nix + Home Manager + Flatpaks and it has been fantastic. I only layer Tailscale and a few minor things that need to be system-level to operate right.


Bluefin is actually older; it's the base that kickstarted the whole development of Universal Blue. They're about equally mature, too. Bazzite is more gaming-oriented, Bluefin more general-purpose. Bazzite is way more popular since gaming attracts people.


Can you explain what you mean by:

"the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`"


Yeah, I am wondering the same. Is this referring to some kind of versioning conflict (like the old Windows DLL Hell)? Does that regularly happen in any Linux distribution repository? Or is this a matter of people going cowboy and mixing in other random repositories on top of the distro? I see the whole role of the distribution maintainers being to provide a self-consistent repository that doesn't have this kind of problem.

And as a long-time Fedora user, I don't think I've seen such conflicts with the moral equivalent yum/dnf command. But, I am somewhat rigid about not adding third party repos or RPMs to my systems. The only two exceptions I've come to accept are repos from rpmfusion.org and postgresql.org.

While I have certainly seen some bugs in Fedora over the decades, I don't see how some "atomic" solution helps here unless it means reorganizing the community QA resources to test some "minor releases" which batch together a set of package updates, versus trying to support continuous integration where each package can update individually. That would actually worry me though, as my own career experiences cause me to prefer continuous-integration approaches like the traditional Fedora distribution.


> But, I am somewhat rigid about not adding third party repos or RPMs to my systems.

This is the reason why packages seem so stable, you're deliberately staying within a well-tested ecosystem.

Fedora release upgrades probably go well for you also!

Packages themselves are a perfectly fine distribution method [under the same guidance].

Once you start mixing packaging spec guidelines and packages of varying quality, you end up wanting compartmentalization like containers/bubblewrap.

Off-the-cuff example: Fedora makes heavy use of macros in their RPM specs. Most third-party packages don't.


Every package can do whatever it wants to your system, upgrades fail when surprising config changes occur, and it all hinges on maintainers' knowledge of the state of the distro as well as the software they're packaging.


This issue is not specific to Linux, and anyone who's worked in development should be pretty familiar with it. You get onboarded to project A which requires Node, okay, you install Node. Some time later you get onboarded to project B which requires a different version of Node. Fine, get nvm and jump between different versions of Node. A bit later you're asked to help out on legacy project C, which only works with Python 2, while you also need Python 3 for newer stuff. After that it's only a matter of time until you need something which requires Homebrew, and then all bets are off. Etc.


> trying to solve the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`

What fragility is that?

Is it something outlined in https://wiki.debian.org/DontBreakDebian ?


“What fragility is that? The one described in detail in this document?”

Yes, indeed it is


So, installing random software from random repositories equals fragility? That doesn't seem specific to apt at all. However, the article is written Fedora-specific, so maybe people don't like to point out that dnf/yum is susceptible to the same problem. In fact, the article doesn't even try to call out apt, or fragility.

There is a use case for immutable distributions, just as there is one for those distributions which are not immutable.

It is dishonest to attribute fragility as a basic flaw in apt, when system fragility is a consequence of ignorance.


Yes, I do think that is fragility. Immutable distros, iOS etc have it right - installing software shouldn’t be able to fuck up the system.

People gotta install from “random repositories” because shit they need is not in official repos, further showcasing the shortcomings of the entire setup and its reliance on maintainers. This derogatory statement only works against your argument, rather than supporting it.


Nobody's saying "apt is fragile." I used it as an example because it's the install command I'm familiar with, and the one I see most often in Linux install instructions. Ubuntu's popularity made it the default package manager when outsiders think "Linux."


Bazzite is pretty great (I have set up a Steam streaming VM using it). I am also using Silverblue as an ML sandbox, with good results.



