cbrewster's comments | Hacker News

Now you need to batch your requests to your intermediate server


No, you don't. You have one per 100 nodes or whatever. Not a single one for all nodes to talk to.


You can talk to support about missing the notification about your reservation and they'll still let you put in an order.


Author here. It's titled this way because Docker seems to come up in a lot of discussions around Nix and is often compared with it, partly because of their overlap in functionality but also because people may not understand the difference between the two. The goal here was to a) highlight the different use cases of the two tools, b) compare them in the areas where they overlap, and c) show how both tools can be used together.


Author here. In our case, we had a large base Docker image called Polygott (https://github.com/replit/polygott) that pulls in dependencies for 50+ different languages from various repositories. We would pin things where possible, but it's still very difficult to ensure reproducible builds.

Additionally, Docker builds have free access to the network and can do anything they like. Nix goes to great lengths to sandbox builds and limit network access. Anything fetched from the network requires a pinned SHA-256 hash to ensure the remote data hasn't changed. (https://nixos.wiki/wiki/Nix#Sandboxing)
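
As a rough illustration of what that looks like (the URL and hash below are placeholders, not anything from Polygott): the build can only reach the network through fixed-output fetchers like fetchurl, and the result has to match the hash declared up front or the build fails.

  # Minimal sketch: Nix refuses the download unless the fetched file
  # matches the sha256 declared here (placeholder hash shown).
  { pkgs ? import <nixpkgs> { } }:

  pkgs.fetchurl {
    url = "https://example.org/some-source-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  }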


That makes sense. I think the real issue isn't Docker vs Nix, it's that some package managers are almost impossible to use to build reproducible images. I worked with debootstrap 10+ years ago trying to script reproducible builds and found it exceedingly hard. Gentoo made it almost trivial (Google used it for ChromeOS, so perhaps they felt similarly). I will look into Nix.

It appears that with the proper package manager support, Docker would be fine?

I come from a hardware background and seem to be a lot more paranoid than most software folks. I would struggle to trust a build where so much is not pinned.


Why do you need reproducible builds for Docker? The whole point is that you build it once and then you use that container on as many servers as you want.


What happens when you need to update some dependency within that image? Now you have to do an image rebuild. If you're lucky, only the top-most layers are rebuilt and the base layers stay the same; if you're unlucky, nearly the whole image is rebuilt. Usually we just want to update some subset of dependencies, but during the rebuild some other dependencies may get updated unintentionally (e.g. if they aren't pinned to a particular version). For most, this may not be an issue, but at Replit, everyone's projects use this base Docker image. Unintended updates can cause breakage for our users.


That's not really what a reproducible build is, though. A reproducible build means you get the exact same thing from your build script today or three weeks from now. Getting unexpected changes with an updated dependency is a different problem from not having a reproducible build.


Fair, but it's still a real issue and solved in a similar way: Nix has finer-grained reproducibility -- not only at the environment level but also at the derivation level. Being able to pick and choose which dependencies to update while ensuring other packages are left exactly the same is valuable to us.
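
To make that concrete, here's a rough sketch (package names and the hash are illustrative, not Replit's actual config): pin the whole package set to one snapshot, then swap individual packages without disturbing the rest.

  # Illustrative only: pin nixpkgs to an exact snapshot, then pick package
  # versions individually; swapping one attribute changes only that package,
  # while the others keep the exact same store paths.
  let
    pkgs = import (fetchTarball {
      url = "https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz";
      sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
    }) { };
  in
  pkgs.mkShell {
    buildInputs = [
      pkgs.nodejs-14_x # bump to pkgs.nodejs-16_x without touching anything else
      pkgs.python39
    ];
  }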


Author here. As with most things, it's all about the trade-offs. Docker has certainly proved itself, and that approach has worked at massive scale. However, it's not a silver bullet. For us at Replit, our Docker approach was causing issues: our base image was large and unmaintainable, and we had almost no way of knowing what changed between subsequent builds of the base image.

We've been able to utilize Nix to address both of those issues, and others who may be in a similar scenario might also find Nix to be valuable.

Of course Nix comes with its own set of opinions and complexities but it has been a worthwhile trade-off for us.


Correct, that's one of the cases where Docker's layered image system doesn't work well. Nix is almost the perfect tool for performing incremental builds and deployments for Replit's requirements.

I wish that Docker had the ability to merge multiple parent layers like Git; then you could build the gigantic image by just updating a single layer.

The only hack Docker can do is a multi-stage build; however, that won't work reliably in some cases, such as resolving conflicts.


Disclaimer: the following is still experimental, and will probably remain so for a while.

There is actually the --squash flag that you can use during builds to compress all of the layers: https://docs.docker.com/engine/reference/commandline/build/#...

For example:

  $ docker build --squash -t my-image .

In practice it can lead to smaller images, though in my experience, as long as you leverage the existing layering system efficiently, you end up shuffling around less data anyway.

E.g.:

  - layer N: whatever the base image needs
  - layer N+1: whatever system packages your container needs
  - layer N+2: whatever dependencies your application needs
  - layer N+3: your application, after it has been built

That way, I recently got a 300 MB Java app delivery down to a few dozen MB actually being transferred: since nothing in the dependencies or the base image had changed, it just sent the latest application version, which was stored in the last layer.

The above ordering also helps immensely with Docker build caching. No changes in your pom.xml or whatever file you use for keeping track of dependencies? The cached layers on your CI server can be used; no need to install everything again. No additional packages need to be installed? Cache. That way, you can just rebuild the application and push the new layer to your registry of choice, keeping all of the others as they are.

Using that sort of instruction ordering makes for faster builds, less network traffic, and therefore faster redeploys.
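
For a Java/Maven app, that ordering might look roughly like this (the image tag, paths, and jar name are just placeholders, not my actual setup):

  # Illustrative only: layers ordered from least to most frequently changing,
  # so day-to-day rebuilds usually reuse everything above the source COPY.
  FROM maven:3.8-jdk-11

  WORKDIR /app

  # Dependencies: this layer is re-run only when pom.xml changes
  COPY pom.xml .
  RUN mvn -B dependency:go-offline

  # The application itself: rebuilt on every source change
  COPY src ./src
  RUN mvn -B package -DskipTests

  # Assumes the pom sets finalName to "app"; adjust the jar name otherwise
  CMD ["java", "-jar", "target/app.jar"]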

I even scheduled weekly base image builds and daily builds to have the dependencies ready (though that can largely be done away with by using something like Nexus as a proxy/mirror/cache for the actual dependencies too). It's pretty good.

Edit: actually, I think I'm reading the parent comment wrong; maybe they just want to update a layer in the middle? I'm not sure. That would be nice too, to be honest.


Those sound like issues with your Docker usage - there are options to keep the base image quite streamlined (e.g. Alpine or distroless images).


For context, I'm referencing our (legacy) base image for projects on Replit: Polygott (https://github.com/replit/polygott/).

The image contains dependencies needed for 50+ languages. This means repls by default are packed with lots of commonly used tools. However, the image is massive, takes a long time to build, and is difficult to deploy.

Unfortunately, slimming the image down is not really an option: people rely on all the tools we provide out of the box.


> For context, I'm referencing our (legacy) base image for projects on Replit: Polygott (https://github.com/replit/polygott/).

May I ask why you didn't use something like Ansible to build such a complex image? With appropriate package version pinning (which is the real crux here) it should work well enough to get a reproducible build.

I understand it would already have been something different from a pure Dockerfile so it's not that fair to compare buuut...


> May I ask why you didn't use something like Ansible

They did; it's called Nix, and they wrote a blog post about it ;)


While you can copy files from different stages, I wouldn't consider this to be the same thing as composing two base images together. Like the example in the post, you can't take the Rust and NodeJS images and tell Docker to magically merge them. You can copy binaries & libraries from one to the other but that seems extremely tedious and error prone.

Whereas Nix makes it rather trivial to compose the packages you need (e.g. Rust, NodeJS) in your environment.
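
For example, a minimal shell.nix along these lines (standard nixpkgs attribute names, not the exact config from the post) gives you both toolchains side by side:

  # Minimal sketch: one environment containing both the Rust and Node toolchains.
  { pkgs ? import <nixpkgs> { } }:

  pkgs.mkShell {
    buildInputs = [
      pkgs.rustc
      pkgs.cargo
      pkgs.nodejs
    ];
  }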


It was more of a silly demonstration showing what's possible and less about being practical. :)


Yup, you can set up Haskell with Nix. I'm not too familiar with Haskell, but it looks like Nix even has a collection of Haskell packages (including lens). This looks like a good resource for getting started with Nix + Haskell: https://notes.srid.ca/haskell-nix. You should be able to follow along in a Nix repl. I recommend forking one of the example repls from the blog post.
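
If you just want a quick starting point, something like this (a sketch using nixpkgs' standard Haskell infrastructure, not taken from that guide) should give you GHC with lens available:

  # Rough sketch: GHC packaged together with the lens library from nixpkgs.
  { pkgs ? import <nixpkgs> { } }:

  pkgs.mkShell {
    buildInputs = [
      (pkgs.haskellPackages.ghcWithPackages (hp: [ hp.lens ]))
    ];
  }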


This is coming soon!


Here's a template for Racket: https://replit.com/@ConnorBrewster/racket


OK, experimenting with this example, I attempted to import a simple Racket web app (https://github.com/jarcane/RantStack). I've provided links to the REPLs too for both attempts.

attempt 1) I was able to import a Racket app from GitHub and added a similar Nix config to it as described in the docs, but it would not run, as it says nix-shell is not present. Apparently a repl needs to be blessed somehow as a Nix instance to work, but there's no option to select that when importing or creating a repl. URL: https://replit.com/@jarcane/RantStack

attempt 2) Next I tried just forking your template, and literally just copy-pasting the code from my app (it's a single file anyway) into the main.rkt. This runs the app ... but I can't actually access it, because apparently the ports aren't being forwarded. Going to the URL as described in the repl.it docs just gives me an eternal "Repl waking up" screen that loads forever, but never resolves. URL: https://replit.com/@jarcane/racket


Sorry, building Nix environments from scratch is still rough around the edges, and we are working to improve that at the moment.

When hosting a web server, your app must listen on 0.0.0.0; adding that to your repl seems to make it work. I will make sure that is in our docs for web hosting.

Working Racket web app: https://replit.com/@ConnorBrewster/racket-server


Ahh! That was the missing piece of the puzzle, thank you.

Also played a bit with the Glitch import on an Elm project but found it broke npm in weird ways (something about a file being moved but not found?), and then the GitHub import of the same project wouldn't work either because of Node version incompatibilities that don't seem to be repl.it's fault exactly, other than just "it runs node 12 and some stuff is broke on node 12".

Does look promising otherwise, though; with the Nix support I can see soooo many things being possible that simply weren't on other similar options like this.


Yeah, we are really excited about the future Nix will bring for us at Replit. Soon, language version incompatibility will be a thing of the past on Replit.


Racket without graphics is ok but lacks a certain connection for teaching.

Maybe someone will come along and make the graphics for Racket work the same way i3 works (which is really cool).

