In practice it can lead to smaller images, though in my experience, as long as you use the existing layer mechanism efficiently, you end up shuffling around less data anyway.
E.g.:
- layer N: whatever the base image needs
- layer N+1: whatever system packages your container needs
- layer N+2: whatever dependencies your application needs
- layer N+3: your application, after it has been built
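A minimal Dockerfile sketch of that ordering for a Java app might look like the following (the base image, package, paths, and main class are illustrative, assuming the app was already built and its dependencies copied to target/lib, e.g. via the maven-dependency-plugin):

```dockerfile
# layer N: the base image
FROM eclipse-temurin:17-jre

# layer N+1: system packages the container needs (curl is just an example)
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# layer N+2: application dependencies on their own, so this layer
# only changes when the dependencies do
COPY target/lib/ /app/lib/

# layer N+3: the application itself, rebuilt and re-sent on every release
COPY target/app.jar /app/app.jar

CMD ["java", "-cp", "/app/app.jar:/app/lib/*", "com.example.Main"]
```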
That way, I recently got a 300 MB Java app delivery down to a few dozen MB actually being transferred: since nothing in the dependencies or the base image had changed, only the latest application version, stored in the last layer, was sent.
The above ordering also helps immensely with Docker build caching. No changes in your pom.xml (or whatever file you use for tracking dependencies)? The cached layers on your CI server can be reused, no need to install everything again. No additional system packages to install? Cache. That way, you can rebuild just the application and push only the new layer to your registry of choice, keeping all of the others in place.
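On a CI server where the local build cache doesn't survive between jobs, something like this can seed it from the registry (registry and tag are made up; with BuildKit you'd additionally need the image built with inline cache metadata for this to work):

```sh
# pull the previous image so its layers are available locally
docker pull registry.example.com/myapp:latest || true
# reuse any unchanged layers from it when building the new image
docker build --cache-from registry.example.com/myapp:latest \
    -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
```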
Using that sort of instruction ordering makes for faster builds, less network traffic, and therefore faster redeploys.
I even scheduled weekly base image builds and daily dependency builds to keep those layers fresh (though that can largely be done away with by using something like Nexus as a proxy/mirror/cache for the actual dependencies too). It's pretty good.
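The scheduling itself doesn't need anything fancy; a couple of cron entries on a build box are enough (image tags and context paths here are made up for illustration):

```
# weekly base image rebuild, Sunday 03:00
0 3 * * 0  docker build -t registry.example.com/base:latest /srv/build/base && docker push registry.example.com/base:latest
# daily dependency image rebuild, 03:30
30 3 * * * docker build -t registry.example.com/deps:latest /srv/build/deps && docker push registry.example.com/deps:latest
```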
Edit: actually, I think I'm reading the parent comment wrong; maybe they just want to update a layer in the middle? I'm not sure. That would be nice too, to be honest.
There is actually a --squash flag that you can pass during builds to compress all of the layers into one: https://docs.docker.com/engine/reference/commandline/build/#...
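It's an experimental feature, so the daemon needs experimental features enabled; usage is just:

```sh
# requires a daemon with experimental features enabled;
# the build result is collapsed into a single layer
docker build --squash -t myapp:latest .
```

Worth noting that squashing trades away the layer reuse described above, since everything ends up in one layer that has to be re-sent whenever anything changes.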