Hacker News

Let's get concrete. Here's about as complex a unit file as you'll find for running a Docker container:

    [Unit]
    Description=My Container
    After=docker.service
    Requires=docker.service
    
    [Service]
    Restart=always
    # -a keeps docker attached in the foreground so systemd can supervise it
    ExecStart=/usr/bin/docker start -a mycontainer
    ExecStop=/usr/bin/docker stop mycontainer
75% of this is boilerplate, but there's not a lot of repetition and most of it is relevant to the service itself. The remaining lines describe how you'd normally interact with Docker.

In comparison, here's a definition to set up the same container as a deployment in Kubernetes.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mycontainer
      labels:
        app: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec: 
          containers:
            - name: my-container
              image: mycontainer
Almost 90% of this has meaning only to Kubernetes, not to the people who will have to read this object later. There's a lot of repetition of content (namely the labels and nested specs), and the "template" portion is not self-explanatory (what is it a template of? why is it considered a "template"?).

This is not to say that these abstractions are useless, particularly when you have hundreds of nodes and thousands of pods. But for a single node, it's a lot of extra conceptual work (not to mention googling) to avoid learning how to write unit files.



That's a great example! Thank you very much for sharing.

That said, it's been my experience that a modern Docker application is only occasionally a single container. More often it's a heterogeneous mix of three or more containers that collectively comprise an application. Now we've got multiple unit files, each handling a different aspect of the application, and the notion of a "service" conflates system-level services like docker with application-level things like redis. There's a resulting explosion of cognitive complexity as I have to keep track of what's part of the application and what's a system-level service.

Meanwhile, the Kubernetes YAML requires an extra handful of lines under the "containers" key.
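Concretely, adding a second container is just a couple more entries under that key. In this sketch the redis sidecar is hypothetical, purely for illustration:

    containers:
      - name: my-container
        image: mycontainer
      - name: redis        # hypothetical sidecar container
        image: redis:7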

Again, thank you for bringing forward this concrete example. It's a very kind gesture. It's just possible that use-cases and personal evaluations of complexity might differ and lead people to different conclusions.


> There's a resulting explosion of cognitive complexity as I have to keep track of what's part of the application and what's a system-level service.

If you can start them up with a few extra lines in the Kubernetes YAML (additional containers in a pod), the systemd equivalent is another Exec line in the unit file that calls docker with a different container name (in practice an ExecStartPre= line, since systemd only honors multiple ExecStart= lines for Type=oneshot services).
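As a sketch (the helper container name is hypothetical; note that systemd only allows multiple ExecStart= lines for Type=oneshot, so the extra container goes in ExecStartPre= here):

    [Service]
    Restart=always
    # Start a hypothetical companion container first, detached
    ExecStartPre=/usr/bin/docker start myhelper
    ExecStart=/usr/bin/docker start -a mycontainer
    ExecStop=/usr/bin/docker stop mycontainer myhelper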

EDIT: You do have to think a bit differently about networking, since with Docker the containers get separate networks by default, unlike in a k8s pod. You can make the behavior match, however, by creating a shared network for the containers.
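Roughly, using the standard Docker CLI (container and image names here are hypothetical): a user-defined network gives the containers name-based connectivity, while joining one container's network namespace gets you a pod-like shared localhost:

    # Containers on one user-defined network reach each other by name
    docker network create myapp-net
    docker run -d --name redis --network myapp-net redis:7
    docker run -d --name app --network myapp-net myimage

    # Or, closer to a k8s pod: share the first container's network namespace
    docker run -d --name sidecar --network container:app sidecarimage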

If, however, there's a "this service must start before the next" relationship, systemd's dependency system will be more comprehensible than Kubernetes', since Kubernetes does not model dependency trees; the recommended workaround is init containers.
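For example, the ordering can be expressed directly between two units (unit names hypothetical):

    # redis.service
    [Unit]
    After=docker.service
    Requires=docker.service

    # myapp.service -- will not start until redis.service has started
    [Unit]
    After=redis.service
    Requires=redis.service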

As a side note, unit files can also do things like init containers using the ExecStartPre hook.
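A sketch of that pattern (the migration image name is hypothetical): a one-shot container runs to completion before the main process starts, much like an init container:

    [Service]
    # Runs to completion before ExecStart, like a k8s init container
    ExecStartPre=/usr/bin/docker run --rm myapp-migrations
    ExecStart=/usr/bin/docker start -a mycontainer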


For multi-container systems I really like Docker Compose and Swarm - a single YAML file can define everything. It really is wonderfully simple.
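For instance, a minimal compose file for an app-plus-redis pair might look like this (service and image names hypothetical):

    services:
      app:
        image: mycontainer
        restart: always
        depends_on:
          - redis
      redis:
        image: redis:7

Then `docker compose up -d` brings the whole set up together.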



