
With Kubernetes, a typical workflow might look like this:

1. Start a Kubernetes cluster

2. Build Docker images

3. Deploy Docker images (helm, kubectl, argocd, ...)

4(a) Run unit tests (kubectl exec -l k8s-app=web rake test)

4(b) Run e2e tests (kubectl run cypress)

4(c) Create ephemeral environment (EXPOSE WEBSITE localhost:8000)

Because we take memory snapshots after each step, you'd effectively get a fresh, fully provisioned Kubernetes cluster immediately after pushing, instead of re-running all the steps every time (you'd skip straight to step 3). You'd then run 4{a,b,c} in parallel: each would "fork" the VM and get a separate copy of all of the resources.
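As a rough sketch, the workflow above could be written as a single pipeline file (Layerfile-style; the base image, cluster command, and manifest paths here are assumptions for illustration, not actual syntax from the docs):

```
FROM vm/ubuntu:18.04            # hypothetical base VM image

# Steps 1-2: start the cluster and build images.
# A memory snapshot is taken after each RUN, so on the next push
# any unchanged steps are restored from the snapshot, not re-run.
RUN k3s server &                # hypothetical: any local k8s distribution
RUN docker build -t web .

# Step 3: deploy. When only application code changes, the pipeline
# resumes from the snapshot taken just before this point.
COPY . .
RUN kubectl apply -f k8s/

# Steps 4(a-c): each of these forks the VM snapshot, so the unit tests,
# e2e tests, and ephemeral environment run in parallel against
# separate copies of the cluster.
RUN kubectl exec -l k8s-app=web rake test
RUN kubectl run cypress
EXPOSE WEBSITE localhost:8000
```

The key design point is that each instruction boundary doubles as a cache key: forking a VM snapshot is cheap compared to re-provisioning a cluster, which is what makes the parallel 4{a,b,c} steps practical.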

Here are a few links that go into more detail:

- https://layerci.com/docs/tuning-performance/run-repeatable

- https://layerci.com/blog/ci-at-layerci/



OK, so the more complex the infra, the more advantage there is in caching.


The idea is that it starts as simple as a `docker build` (e.g., as easy as or easier than GitHub Actions, CircleCI orbs, etc.) but scales as your code/infra does.



