Like it just doesn't do anything other than clone a branch and run a blind kubectl apply - it'll happily wedge your cluster into a state requiring manual intervention.
> ArgoCD is the de facto GitOps standard now and has the lion's share of GitOps deployments.
That only covers pull-based GitOps. Push-based GitOps doesn't require Kubernetes, let alone magical Kubernetes operators, only plain old CI/CD pipelines.
I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA. Otherwise it’s just sparkling pipelines.
> I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA.
Not quite. That's gatekeeping from some ill-advised people who are either deeply invested in pushing specific tools or a particular approach, or fool themselves into believing they know the one true way.
Meanwhile, people who do push-based GitOps just get stuff done by configuring their pipelines to do very simple and straightforward stuff: package and deliver deployment units, update references to which deployment units must be deployed, and actually deploy them.
The ugly truth is that push-based GitOps is terribly simple to pull off, and doesn't require any fancy tools or software or Kubernetes operators. Configuration drift and automatic reconciliation, supposedly the biggest selling point of pull-based GitOps, is a non-issue because deployment pipelines are idempotent. You can even run them as cron jobs, if you'd like to. But maintaining trivial, single-stage pipelines does not make an impressive CV.
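To make "simple and straightforward" concrete, here's a hypothetical sketch of such a pipeline as a GitHub Actions workflow. The names (app, registry.example.com) are placeholders, not anything from the thread; the point is that every step is idempotent, so re-running the whole job on a schedule doubles as drift repair:

```yaml
# Hypothetical push-based GitOps pipeline. Package, deliver, deploy --
# each step is safe to re-run, so the cron trigger is free drift repair.
name: deploy
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 * * * *"   # optional hourly re-run reconciles any drift
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: package and deliver the deployment unit
        run: |
          docker build -t registry.example.com/app:${GITHUB_SHA} .
          docker push registry.example.com/app:${GITHUB_SHA}
      - name: deploy and verify
        run: |
          kubectl set image deploy/app app=registry.example.com/app:${GITHUB_SHA}
          kubectl rollout status deploy/app --timeout=120s
```

Swap in whatever CI system and deploy command you actually use; the shape stays the same.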
> Otherwise it’s just sparkling pipelines.
Yeah, simple systems. God forbid something is reliable, easy to maintain, robust, and straightforward. Apparently we need full-blown Kubernetes operators running on timers instead. It's like digging moats around their own job security. It's madness.
Proper CI/CD for GitOps is actually really hard. How do you alert on failures in a way that doesn't tie you to a specific provider and follows an alert chain? How do you chain dependent GitOps pipelines without coding a giant pipeline mess? How do you do health checks on what is deployed, in a portable manner, without giving the CI/CD runners basically global root? And on and on and on. All of this is solved by Flux CD or Argo CD.
The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.
That can take hours, and that’s not acceptable when PROD is burning.
That’s why most places don’t dare sync to prod automatically.
> The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.
Nonsense. Your process is only as complex as you want it to be. Virtually all git providers allow you to revert commits from their GUI, and you can configure pipelines to allow specific operations to not require PRs.
As a reference, in push-based GitOps it's pretty mundane to configure your pipelines to automatically commit to other git repos without requiring PRs. This is the most basic aspect of the whole approach. I mean, think about it: if your goal is to automate a process, why would you go to great lengths to prevent yourself from automating it?
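And the revert itself is a single command, not a PR ceremony. Here's a minimal demo in a throwaway local repo (all the file contents are made up for illustration):

```shell
# Throwaway demo: undoing a bad change in a config repo is one revert
# commit. In a real setup, pushing that commit triggers the redeploy.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "ci@example.com" && git config user.name "ci-bot"
echo "image: app:v1" > deploy.yaml
git add deploy.yaml && git commit -qm "deploy v1"
echo "image: app:v2-broken" > deploy.yaml
git commit -qam "deploy v2"                # the bad change lands
git revert --no-edit HEAD >/dev/null       # one command creates the inverse commit
grep "image:" deploy.yaml                  # back to app:v1
```

Granting a bot identity direct push rights to the config repo is all the "process" this needs.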
> The whole premise of opengitops is heavily reliant on kubernetes.
There's indeed a fair degree of short-sightedness in some GitOps proponents, who conflate their own personal implementation with the one true GitOps.
Back in the real world, the bulk of cloud infrastructure covers resources that go well beyond applying changes to a pre-baked Kubernetes cluster. Any service running on the likes of AWS/Google Cloud/Azure/etc requires configuring plenty of cloud resources with whatever IaC platform they use, and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.
> I mean Crossplane is a pretty popular k8s operator that does exactly that, create cloud infrastructure from K8s objects.
If your only tool is a hammer then every problem looks like a nail. It's absurd that anyone would think it's a good idea to implement their IaC infrastructure, the one thing you want and need to be bootstrappable, in a way that requires a full-blown K8s cluster already up and running with custom operators perfectly configured and working flawlessly. Madness.
I hope someone somewhere has managed to run a K8s cluster on a bunch of EC2 instances that are themselves described as objects in that K8s cluster. Maybe the VPC is also an object in the cluster.