gpi's comments | Hacker News

In other words, the blind leading the blind.

Or, if you focus on the "slop" aspect of AI, the bland leading the bland ;-)

You may have to use Kargo as well, also by the makers of Argo

ArgoCD is the de facto GitOps standard now and has the lion's share of GitOps deployments.

Which is funny because ArgoCD is...miserable.

Like it just doesn't do anything other than clone a branch and run a blind kubectl apply - it'll happily wedge your cluster into a state requiring manual intervention.


Yeah I have to use it at work currently, and it’s not great. Personally I find FluxCD so much better.

Less setup, faster, and in my experience so far, no “wedging the cluster in a bad state”, which I’ve definitely observed with Argo.


> ArgoCD is the de facto GitOps standard now and has the lion's share of GitOps deployments.

That only covers pull-based GitOps. Push-based GitOps doesn't require Kubernetes, let alone magical Kubernetes operators, just plain old CI/CD pipelines.


I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA. Otherwise it’s just sparkling pipelines.

> I often hear folks use terminology like “push GitOps”. But as far as I understand things, it’s only GitOps if you’re following the four principles described in TFA.

Not quite. That's gatekeeping from some ill-advised people who are either deeply invested in pushing specific tools or a particular approach, or who fool themselves into believing they know the one true way.

Meanwhile, people who do push-based GitOps just get stuff done by configuring their pipelines to do very simple and straightforward things: package and deliver deployment units, update references to which deployment units must be deployed, and actually deploy them.

The ugly truth is that push-based GitOps is terribly simple to pull off, and doesn't require any fancy tools or software or Kubernetes operators. Configuration drift and automatic reconciliation, supposedly the biggest selling point of pull-based GitOps, is a non-issue because deployment pipelines are idempotent. You can even run them as cron jobs, if you'd like to. But maintaining trivial, single-stage pipelines does not make for an impressive CV.
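To make it concrete: the whole deploy stage can be a few lines. A minimal sketch in Python, assuming rendered manifests are committed to the repo and kubectl is on the PATH (the path and label below are made up):

    import subprocess, sys

    MANIFEST_DIR = "deploy/"  # rendered manifests committed to git (illustrative path)

    # Idempotent by construction: `kubectl apply` converges the cluster toward
    # whatever is in git, so re-running this -- from CI or from cron -- is safe.
    result = subprocess.run(
        ["kubectl", "apply", "-f", MANIFEST_DIR,
         "--prune", "-l", "app.kubernetes.io/managed-by=pipeline"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        sys.exit(result.returncode)  # let the CI job (or cron's mail) surface the failure

Put that on a schedule and you get the same reconciliation loop the pull-based tools advertise.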

> Otherwise it’s just sparkling pipelines.

Yeah, simple systems. God forbid something is reliable, easy to maintain, robust, and straightforward. We need full-blown Kubernetes operators running on timers instead, don't we? It's like digging moats around their job stability. It's madness.


Proper CI/CD for GitOps is actually really hard. How do you alert on failures in a way that doesn't tie you to a specific provider and follows an alert chain? How do you chain dependent GitOps pipelines without coding a giant pipeline mess? How do you do health checks on what is deployed, in a portable manner, without giving the CI/CD runners basically global root? And on and on. This is all solved by FluxCD or ArgoCD.

The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.

That can take hours, and that’s not acceptable when PROD is burning.

That’s why most places don’t dare sync to prod automatically.


> The problem with automatically applying whatever crap is stored in git is that you cannot reverse anything without a long, heavyweight process: clone the code, make a branch, create a PR, approve the PR, merge to master/main.

Nonsense. Your process is only as complex as you want it to be. Virtually all git providers allow you to revert commits from their GUI, and you can configure pipelines to allow specific operations to not require PRs.

As a reference, in push-based GitOps it's pretty mundane to configure your pipelines to automatically commit to other git repos without requiring PRs. This is the most basic aspect of the whole approach. I mean, think about it: if your goal is to automate a process, then why would you go to great lengths to prevent yourself from automating it?
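For instance, the "update references" step is usually nothing more than this (the repo URL, file path, and image tag below are all hypothetical):

    import pathlib, re, subprocess

    CONFIG_REPO = "git@example.com:platform/deploy-config.git"  # hypothetical
    NEW_IMAGE = "registry.example.com/app:1.4.2"                # from the build stage

    subprocess.run(["git", "clone", "--depth", "1", CONFIG_REPO, "config"], check=True)
    manifest = pathlib.Path("config/app/deployment.yaml")
    # Naive tag bump for the sketch; real pipelines tend to use yq or kustomize.
    manifest.write_text(re.sub(r"image: \S+", f"image: {NEW_IMAGE}", manifest.read_text()))
    subprocess.run(["git", "-C", "config", "commit", "-am", f"deploy {NEW_IMAGE}"], check=True)
    # Direct push, no PR: the bot account is simply allowed to write to this branch.
    subprocess.run(["git", "-C", "config", "push"], check=True)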


Last I saw, ArgoCD was heavily reliant on Kubernetes.

Has this changed?

(also, it seems like the site is heavily supporting ArgoCD)


ArgoCD only works on Kubernetes. That hasn’t changed.

The whole premise of OpenGitOps is heavily reliant on Kubernetes.

> The whole premise of OpenGitOps is heavily reliant on Kubernetes.

There's indeed a fair degree of short-sightedness in some GitOps proponents, who conflate their own personal implementation with the one true GitOps.

Back in the real world, the bulk of cloud infrastructure covers resources that go well beyond applying changes to a pre-baked Kubernetes cluster. Any service running on the likes of AWS/Google Cloud/Azure/etc. requires configuring plenty of cloud resources with whatever IaC platform they use, and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.


> and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.

I mean, Crossplane is a pretty popular K8s operator that does exactly that: create cloud infrastructure from K8s objects.
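For example, once a provider is installed, creating a cloud resource is just creating an object. A sketch with the Python Kubernetes client; the group/version/fields follow the Upbound AWS provider as I remember it, so treat them as illustrative and check your installed CRDs:

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    # A Crossplane managed resource: an S3 bucket declared as a K8s object.
    bucket = {
        "apiVersion": "s3.aws.upbound.io/v1beta1",  # illustrative group/version
        "kind": "Bucket",
        "metadata": {"name": "example-team-artifacts"},
        "spec": {"forProvider": {"region": "eu-west-1"}},
    }

    # Managed resources are cluster-scoped; the operator reconciles this
    # object into a real bucket, and tears it down when the object is deleted.
    api.create_cluster_custom_object(
        group="s3.aws.upbound.io", version="v1beta1",
        plural="buckets", body=bucket,
    )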


> I mean, Crossplane is a pretty popular K8s operator that does exactly that: create cloud infrastructure from K8s objects.

If your only tool is a hammer then every problem looks like a nail. It's absurd that anyone would think it's a good idea for their IaC setup, the one thing you want and need to be bootstrappable, to require a full-blown K8s cluster already up and running, with custom operators perfectly configured and working flawlessly. Madness.


It's more of a use case for large platform teams that want to automate and enable hundreds of teams with thousands of disparate cloud resources.

You can have a small bit of Terraform to bootstrap Crossplane, then use Crossplane for the other 99% of resources.


I hope someone somewhere has managed to run a K8s cluster on a bunch of EC2 instances that are themselves described as objects in that K8s cluster. Maybe the VPC is also an object in the cluster.

The Argo project exclusively targets Kubernetes.


I believe they use Argo, according to a previous post-mortem.

https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-...



I've never felt comfortable in a Miata haha


Close! They just updated their status and it's back to "working on a fix":

Update - Some customers may be still experiencing issues logging into or using the Cloudflare dashboard. We are working on a fix to resolve this, and continuing to monitor for any further issues. Nov 18, 2025 - 14:57 UTC


Alexandr


Thanks


The amendment below, from the Anthropic blog post, is telling.

Edited November 14 2025:

Added an additional hyperlink to the full report in the initial section

Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"


> The operational tempo achieved proves the use of an autonomous model rather than interactive assistance. Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.

The assumption that no human could ever (program a computer to) do multiple things per second, nor have their code do different things depending on the result of the previous request, is... interesting.
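Any scripted scanner clears that bar trivially. A toy sketch (the URL and paths are placeholders):

    import requests

    BASE = "https://target.example.com"
    CANDIDATES = ["admin", "login", "backup.zip"]  # real tools use wordlists of thousands

    for path in CANDIDATES:
        r = requests.get(f"{BASE}/{path}", timeout=5)
        if r.status_code == 200:
            # branching on the previous request's result -- no model required
            print(f"hit: /{path} ({len(r.text)} bytes)")

Sustaining "multiple operations per second" is table stakes for this kind of tooling, not evidence of autonomy.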

(The observation is not original to me; someone on Twitter pointed it out.)


Great point; it might be just pure ignorance. Even OSS pentesting tooling such as Metasploit has great capabilities. I can see how an LLM could be leveraged to build custom modules on top of those tools, or how you could add basic LLM "decision making", but this is just another additive tool in the chain.


There is absolutely no way a technical person would mix those up


Right! It's well known that technical people never make mistakes.


I think the expectation is more that serious people have their work checked over by other serious people to catch the obvious mistakes.


Every time you have your work "checked over by other serious people", it eliminates 90% of the mistakes. Have it checked over twice and 99% of the mistakes have been eliminated, and so on. But it never gets to 0%. That's my experience anyway.


Every time you have your work "checked over by other serious people", it only means it's been checked over by other people. You can't attach a metric to this process. Especially when it comes to security, adding more internal eyeballs doesn't mean you've expanded coverage.

One of the things I enjoy about Penn and Teller is that they explain in detail how their point of view differs from the audience's, and how they intentionally use that difference in their shows. With that in mind, you might picture your org as the audience, with one perspective diligently looking forward.


Serious people like to look at things through a magnifying glass, which makes them miss a lot.

I've seen printed books, checked by paid professionals, that contained a "replace all" applied without context, creating a grammar error on every single page. Or ones where everyone just forgot to add page numbers. Or a large cookbook where the index and page numbers didn't match, making it almost impossible to navigate.

I'm talking about pre-AI work, done with a publisher. Apparently it wasn't obvious to them.


But what about an ML person roped into writing an AI-assisted blog post about security?

