Pin the action's version via a digest and use Renovate for updates.
You can run all of your CI locally if you don't embed your logic in the workflows; just use CI for orchestration. Use an environment manager (Mise, Nix, etc.) to install tooling (you'll get consistency across your team and with CI) and call out to a task runner (scripts, Make, Task, etc.).
I think the idea is GitHub actions calls "build.sh", or "deploy.sh" etc. Those scripts contain all of the logic necessary to build or deploy or whatever. You can run those scripts locally for testing / development, or from CI for prod / auditing.
Yes, this is what I meant! If you structure it correctly, using task runners and an environment manager, you can do everything locally with the same versions etc. E.g.
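Something like this, where a hypothetical `scripts/build.sh` holds all the logic and the workflow just invokes it (the script name and its contents here are made up for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical scripts/build.sh: every step CI needs lives in here, so the
# same command runs on a laptop and in the workflow.
set -euo pipefail

build() {
  # GitHub Actions sets CI=true; behavior stays identical either way,
  # only the logging differs.
  if [ "${CI:-}" = "true" ]; then
    echo "building (CI)"
  else
    echo "building (local)"
  fi
  echo "build complete"
}

build "$@"
```

The workflow step then reduces to `run: ./scripts/build.sh`, and debugging a CI failure starts with running the exact same script locally.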
> You just have github-runner-1 user and you need to manually check out repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for self-hosted runner.
Yeah, checking out every time is a slight papercut I guess, but it gives you control, as sometimes you don't need to check out anything, or you want a shallow/full clone. If it checked out for you, there would be other papercuts.
I use their runners, so I never need to do any cleanup and get a fresh slate every time.
Code search over all of Gitlab (even if available) wouldn't help much when many of the interesting repos might be on Github. To be truly useful, it would need to index repos across many different forges. But there's a tension in presenting that to users if you're afraid that they might exit your ecosystem to go to another forge.
It is fairly common practice, almost an engineering best practice, to not put logic in CI. Just have it call out to a task runner, so you can run the same command locally for debugging etc. Think of CI as shell-as-a-service: you're just paying someone to enter some shell commands for you, and you should be able to do exactly the same locally.
You can take this setup further and use an environment manager to remove the installation of tools from CI as well, for local/remote consistency and other benefits.
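As a sketch of what that looks like with mise as the environment manager (the pinned versions below are arbitrary examples):

```shell
# Pin toolchain versions in .mise.toml, committed to the repo, so every
# laptop and every CI run resolves the exact same tools.
cat > .mise.toml <<'EOF'
[tools]
node = "20.11.1"
go = "1.22.0"
EOF

# Then, both locally and in CI, the same two commands:
#   mise install              # fetch the pinned tools
#   mise exec -- task build   # run the task runner with those tools on PATH
```

CI stops being a special environment at that point; version drift between a developer machine and the runner becomes impossible by construction.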
I mean, … the outages are a big part of it. But those outages also extending to taking out my own hardware (e.g., through bugs like the above consuming resources on my own compute) is just double the pain.
But as a product, it's just bad? Riddled with bugs? In no particular order:
* Artifact APIs will return garbage results during a run. Note that the APIs are separate from the GHA actions for interacting with artifacts, and the latter uses undocumented APIs, presumably b/c the documented APIs are buggy AF.
* needs.… will just return junk data if you typo.
* Builds of actions are not cached, making them rather slow. Many GH official actions hack around this by pointing the tag/branch (e.g., @v4) at a pre-built artifact.
* The pricing is high, compared to other offerings.
* The interface is just FUBAR: e.g., stdin is a pipe, which wreaks havoc on commands that change their behavior when piped to. stdout & stderr are pipes too, so although GHA ostensibly supports colored output, most tools detect the pipe and disable color, rendering it basically useless.
* Jobs, steps, actions are conceptual mud. There's a few ideas, like "execute this thing", in there, but it's all jumbled up/duplicated. Container settings are configured per-job, so if you want to run some steps in one container and some in another, within the same job, you're just left out to dry.
* Secrets are hard to manage, and even harder to not leak.
* The expression language has all sorts of corners, like coerced types and functions with parse-time side effects!
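On the colored-output point: most CLIs decide whether to emit ANSI color by checking whether stdout is a terminal, which under GHA it never is. A quick shell illustration of the check involved:

```shell
# Report whether stdout is a TTY -- the same isatty-style check many tools
# use to decide whether to emit ANSI color codes.
color_check() {
  if [ -t 1 ]; then
    echo "tty: color on"
  else
    echo "pipe: color off"   # what a command sees under a GHA runner
  fi
}

color_check
```

Workarounds end up being per-tool: e.g. `FORCE_COLOR=1` for many Node-ecosystem tools or `CLICOLOR_FORCE=1` for BSD-style ones, which is exactly the annoyance being described.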
> I mean, … the outages are a big part of it. But those outages also extending to taking out my own hardware (e.g., through bugs like the above consuming resources on my own compute) is just double the pain.
Never needed to run my own runner, but yes the outages are annoying.
> Artifact APIs will return garbage results during a run. Note that the APIs are separate from the GHA actions for interacting with artifacts, and the latter uses undocumented APIs, presumably b/c the documented APIs are buggy AF.
Never had an issue either; I've only used the GitHub CLI to upload artifacts to releases.
> needs.… will just return junk data, if you typo.
Was not aware, but I have never typoed it. I'm wondering if a linter such as actionlint would catch this.
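actionlint does validate job references, so it should flag this class of typo. A minimal workflow to try it on, with a deliberately misspelled `needs` (the job names are made up):

```shell
# Write a workflow with a typoed needs reference; running
#   actionlint .github/workflows/ci.yml
# on it should report that job "biuld" does not exist.
mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo build
  deploy:
    runs-on: ubuntu-latest
    needs: biuld   # typo of "build"
    steps:
      - run: echo deploy
EOF
```

It also type-checks `${{ }}` expressions to some extent, so junk `needs.…` references in expressions have a chance of being caught before a run ever starts.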
> Builds of actions are not cached, making them rather slow. Many GH official actions hack around this by pointing the tag/branch (e.g., @v4) at a pre-built artifact.
Are there not caching actions you can use?
> The pricing is high, compared to other offerings.
Try blacksmith.sh: half the price, faster, unlimited parallelisation, etc.
> The interface is just FUBAR: e.g., stdin is a pipe, which wreaks havoc on commands that change their behavior when piped to. stdout & stderr are pipes too, so although GHA ostensibly supports colored output, most tools detect the pipe and disable color, rendering it basically useless.
I always call out to a task runner; I don't have commands inside the workflow, so I've never experienced this.
> Jobs, steps, actions are conceptual mud. There's a few ideas, like "execute this thing", in there, but it's all jumbled up/duplicated.
Is it not that each job has multiple steps, and each step runs a command or an action?
> Container settings are configured per-job, so if you want to run some steps in one container, and some in another, but in the same job, you're just going to be left out to dry.
Yeah, something you can't do, but I never ran into this issue either. There are ways around it, such as calling out to a script which does volume mounts and runs things in a container using `docker run`. Or just cut up the problem so you don't need to, and use multiple jobs or something.
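A sketch of that script-based workaround (image names and script paths are placeholders, and the `docker run` is echoed as a dry run so the shape is visible; drop the `echo` to execute for real):

```shell
# One job, a different container per step, by shelling out to docker
# directly instead of using the per-job `container:` setting.
run_in_container() {
  image="$1"; shift
  # Mount the checkout at /src and run the given command inside the image.
  # Echoed here as a dry run; remove `echo` to actually execute.
  echo docker run --rm -v "$PWD:/src" -w /src "$image" "$@"
}

# Each "step" picks its own image within the same job:
run_in_container node:20 ./scripts/test-frontend.sh
run_in_container golang:1.22 ./scripts/test-backend.sh
```

Each step in the job then calls `run_in_container` with whatever image it needs, sidestepping the per-job container limitation.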
> Secrets are hard to manage, and even harder to not leak.
For a personal account, agreed; there is no way to set a secret for every repository. Recently I have been doing:
```
# Updates the EXAMPLE secret in every repo that already has it set
# (i.e. a rotation); drop the `if` to set it on every repo unconditionally.
gh repo list --json name,owner --limit 100 | jq -r '.[] | "\(.owner.login)/\(.name)"' | while read -r repo; do
  if gh secret list --repo "${repo}" --json name | jq -e '.[] | select(.name=="EXAMPLE")' > /dev/null 2>&1; then
    gh secret set EXAMPLE --repo "${repo}" --body "${EXAMPLE}"
  fi
done
```
> The expression language has all sorts of corners, like coerced types and functions with parse-time side effects!
Again, I don't really have logic inside a workflow; I call out to Make or a script or something, so it has never been an issue.