top tip: make a repo in your org for pushing all these nonsense changes to. test out your workflows there with a dummy package published to the repo, and work out all the weird edge cases/underdocumented features of Actions
once you're done, make the actual changes in your real repo. I call the test repo 'pincushion'
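A hedged sketch of the sort of throwaway workflow that lives in the pincushion repo (all names here are made up); the point is a cheap trigger you can fire over and over while you probe the edge cases:

```yaml
# .github/workflows/probe.yml -- throwaway workflow in the 'pincushion' repo
name: probe
on:
  workflow_dispatch:    # fire by hand as often as you like
  push:
    tags: ['v*']        # or via disposable tags, to exercise release-style triggers
jobs:
  dummy-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # stand-in for a real publish step; swap in npm/cargo/docker later
      - run: echo "pretend-publishing dummy package at ${GITHUB_REF_NAME}"
```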
We maintain an internal service that hosts two endpoints: /random-cat-picture (a random >512KB image plus a UUID and a text timestamp to evade caching) and /api/v1/generic.json. It lets developers and platform folks test out new ideas end-to-end, from commit to deploy behind a load balancer, and it has saved countless headaches over the years.
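(Exercising a service like that is a couple of curl calls; the host name below is invented for illustration.)

```sh
# hypothetical internal host; the endpoint paths are the ones described above
curl -s https://smoketest.internal.example.com/random-cat-picture -o cat.jpg
curl -s https://smoketest.internal.example.com/api/v1/generic.json | jq .
```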
I think the idea is that GitHub Actions calls "build.sh" or "deploy.sh", etc. Those scripts contain all the logic necessary to build, deploy, or whatever. You can run those scripts locally for testing/development, or from CI for prod/auditing.
Yes, this is what I meant! If you structure it correctly, using task runners and an environment manager, you can do everything locally with the same versions etc. E.g.:
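Something like this minimal sketch: the workflow YAML stays a thin shim, and all the real logic lives in a script you can also run locally (repo layout and script name are illustrative).

```yaml
# .github/workflows/ci.yml -- thin shim over the scripts
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # exactly the same entry point you'd run on your laptop
      - run: ./build.sh
```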
> Having your commits refer to ticket IDs from a system that no longer exists is a royal PITA
just rewrite the short links in your front-end to point to the migrated issues/PRs. write a redirect rule for each migrated issue/PR, easy
hard-coded links in commit messages are annoying; you can redirect those in the front-end too, but locally you'd have to smudge/clean them on checkout/commit (sketch below)
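A minimal sketch of git's smudge/clean machinery, assuming the links live in tracked files (filters operate on file contents, not commit messages; the URL patterns and filter name are made up):

```sh
# .gitattributes: run the 'ticketlinks' filter over markdown files
echo '*.md filter=ticketlinks' >> .gitattributes

# smudge: rewrite dead tracker URLs to the migrated issues on checkout
git config filter.ticketlinks.smudge \
  "sed -e 's|https://old-tracker.example.com/issue/|https://github.com/org/repo/issues/|g'"
# clean: restore the canonical form when staging/committing
git config filter.ticketlinks.clean \
  "sed -e 's|https://github.com/org/repo/issues/|https://old-tracker.example.com/issue/|g'"
```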
I don't want to shit on the Code to Cloud team but they act a lot like an internal infrastructure team when they're a product team with paying customers
it's not the runners, it's the orchestration service that's the problem
been working to move all our workflows to self-hosted, on-demand ephemeral runners. we were severely delayed on discovering how slipshod the Actions Runner Service was, and had to redesign to handle out-of-order or plain missing webhook events: jobs would start running before a workflow_job event was delivered
we've got it to the point where we can detect a GitHub Actions outage, and let them know by opening a support ticket, before the status page updates
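One crude way to do that detection, sketched with the gh CLI (org/repo, the 10-minute threshold, and GNU date are assumptions): if the oldest queued run has been sitting too long, something upstream is probably broken.

```sh
#!/usr/bin/env bash
# alert if the oldest queued workflow run is older than 10 minutes
oldest=$(gh api '/repos/org/repo/actions/runs?status=queued' \
  --jq '[.workflow_runs[].created_at] | min')
if [ -n "$oldest" ] && [ "$oldest" != "null" ]; then
  age=$(( $(date +%s) - $(date -d "$oldest" +%s) ))   # GNU date
  [ "$age" -gt 600 ] && echo "Actions may be down: oldest queued run is ${age}s old"
fi
```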
That's not hard: the status page is updated manually, and they wait for support tickets to confirm an issue before updating it. (Users are a far better monitoring service than any automated product.)
Webhook deliveries do suffer sometimes, which sucks, but that’s not the fault of the Actions orchestration.
I'm seeing wonky webhook deliveries for Actions service events, some dropped completely, while other webhooks work just fine. I struggle to see what else could be responsible for that behaviour. It has to be the case that the Actions service emits the events that trigger webhook deliveries, and sometimes it messes them up.
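You can at least audit the sending side with the webhook deliveries API (repo and hook id below are placeholders; requires admin on the hook) and diff it against what your receiver actually logged:

```sh
# list recent deliveries for one repo webhook, filtered to workflow_job events
gh api /repos/org/repo/hooks/123456/deliveries \
  --jq '.[] | select(.event == "workflow_job") | {guid, delivered_at, status}'
```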
why wouldn't you? these are easily compressible text files. storing even like 100x that into a 400-day window (the maximum; GitHub's default is 90) is downright cheap to do at even massive scale.
it's 2025; for log files and a spicy cron daemon (you pay for the artifact storage), it's practically free to do. this isn't like the days of Western Union, where paying $0.35 to send some data across the world was a good deal
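If you want to pin retention explicitly rather than rely on the repo default, it's one line per upload (90 days here is just an example value):

```yaml
# keep test logs around longer than the repo default
- uses: actions/upload-artifact@v4
  with:
    name: test-logs
    path: logs/
    retention-days: 90
```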
we don't need it. we need to run our CI jobs on resources we manage ourselves, and GitHub have started charging per-minute for it. apples and cannonballs
no, I'd cut the monthly seat cost and grow my user base to include more low-volume devs
but realistically, publishing a web page is practically free. you could be sending 100x as much data and I would still be laughing all the way to the bank
I think it's cheap to maintain. let me know how many devs you have, how many runs you do, and how many tests (by suite) you have, and I can do you up a quote for hosting some Allure reports. can spread the up-front costs over the 3-year monthly commitment if it helps
There are several services I know of that offer this for free for open source software, and I really doubt any commercial offering of that software would charge you extra for what is basic API usage.
Yep and the sky is blue and GitHub can charge for that too if they want to.
I don't make policy at GitHub and I don't work at GitHub, so go ask GitHub why they charge for infrastructure costs like any other cloud service. It has to do with the queueing and assignment of jobs, which is not free. Why do they charge per minute? I have no idea; maybe it was easiest to do that given the billing infrastructure they already have. Maybe they tried a million different ways and this was the most reasonable. Maybe it's Microsoft and they're giving us all the middle finger, who knows.
Yeah, if government regulations loosened to allow easier access to riskier investments by inexperienced investors or predatory VCs, there would be a lot more stock-based remuneration. Who cares about paying tax on the income generated by the difference between strike price & fair market value if the grant also comes with a cash bonus exactly equal to the tax burden (and the tax on that bonus)?
EU banks have massive IT organisations and budgets, so they can afford to pay through the nose for contractor day rates. That's the ticket to frontline grunt wealth, and also the source of a lot of the risk-averse 'bankist' mindset in a lot of experienced tech workers.
> Yeah, if government regulations loosened to allow easier access to riskier investments by inexperienced investors or predatory VCs, there would be a lot more stock-based remuneration.
No, if government regulations were loosened even more there would be exactly one thing and that is even more people ripped off by unscrupulous or outright criminal bankers. There's a reason why such terms as "accredited investor" exist.
> EU banks have massive IT organisations and budgets so can afford to pay through the nose for contractor day rates.
The only reason their budgets are so massive is that they are historically locked into mainframes and code untouched since the 70s, and people who (still) do COBOL etc. can command these high rates. Fixing up the cruft would require investments so high that contractor day rates pale in comparison, and many banks attempting IT overhauls have paid billions only to inevitably fail.
Banking IT is one hell of a shitfest, which I wouldn't dare to touch with a ten-foot pole, even just from a technological POV - the fact that IT and its needs are generally laughed at across the industry only confirms my position. Fintechs are different, but have their own issues - funding, data protection, questionable ethical decisions, reactions to security issues...