falsedan's comments

top tip: make a repo in your org for pushing all these nonsense changes to; test out your workflows with a dummy package published to the repo; work out all the weird edge cases and underdocumented features of Actions

once you're done, make the actual changes in your real repo. I call the test repo 'pincushion'
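e.g. the kind of throwaway workflow you might iterate on in there; this is a sketch that assumes an npm package published to GitHub Packages, and every name in it is made up:

```yaml
# Hypothetical "pincushion" workflow: publish a dummy package on every
# push, so Actions quirks surface here instead of in the real repo.
name: Dummy Publish

on: push

permissions:
  contents: read
  packages: write

jobs:
  publish:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          # Points npm at GitHub Packages and wires up auth below.
          registry-url: https://npm.pkg.github.com
      # Assumes package.json has a name scoped to your org, e.g. @your-org/dummy.
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```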


We call ours "bombing-range"

We maintain an internal service that hosts two endpoints: /random-cat-picture (a random >512KB image + UUID + text timestamp, to evade caching) and /api/v1/generic.json, which lets developers and platform folks test new ideas from commit to deploy, behind a load balancer, end to end. It has saved countless headaches over the years.
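For a feel of how a pipeline might poke it end to end, here's a sketch; the hostname and workflow are hypothetical, only the two endpoint paths are as described above:

```yaml
# Hypothetical smoke test run after each deploy: hit both endpoints
# through the load balancer and confirm the cache-busting works.
name: E2E Smoke

on: deployment_status

jobs:
  smoke:
    runs-on: ubuntu-24.04
    steps:
      - name: Check the generic JSON endpoint.
        run: curl --fail -s https://bombing-range.internal.example/api/v1/generic.json
      - name: Check that the cat-picture endpoint evades caches.
        run: |
          # Two fetches should never be byte-identical (UUID + timestamp).
          a=$(curl -s https://bombing-range.internal.example/random-cat-picture | sha256sum)
          b=$(curl -s https://bombing-range.internal.example/random-cat-picture | sha256sum)
          test "$a" != "$b"
```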


a display of great wisdom, nice


> You can run all your CI locally

if you can, you don't need CI. we can't (too slow, needs an audit trail)


I think the idea is GitHub Actions calls "build.sh" or "deploy.sh" etc. Those scripts contain all of the logic necessary to build or deploy or whatever. You can run those scripts locally for testing/development, or from CI for prod/auditing.
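Something like this, as a minimal sketch (the script name is from the pattern described above; the rest is assumed):

```yaml
# Sketch: the workflow is a thin shim; all real logic lives in a script
# you can run locally with no CI involved.
name: Build

on: pull_request

jobs:
  build:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - name: Build (the same entry point you'd run locally).
        run: ./build.sh
```

Locally it's just `./build.sh`; CI adds the controlled environment and the audit trail.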


oh that makes sense. I thought the OP was suggesting running CI locally instead of a workflow on remote runners


Yes this is what I meant! If you structure it correctly using task runners and an environment manager you can do everything locally using the same versions etc. E.g.

```yaml
name: Continuous Integration (CI)

on: pull_request

permissions:
  contents: read

jobs:
  formatting:
    name: Formatting
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
        language: [rust, shell, python]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Check formatting.
        run: nix develop -c make check-${{ matrix.language }}-formatting

  linting:
    name: Linting
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
        language: [rust]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Check linting.
        run: nix develop -c make check-${{ matrix.language }}-linting

  compile:
    name: Compile
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Compile.
        run: nix develop -c make compile

  unit-test:
    name: Unit Test
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Unit test.
        run: nix develop -c make unit-test
... ```


> Having your commits refer ticket ID from system that no longer exists is royal PITA

just rewrite the short links in your front-end to point to the migrated issues/PRs. write a redirect rule for each migrated issue/PR, easy

hard-coded links in commit messages are annoying; you can redirect those in the front-end too, but locally you'd have to smudge/clean them on checkout/commit



I don't want to shit on the Code to Cloud team but they act a lot like an internal infrastructure team when they're a product team with paying customers


I think you could learn a lot about the other use cases if you asked some genuine questions and listened with intent


it's not the runners, it's the orchestration service that's the problem

been working to move all our workflows to self-hosted, on-demand ephemeral runners. we were severely delayed on finding out how slipshod the Actions Runner Service is, and had to redesign to handle out-of-order or plain missing webhook events. jobs would start running before the workflow_job event was even delivered

we've got it to the point where we can detect a GitHub Actions outage and let them know (by opening a support ticket) before the status page updates


> before the status page updates

That’s not hard: the status page is updated manually, and they wait for support tickets to confirm an issue before updating it. (Users are a far better monitoring service than any automated product.)

Webhook deliveries do suffer sometimes, which sucks, but that’s not the fault of the Actions orchestration.


I'm seeing wonky webhook deliveries for Actions service events (some dropped completely) while other webhooks work just fine. I struggle to see what else could be responsible for that behaviour: the Actions service has to be emitting the events that trigger webhook deliveries, and sometimes it messes them up.


The orchestration service has been rewritten from scratch multiple times, in different languages even. How anyone can get it this wrong is beyond me.

The one for Azure DevOps is even worse, though. Pathetic.


if you were paying me a monthly license fee for each developer working on your repos, I'd probably consider it


What happens if I am, and now my developers suddenly start to produce changes much faster? Like, one developer now produces the volume of five.

Would you keep charging the same rate per head?


why wouldn't you? these are easily compressible text files. storing even 100x the volume for a 400-day retention window (the maximum; the GH default is 90) is downright cheap, even at massive scale.

it's 2025: for log files plus a spicy cron daemon (you pay for the artifact storage), it's practically free to do. this isn't the Western Union days, when paying $0.35 to send some data across the world was a good deal
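for reference, retention is a one-line knob per upload; a sketch, with the artifact name and path made up:

```yaml
# Sketch: per-artifact retention override. GitHub's default is 90 days;
# 400 is the maximum for private repos (public repos cap at 90).
- name: Keep test logs around.
  uses: actions/upload-artifact@v4
  with:
    name: test-logs
    path: logs/
    retention-days: 400
```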


If that's the case, why all the fuss?

All the people complaining can just tap into this almost-free, accessible resource you're referring to instead.


we don't need it. we need to run our CI jobs on resources we manage ourselves, and GitHub have started charging per minute for it. apples and cannonballs


no, I'd cut the monthly seat cost and grow my user base to include more low-volume devs

but realistically, publishing a web page is practically free. you could be sending 100x as much data and I would still be laughing all the way to the bank


Publishing the page is only the last step. It's orchestrating the stuff THEN publishing it.

If you think that's easy, do it for me. I have some projects to migrate; give me the link to your service.


> If you think that's easy

I think it's cheap to maintain. let me know how many devs you have, how many runs you do, and how many tests (by suite) you have, and I can do you up a quote for hosting some Allure reports. can spread the up-front costs over the 3-year monthly commitment if it helps


There are several services I know of that offer this for free for open-source software, and I really doubt any commercial offering of that software would charge you extra for what is basic API usage.


they charge you for artifacts and logs separately, already


Yep and the sky is blue and GitHub can charge for that too if they want to.

I don’t make policy at GitHub and I don’t work at GitHub, so go ask GitHub why they charge for infrastructure costs like any other cloud service. It has to do with the queueing and assignment of jobs, which is not free. Why do they charge per minute? I have no idea; maybe it was easiest to do that given the billing infrastructure they already have. Maybe they tried a million different ways and this was the most reasonable. Maybe it’s Microsoft and they’re giving us all the middle finger, who knows.


I don't think you're responsible for anything more than your own comments.

I added some context that contradicts your assumption that the increased fees were to cover hosting/storage/scheduling costs.


Yeah, if government regulations loosened to allow easier access to riskier investments by inexperienced investors or predatory VCs, there would be a lot more stock-based remuneration. Who cares about paying tax on the income generated by the difference between strike price & fair market value if the grant also comes with a cash bonus exactly equal to the tax burden (plus the income tax on that bonus, i.e. a full gross-up)?

EU banks have massive IT organisations and budgets, so they can afford to pay through the nose for contractor day rates. That's the ticket for frontline grunt wealth, and also the source of a lot of the risk-averse 'bankist' mindset in many experienced tech workers.


> Yeah, if government regulations loosened to allow easier access to riskier investments by inexperienced investors or predatory VCs, there would be a lot more stock-based remuneration.

No, if government regulations were loosened even more there would be exactly one thing and that is even more people ripped off by unscrupulous or outright criminal bankers. There's a reason why such terms as "accredited investor" exist.

> EU banks have massive IT organisations and budgets so can afford to pay through the nose for contractor day rates.

The only reason their budgets are so massive is that they are historically locked into mainframes and code untouched since the 70s, and people who (still) do COBOL etc. can command these high rates. Fixing up the cruft would require investments so large that the contractor day rates pale in comparison, and many banks attempting IT overhauls have paid billions only to fail.

Banking IT is one hell of a shitfest, which I wouldn't dare to touch with a ten-foot pole, from a technological POV alone; the fact that IT and its needs are generally laughed at across the industry only confirms my position. Fintechs are different, but have their own issues: funding, data protection, questionable ethical decisions, reactions to security issues...

