Hacker News | montroser's comments

It's only one sliver of the problem here, but -- do you know how often I update my code editor? Like once every five or ten years, to the version that was released a year or two ago.

I do my own commits by hand so it's moot anyway, but there's a fair bit of "leopards ate my face" going on in the GitHub thread.


VSCode updates itself what feels like daily so everyone is on the bleeding edge. There are upsides and downsides to that but it doesn’t feel like a trade-off many have made purposefully.

You can disable auto-updates for VS Code, and you can install older versions of it.

VS Code is updated monthly. Increasingly, they also release a bugfix update to the monthly release a week or two later.

They switched to a weekly release cycle, presumably to compete with the perceived iteration speed of the many VS Code forks.

Welcome to 2026, in which a browser is an operating system!

OS bloat is no less of a problem.

This tracks. Tasks that used to be a day or two of grunt work are now an hour with Claude.

And there is a lot of that type of work to do if you're trying to grow a business. But something in there should be trying to be exceptional, or else you have no moat. Claude will probably not be able to breeze through that part with the same ease...


Average is all you need, if your needs are average.


But, you see, our needs are above average because we target above-average exits, so we only hire from the top 1% of software engineers, blah, blah, yadda, yadda, etc.

The Business simply cannot admit that it's really doing nothing above average. If it did, investment would dry up.


That is correct. And if you need more you can get it as well.


I liken it to the Ikeafication of furniture. To a great majority, such as my college self, it was preferable and desirable. As I've made more money, I've wanted something better.

There's a market for both, but the furniture slop of Ikea is dominant.


> I sometimes wish I had never put my name on it so I could just take the money without harming my reputation, but I did, so I’m stuck with being honourable.

This distills down to: "I don't want to be honourable." They signaled right from the beginning.


> It just adds an extra layer of abstraction, which I happen to also find unnecessary.

Can't tell if you're talking about React or Tailwind


I like the big link to their Twitter profile at the top and bottom of that post!


Not sure if I actually want this (pretty sure I don't) -- but very cool that such a thing is now possible...


That is a lot of complaining for someone with no better alternative to suggest.

And there is your answer to the clickbait title -- we're still using markdown because there's no alternative that is so much better that it is going to dethrone the one that has all the momentum from being the first good-enough take on this that got any traction.


It has been pretty rough. Their own numbers report just a single `9` for Actions in Feb 2026 with 98% uptime. But that said -- I don't get the 90% number.

Anecdotally, it seems believable that Actions barfed 1 in 50 times (2%) in Feb. Which is not very nice, but it wasn't 1 in 10 times (10%).


It looks like the aggregate stats are more of a Venn diagram than an average: if any one of the N services is down, the aggregate is considered down. I don't think this is an accurate way to calculate it. It should be weighted, or should in some way show partial outages. This belief is derived from the Google SRE book, in particular chapters 3 (Embracing Risk) and 4 (Service Level Objectives):

https://sre.google/sre-book/embracing-risk/

https://sre.google/sre-book/service-level-objectives/
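The difference between the two aggregation approaches can be sketched in a few lines of Python. The per-service uptime figures and the weights below are entirely made up for illustration; they are not GitHub's actual numbers or methodology:

```python
# Hypothetical monthly uptime fractions for a few services.
uptimes = {
    "git": 0.999,
    "actions": 0.98,
    "pages": 0.97,
    "api": 0.995,
}

# "Venn diagram" style: the platform counts as down whenever ANY
# single service is down. If the outage windows never overlap,
# the downtimes simply add up (a worst-case bound).
venn_uptime = 1.0 - sum(1.0 - u for u in uptimes.values())

# Weighted average: weight each service by how much users depend
# on it (weights are invented here and must sum to 1).
weights = {"git": 0.6, "actions": 0.25, "pages": 0.05, "api": 0.1}
weighted_uptime = sum(weights[s] * uptimes[s] for s in uptimes)

print(f"any-service-down aggregate: {venn_uptime:.3f}")   # ~0.944
print(f"weighted aggregate:         {weighted_uptime:.4f}")  # ~0.9924
```

With these numbers, the "any service down" view reports well under one 9, while the weighted view stays above two 9s, which is exactly why the choice of aggregation matters so much.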


If you're using all services, then any partial outage is essentially a full outage. Of course, you can massage the numbers to make them look nicer in the way you described, but the conservative approach is better for customers. If you insist, one could compute this metric for selected services only, to "better reflect users".

That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.


> That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.

It's definitely bad no matter how you slice the pie.

If GH pages is not serving content, my work is not blocked. (I don't use GH pages for anything personally)


That's how you count uptime. Your system is not up if it keeps failing when the user does something.

The problem here is the specification of what the system is. It's a bit unfair to call GH a single service, but it's how Microsoft sells it.


As a “customer”, I consider github down if I can’t push, but not down if I can’t update my profile photo (literally did this today, sending out my github to potential employers for the first time in a long time). This stuff is notoriously hard to define


> That's how you count uptime.

It's not how I and many others calculate uptime. There is no uniformity, especially when you look at contracts.


Thinking back to when I was hosting, I think telling a customer "your web server was running fine, it's just that the database was down" would not have been received well.


I mean I think it's useful. It answers the question, "what percentage of the time can I rely on every part of GitHub to work correctly?". The answer seems to be roughly 90% of the time.


I don't use half of the services, so the answer is not straightforward:

https://mrshu.github.io/github-statuses/


Nobody cares about every part of GitHub working correctly. I mean, ok, their SREs are supposed to, but tabling the question of whether that's true: if tomorrow they announced a distributed no-op service with 100% downtime, you should not have the intuition that the overall availability of the platform is now worse.


In a nutshell, why would the consumer care (for the SLO) about how the vendor sliced the solution into microservices?


It will depend on the contract.

When I was at IBM, they didn't meet their SLOs for Watson, and customers got a refund for that portion of their spend.

