Hacker News: 152334H's comments

why does this even occur? if it's merely compute limitations, why not just 429 some requests?

Have you run a system in production? There are a multitude of reasons that a system can go down. There's no indication so far from Anthropic that this was merely compute limitations.
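On the 429 question itself: shedding excess load instead of falling over is a real technique, though we have no idea whether it applies to Anthropic's outage. As a minimal sketch (entirely hypothetical, not how their stack actually works), a server can cap in-flight requests and return 429 the moment no slot is free, rather than queueing until everything times out:

```python
import threading

class LoadShedder:
    """Reject requests over a concurrency cap with 429 instead of queueing."""

    def __init__(self, max_inflight: int):
        self._slots = threading.BoundedSemaphore(max_inflight)

    def handle(self, work) -> int:
        # Try to claim a slot without blocking; shed load if none are free.
        if not self._slots.acquire(blocking=False):
            return 429  # Too Many Requests
        try:
            work()
            return 200
        finally:
            self._slots.release()
```

The catch, and likely part of the answer to "why not just 429": this only helps when compute is the bottleneck. A broken dependency (auth, database, bad deploy) fails requests regardless of how few you admit.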

> There are a multitude of reasons that a system can go down.

Start doing post mortems then!

At the very least, knowing they were using some off-the-shelf service that's shitting the bed would inform others to stay away from it - like an IAM solution, or a particular DB in a specific configuration backing whatever they've written, or a given architecture at a given scale.

Right now it's a complete black box that sometimes goes down, and we don't get much information about why it's so much less stable than other options (hey, if they just came out and said "We're growing 10x faster than we anticipated and systems X, Y and Z are not architected for that," that'd also be useful signal).

Or, who knows, maybe it's just bad deploys - seems like it's back for me and claude.ai UI looks a bit different hmmm.


I have no inside knowledge of Anthropic. But having done a lot of postmortems in general, one of the key dynamics that routinely comes up is "we know we keep shipping breakages, and we know these new procedures would prevent many of them, but then we wouldn't be able to deliver new stuff so quickly". Given where Anthropic is at and what they believe about the future of software development, that's a tradeoff that they may very well be intentionally not making.

It's most likely a "You're totally right, this fix broke production! Let me fix it"

Yeah, this is not just inference. First thing for me was an MCP I use went down in Claude Code, models still worked. Now "API Error: 529 Authentication service is temporarily unavailable."
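That 529 is a non-standard status (Anthropic documents it as an "overloaded" code), and the pragmatic client-side response to both 429 and 529 is the same: retry with exponential backoff and jitter. A generic sketch, where `send` is a hypothetical zero-arg callable returning a status code:

```python
import random
import time

# 529 is non-standard; Anthropic uses it for "overloaded". The rest are
# conventional retryable statuses.
RETRYABLE = {429, 500, 502, 503, 529}

def call_with_backoff(send, max_attempts=5, base=0.5):
    """Retry `send` on retryable HTTP statuses, backing off between tries."""
    status = None
    for attempt in range(max_attempts):
        status = send()
        if status not in RETRYABLE:
            return status
        # Exponential backoff with jitter to avoid synchronized retry storms.
        time.sleep(base * (2 ** attempt) * random.uniform(0.5, 1.0))
    return status
```

Of course, backoff only papers over a transient outage; it does nothing for the multi-hour kind being discussed here.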

Thank you


homely and relatable, but why promoted on HN?

How many here have read Burmese Days, had the bookworm's childhood, and are imbued with that sense of political worldliness?


HN is for anything that gratifies intellectual curiosity: https://news.ycombinator.com/newsguidelines.html. Historical and/or unexpected materials are welcome here! Having them on the site is a long tradition. (As is the "why is this on HN" comment, of course.)

It sounds like you know your Orwell - want to share something about that?


  Hacker News Guidelines

  What to Submit

  On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity. 
~ https://news.ycombinator.com/newsguidelines.html


Haven't read the book, but points two and three definitely struck some bells in the back clocktowers of my mind.

More generally, reading a bit of Orwell was inescapable in my schooling, but I sought out 1984 myself. I discovered I had kind of a thing for both utopias and dystopias.

And as I contemplate things I might write or compose, I do note that outrage towards this regime is very much in the mix of my motivations.


Wow that's a lot of snobbish superiority.


If it does turn out that the crash originates from a browser exploit, the absence of a crash on an older version means you should expect to be more at risk, not less.


The article's frame is concerning, but is it right to attribute the arrest to zero-click spyware? How is the process of the police's discovery known?


"NPU" seems to refer to trainium only?


Best to treat it with some emotional distance. It's not like the optimization process feels it.

Whether it be human dullards, scripted botfarms, or outright malefactors -- none of them experience shame. If they register it at all, it's as one of many factors to boost engagement.


Fuck boosting engagement.


There is an ever-dwindling minority of people who think "fuck boosting engagement" is a viable strategy in this era. Online, engagement is everything. We have all, through social media and feed algorithms, been reduced to acting out the most insipid style of court-jester antics to try and garner attention; the SNR is just too low for good content to thrive.


The doubtful commentary misses the obvious point: had Calif been slightly more responsible in their harness design -- and in particular, in their definition of what constitutes a real bug -- it'd be rather unsurprising if Claude correctly dug some up.


Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

What 'tough love' can be given to someone who, having been unreasonable enough throughout their life to invite scorn and retort from every human they meet, is happy to interpret any engagement at all as a sign of approval?


> How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

And even if it _could_, note, from the article:

> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.

The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.


> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

Markets don't optimize for what is sensible, they optimize for what is profitable.


It's not market driven. AI is ludicrously unprofitable for nearly all involved.


The profit appears to lie in capturing the political class and its associated lobbies and monied interests.


> clear thinking

Most humans working in tech lack this particular attribute, let alone tools driven by token-similarity (and not actual 'thinking').


> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

Maybe it's not so sensible to offload the responsibility of tobacco addiction to tobacco companies?


It's almost as if being a therapist is an actual job that takes years of training and experience!

AI may one day rewrite Windows, but it will never be Counselor Troi.


Implying that programming is not an actual job that takes years of training and experience

To be clear I don't think the AI can do either job


Well, unless insurance companies figure out they can make more money by pushing everyone onto AI [step-]therapy instead of actual therapy


Come on, I'm sure Dario can find a nice tight bodysuit for claude


flagged for aigc

