> What's really happening is that a few employees realized they can game the system by turning on a firehose of AI slop and pushing 10x the LOC than any other engineer (with or without AI)

Did they figure out how to game the system? Or was the system set up with incentives to produce exactly this outcome?


They figured out how. Mind you, the system was set up with incentives to produce this outcome, but before AI it wasn't realistic to produce that many lines of code even if you wanted to, so nobody was gaming it badly enough to break it. (It was always broken; the breakage was just acceptable before.)

The new system is immature and hence open to exploitation. This is eventually going to destroy some companies.

The article you are thinking of was likely written by Axel Rietschin who worked on Azure core compute team.

https://isolveproblems.substack.com/p/how-microsoft-vaporize...

HN thread: https://news.ycombinator.com/item?id=47616242


Wow. Yikes. I never liked Azure, but this level of dysfunction is just astonishing.

Yeah, me too. I moved all my public projects to Codeberg and my internal repos to self-hosted Forgejo.

Hosting Forgejo is really easy as well. Since it ships as a single binary, it's simple to run with almost zero maintenance.


Well, outages seem to be distributed across all days except weekends, which suggests people fucking around with stuff is a major factor.

Surely it just means more people working, resulting in more load, resulting in more outages?

Or even both. In any kind of continuous deployment, you'd expect outages at the point of deployment, or shortly thereafter as the unintended consequences ripple.

Then the load during working days amplifies those ripples into outages.


Most outages are caused by changes made by humans ("actors"?). Very rarely is it "people just dig our stuff so much we can't keep up"; more often it's "we didn't think about this performance drawback when we built thing X, and now it's hurting us". And of course there are more outages when you try to fix those issues without fully considering the scope and impact.

Why is there even a “moderation queue”? Aren’t these people’s private recordings?

This is my question too. I get moderating things that people are posting publicly. Not being familiar with the device and how it works, I'd assume all footage is uploaded to the user's cloud account even if it's never publicly posted. Since this is cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted content isn't being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it.

I’m betting this is going to some ML / data-labelling pipeline.

Yeah, moderation may instead be labelling in this case. It's likely the same type of firm handles both sorts of work on behalf of FAANG.

Sounds plausible.

We could also toss a vibe-coded mess on top of this and probably get closer to the truth.


The article itself is ambiguous on this point: "At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI."

That could be moderation, or it could be labelling new examples for training/validation.


This feels like an instance of weasel words. One can scarcely imagine any reason to do content moderation over people’s own private and personally consumed data.

Yes, but we also don't want people live-streaming murder and suicide, so there's detection and moderation in place.

Private recordings aren't public live streams.

Databasus does not do PITR.


Is that info up-to-date? Their readme states:

  **Backup types**
  
  - **Logical** — Native dump of the database in its engine-specific binary format. Compressed and streamed directly to storage with no intermediate files
  - **Physical** — File-level copy of the entire database cluster. Faster backup and restore for large datasets compared to logical dumps
  - **Incremental** — Physical base backup combined with continuous WAL segment archiving. **Enables Point-in-time recovery (PITR)** — restore to any second between backups. Designed for disaster recovery and near-zero data loss requirements
EDIT: It seems PITR was added this March (for PostgreSQL):

https://github.com/databasus/databasus/issues/411
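
For anyone unfamiliar with how that works: PITR combines a full snapshot with replay of a timestamped change log up to a recovery target. Here's a toy Python sketch of the idea; it's purely illustrative (real engines like PostgreSQL archive binary WAL segments, and this is not databasus's actual code):

  # Toy sketch of point-in-time recovery: a base backup is a full
  # snapshot, and the "WAL" is an append-only log of timestamped
  # changes made after that snapshot.
  import copy

  base_backup = {"balance": 100}   # snapshot taken at t=0
  wal = [                          # changes archived after the snapshot
      (1, ("balance", 150)),
      (2, ("balance", 75)),
      (3, ("balance", 200)),
  ]

  def restore(target_time):
      """Load the snapshot, then replay log entries up to the target."""
      state = copy.deepcopy(base_backup)
      for ts, (key, value) in wal:
          if ts > target_time:
              break                # stop at the recovery target
          state[key] = value
      return state

  print(restore(2))                # "restore to any second between backups"
  # -> {'balance': 75}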


Databasus does support PITR.

Where do they state it is aliens?


I prefer Firefox as it is still the only feasible non-Chromium alternative.

I have been a Firefox user for more than two decades, since back when it was called Phoenix and later Firebird.


I almost never use YouTube anymore. They seem less and less relevant.


"I see you enjoy cooking, ancient history and building terrariums. Would you also be interested in a couple of tiktok dance short videos?" ugh..

