
How are you using it? I'm curious whether you hit the limit so quickly because you're running it with Claude Code, so it's loading your whole project into its context, making tons of iterations, etc., or whether you're using the chat and just asking focused questions, having it build out small functions, or validating the code quality of a file, and still hitting the limit with that.

Not because I think either way is better, just because I personally work well with AI in the latter capacity and have been considering subscribing to Claude, but I don't know how restrictive the usage limits are.


I'm not sure I agree - on the one hand, yes, it's trivial to generate pages stuffed with keywords. But on the other hand, Google is already interpreting search intent, and while this is fine for some things, it is extraordinarily frustrating when you're looking for something specific.

Often I do want exact matches, and Google refuses to show them no matter what special characters I use to try to modify the search behaviour.

Personally I'd rather search engines continue to return exact matches and just de-rank content that has poor reputation, and if I want to have a more free-form experience I'll use LLMs instead.


Something that's been on my mind for a while now is shared moderation - instead of having a few moderators who deal with everything, distribute the moderation load across all users. Every user might only have to review a couple of posts a day, so it should be a negligible burden, and each post that requires moderation goes to multiple users, so that if there's disagreement it can be escalated to more senior/trusted users.
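
A minimal sketch of the mechanics I'm imagining (all names and thresholds here are made up):

    import random
    from collections import defaultdict

    REVIEWERS_PER_POST = 3  # each flagged post is seen by several users

    def assign_reviews(flagged_posts, users):
        # Spread flagged posts randomly across the userbase so each
        # user only gets a couple of reviews per day.
        assignments = defaultdict(list)
        for post in flagged_posts:
            for user in random.sample(users, REVIEWERS_PER_POST):
                assignments[user].append(post)
        return assignments

    def resolve(votes):
        # votes maps reviewer -> "ok" or "remove". Unanimous verdicts
        # are applied directly; anything contested is escalated.
        verdicts = set(votes.values())
        if len(verdicts) == 1:
            return verdicts.pop()
        return "escalate"  # push to more senior/trusted users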

This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.


I don't know if it's true or not, but I remember reading about a person who did community cheat reviews for a game like CS or something. They had numerous bot accounts and spent an hour a day on it, set up so that when they reviewed a video the bots would do the same.

All the while they were doing legitimate reporting, but when they came across their own cheating account they'd report it as not cheating. Supposedly this person got away with it for years because their reputable community reporting had high alignment scores.

I know one exception doesn't mean it's not worth it, but we should acknowledge the potential for abuse. I'd still rather have one occasionally ambitious abuser than countless low-effort ones.


Yeah, I can definitely see that being a threat model. In the gaming case I think it's harder because it's more of a general reputation system based on how people feel while playing with you, whereas on a website every post can be reviewed by multiple parties and the evidence is right there. But certainly I would still expect some people to try to maximize their reputation and use it to push through content that should be more heavily moderated, and in the degenerate case the bad actors comprise so much of the userbase that they peer-review their own content.

So the Slashdot model?

Everyone gets a random set of messages to review and if they agree with the original judgement, stuff happens.


Is that different from voting on comments?

It's more like jury duty where the decision of one or a few people can have a huge impact (modded or not).

I see this kind of testing as more for regression prevention than anything else. The tests pass if the code handles all possible return values of the dependencies correctly, so if someone changes your code such that the tests fail, they have to either fix the errors they've introduced or change the tests, if the desired functionality has really changed.

These tests won't detect if a dependency has changed, but that's not what they're meant for. You want infrastructure to monitor that as well.
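
As a sketch of what I mean (the handle_user function and its fetch_user dependency here are hypothetical):

    from unittest import mock
    import pytest

    # Hypothetical code under test: handle_user() takes a fetch_user()
    # dependency that can return a user dict or None.
    def handle_user(user_id, fetch_user):
        user = fetch_user(user_id)
        if user is None:
            return "anonymous"
        return user["name"]

    @pytest.mark.parametrize("returned, expected", [
        ({"name": "alice"}, "alice"),  # normal case
        (None, "anonymous"),           # dependency reports a missing user
    ])
    def test_handle_user(returned, expected):
        fetch_user = mock.Mock(return_value=returned)
        assert handle_user("u1", fetch_user) == expected
        fetch_user.assert_called_once_with("u1")

If someone later changes handle_user to assume the user always exists, the None case fails and they have to either fix the regression or consciously update the test.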


Mass-market SaaS will generally just use other products to handle this stuff. And if there does happen to be a leak, they just say sorry and move on; there are very few consequences for security failures.


You're right, but guess who advises that architecture and implements it... the principal developer/architect.

You can use good security tools, badly.


What use is privacy and security when all our data lives in a DC in us-east-1?


I see tests as more of a test of the programmer's understanding of their project than anything else. If you deeply understand the project requirements, API surface, failure modes, etc., you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.

AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
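
For example, the kind of scaffolding I'd happily hand off (the order_total logic here is made up; the parametrized cases are the part I'd write myself):

    import pytest

    # Hypothetical function under test.
    def order_total(items, discount=0.0):
        subtotal = sum(price * qty for price, qty in items)
        return round(subtotal * (1 - discount), 2)

    # The cases encode my understanding of the requirements; the
    # surrounding boilerplate is what AI is good at churning out.
    @pytest.mark.parametrize("items, discount, expected", [
        ([(10.0, 2)], 0.0, 20.0),   # no discount
        ([(10.0, 2)], 0.25, 15.0),  # percentage discount
        ([], 0.5, 0.0),             # empty order edge case
    ])
    def test_order_total(items, discount, expected):
        assert order_total(items, discount) == expected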


Well Wang used to live with Altman. What value that actually provides, I don't know. But it seems to be why he's worth this much.


ToS didn't stop the companies that built those models, and it won't stop the companies that bootstrap off them. Until an AI company eats a multi-billion-dollar lawsuit for unlawful data use, they will continue to operate this way.


Didn’t Anthropic already eat a $1.5 billion lawsuit?


> Until an AI company eats a multi billion dollar lawsuit for unlawful data use they will continue to operate this way

If only. That's my dream: massive copyright lawsuits against all of these AI players, and maybe the courts can do something good for a change and put an end to all of this AI bullshit.


Social media is no longer social - it's just media. At least for most people, anyway. The average user, and probably kids even more so, is just scrolling through.

If you're posting as well, or at least commenting on stuff and having discussions with people you know (even if you just know them online), I think that's fine. Like forums, or being in group chats with friends on Facebook, or sharing photos you take with a specific community.

It's when you're only consuming (like scrolling TikTok or Instagram), or when your comments are written for the algorithm rather than for actual discussion (like on Reddit, or even Hacker News to an extent), that social media is an issue.


What is meant by writing comments for the algorithm?


Or upvotes might be a better example, at least for Reddit/Hacker News. The idea is that the comments are sorted based on some algorithm, whether that be popularity or something else, and commenters are trying to optimize for that. In traditional forums comments are sorted linearly and it's more about having a discussion with others, but when comments are surfaced by other metrics it becomes less about the discussion and more about gaming those metrics.
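
Concretely, the difference between the two sorts (using the commonly cited approximation of HN's ranking formula; the numbers are illustrative):

    comments = [
        {"text": "old but popular", "votes": 50, "age_hours": 10},
        {"text": "new, few votes", "votes": 2, "age_hours": 1},
    ]

    def feed_score(c):
        # Commonly cited approximation of HN's gravity formula.
        return (c["votes"] - 1) / (c["age_hours"] + 2) ** 1.8

    chronological = sorted(comments, key=lambda c: c["age_hours"])  # forum-style
    by_algorithm = sorted(comments, key=feed_score, reverse=True)   # feed-style

Commenters on the algorithmic side are effectively optimizing feed_score rather than the conversation.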


Thanks for sharing this, it looks like a useful overview of the field. I wish the first edition had come out a year or two earlier - it would have been a great resource for my undergrad research work. Back then there were no books covering CMA-ES, surrogate models, and Gaussian processes all in one place; everything was scattered across different books and papers, with varying levels of technical depth and differing notations.

