Hacker News

It's not simply "training". What's the point of training on prompts? You can't learn the answer to a question by training on the question.

For Anthropic at least it's also opt-in not opt-out afaik.



There is a huge point: those prompts have answers, followed by more prompts and answers. Looking at an AI answer in hindsight, you can often tell from the next messages whether it was a good or bad response. So you can derive a preference score, train a preference model, and then do RLHF on the base model. You also get separation (privacy protection) this way.
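A minimal sketch of that idea: mine logged conversations for (prompt, response, score) triples by scoring each assistant reply from the user's follow-up message. Everything here is illustrative — the signal lists, function names, and the alternating-message conversation format are assumptions, and real pipelines would use far stronger heuristics or a learned classifier.

```python
# Hypothetical sketch: derive preference labels for a reward/preference
# model from follow-up messages in logged conversations.

NEGATIVE_SIGNALS = ("that's wrong", "doesn't work", "try again", "no,")
POSITIVE_SIGNALS = ("thanks", "perfect", "that worked", "great")

def preference_score(next_user_message: str) -> float:
    """Crude heuristic: score an assistant reply by the user's reaction."""
    msg = next_user_message.lower()
    if any(s in msg for s in NEGATIVE_SIGNALS):
        return 0.0
    if any(s in msg for s in POSITIVE_SIGNALS):
        return 1.0
    return 0.5  # neutral / unknown reaction

def extract_preference_pairs(conversation: list[dict]) -> list[tuple]:
    """Turn a logged conversation into (prompt, response, score) triples.

    `conversation` is a list of {"role": "user"|"assistant", "content": str}
    messages in order. Each user -> assistant -> user window yields one triple.
    """
    triples = []
    for i in range(len(conversation) - 2):
        a, b, c = conversation[i], conversation[i + 1], conversation[i + 2]
        if a["role"] == "user" and b["role"] == "assistant" and c["role"] == "user":
            triples.append((a["content"], b["content"], preference_score(c["content"])))
    return triples
```

The triples could then feed a pairwise preference model (chosen vs. rejected responses to the same prompt), which is the input RLHF needs — without the raw logs ever touching the base model's training set directly.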


I think the prompts might actually be really useful for training, especially for generating synthetic data.
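One hypothetical shape for that: use real user prompts only as seeds, and have a stronger model generate fresh answers, so the resulting dataset contains realistic questions but no user-written content beyond the prompt itself. `teacher_model` below is a stub standing in for any capable model.

```python
# Hypothetical sketch: logged prompts as seeds for synthetic training data.

def teacher_model(prompt: str) -> str:
    # Placeholder for a call to a strong model that writes the answer.
    return f"[synthetic answer to: {prompt}]"

def build_synthetic_dataset(logged_prompts: list[str]) -> list[dict]:
    """Pair each unique real prompt with a freshly generated answer."""
    seen = set()
    dataset = []
    for p in logged_prompts:
        key = p.strip().lower()
        if key in seen:  # drop duplicate prompts
            continue
        seen.add(key)
        dataset.append({"prompt": p, "response": teacher_model(p)})
    return dataset
```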


Yeah, and that's a little more concerning to me than training, because it means employees have to read your prompts. But you can imagine various ways they could preprocess or summarize them to anonymize them.
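For instance, one simple preprocessing step before any human sees a prompt would be regex-based redaction of obvious identifiers. This is only a sketch — the patterns are illustrative and a real anonymization pipeline would need much more (named-entity scrubbing, summarization, etc.).

```python
# Hypothetical sketch: redact obvious identifiers before human review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IP": re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a <LABEL> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```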


I don't think it means they have to read your prompts, but it's very probable that some would be read during debugging, etc.



