Hacker News

This is clearly untrue. It's an input, a black box, then an output. OpenAI has 100% control over the output. They may not be able to directly control what comes out of the black box, but (a) they can tune the model, and they undoubtedly will, and (b) they can control what comes after the black box. They can, for example, simply block URLs.
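The post-hoc filtering described here could be a minimal sketch like the following, where model output is scanned for URLs and anything not on an allowlist is redacted before reaching the user. The allowlist contents and function names are illustrative, not anything OpenAI actually ships:

```python
import re
from urllib.parse import urlparse

# Match http(s) URLs in free text (a deliberately simple pattern for this sketch).
URL_RE = re.compile(r"https?://[^\s)>\"']+")

# Hypothetical allowlist of hosts the operator trusts.
ALLOWED_HOSTS = {"docs.python.org", "github.com"}

def filter_urls(model_output: str) -> str:
    """Replace URLs pointing at non-allowlisted hosts with a placeholder."""
    def redact(match: re.Match) -> str:
        url = match.group(0)
        host = urlparse(url).netloc.lower()
        return url if host in ALLOWED_HOSTS else "[link removed]"
    return URL_RE.sub(redact, model_output)

print(filter_urls("See https://github.com/x and https://evil.example/y"))
```

This only controls what leaves the pipeline, not what the model generates, which is the distinction the rest of the thread argues about.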


This is true, but detecting and omitting code hallucinations is (functionally) as hard as just not hallucinating in the first place.


They don’t have control over the output. They created something that creates something else. They can only tweak the thing they created, not everything that thing produces.

E.g., if I create a great paintbrush which creates amazing spatter designs on the wall when it is used just so, then, beyond a point, I have no way to control the spatter designs - I can only influence the designs to some extent.


Did you read what I said?



