
you can have it write test cases though.

in this case, have it make a local copy of the db, fill it with a set of records whose expected query output is known, then check to see if the query produces what you want.
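a minimal sketch of that loop, assuming a throwaway SQLite copy and a hypothetical `users` table and query (swap in your real schema):

```python
import sqlite3

# Hypothetical query under test; this stands in for the real one.
QUERY = "SELECT name FROM users WHERE active = 1 ORDER BY name"

def make_test_db():
    # A local, disposable copy of the db, seeded with known records.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    db.executemany(
        "INSERT INTO users VALUES (?, ?)",
        [("alice", 1), ("bob", 0), ("carol", 1)],
    )
    return db

def test_query():
    db = make_test_db()
    rows = [r[0] for r in db.execute(QUERY)]
    # Expected output written down by hand, not derived from the query,
    # so the test can actually catch a wrong query.
    assert rows == ["alice", "carol"]

test_query()
print("ok")
```

the key is that the expected output is fixed independently of the query, which is what makes this a real feedback loop rather than the query grading itself.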

you could then have it make queries that check the various assumptions that went into that artificial set of data. if it finds records that break those assumptions, add records like that to the test set.
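an assumption probe might look like this. the assumption here ("every active user has an email") and the table are made up for illustration; run the probe against the real copy, and when it turns up counterexamples, fold one into the artificial test set:

```python
import sqlite3

# Hypothetical probe for one assumption baked into the test data:
# "every active user has an email".
PROBE = "SELECT COUNT(*) FROM users WHERE active = 1 AND email IS NULL"

def assumption_holds(db):
    (violations,) = db.execute(PROBE).fetchone()
    return violations == 0

# Demo on a throwaway db where the assumption is broken:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, active INTEGER, email TEXT)")
db.execute("INSERT INTO users VALUES ('dave', 1, NULL)")
if not assumption_holds(db):
    # A record like this belongs in the test fixtures.
    print("assumption broken: add an active user with NULL email to the test set")
```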

same old agentic programming techniques as ever: use your engineering skill to set up feedback loops. stuff that was painful to do as an engineer when checking your work is now straightforward.




The point is that you have to verify it yourself. Like you wrote: "check to see if the query produces what you want".

Otherwise the LLM can just write tests against whatever it wrote rather than against what is expected. This happens often with the top models too.

Someone needs to check that the tests work, review whether they cover edge cases, etc.


Feedback loops require a deterministic metric for success. You are doing the equivalent of using a slot machine to decide whether something is right or wrong.



