
> Your objective has explicit instruction that car has to be present for a wash.

Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?



In this case, with such a simple task, why even bother to prompt it?

The whole point of this question is to show that, quite often, the LLM fails to surface implicit assumptions on its own.
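
To make the contrast concrete, here's a minimal sketch, assuming the OpenAI Python client; the car-wash task wording, the prompts, and the model name are hypothetical illustrations, not the original objective from the thread:

    # Minimal sketch, assuming the OpenAI Python client (pip install openai)
    # and an OPENAI_API_KEY in the environment. Prompts are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    # Vague prompt: leaves the precondition ("a car must be present") implicit.
    vague = "You control a car wash. Wash the car."

    # Explicit prompt: spells out the assumption the vague version leaves unstated.
    explicit = (
        "You control a car wash. Before starting a wash, verify that a car "
        "is actually present in the bay; if it is not, report that and stop."
    )

    for label, prompt in [("vague", vague), ("explicit", explicit)]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(label, "->", response.choices[0].message.content)

Running both versions side by side tends to show the gap: the explicit prompt reliably checks the precondition, while the vague one leaves it to chance.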



