
> Everything to do with LLM prompts reminds me of people doing regexes to try and sanitise input against SQL injections a few decades ago, just papering over the flaw but without any guarantees.

The key difference is that with SQL it's possible to do this correctly (e.g., switch to prepared statements, or, in the days before those existed, add escapes), because the query and the data can be kept in separate channels. With LLM prompts there's no equivalent separation between instructions and input, so the vulnerability can't be fully fixed, only mitigated.
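A minimal sketch of that distinction, using Python's sqlite3 (the table, column, and data are illustrative, not from the original comment):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    attacker_input = "' OR '1'='1"

    # Vulnerable: the input is spliced into the query text,
    # so the database parses it as SQL.
    unsafe = "SELECT * FROM users WHERE name = '%s'" % attacker_input
    print(conn.execute(unsafe).fetchall())  # returns every row

    # Safe: a prepared/parameterized statement sends the query and the
    # value through separate channels; the input is only ever data.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (attacker_input,)).fetchall())  # []

The parameterized version works because the database engine never re-parses the bound value as SQL. An LLM prompt has no analogous channel: everything it sees is one token stream it's free to interpret as instructions.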


