
I really fear for the day I need to debug some AI-generated legacy code. It's not really the algorithmic part that scares me, but the naming and code architecture.

These AIs sound so confident when they output BS that it makes you doubt yourself. Now imagine that some code looks coherent, but each line does something slightly different from what the variable names and other method calls suggest. Now you can't trust the names to build a mental image of the code; you have to follow each method call to find out exactly what it does. It would be worse than looking at obfuscated names, because you may think you know what is going on.
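
To make it concrete, here's a contrived sketch in Python (every name here is hypothetical, not from any real codebase): a function whose name promises one contract while the body quietly implements another. Each line type-checks and reads plausibly, which is exactly why skimming it by name fails.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class User:
        name: str
        active: bool
        created_at: datetime

    def get_active_users(users):
        # The name promises currently-active users, but the body quietly
        # returns everyone created in the past year, deactivated or not.
        cutoff = datetime.now() - timedelta(days=365)
        return [u for u in users if u.created_at > cutoff]

    users = [
        User("alice", active=True,  created_at=datetime.now() - timedelta(days=30)),
        User("bob",   active=False, created_at=datetime.now() - timedelta(days=30)),
    ]
    # Trusting the name, you'd expect ['alice']; you actually get
    # ['alice', 'bob'].
    print([u.name for u in get_active_users(users)])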



Some of the legacy code I have to debug makes me wonder if someone already had a GPT 5 years ago... Seriously - it's alien code - at the very least this person doesn't think like me at all.


That is a really useful insight! I share your fear about humans having to debug AI-generated code.



