Hacker News

There is a really important point here, and it's critical to be aware of it. But we're only at the beginning of these tools and workflows, and these issues can be solved, IMO, possibly better than with humans.

I've been trying to use LLMs to help code more, with mixed success honestly, but it's clear that they're very good at some things and pretty bad at others. One of the things they're good at, obviously, is producing lots of text; two other important strengths are that they can be very persistent and thorough.

Producing a lot of code can be a liability, but an LLM won't get annoyed at you if you ask it for thorough comments and updates to docs, READMEs, and ADRs. It'll "happily" document what it just did and "why" - to the degree of accuracy that it's able, of course.

So it's conceivable, to me at least, that with the right guidance and structure an LLM-generated codebase might be easier to come into cold years later, for both humans and future LLMs, because it could have excellent documentation.







