Hacker News

I'm not sure what to make of these takes because so many people are using such an enormous variety of LLM tooling in such a variety of ways, people are going to get a variety of results.

Let's take the following scenario for the sake of argument: a codebase with well-defined AGENTS.md, referencing good architecture, roadmap, and product documentation, and with good test coverage, much of which was written by an LLM and lightly reviewed and edited by a human. Let's say for the sake of argument that the human is not enjoying 10x productivity despite all this scaffolding.
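For concreteness, the scaffolding described above might look something like the sketch below. This is a hypothetical AGENTS.md; the doc paths, commands, and conventions are illustrative assumptions, not taken from any particular project:

```markdown
# AGENTS.md

## Project overview
Read docs/architecture.md (system layout) and docs/roadmap.md
(product direction) before making changes.

## Conventions
- Respect the module boundaries described in docs/architecture.md.
- New features need an entry under docs/product/ describing
  user-facing behavior.

## Testing
- Run the full suite before proposing a change: `make test`
- Any new module must ship with tests; match existing coverage.
```

The point of a file like this is that an agent (or a new hire) can find the architecture, roadmap, and test expectations without asking anyone.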

Is it still worthwhile to use LLM tooling? You know what, I think a lot of companies would say yes. There are way too many companies whose codebases lack testing and documentation, that make it too difficult to onboard new engineers, and that carry too much risk if the original engineers are lost. The simple fact that LLMs, to be effective, force the adoption of proper testing and documentation is a huge win for corporate software.



> people are going to get a variety of results.

Yes, but the point of this article is surely that, if it were working on average, there would be obvious signs of it working by now.

Even if there are statistical outliers (i.e., developers getting 10x productivity from the tools), if on average it does nothing for developer productivity, something isn't working as promised.


We need long-running averages, and 2023-2025 is still too early to determine that it's not effective. The barrier to entry in 2023 and 2024 was, I'd argue, too high for inexperienced developers to start churning out software. For seasoned developers, skepticism was high and company adoption wasn't there yet (and still isn't).



