
One difference is that clichéd prose is bad and clichéd code is generally good.


Depends on what your prose is for. If it's for documentation, then prose which matches the expected tone and form of other similar docs would be clichéd in this perspective. I think this is a really good use of LLMs - making docs consistent across a large library / codebase.


A problem I’ve found with LLMs for docs is that they are like ten times too wordy. They want to document every path and edge case rather than focusing on what really matters.

It can be addressed with prompting, but you have to fight this constantly.


> A problem I’ve found with LLMs for docs is that they are like ten times too wordy

This is one of the problems I feel with LLM-generated code, as well. It's almost always between 5x and 20x (!) as long as it needs to be. Though in the case of code verbosity, it's usually not because of thoroughness so much as extremely bad style.


I think probably my most common prompt is "Make it shorter. No more than ($x) (words|sentences|paragraphs)."


I've never been able to get that to work. LLMs can't count; they don't actually know how long their output is.


I have been testing agentic coding with Claude 4.5 Opus and the problem is that it's too good at documentation and test cases. It's thorough in a way that it goes out of scope, so I have to edit it down to increase the signal-to-noise.


The “change capture”/straitjacket-style tests LLMs like to output drive me nuts. But humans write those all the time too, so I shouldn’t be that surprised either!


What do these look like?


  1. Take every single function, even private ones.
  2. Mock every argument and collaborator.
  3. Call the function.
  4. Assert the mocks were called in the expected way.

These tests help you find inadvertent changes, yes, but they also create constant noise about changes you intend.
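The four steps above can be sketched roughly like this (the `Notifier` class and its `mailer` collaborator are hypothetical, just to make the pattern concrete):

```python
from unittest.mock import MagicMock

# Hypothetical class under test: formats a message and hands it
# to a mailer collaborator.
class Notifier:
    def __init__(self, mailer):
        self.mailer = mailer

    def notify(self, user, event):
        subject = f"[{event}] update for {user}"
        self.mailer.send(user, subject)
        return subject

def test_notify_change_capture_style():
    # 2. Mock every collaborator.
    mailer = MagicMock()
    notifier = Notifier(mailer)
    # 3. Call the function.
    notifier.notify("alice", "deploy")
    # 4. Assert the mock was called in the expected way. This pins the
    # exact internal call, so any refactor of notify() breaks the test
    # even when observable behaviour is unchanged.
    mailer.send.assert_called_once_with("alice", "[deploy] update for alice")

test_notify_change_capture_style()
```

Note the test never checks anything a caller would actually observe; it only replays the implementation back at itself.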


These tests also break encapsulation in many cases because they're not testing the interface contract, they're testing the implementation.
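For contrast, a behaviour-style test only asserts what the interface promises (again with a hypothetical class, here a tiny `Cart`):

```python
# Hypothetical class: the test below exercises only its public contract.
class Cart:
    def __init__(self):
        self._items = []  # internal representation, free to change

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_cart_totals_added_items():
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    # Asserts only observable behaviour; swapping the list for a dict
    # or caching the total would not break this test.
    assert cart.total() == 12

test_cart_totals_added_items()
```

The implementation can be rewritten freely as long as the contract holds, which is exactly what the mock-heavy style forbids.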


Juniors on one of the teams I work with only write this kind of test. It’s tiring, and I have to tell them to test the behaviour, not the implementation. And yet every time they do the same thing. Or rather, their AI IDE spits these out.


You beat me to it, and yep these are exactly it.

“Mock the world, then test your mocks.” I’m simply not convinced these have any value at all, after nearly two decades of doing this professionally.


If the goal is to document the code and it gets sidetracked, focusing on only certain parts, it has failed the test. It just further proves LLMs are incapable of grasping meaning and context.


Docs also often don’t have anyone’s name on them, in which case they’re already attributed to an unknown composite author.



