The “change capture”/straitjacket style tests LLMs like to output drive me nuts. But humans write these all the time too, so I shouldn’t be that surprised!
1. Take every single function, even private ones.
2. Mock every argument and collaborator.
3. Call the function.
4. Assert the mocks were called in the expected way.
These tests help you find inadvertent changes, yes, but they also create constant noise about changes you intend.
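To make that concrete, here’s a minimal sketch of the pattern. The `send_welcome_email` function and its collaborators are made up for illustration; the point is that the test just restates the implementation call by call.

```python
import unittest
from unittest.mock import Mock


def send_welcome_email(user, mailer, renderer):
    """Hypothetical function under test: render a template, send one email."""
    body = renderer.render("welcome", name=user.name)
    mailer.send(to=user.email, subject="Welcome!", body=body)


class TestSendWelcomeEmailStraitjacket(unittest.TestCase):
    def test_calls_collaborators_exactly_as_written(self):
        # 1–2. Mock every argument and collaborator.
        user = Mock()
        user.name = "Ada"
        user.email = "ada@example.com"
        renderer = Mock()
        renderer.render.return_value = "Hi Ada, welcome aboard!"
        mailer = Mock()

        # 3. Call the function.
        send_welcome_email(user, mailer, renderer)

        # 4. Assert the mocks were called in the expected way,
        #    i.e. re-state the implementation line by line.
        renderer.render.assert_called_once_with("welcome", name="Ada")
        mailer.send.assert_called_once_with(
            to="ada@example.com",
            subject="Welcome!",
            body="Hi Ada, welcome aboard!",
        )


if __name__ == "__main__":
    unittest.main()
```

Rename a parameter, reorder the calls, or inline the renderer and this test goes red, even though the observable behaviour hasn’t changed at all.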
Juniors on one of the teams I work with only write this kind of test. It’s tiring: I keep telling them to test the behaviour, not the implementation, and yet every time they do the same thing. Or rather, their AI IDE spits these out.
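For contrast, this is roughly what I mean by testing the behaviour: a sketch against the same hypothetical function, using a small in-memory fake and asserting on the outcome instead of on how the collaborators were invoked.

```python
import unittest
from types import SimpleNamespace


def send_welcome_email(user, mailer, renderer):
    """Same hypothetical function as in the sketch above."""
    body = renderer.render("welcome", name=user.name)
    mailer.send(to=user.email, subject="Welcome!", body=body)


class FakeMailer:
    """In-memory stand-in that records outgoing mail instead of sending it."""

    def __init__(self):
        self.outbox = []

    def send(self, to, subject, body):
        self.outbox.append({"to": to, "subject": subject, "body": body})


class SimpleRenderer:
    """Trivial renderer so the test exercises real output, not a mock."""

    def render(self, template, **context):
        return f"Hello {context['name']}, welcome aboard!"


class TestSendWelcomeEmailBehaviour(unittest.TestCase):
    def test_new_user_gets_a_welcome_email(self):
        mailer = FakeMailer()
        user = SimpleNamespace(name="Ada", email="ada@example.com")

        send_welcome_email(user, mailer, SimpleRenderer())

        # Assert on the outcome someone actually cares about: one email
        # reached Ada. How it was assembled internally can change freely.
        [sent] = mailer.outbox
        self.assertEqual(sent["to"], "ada@example.com")
        self.assertIn("Ada", sent["body"])


if __name__ == "__main__":
    unittest.main()
```

The implementation is free to change its internals; this test only breaks if a new user stops getting their welcome email.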