That’s because it’s pretty safe to say you have experience and have seen havoc in the past.
Less experienced developers are the primary vector of propagation for this "low quality" output, with seniors trying to educate and review the mess (if time permits).
I was thinking about this while reading another story about AI code review.
Having an LLM write the code for me? Blecch, it doesn't do it right.
Have an LLM make suggestions about my code? That's fine. If some of them are asinine I just get to laugh and feel smart while ignoring them. But if 1/5 of the suggestions are actually good? That's a win.
But if only 1/5 of the answers I get from an LLM are correct, that's a waste of time. Funny how the accuracy of the model matters a different amount depending on the task at hand!
I do the implementation with minimal (not none, to be fair) AI support, then ask Copilot if there are any obvious issues with it, and otherwise check my work.
This workflow has helped me catch quite a few gotchas I would have otherwise missed.
Coding assistants are really helpful for validating output; I've had much more mixed results trying to use them to generate novel output.