> So anything that can let you iterate the loop faster is good.
I think the major objection is that you only want to automate real tedium, not valuable deliberation. Letting an LLM drive too much of your development loop guarantees you don't discover the things you need to unless the model stumbles on them by accident, and even then it has trained you to be a little lazier and stolen an insight you would otherwise have had yourself. So are you really better off?
This is a confusion that comes up often: I 100% agree when it comes to chat-in-the-loop style interfaces. Those slow me down way too much, and it's too hard to fix when the model inevitably gets something wrong.
I'm mostly talking about Cursor Tab, the souped-up autocomplete. I think it's the perfect interface: it watches what I type and guesses my intention (multiline completions, plus predicting which line I'm going to next).
That lets me easily see whether the LLM is heading in the right direction, in which case pressing tab skips the tedium. If it's wrong, I just keep typing until it understands what I'm trying to do. It works really, really well for me.
I went back to a non-LLM editor for a bit and was shocked at how dependent I had become on it. It felt like using an editor that didn't understand types and couldn't autocomplete function names. I guess if you're a purist who never used any IDE functionality, this wouldn't be for you either. But for me, it's a much better experience.