> I have to be extremely careful and make sure nothing unwanted or addition that I am unaware of has been added.
I've started getting desperate to the point of saying 1) "never. never, ever add or remove features without consulting me first and getting approval." Then eventually, 2) I appended to the previous: "The last rule is the most important rule, because you keep doing it and I need you to stop doing it." Then finally, 3) "THE LAST RULE IS THE MOST IMPORTANT RULE, BECAUSE YOU KEEP DOING IT AND I NEED YOU TO STOP DOING IT."
3/4 of my AI bugs are the AI changing the functionality of the code when I'm not looking, or repeatedly reinserting bugs that had previously been removed. The most valuable thing I'm getting it to do is refactor the code it already wrote into shorter, well-named functions (during which it still inevitably adds and removes behavior), because that means I can just debug by hand and stop demanding over and over again that it not ignore what I said.
But, of course, it's not ignoring me; it's not thinking at all. Looking for the magic words to keep it from ignoring me and lying about it is just an illusion of control. The thing that finally knocks it off its dumb track is likely just a lucky random seed on the 13th attempt. Then, like a sports fan, I add the lucky underwear to my instructions.
edit: the "I'll get AI to write the AI prompt so it will be perfect" stuff is so much voodoo. LLMs have no special insight into what will make LLMs work correctly. I probably should have stopped that last sentence after the word "insight." Feed them a sample prompt that you say doesn't work, and it will explain to you exactly why it's so bad, and could never work. Feed them the same prompt and ask why it works so well, and it will tell you how perfectly crafted it is and why. Then it will offer to tell you how it could be improved.