Hacker News

This works only until it doesn't. The stochastic nature of LLMs will not go away. When you have to fix that bug, but the LLM's explanation turns out to be an incorrect root-cause analysis, and you have to dig into the code yourself, you will regret not having taken more care earlier. In my latest project I had numerous scenarios in which the LLMs simply did not get on the right track when I asked them about an issue I saw with a widget, or about building a custom widget (Python, tkinter). I don't think they will fare much better when analyzing existing code, because ultimately they do not understand things.


Given the stochastic nature, if I'm forced to dig into the code because the LLM couldn't figure it out perhaps one out of every ten times, it's still a huge bonus. It probably depends on what you're working on. Esoteric COBOL? Erlang? Good luck; you're probably hand-steering the thing while the frontier model providers figure out how to train it better. Vanilla-ish Python/Golang/TypeScript/Java? I pretty much never have to do that nowadays for things the model is familiar with. And when I do have to dig into the code, I've never regretted working this way, because 90% of my use cases worked just fine, and in those cases I was able to produce working code at 20x the rate of hand-writing it, if not more. Feels like a huge win to me.


