Hacker News

There is a fundamental flaw with AI-generated code: unmaintainability and unpredictability.

Attempting to modify an already-generated piece of code, or the program in general, can produce unexpected results. Saving some money on programmers but then losing millions in lawsuits, or losing customers and eventually the whole business due to unexpected app behaviour or a data leak, might not be a good idea after all.



Takes like this miss the forest for the trees. The overall point is that automated programming is now a target, just as automating assembly lines became a target back in the day. There will be kinks in the beginning, but once the target is set, there will be a huge incentive to work out those kinks to the point of near-full automation.


You do realize how predictable an assembly line is, though, right?


Playing devil's advocate, between compilers and tests, is it really less predictable than some junior developer writing the code?

If you're pushing unreviewed, untested code to production, that's a bigger problem than the quality of the original code.


Who reviews and tests the code?

And how do they build the knowledge and skill needed to review and test without being practiced?


Retorts from the business side:

* sounds like next quarter's problem

* since everyone is doing it, it sounds like society will have to figure out an answer, not me (too big to fail)

Not joking: I think those are the current de facto strategies being employed.


Really get them going when you mention that "too big to fail" is a logical fallacy.


For some reason 'logical fallacy' just results in frowns and 'Needs Improvement' ratings when used on MBAs. Weird.


Oh, I just mention how dinosaurs were too big to fail; it really helps when the whole team starts making fun of them for saying something stupid like that.


Unmaintainability? You should see the stuff some of my colleagues write. I'm fairly certain GPT could outperform them in the readability/maintainability department at this point.


You're assuming that AI-generated code is even minimally good.

Here's a nice litmus test: Can your AI take your code and make it comply with accessibility guidelines?

That's a task that has relatively straightforward subtasks (make sure that labels are consistent on your widgets) yet is still painful (going through all of them to make sure they are correct is a nightmare). A great job to throw at an AI.
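To make the "straightforward subtask" concrete, here is a minimal sketch of the label-consistency check using only Python's standard library. It assumes labels are associated with controls via the `label for="..."`/`id` pattern; real accessibility audits would also need to handle wrapping labels, `aria-label`, and similar attributes.

```python
# Hypothetical sketch: scan an HTML snippet and report form controls
# that lack an associated <label for="..."> element.
from html.parser import HTMLParser

class LabelAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.control_ids = []      # ids of <input>/<select>/<textarea> (None if missing)
        self.labeled_ids = set()   # ids referenced by <label for="...">

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("input", "select", "textarea"):
            self.control_ids.append(attrs.get("id"))
        elif tag == "label" and "for" in attrs:
            self.labeled_ids.add(attrs["for"])

def unlabeled_controls(html):
    """Return the ids (None for id-less controls) of controls missing a label."""
    audit = LabelAudit()
    audit.feed(html)
    return [cid for cid in audit.control_ids if cid not in audit.labeled_ids]

snippet = """
<form>
  <label for="email">Email</label><input id="email" type="email">
  <input id="phone" type="tel">
  <input type="text">
</form>
"""
print(unlabeled_controls(snippet))  # flags the phone field and the id-less input
```

The mechanical part really is this simple; the painful part the comment describes is running something like this across every template in a codebase and then judging whether each flagged case is a genuine violation.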

And, yet, throwing any of the current "AI" bots at that task would simply be laughable.



