LLMs generate plenty of security issues and bugs. Just being "Rust" doesn't automatically fix that: memory safety does nothing for logic flaws, bad auth checks, or injection bugs. And generating code at that volume means no meaningful human review. How could this not end in obvious disaster?
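As a throwaway illustration of that point (hypothetical handler, made-up paths, nothing from a real project): the snippet below is 100% safe Rust, compiles without warnings, and still ships a textbook path-traversal hole. The borrow checker has no opinion about it.

```rust
use std::fs;
use std::path::Path;

// Hypothetical file-serving handler: reads a file named by the client.
// No `unsafe`, no memory errors -- yet the user-supplied name is joined
// to the base directory unchecked, so ".." components escape it.
fn serve_file(user_supplied_name: &str) -> std::io::Result<String> {
    let base = Path::new("/var/app/public"); // assumed public directory
    // BUG: "../../etc/passwd" resolves outside the public directory.
    let requested = base.join(user_supplied_name);
    fs::read_to_string(requested)
}

fn main() {
    // An attacker-controlled request like this walks out of the sandbox;
    // Rust's safety guarantees are simply about a different class of bug.
    match serve_file("../../etc/passwd") {
        Ok(contents) => println!("leaked {} bytes", contents.len()),
        Err(e) => println!("error: {e}"),
    }
}
```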
So the AI is trained on insecure, buggy human code, and on top of that it can't reason for itself? Sure, and by 2027 it will supposedly be writing all our code for us. Definitely.