Try writing more documentation. If your project is bigger than a one-man team then you need it anyway, and with LLM coding you effectively have an infinite-man team.
That doesn't actually work for my use cases, though. Plenty of other people have already told me "I'm Holding It Wrong" without actual suggestions that work, so I've started ignoring them. At this stage I just assume many people work in very different sectors: some see the "great benefits" often proselytized on the internet, and other areas don't. Systems programming, where I work, seems to be a poor fit - possibly due to a relative lack of content in the training corpus, perhaps because company-internal styles and APIs mean that simply detailing them takes a huge amount of the context, leaving little for further corrections or details, or some other failure mode.
We have lots of documentation. Arguably too much - it quickly fills much of the Claude Opus context window with relevant documentation alone, and even then the model repeatedly outputs things directly counter to the documentation it just ingested.
About religion, I don't think we can say "always" or anything near that.
I agree that religions commonly cite a god or a god's will as the reason, but I don't think we should take that at face value. It's the argument that trumps all others - rulers often claim to be chosen by the will of the supernatural - but it's not the reason the rule was made, which is a product of the cultures involved.
And humans often come to the same ethical conclusions: the rules against murder and rape, and the priority on justice and fairness, for example, are universal across cultures regardless of religion (look up 'cultural universals').
Only if you are paying per token on the API. If you are paying a fixed monthly fee, then they lose money when you need to burn more tokens, and they lose customers when you can’t solve your problems within that month, max out your session limits, and end up with idle time - which you use to check whether the other providers have caught up to or surpassed your current favourite.
Even if Opus 4.5 is the limit, it’s still a massively useful tool. I don’t believe it’s the limit, though, for the simple reason that a lot could be done by creating more specialized models for each subdomain - e.g. they’ve focused mostly on web-based development but could do the same for any other paradigm.
That's a massive shift in the claim though... I don't think anyone is disputing that it's a useful tool; just the implication that, because it's a useful tool that has seen rapid improvement, they're going to "get all the way there," so to speak.
Personally, I'm not against LLMs or AI itself, but considering how these models are built and trained, I refuse to use tools built on others' work without or against their consent (esp. GPL/LGPL/AGPL, NonCommercial / NoDerivatives CC licenses, and Source Available licenses).
Of course the tech will be useful and ethical if these problems are solved, or if we decide to solve them the right way.
We just need to tax the hell out of the AI companies (assuming they are ever profitable) since all their gains are built on plundering the collective wisdom of humanity.
Linear progression feels slower (and thus more like a plateau) to me than the period from the end of 2022 through the end of 2024 did.
The question in my mind is where we are on the s-curve. Are we just now entering hyper-growth? Or are we starting to level out toward maturity?
It seems like it must still be hyper-growth, but it feels less that way to me than it did a year ago. I think in large part my sense is that there are two curves happening simultaneously, but at different rates: the growth in capabilities, and the growth in adoption. It's the first curve that seems to me to have slowed a bit. Model improvements seem both amazing and also less revolutionary to me than they did a year or two ago.
But the other curve is adoption, and I think that one is way further from maturity. The providers are focusing more on the tooling now that the models are good enough. I'm seeing "normies" (that is, non-programmers) starting to realize the power of Claude Code in their own workflows. I think that's gonna be huge and is just getting started.
Odin’s design is informed by simplicity, performance, and joy, and I hope it stays that way. Maybe it needs to stay a niche language under one person’s control in order to do so, since many people can’t help but try to substitute their own values when touring through a different language.
Going all in on AI-generated code has taught me more about project management than I learned in the last decade. I also have a much better perspective on what it’s like to be a client contracting a developer to build an app for them. The best part is that the AI actually follows all of the processes that you ask it to. Today was the first day I wrote code in over a week, and I still ended up asking Opus for some review, and it went perfectly. At this point no one has an excuse for shipping slop if they can afford $20 a month.
It’s all in how you use it. If you want to learn, you can tell it to walk you through the code, write a tutorial with examples and exercises, give you programming problems to solve, use the Socratic method, recommend the best human-written tutorials and books, review your code and suggest more idiomatic techniques, or help you convert a program from one language or paradigm to another - and a million other ways.
I like the AI-written tutorial method; both Opus 4.5 and Gemini 3 are good at this. You just have to put the effort in to copy-type, make changes, ask questions, and put what you’ve learnt into practice. AI code review is also great for discovering alternatives you don’t know about.