That has not been my experience. JS/TS requires the most hand-holding, by far. LLMs are no doubt assumed to be good at JS due to the sheer amount of training data, but a lot of those inputs are of really poor quality, and even among the high-quality inputs there isn't a whole lot of consistency in how they are written. That seems to trip up the LLMs. If anything, LLMs might finally be the straw that breaks the JS camel's back. Although browser dominance still makes that unlikely.
> Very few people will then take the pain of optimizing it
Today's LLMs rarely take the initiative to write benchmarks, but if you ask, they will, and they will then iterate on optimizations using the benchmark results as feedback. It works fairly well. There is a conceivable near future where LLMs or LLM tools will start doing this automatically.
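The kind of benchmark harness involved is usually tiny. A minimal sketch of what you might ask the LLM to produce and then feed back to it (the `parseCsvNaive`/`parseCsvOptimized` pair here are made-up stand-ins for a "before" and "after" implementation, not anything from a real session):

```javascript
// Hypothetical example: a tiny benchmark harness an LLM could write on
// request, whose numbers you paste back into the chat for another
// optimization pass. Both parsers are illustrative placeholders.

// Naive version: allocates intermediate arrays via split().
function parseCsvNaive(text) {
  return text.split("\n").map((line) => line.split(","));
}

// "Optimized" version: single pass over the string, slicing fields out.
function parseCsvOptimized(text) {
  const rows = [];
  let start = 0;
  let row = [];
  for (let i = 0; i <= text.length; i++) {
    const ch = text[i];
    if (ch === "," || ch === "\n" || i === text.length) {
      row.push(text.slice(start, i));
      start = i + 1;
      if (ch !== ",") {
        rows.push(row);
        row = [];
      }
    }
  }
  return rows;
}

// Time a function over many iterations and report ms per operation.
function bench(label, fn, input, iterations = 200) {
  fn(input); // warm-up run so JIT compilation doesn't skew the first sample
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) fn(input);
  const msPerOp = (performance.now() - t0) / iterations;
  console.log(`${label}: ${msPerOp.toFixed(3)} ms/op`);
  return msPerOp;
}

const input = Array.from({ length: 1000 }, (_, i) => `${i},${i * 2},${i * 3}`).join("\n");
bench("naive", parseCsvNaive, input);
bench("optimized", parseCsvOptimized, input);
```

Whether the single-pass version actually wins depends on the engine, which is exactly why the numbers, not intuition, should drive the next iteration.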
My experience is from trying to get the React Native example to work with OpenUI. I felt Sonnet/Opus was much better at figuring out what's wrong with the current React implementation and fixing it than it was with React Native.
But yes, I see what you mean, and I think people are trying to solve it with skills and harnesses at the application layer, but it's not there yet.