Even documented libraries can be a struggle, especially if they aren't particularly popular. I'm doing a project with WiFi/LoRa/MQTT on an ESP32: the generated WiFi code was fairly decent, but the MQTT and especially the LoRa library code was nearly useless.
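For reference, the kind of "decent" MQTT code I mean is only a handful of calls; a minimal sketch assuming the Arduino ESP32 core and the common PubSubClient library, with placeholder SSID, password, and broker address:

    // Minimal sketch, assuming the Arduino ESP32 core and the PubSubClient
    // MQTT library; SSID, password, and broker address are placeholders.
    #include <WiFi.h>
    #include <PubSubClient.h>

    const char* WIFI_SSID = "my-ssid";       // placeholder
    const char* WIFI_PASS = "my-password";   // placeholder
    const char* MQTT_HOST = "192.168.1.10";  // placeholder broker address

    WiFiClient wifiClient;
    PubSubClient mqtt(wifiClient);

    void setup() {
      Serial.begin(115200);

      // Join the WiFi network and block until connected.
      WiFi.begin(WIFI_SSID, WIFI_PASS);
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
      }

      // Point the MQTT client at the broker (default port 1883).
      mqtt.setServer(MQTT_HOST, 1883);
    }

    void loop() {
      // Reconnect to the broker if the session dropped, then publish a
      // status message once the connection is up.
      if (!mqtt.connected()) {
        if (mqtt.connect("esp32-sensor")) {
          mqtt.publish("sensors/esp32/status", "online");
        } else {
          delay(1000);
          return;
        }
      }
      // Service incoming/outgoing MQTT traffic.
      mqtt.loop();
      delay(10);
    }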
Sonnet 3.5 fails to generate even basic Jetpack Compose library properties properly. Maybe if somebody tried really hard to scrape all the documentation and force-feed it, it could work, but I don't know if there are examples of this.
Like a general LLM, but with everything Android/Kotlin pushed into it to fix the synapses.
Of course, why wouldn't it? It's a generative model, not a lookup table. Show it the library headers, and it'll give you decent results.
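In practice, "show it the library headers" just means pasting the relevant declarations into the prompt and asking for code written against them. A rough illustration below with a made-up radio driver; the RadioDriver class and its methods are hypothetical, not from any real library:

    // Hypothetical radio driver interface -- not a real library, just the
    // kind of declarations you would paste into the prompt as context.
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    class RadioDriver {
     public:
      bool begin(long frequencyHz);
      bool sendPacket(const uint8_t* data, std::size_t len);
      int receivePacket(uint8_t* buffer, std::size_t maxLen);  // bytes read
    };

    // With those declarations in front of it, the model has concrete
    // signatures to generate against instead of guessing, e.g.:
    bool sendHello(RadioDriver& radio) {
      const char* msg = "hello";
      return radio.sendPacket(reinterpret_cast<const uint8_t*>(msg),
                              std::strlen(msg));
    }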
Obviously, if the library (or code using it) wasn't part of the training data and you don't supply either in the context of your request, then it won't generate valid code for it. But that's not the LLM's fault.
Also, local LLMs with an agentic tool can be a lot of fun for quickly prototyping things, though the quality can be hit or miss.
Hopefully the work trickles down to local models long-term.