Great question! So our AI agents actually use a combination of codemods and AI-generated results. We use static analysis and codemods as much as possible, but there are lots of situations where AI is the best tool. We have found that AI is particularly good at transforming EXISTING logic from one state to another, in a fairly predictable and consistent way, so long as the code transformations are scoped to individual functions or code files.
In combination with our RAG approach, you will find that if you run the same module multiple times, the generated results are incredibly similar with very little variation. Give it a go for yourself! You can try it for free on codebases up to 2MB, or use any of our example repos.
Hah yea. But we use RAG to ensure that the choices made are really good. LLMs on their own can't be trusted; they are just great communicators. LLMs combined with reputable sources (like documentation and code examples) produce really great results! And those data sources can actually be queried, as they are shipped with each module.
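To make the idea concrete, here's a minimal sketch of how retrieval over shipped doc sources could work. This is purely illustrative: the types, function names, and toy embeddings are assumptions, not the product's actual API, and a real embedding model would replace the hand-written vectors.

```typescript
// Illustrative RAG retrieval: score pre-embedded doc chunks against a
// query embedding and return the closest matches to feed into the prompt.

type DocChunk = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

/** Return the texts of the top-k chunks most similar to the query. */
function retrieve(query: number[], chunks: DocChunk[], k = 2): string[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k)
    .map((c) => c.text);
}

// Toy 3-dimensional embeddings stand in for a real embedding model.
const docs: DocChunk[] = [
  { text: "Next.js app router docs", embedding: [1, 0, 0] },
  { text: "lodash merge example", embedding: [0, 1, 0] },
  { text: "React useState reference", embedding: [0.9, 0.1, 0] },
];

console.log(retrieve([1, 0, 0], docs));
// → ["Next.js app router docs", "React useState reference"]
```

The retrieved chunks would then be injected into the LLM prompt alongside the code being transformed, which is what keeps repeated runs anchored to the same reputable sources.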
We use LangChain to crawl documentation and code examples for relevant frameworks and libraries, to make sure that the AI-generated PR is up to date. We also have a custom dependency resolver that detects which new dependencies need to be added and updates the package.json files with the correct version numbers.
In general, as our AI agents produce code files, we collect their dependencies, and then at the end we use npm to determine the correct version numbers without actually installing anything on disk.
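A rough sketch of that collection step, under my own assumptions (the function name, regex, and sample input are all illustrative, not the actual resolver): scan each generated file for bare module specifiers, skip relative imports, and normalize subpaths down to package names. Version lookup would then happen per package, e.g. via `npm view <pkg> version`, with nothing installed on disk.

```typescript
// Matches the specifier in `import ... from "x"` and `require("x")`.
const IMPORT_RE = /(?:import\s[^'"]*from\s*|require\s*\(\s*)['"]([^'"]+)['"]/g;

/** Extract external package names from a generated source file. */
function collectDependencies(source: string): string[] {
  const deps = new Set<string>();
  for (const match of source.matchAll(IMPORT_RE)) {
    const spec = match[1];
    if (spec.startsWith(".") || spec.startsWith("/")) continue; // local files
    // "@scope/pkg/sub" -> "@scope/pkg", "lodash/merge" -> "lodash"
    const parts = spec.split("/");
    deps.add(spec.startsWith("@") ? parts.slice(0, 2).join("/") : parts[0]);
  }
  return [...deps].sort();
}

// Hypothetical AI-generated file.
const generated = `
import { useState } from "react";
import merge from "lodash/merge";
import { Button } from "@acme/ui/button";
import helper from "./helper";
`;

console.log(collectDependencies(generated));
// → ["@acme/ui", "lodash", "react"]
// Each name would then be resolved to a version (e.g. `npm view react version`)
// before being written into package.json.
```

Doing the version resolution once at the end, rather than per file, keeps the agents fast and avoids ever running an install during generation.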
Yes, today the bots assume you are connecting to a Next.js 13 app. Bad assumption, I know! I am working to add a repo scanner (ChatGPT prompts!) to understand the general tech stack of a connected repo. This will be coming soon.