Part of the problem won't be solved by LLMs themselves, but perhaps by hiding aspects of one while it runs. LLMs basically "think out loud" as they process and produce tokens.
The amount of thought required to answer any of those questions is pretty high, especially because they are all sizeable lists. It is going to take a lot of thinking out loud, and detailed training data covering all those items, to do that well.