The reason LLMs exist at all is that there is a big corpus of text that follows the same standard rules with minimal deviation, a comparatively limited dictionary, and an even more limited set of concepts and words generally used within a given timeframe/domain.
No one has bothered writing a formal description of day-to-day interactions inside a small town.
LLMs can describe day-to-day interactions in a small town just fine. They can deliver accurate text about things no one has likely ever bothered to write down. For example, I gave one a list of random objects and asked which ones would need to be treated delicately by a robotic hand: a cotton ball, an apple, a rock, a puddle of water, etc. It responded to each item accurately, though I doubt anyone has ever written that a cotton ball doesn't need a gentle touch from a robotic hand.
Without using an AI, I can say with certainty there is text somewhere where "delicately" is applied to a "cotton ball" in relation to the concept of "handling". I've just asked ChatGPT about a child's day in an African village, and the result is something taken from a fairy tale with an African spin. Leaving an LLM in charge of that aspect in a game would probably lead to the hand problem we have with image generators.