Asking the LLM for something is like doing an "Ask the Audience" in Who Wants to Be a Millionaire. You're basically polling the consensus answer to the question you're asking.
> You're basically polling the consensus answer to the question you're asking.
That's perfectly fine, because that's what you expect from human work to begin with. From natural text like blog articles or technical reports to software changes, all output is expected to comply with patterns we are already familiar with. Heck, look at pull requests, where you ask your audience to evaluate your work hoping to reach a consensus.
You'd expect that from an average person, but not from a genius in the subject. "Genius" is, almost by definition, a contrarian viewpoint or technique that happens to be better than the consensus or the best-known approach.
> You're basically polling the consensus answer to the question you're asking.
There is nothing wrong with going with the consensus answer. Sure, you can't invent an O(N) sorting algorithm with the consensus, but LLMs can definitely write good, maintainable code. The Michelin-star analogy might be an exaggeration, but an LLM can help you cook a good homemade meal, possibly even better than most wannabe home cooks manage.
> There is nothing wrong with going with the consensus answer.
For most things, yes. But you don't hire experts to tell you what everyone else knows. You hire experts to give you the angle everyone else is missing.
For example, an LLM trained on all AWS documentation wouldn't be super useful (possibly less useful than traditional full-text search), because what you really want to know is what AWS didn't write: the reading between the lines, all the things that DynamoDB can't do.