Why are you not able to have an earnest conversation with an LLM? What kind of ideas are you not able to bounce off LLMs? These seem to be the type of use cases where LLMs have generally shined for me.
Eh, I am torn on this. I've had some great conversations on random questions or conceptual ideas, but also some where the model's instructions shone through way too clearly. When you ask something like "I'm working on the architecture of this system, can you let me know what you think and if there's anything obvious to improve on?", the model will always a) flatter me for my amazing concept, b) point out the especially laudable parts of it, and c) name a few obvious but not-really-relevant points (e.g. "always be careful with secrets and passwords").
However, it will not actually point out higher-level design improvements or alternative solutions. It's always just regurgitating what I've told it. That is only semi-useful, most of the time.
Because it spits out the most probable answer, which is based on endless copycat articles online written by marketers for C-level decision makers to sell their software.
AI doesn't go read a book on best practices, come back saying "Now I know the Kung Fu of Software Implementation", and then critically think through your plan step by step before giving an answer. These systems, for now, don't work like that.
The "meaningless praise" part is basically American cultural norms trained into the model via RLHF. It can be largely negated with careful prompting, though.
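For what it's worth, here's a rough sketch of the kind of prompting I mean. The wording and the helper function are my own illustration, not a documented recipe; the point is just to put the "no praise, critique only" instruction in the system role rather than burying it in the question:

```python
# Illustrative only: a system prompt that tries to suppress the trained-in
# flattery and force substantive critique. Tune the wording for your model.
CRITIC_SYSTEM_PROMPT = (
    "You are a blunt senior engineer reviewing a design. "
    "Do not compliment the author or restate their design back to them. "
    "Only list concrete weaknesses, risky assumptions, and alternative "
    "approaches, ranked by impact. If something is fine, say nothing about it."
)

def build_review_messages(design_description: str) -> list[dict]:
    """Build a chat-style message list (system + user) for a design review."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {"role": "user", "content": design_description},
    ]

msgs = build_review_messages(
    "We plan to shard the user table by region and cache sessions in Redis. "
    "What would you change?"
)
```

In my experience the negative framing ("do not compliment", "say nothing about what is fine") matters more than any single phrase; without it the model tends to fill space with the a/b/c pattern described above.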