Yes, it would mimic real human output. That's why we have to verify when the subject matter is important. Knowing whom to trust isn't easy; if we need to be sure, we have to do some work.
I still don’t get how this helps you differentiate between them… or do you mean we should treat both as genuine human output regardless, as long as the information proves to be true?
And it’s not like a live video call, where a few seconds of delay would be noticeable.
There would still be a real human being, just as smart as you, behind the LLM, but pretending to be, say, 20 different lower-skilled people.