The problem is that it’s distracting, lowers the quality of the writing, and one has to be wary that random details might be wrong or misleading in a way that wouldn’t happen if it were completely self-authored.
That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about his own papers, then you shouldn't trust the papers either.
I agree with the latter. The fact that they used an LLM for the summary post without rewriting it in their own words already makes me not trust their papers.
Great. I think that's incorrect, and only getting more incorrect every year; perhaps you should consider trusting researchers in this field to know how and when to use their own tools correctly. I suppose that's all there is to say about that.