Which still does not demonstrate that they believe it has opinions. Natural language is how you interact with an LLM -- the interaction will mimic human conversation even for people who know it is not sentient.
They were under the impression they could in fact change the AI's mind. So yes, they did believe it has an opinion. They believed it was sentient and able to think for itself. Do not underestimate people's inability to distinguish between a very clever Markov chain and actual intelligence. The future is going to be ... interesting.
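(In case "very clever Markov chain" sounds like hand-waving: a word-level Markov chain really is this simple. The sketch below is plain Python over a made-up toy corpus; it picks each next word purely from counts of what followed the previous two words. An LLM's next-token model is vastly richer, but the sampling loop has the same shape, and there is no persistent "mind" in it to change.)

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each `order`-word prefix to every word seen to follow it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=20):
        # Start from a random prefix, then repeatedly sample an observed successor.
        out = list(random.choice(list(chain.keys())))
        for _ in range(length):
            successors = chain.get(tuple(out[-order:]))
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug and the cat ran"
    print(generate(build_chain(corpus)))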
>They were under the impression they could in fact change the AI's mind.
They aren't really wrong here. LLMs are often trained on user input: conversations and ratings can be folded back into later fine-tuning runs, so in aggregate users really can change the model's future behavior. Have you considered you might just be taking their anthropomorphism a little too literally? People have used these anthropomorphic metaphors for computers since Babbage's machines.
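(Purely as a hypothetical sketch of what "trained on input" can mean in practice: rated conversations get logged and later folded into a fine-tuning pass. The file name and record format here are invented for illustration.)

    import json

    def log_feedback(conversation, rating, path="feedback.jsonl"):
        # Append one rated exchange; batches of these records are the
        # raw material for a later fine-tuning (RLHF-style) pass.
        record = {"messages": conversation, "rating": rating}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_feedback(
        [{"role": "user", "content": "No, you're wrong about that."},
         {"role": "assistant", "content": "That's a fair point."}],
        rating=1,
    )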