My point is that the user is adding another layer of abstraction, and that layer of abstraction itself needs to be trusted. When UI elements are concrete and you can clearly see that you pressed a particular button and the thing you wanted happened, then the UI layer, at least, is a non-issue.
But in retrospect I don't know if my point was that good. The UI problem hasn't actually been solved, and an LLM-based chatbot may actually be more reliable for non-tech users, since the user has to do less translating of their intent into interface actions.
Apple users already let Apple (or at least their device) know everything about them.
If a person is blind and can't read or type on their phone, a tool that can reliably pull up the Messages app and send Dad a message is a godsend.