Some of its responses are consistently repetitive in a way I wouldn't expect from a GPT-style model, which should produce more variable outputs. At times it really gives the impression of regurgitating a collection of fixed scripts – closer to old-school Eliza than to a GPT-style system.
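To make the contrast concrete, here's a toy sketch (purely hypothetical – not the actual system's code) of why an Eliza-like component would feel repetitive: keyword-triggered canned scripts are deterministic, returning the identical string every time, whereas a sampling-based generative model varies its replies from call to call.

```python
import random

# Hypothetical Eliza-style rule table: keyword -> fixed canned script.
SCRIPTS = {
    "sad": "I'm sorry to hear that. Why do you feel sad?",
    "help": "I can help with that. What do you need?",
}

def eliza_like(message: str) -> str:
    # Deterministic: the first matching keyword always selects the
    # same fixed script, so repeated inputs get identical replies.
    for keyword, script in SCRIPTS.items():
        if keyword in message.lower():
            return script
    return "Tell me more."

def gpt_like(message: str) -> str:
    # Stand-in for a generative model: sampling makes the output
    # vary between calls even for the same input.
    return random.choice([
        "Could you say more about that?",
        "That sounds difficult. What happened?",
        "I see. How does that make you feel?",
    ])
```

Ask `eliza_like("I feel sad")` a hundred times and you get the same sentence a hundred times; that fixed-script determinism is exactly the repetitiveness described above.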
I'm not expecting the underlying GPT-based system to have perfect conversational pragmatics, but I suspect the Eliza-like component makes its pragmatics a lot worse than a purely GPT-based chat system might have.