Hacker News

Probably — it’s just weird how much of an identifiable “voice” ChatGPT has


I wonder if it's the textual equivalent of looking at an "average face"?

Is ChatGPT just blending all its sources together into a wall of text that always looks the same?

https://www.researchgate.net/figure/Example-of-average-face-...


It's fine-tuned specifically for that tone. The base model without fine-tuning tends to be a lot less corporate and responds more directly to the prompt. (ChatGPT can still imitate other styles reasonably well if you ask, so long as you don't trigger one of its safeties.)


The thing is, you (we) only identify ChatGPT-generated content when it has that generic voice. Maybe there's a lot more generated content here that just isn't so obvious. It's selection bias: we mostly see what's easy to see.


I don’t think this voice is emergent; it learned it from its training data. If you gave me a video game review script written by ChatGPT and another written by Gameranx, I doubt I’d be able to tell the difference. Both have a style of vaguely referencing lots of things different people have said without really saying anything at all or offering any concrete opinions.



