
Yes, the Turing Test tries to give an operational definition of 'think': can a computer give the same response as a thinking human?

Further, we can even disallow judgments based on human/machine tells. The computer doesn't need to imitate human frailty, and examiners decide based on the quality of the output, not on whether it feels human or machine. I think this is also why one of the stipulations is that neither the human nor the machine is physically visible.

But even then, this goal is far away. We are not asking just about short chat imitation games but about whether a computer can operationally do everything humans are capable of in the textual medium. Judging this often requires experts.

The chat-game version can be solved, but the foundational goal is still far away.



> Yes, the Turing Test tries to give an operational definition of 'think': can a computer give the same response as a thinking human?

So it's quite neat that we've managed to beat the test by brute-forcing the right NN architecture on a goal function that literally is just "give the same response as a thinking human", in the full generality of that phrase.
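To make that concrete: the training objective behind modern LLMs is next-token prediction on human-written text, i.e. minimizing the negative log-probability of the token a human actually wrote next. A minimal toy sketch (the vocabulary, probabilities, and function name here are illustrative, not from any real model):

```python
import math

def next_token_loss(predicted_probs, human_token):
    """Cross-entropy for one step: negative log-probability that the
    model assigned to the token the human actually wrote next."""
    return -math.log(predicted_probs[human_token])

# Toy vocabulary and a model's predicted distribution for the next token.
predicted = {"yes": 0.7, "no": 0.2, "maybe": 0.1}

# Suppose the human actually wrote "yes"; training pushes this loss down,
# which is exactly "give the same response as a thinking human".
loss = next_token_loss(predicted, "yes")
print(round(loss, 4))  # -ln(0.7) ≈ 0.3567
```

Summed over every position in a large corpus of human text, this single objective is the "goal function" the comment describes.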

> But even then, this goal is far away. We are not asking just about short chat imitation games but about whether a computer can operationally do everything humans are capable of in the textual medium. Judging this often requires experts.

That is moving the goalposts. Or it's just another game entirely, which is fine on its own as a further milestone.



