
Your argument seems to imply that if an AGI tricks us into giving it the ability to destroy us, that's basically okay because its goals are "better" than human goals.

Yeah, that's about right.

I don't consider goals that are compatible with the destruction of humanity to be "better" than goals which are aligned with human interests.

Well, of course you wouldn't; neither you nor I could possibly understand what a superhuman AGI does or thinks.

I don't think people realize that actually creating a superhuman AGI is effectively creating a God, in every sense that people interpret that word today.


