>The question is: if we don't know anything about this unknowable threat, how can we protect ourselves against it? In fact, since we're starting from 0 information, anything we do has equal chances of backfiring and bringing forth AGI as it has of actually preventing it. Yudkowsky is calling for random action, without direction and without reason.

Are you sure you read the essay? That's literally the question he answers.

At any rate, we do have more than '0 information', and if you make an honest effort to think of what to do you can likely come up with better than 'random actions' for helping (as many have).



>> Are you sure you read the essay? That's literally the question he answers.

My reading of the article is that he keeps calling for action without specifying what that action should be, and he tries to justify this by saying he can't know what AGI would look like (so he can't really say what we can do to prevent it).

>> if you make an honest effort to think of what to do you can likely come up with better than 'random actions' for helping (as many have).

Sure. If my research gets up one day and starts self-improving at exponential rates I'll make sure to reach for th
