> And if you realize there's no way to convince them that they are wrong?
Then you resign and start your own company whose charter is written the way you think it should be written.
> But what if it's a matter of survival of the entire universe?
If this is actually true, then, as I've already said elsewhere in this discussion (and in previous HN discussions of OpenAI), Altman is the last person I want in charge of this technology. And the same goes for everyone else associated with OpenAI. None of them have come anywhere remotely close to showing the kind of maturity, judgment, and ethics that would qualify them to be stewards of an existential risk.
If Altman sincerely believes that AI is an existential risk, then he should resign from OpenAI and disqualify himself from working on it or being involved in it in any way. That's what he would do if he were capable of taking an honest look at himself and his actions. But of course I won't be holding my breath.