Not according to Sam Altman, or for that matter the other OpenAI board members who have been involved in this whole kerfuffle. They all say AI is an existential risk, and OpenAI is necessary to mitigate that risk.
In other words, they themselves have told us to judge them by a much higher ethical standard than "just business". And all of them (not just Altman) have failed when judged by that standard. Even if they're wrong and AI actually isn't an existential risk, that doesn't mean they're off the hook for their behavior, because they themselves set the standard by which they should be judged.
OK. Sure. But if you're disinclined to hold businesses to higher standards than the norms of business, is there some other argument that would be persuasive about Altman's obligation during the board ouster debacle?
> if you're disinclined to hold businesses to higher standards than the norms of business
The only reason to be so disinclined in the case of OpenAI would be that you're sure their existential risk claims are wrong. If that's the case, then I suppose you could just shrug your shoulders and ignore the whole kerfuffle, at least as long as you have no skin in the game. I personally still think the conduct of everyone involved has been childish and unprofessional, but if we take existential risk off the table, then the issue is just that the norms of business in our current culture are childish and unprofessional, which is disappointing, but has no simple fix.
Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
> Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
I would say many of them probably do, yes--quite possibly in large part because of OpenAI's own rhetoric on the subject.