If you are a CEO and you are unable to convince your board that you are correct, the proper thing to do is resign, for you have failed to do your job. The main job of a CEO is to be a leader, and part of a leader's job is to sell people your vision. If you can't sell the vision to the people who cut your paycheck, then either you aren't good at your job, or you and the people in that position are incompatible; and given that the board outranks the CEO, that means you should go.
This is all just business, right? If you're holding the nut flush and you think the board's got rags, you play it out. Whatever reasons the board had to be done with Altman, they did not have the cards, and he did: the team, as I understand it, believed itself to be dependent on Altman (or at least, on not-the-board) for their stakes to be worth anything.
I see how it'd be really dicey for a CEO to pitch a fit on the way out under ordinary circumstances. But also, if you fire your CEO, he may very well go stand up a terrifying competitor, which is basically what happened here, right?
Not according to Sam Altman, or for that matter the other OpenAI board members who have been involved in this whole kerfuffle. They all say AI is an existential risk, and that OpenAI is necessary to mitigate that risk.
In other words, they themselves have told us to judge them by a much higher ethical standard than "just business". And all of them (not just Altman) have failed when judged by that standard. Even if they're wrong and AI actually isn't an existential risk, that doesn't mean they're off the hook for their behavior, because they themselves set the standard by which they are to be judged.
OK. Sure. But if you're disinclined to hold businesses to higher standards than the norms of business, is there some other argument that would be persuasive about Altman's obligation during the board ouster debacle?
> if you're disinclined to hold businesses to higher standards than the norms of business
The only reason to be so disinclined in the case of OpenAI would be that you're sure their existential risk claims are wrong. If that's the case, then I suppose you could just shrug your shoulders and ignore the whole kerfuffle, at least as long as you have no skin in the game. I personally still think the conduct of everyone involved has been childish and unprofessional, but if we take existential risk off the table, then the issue is just that the norms of business in our current culture are childish and unprofessional, which is disappointing, but has no simple fix.
Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
> Can I reasonably assume that people who are very riled up about OpenAI not meeting its original (weird, IMO) moral standard are revealing that they do in fact take existential AI risk seriously?
I would say many of them probably do, yes, quite possibly in large part because of OpenAI's own rhetoric on the subject.
> And if you realize there's no way to convince them that they are wrong?
Then you resign and start your own company whose charter is written the way you think it should be written.
> But it's a matter of survival of the entire universe?
If this is actually true, then, as I've already said elsewhere in this discussion (and in previous HN discussions of OpenAI), Altman is the last person I want in charge of this technology. And the same goes for everyone else associated with OpenAI. None of them have come anywhere remotely close to showing the kind of maturity, judgment, and ethics that would qualify them to be stewards of an existential risk.
If Altman sincerely believes that AI is an existential risk, then he should resign from OpenAI and disqualify himself from working on it or being involved in it in any way. That's what he would do if he were capable of taking an honest look at himself and his actions. But of course I won't be holding my breath.
But the release of ChatGPT was not a matter of the survival of the entire universe. So why are you asking the question?
The appeal-to-extremes argument is a fallacy that doesn't add anything to the discussion. It's simply a way of pushing a certain view, because you can always find some hypothetical that might justify an action.
Fortunately we are dealing with a concrete situation so we don't need to talk in hypotheticals.
> And if you realize there's no way to convince them that they are wrong?
Leave.
> But it's a matter of survival of the entire universe?
Literally nothing is like that. And even after correcting your hypothetical to something reasonable, Altman was almost certainly closer to the "risk destroying the universe" camp than the board.
No, there's a group that truly believes that AI progress will inevitably end up creating an omnipotent God. And that if we're not careful enough, that newly created God will be evil.
And judging by how Altman, and indeed everyone associated with OpenAI, has behaved through this whole kerfuffle, any AI they create will be evil if it has that much capability.
The "entire universe" or "future of humanity" type arguments are essentially psychopathy. To be clear, I am not calling you a psychopath.
I do very strongly urge you to cast that kind of thinking aside.
The fact is, in many cases where those kinds of arguments are seen, we will also see someone who really, really does not want to be told what to do, and/or who cannot tolerate a decision they do not agree with.
One can justify any means to any end thinking like that, which is why I am tagging that kind of reasoning as psychopathy.
It is unhealthy.
Take care, live well, peace and all that. I mean nothing personal.
Just give this all some real thought. You are extremely likely to be better off for having done it.
Okay, but let's imagine the board wants to detonate a nuclear bomb. You think it's not a great idea, because it might cause a lot of deaths. You don't see a way to convince them otherwise; however, you do see a way to lie to them, keeping them from that idea and thereby saving many lives, and possibly the world from mutually assured destruction. Would that be unethical? Would that be psychopathy?
Do you think that OpenAI is somehow on the same level of importance as "the survival of the entire universe" or "[detonating] a nuclear bomb"? I'm trying to understand why you're comparing this scenario to things of that magnitude. Seems like one hell of a gigantic stretch.
I'm not talking about OpenAI, but speaking in general. I wouldn't know what is happening in OpenAI's case; I just don't have the information.
I just don't like the idea of some group of people having the superior power of not being questioned about their ethics (the board, in this case). It sounds a bit cultish to assume that boards always have good intentions.
And when one does question them, one should remain ethical. Should that discussion get ugly, holding to those ethics does a world of good when it comes down to others having to trust that your intent was as just and true as you could manage.
> let's imagine, the board wants to detonate a nuclear bomb.
In the case of OpenAI, it was Altman, not the board members he should have been arguing his position with openly, who wanted to undertake actions that others thought were too risky. So your argument, if it were valid, would not apply to Altman, but to the other OpenAI board members whose confidence in him had been destroyed. Do you think they should have lied to Altman to get him to stop doing things they thought were too risky?