OpenAI Boardroom Battle: Safety First (fabricatedknowledge.com)
45 points by skmurphy on Nov 19, 2023 | 42 comments


> Let’s examine the structure from the bottom to the top. At the bottom is OpenAI Global, the capped profit entity. The restriction on the capped profit company is that it must start to return a profit to the nonprofit after a 100x return in capital. Later-stage investors shouldn’t expect to receive such a return.

> Microsoft, in particular, has tied its wagon closest to OpenAI. It’s a later investor, so it cannot make the ~100x return, but it invested $10 billion dollars in OpenAI, mostly in the form of cloud computing credits. The deal’s terms are not quite disclosed, but we all understand that Microsoft has the largest economic stake in OpenAI.

This explains the deal between Microsoft and OpenAI. They weren't able to find investors who were satisfied with a lower return, but Microsoft wanted something other than a financial return: access to the models. So the restriction on returns ensured a much worse deal for OpenAI, as the only potential investors were big tech companies to whom access to the models is useful.
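(Aside: to make the cap mechanics concrete, here's a minimal sketch of how such a capped return would split proceeds between an investor and the nonprofit. The 100x multiple comes from the article; the dollar figures below are purely hypothetical.)

    # Illustrative sketch of the "capped profit" mechanic described above.
    # The 100x multiple is from the article; all dollar figures are made up.
    def split_proceeds(invested, total_return, cap_multiple=100):
        cap = invested * cap_multiple              # most the investor can ever receive
        to_investor = min(total_return, cap)       # investor's take is capped
        to_nonprofit = max(total_return - cap, 0)  # everything above the cap goes to the nonprofit
        return to_investor, to_nonprofit

    # Hypothetical: a $10M early investment that eventually returns $1.5B.
    print(split_proceeds(10_000_000, 1_500_000_000))  # -> (1000000000, 500000000)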

This entire non-profit/capped-profit structure turned out to be too clever by half. Had OpenAI been a standard business, they wouldn't have had this problem.


I think this assessment is likely an accurate story. But I disagree on the takeaway and predictions.

> The current structure of a non-profit board without much else in place should not be running a critically important company like OpenAI.

This seems insane, given the alternative of running it like your typical YC startup.

> I believe that the longer-term problem of safe AI is important, but you don’t do that with sudden shakeups of the founder and an exodus of half of the employees. OpenAI has been doing something special for a long time, beating the likes of better-funded research organizations like Google or Microsoft. It’s probably in everyone’s best interest to keep the team together.

I'm so curious about all these takes about everybody leaving if Altman is out. And it's weird to support it by talking about the long-time specialness that existed before Altman was so day-to-day involved, and that has been under tension for a long time too (see all the departures that led to Anthropic existing). They didn't oust him out of nowhere. The schism is there, and either side winning doesn't magically make for unity.

I have no idea which side I'd root for, but these takes that the board massively screwed up all look very one-sided, ignoring the disagreements over the massive pivot OpenAI has undergone in recent years. I'd bet we see lots of people on the "losing" side leave OpenAI over the next months regardless of the outcome.


One way or another the current OpenAI team will split into two teams - one headed by Sam and focused on making money and another headed by Ilya and focused on creating a safe AGI.


Pretty sure that's the reason Anthropic exists, so Ilya would probably just go there. This isn't the first clash over AI safety Altman has caused.


Only if we define safe as "controlled by Ilya and/or some clique of 'great men'". But at least they are good at PR. After all, who could say something against safety?

And no, I also don't believe that Sam A. has my best interests at heart, but at least he doesn't hide behind "safety" to pretend otherwise.


The attitudes I'm seeing in this debate are mind-blowing to me. Explain to me why you think the future of AI, and therefore the future of humanity, is better off controlled by narrow-minded, shareholder-value-maximizing entities like Microsoft or Silicon Valley VCs.

What's wrong with the idea that AI should be developed with the goal of being safe and for the benefit of all humanity?

As for the objection that "safe" is being defined by a narrow group of 'great men': I'm sure Ilya would agree that it's not ideal that such an important milestone in human history is being decided by such a small group of people. But what alternatives do we have right now? What if OpenAI decided to turn over their technology to the US government or the UN? Would that make you feel better?


> What's wrong with the idea that AI should be developed with the goal of being safe and for the benefit of all humanity?

Nothing. I just don't believe that's what will happen here. All I see is two sides trying to steer things in a direction they profit most from, with one side saying - more or less openly - that that's what they want to do, while the other side says "oh no, we don't want to do this at all. We only have your best interests at heart!" And I hate people who try to take me for a ride.

> What if OpenAI decided to turn over their technology to the US government or the UN? Would that make you feel better?

Replace it with "a UN replacement with only democratic governments" and yes, that would make me feel immensely better. Or put it out into the open, which would be the best option if they really care about humanity.


I dunno man, not sure why you're so cynical. There actually are people out there who are motivated by higher ideals than their own self-interest. I don't know if Ilya and the OpenAI board are in that category, but I believe the world is a better place if OpenAI is still governed by the principles in its current charter than if it becomes just another extension of Microsoft/VC will.


> I dunno man, not sure why you're so cynical.

I've seen the "We are motivated by higher goals, trust us" plot play out one too many times, just with different actors. At some point I stopped assuming the best and instead opted for "assume the worst, maybe get pleasantly surprised once in a while". So far, I've been proven correct more often than not - as sad as that is.

> There actually are people out there who are motivated by higher ideals than their own self-interest.

Oh, sure there are. Just many more who aren't. And it's really hard to tell them apart, especially if both say the same things. But we live in a society which rewards the latter, so the safe option (the irony isn't lost on me) is to assume people aren't.

> I don't know if Ilya and the OpenAI board are in that category

That's kind of the problem, isn't it? If they aren't, then OpenAI isn't governed by the principles in its charter in either case. And in that case: what is better, an OpenAI which is scrutinized at every step because no one trusts Microsoft, or an OpenAI which can do whatever it wants because "safe AI is our goal, trust us", until we wake up one day and find out they too betrayed our trust?


> But at least they are good at PR.

What could possibly make you think that Ilya and AI Notkilleveryoneism are winning the PR battle? HN, Tech Twitter, VCs, and major industry are all raking them over the coals. All of those people are trying to make money and get rich. Imagine thinking you're the underdog. The scientists are trying to prevent catastrophe for everyone. This is the climate change debate all over again. You wouldn't have the cojones to think climate change was real if that view was unpopular.


"AI safety" is just a rehashed version of "think of the children".


This assumes that AGI is capable of being done “safely.”

That is an interesting assumption.


Properly and absolutely airgapped should do it. The internet is the big vector that can't be closed once exploited.


That's like saying we can solve aging and cancer by a combination of genetic and epigenetic interventions.

It might even work, if you could do it.

Actually doing that is going to be essentially impossible, though the reasons are completely different.


Huh? We technically know how to airgap something already. You wanna be absolutely sure? Build a bunker underground, make the power and all utilities independent, allow nothing connected in there, and there you go. Do we technically know how to solve aging and cancer already?


> though the reasons are completely different

For example, but not limited to: there's no point having an AI you're not using, and using it — even as an oracle — involves some kind of interaction with the outside world in the form of what questions you ask and what actions you take in response.


That’s a very reasonable argument.


Ilya, if he wants to fight, will probably just hire Wachtell, Lipton, Rosen & Katz. The teams on both sides will be lawyers, not VCs or engineers.


A soap opera for nerds.


The real tragedy is the failure on the part of US and EU governments to regulate ML/near-AGI. It shouldn't be up to Ilya to keep the public safe or to determine if there is any real danger -- at least not on his own, as part of an organization that has a conflict of interest. But we don't even have basic privacy protections or social media regulation, even after multiple election interferences.

I hate being right about this crap, but OpenAI's tech will get integrated into a lot of the daily tech people use, and as always abuse will flourish: the 80% will suffer and the 20% will prosper.


I think it's awesome to see this level of excitement, but what are the leading theories behind Ilya/board's silence?


The board's silence is normal; you're just too used to this weird 'everything in public on twitter all the time' type of behaviour that emanates from this particular sector.

Do you see the boards of GM and IBM slapfighting on twitter? No. It's a thing some people have taken to doing, and - in my opinion - it's not a good thing. It's incredibly immature.

This 'I'm in the office lol' behaviour is also pretty weird. It's more the sort of thing you'd expect to see on the pro-wrestling circuit than the type of behaviour you'd expect from a CEO.


When they aren't willing to speak publicly, it makes a big internal breakthrough seem more and more likely.


They finally talked to a lawyer?


Thoughtful analysis of the dynamics in play at OpenAI that may have led to Altman's firing and will likely result in his reinstatement.

Key take-aways (excerpts from article):

"The board made a blunder. OpenAI’s employees will likely get their CEO back by Monday, and Satya Nadella’s 10 billion dollars in Azure credits will have some vote in the future of OpenAI.

What’s clear is that the board is grossly mismanaged. A non-profit board should not be running a critically important company like OpenAI. Just look at the turnover and lack of transparency on re-election.

I think that Ilya [Sutskever] will leave OpenAI when Sam is reinstated. He has to be the player who initiated the power play. The entire current board will leave, and a new board with fewer AI safety people (sadly) will be reinstated.

I would not be surprised to see the OpenAI charity and capped-profit structure flipped, with a formal board at the GP that becomes the real locus of power.

The boardroom move was amateur and sudden. And as much as boards have technical legal power, so do the organizations they rule. It’s all a construct, and the people of OpenAI will get their way. And hopefully, a better governance structure.

I believe that the longer-term problem of safe AI is important, but you don’t do that with sudden shakeups of the founder and an exodus of half of the employees.

OpenAI has been doing something special for a long time, beating the likes of better-funded research organizations like Google or Microsoft. It’s probably in everyone’s best interest to keep the team together. "


> A non-profit board should not be running a critically important company like OpenAI

A for-profit board should not be running a critically important company like OpenAI.


There's been a rewrite btw; it's mostly a critique of the non-profit board structure, not the non-profit itself.


Thanks, I did not realize this. My excerpts are from the original version; most of them no longer appear in the text.


>What’s clear is that the board is grossly mismanaged. A non-profit board should not be running a critically important company like OpenAI. Just look at the turnover and lack of transparency on re-election.

This is an insane thing to say. It's an argument against OpenAI existing at all, and the alternatives (for-profit boards and a military research project) are both much, much worse.


Why would the board agree to all that? If they are worried about the direction Altman is taking OpenAI, then surely the price for Altman’s return is a strengthening of the current structure and some guarantees from Altman to slow down, not a weakening and hollowing out of the structure.


> Why would the board agree to all that?

Because if they don't agree, then all their best employees, and probably also their Azure cloud credits, will leave to join Sam and Greg at a new company picking up right where they left off.


The remaining org would likely desiccate without Microsoft and other commercial investment and revenue sources, but I don't think you can assume a new entity would be off to a strong start without the IP, which OpenAI itself ultimately owns (the brand, the software products and infrastructure, the customer accounts, the trained systems, etc.).

Unraveling this weekend's drama is way more complicated than you suggest. Both factions actually have quite a bit of leverage.


All the "best people" you mention are just sales guys or people who are trying to create a product out of the real engine.


Only 3 people are reported to have resigned.

There’s no evidence that a mass resignation will happen.


OpenAI is the hottest of the hot; there's no shortage of talented people who want to work there. The people who decide this is their reason to leave are probably the ones least aligned with OpenAI's mission.

The same goes for the Azure credits. If MS decides they want out then other big tech companies will be lining up to take their place. This makes Microsoft’s threats rather hollow.


I posted these excerpts about an hour after the original post went up. The article has now been substantially re-written (with no indication in the text of the changes) and most of the sentences I excerpted have been removed. It was an accurate summary at the time I posted it.


Strange to see this stated as if it were clear fact. OpenAI, when founded, had the goal of AGI in mind. That's why they chose this structure: so that the corporate side would not have almost unbounded power once the technology matures and total revenue becomes a sizable proportion of GDP. Weird and idealistic? Maybe, but also possibly correct. If the only way to get there is through raw capitalism, society is in for a big shock. Bigger than it would be with its original model.


Let's be honest… the cat is out of the bag and now it's a race. If OpenAI tries to regulate itself too much, it will eventually be eclipsed by another player that only worries about government regulations and possibly just accepts the fines, like many capitalists. I say buckle up and get ready for some massive disruption! Better for the US (at least for us citizens) to lead the way and maintain the power of AGI than many of the other alternatives… let's start working on the post-capitalist society!


If OpenAI changes their governance structure, I think every single major AI lab will be controlled by corporations. What makes you so optimistic about a world where the most powerful technology of all time is controlled by a narrow set of capitalist interests? Just think about what that world would be like; I'm pretty sure it's not going to be a 'post-capitalist' society. If anything, it'll be the exact opposite.


We'll see. Apparently they're meeting right now at OpenAI and they made Sam use a guest badge. The level of pettiness is mind blowing.

https://twitter.com/sama/status/1726345564059832609


The pettiness seems to be Sam tweeting about this. For a person who is, at the moment, not an employee, using a guest pass is correct.


What badge do you suggest they should have given him? Those badges actually serve a function.



