
The attitudes I'm seeing in this debate are mind-blowing to me. Explain to me why you think the future of AI, and therefore the future of humanity, is better off controlled by narrow-minded, shareholder-value-maximizing entities like Microsoft or Silicon Valley VCs.

What's wrong with the idea that AI should be developed with the goal of being safe and for the benefit of all humanity?

As for the objection that "safe" is being defined by a narrow group of 'great men': I'm sure Ilya would agree that it's not ideal for such an important milestone in human history to be decided by such a small group of people. But what alternatives do we have right now? What if OpenAI decided to turn over their technology to the US government or the UN? Would that make you feel better?



> What's wrong with the idea that AI should be developed with the goal of being safe and for the benefit of all humanity?

Nothing. I just don't believe that's what will happen here. All I see is two sides trying to steer things in the direction they profit most from. One side says, more or less openly, that that's what they want to do, while the other side says "oh no, we don't want that at all, we only have your best interests at heart!" And I hate people who try to take me for a ride.

> What if OpenAI decided to turn over their technology to the US government or the UN? Would that make you feel better?

Replace it with "a UN replacement with only democratic governments" and yes, that would make me feel immensely better. Or put it out into the open, which would be the best option if they really care about humanity.


I dunno man, not sure why you're so cynical. There actually are people out there who are motivated by higher ideals than their own self-interest. I don't know if Ilya and the OpenAI board are in that category, but I believe the world is a better place if OpenAI is still governed by the principles in its current charter rather than as just another extension of Microsoft/VC will.


> I dunno man, not sure why you're so cynical.

I've seen the "we are motivated by higher goals, trust us" plot play out one too many times, just with different actors. At some point I stopped assuming the best and instead opted for "assume the worst, maybe get pleasantly surprised once in a while". So far, I've been proven correct more often than not, as sad as that is.

> There actually are people out there who are motivated by higher ideals than their own self-interest.

Oh, sure there are. Just many more who aren't. And it's really hard to tell them apart, especially when both say the same things. But we live in a society that rewards the latter, so the safe option (the irony isn't lost on me) is to assume people aren't.

> I don't know if Ilya and the OpenAI board are in that category

That's kind of the problem, isn't it? If they aren't, then OpenAI isn't governed by the principles in its charter in either case. And in that case, what is better: an OpenAI that is scrutinized at every step because no one trusts Microsoft, or an OpenAI that can do whatever it wants because "safe AI is our goal, trust us," until we wake up one day and find out they too betrayed our trust?



