I've seen the "We are motivated by higher goals, trust us" plot play out one too many times, just with different actors. At some point I stopped assuming the best and instead opted for "assume the worst, maybe get pleasantly surprised once in a while". So far I've been proven correct more often than not, as sad as that is.
> There actually are people out there who are motivated by higher ideals than their own self-interest.
Oh, sure there are. Just many more who aren't. And it's really hard to tell them apart, especially when both say the same things. But we live in a society that rewards the latter, so the safe option (the irony isn't lost on me) is to assume people aren't.
> I don't know if Ilya and the OpenAI board are in that category
That's kind of the problem, isn't it? If they aren't, then OpenAI isn't governed by the principles in its charter either way. And in that case, which is better: an OpenAI that is scrutinized at every step because no one trusts Microsoft, or an OpenAI that can do whatever it wants because "safe AI is our goal, trust us", until we wake up one day and find out they too betrayed our trust?