
Taking the broader view of this nature feels like an attempt to change the narrative.

The entire point of transparency is to build these foundations so models don't inherit human biases, for starters. Right now we have zero introspection into this and no ability to improve upon it with the widely deployed models in use today. That has already created problematic situations, to say nothing of the problems we haven't discovered yet.

Transparency here is a very good thing: it helps prevent AI from inheriting negative human ideas and biases, and it broadens access to improved training data in a way that benefits everyone.



I think they're parallel concerns, and everyone has their own priorities. Openness of the models and their training is important, but for most people it wouldn't really matter anyway, because they can't afford the computing power to do their own training.

I care about all that in the abstract but what I can download and use on my computer is more concrete and immediate.


I'm a big believer in not allowing the pursuit of perfection to cause us to lose sight of the good things that we have.

Yes, these open models could stand to be more open and I hope that we'll see that in the future. But at the same time I'm extremely grateful to the companies who have released their weights under reasonable terms. Them doing so has undeniably led to an enormous amount of innovation and collaboration that would not have been possible without the weights.

If we constantly downplay and disparage the real efforts that companies make to release IP to the world because they don't go as far as we'd like, we're setting ourselves up for a world where companies don't release anything at all.


>Yes, these open models could stand to be more open and I hope that we'll see that in the future

The operative word here is hope, which means we may not see more get open sourced over time, especially if there is no pressure on companies to do so.

>If we constantly downplay and disparage the real efforts that companies make to release IP to the world because they don't go as far as we'd like, we're setting ourselves up for a world where companies don't release anything at all.

I don't mean any of this as disparagement or downplaying, but companies aren't releasing this stuff because it makes everyone feel good. It's a tactic. They're only open sourcing something because they expect to get something out of it. That's fine; I'm all for it. It's a valid reason, and often it can be a two-way street.

What it isn't, though, is any company saying: "We are open sourcing this today because we want to encourage more transparency and auditability as AI takes on more critical roles in society, to ensure, to the best of our ability and our community's, that it does not inherit negative human biases in the domains where it's being applied."


> companies aren't releasing this stuff because it makes everyone feel good. It's a tactic.

It's a tactic, but one of the primary reasons to expect it to be effective is building goodwill in the community. If the goodwill dries up then most of the reason to open anything up is gone.


Goodwill doesn't equal transparency or auditability, which are the core concerns that keep being raised about AI models and their training.



