In this case, the road to hell seems to be paved with intent to... make it easier to goof around and make silly prank videos, I guess? A lot of deepfake projects seem to be aimed in that direction and while there's nothing wrong with that in itself, it's hardly a compelling use case that outweighs the obvious harms that everyone has been talking about for years now. That's why I say that if someone cared about those harms they wouldn't be making this. Of course there are always things we tell ourselves: "if I didn't make this someone else would", "by making this easier (faking videos of real people) I'm training the public to be more skeptical", etc... etc... At what point is it obvious that these are excuses and the person really doesn't give a damn?


So the truth here is that the reason they're doing this is that they aren't yet good enough to sell to Hollywood. Not to say that Hollywood isn't using deep learning[0], but there it's typically a combination of classical tools and deep learning tools. These companies, by contrast, all seem to have an aversion to traditional tools and want to be deep learning all the way down. That's a weird tactic, and the fact that people are funding such companies is baffling. I can't even imagine a future where you don't want traditional tools, even if ML could do 99% of the work. Hell, even 100%. Language is pretty lossy, and experts are still going to want to make fine-grained edits.

[0] Disney Research is pretty cool: https://www.youtube.com/@DisneyResearchHub



