
Oh I'm totally an armchair lawyer, so my ruminations were not grounded in laws or legal precedent :-) I do have some background on the patent side of things, where independent reinvention is also not a defence against infringement, but not so much in copyright, so this was educational.

However, has there been any case where the infringement was not only unintentional, but also unexpected?

That is, if you look at cases of unintentional infringement, these are typically cases where the act of reproducing the content was intentional, but there was a lack of awareness or confusion about the copyright protections of that content. (This paper was useful for background: https://www.law.uci.edu/faculty/full-time/reese/reese_innoce...)

But I could not find a case where the act of copying itself was unintentional.

In this case, looking at how LLM training works and what LLMs do, it is surprising that they can reproduce training content verbatim. The fact that they reproduced those outputs is undeniable, but how do existing law and jurisprudence apply to an unprecedented case like this, where the reproduction happened through some magic black box that nobody can decipher?



These are interesting questions, but they are not legal questions. Intent is not an element of infringement; it is only an element of willful infringement. Therefore it can never be used as a defense against infringement on its own.

>The fact that it reproduced those outputs is undeniable, but how does existing law and jurisprudence apply to an unprecedented case like this where the reproduction was through some magic black box that nobody can decipher?

People love to ponder... but ponder how the law should handle that. "Yes, your honor, our business has a magical black box that violates the law, we're just not sure how! Therefore we can't be liable." How does that even make sense? On what principle should that apply here and not elsewhere? Can your magic black box murder? Defame?


> On what principle should that apply here and not elsewhere? Can your magic black box murder? Defame?

Good questions, and I think relevant to the current point. We're already seeing cases like that pop up with the libel suits or the recent, tragic AI-assisted suicides.

It's very clear that these models were not designed to be "suicide-ideation machines", yet that turned out to be one of the things they do! In these cases the questions are definitely not going to be about whether the AI labs intended these outcomes, but whether they took sufficient precautions to anticipate and prevent such outcomes.

One possible defense for the AI labs could be "these machines have an unprecedented, possibly unlimited, range of capabilities, and we could not reasonably have anticipated this."

A smoking gun would be an email or report outlining just such a threat that they dismissed (which may well exist, given what I hear about these labs' "move fast, break people" approach to safety.) But without that it seems like a reasonable defense.

While that argument may not work for this or other cases, I think it will pop up as these models do more and more unexpected things, and the courts will have to grapple with it eventually.



