Next, somebody grabs her work (copyrighted by the clients she works for) without permission and tries to create an AI version of her style. When confronted, the guy's response is basically: "meh, ah well".
Doesn't matter if it's legal or not, it's careless and plain rude. Meanwhile, Hollie is quite cool-headed and reasonable about it. Not aggressive, not threatening to sue, just expressing civilized dislike, which is as reasonable as it gets.
Next, she gets to see her name on the orange site, reading things like "style is bad and too generic", a string of cold-hearted legal arguments, and "get out of the way of progress".
How wonderful. Maybe consider that there's a human being on the other end? Here she is:
Napster was a peer-to-peer file sharing application. It originally launched on June 1, 1999, with an emphasis on digital audio file distribution. ... It ceased operations in 2001 after losing a wave of lawsuits and filed for bankruptcy in June 2002.
Use of the output of systems like Copilot or Stable Diffusion becomes a violation of copyright.
The weight tensors are illegal to possess, just like it's illegal to possess leaked Intel source code. The weights are like distilled intellectual property. You're distributing an enormous body of other people's work, to enable derivative work without attribution? Huge harm to society, make it illegal.
If you use the art in your product, on your website, etc., you risk legal action. Just like if I publish your album on my website. Illegal.
The companies that train these systems can't distribute them without risking legal action. So they won't do it. It's expensive to train these models. When it's illegal, the criminals will have to pay for the GPU time.
It will always exist in the black-market underground, but the civilized world makes it illegal.
That's where this is going, I hope. Best case scenario.
Piracy made music acquisition too convenient for the consumers, so an alternative had to be created - but this alternative really only helps the labels, and not the people actually making the music.
It's not clear to me that the streaming world is better for artists than the Napster one. At least anyone wanting to legally listen to music back then would buy albums, rather than just having a Spotify subscription. Not that royalties on physical CDs were great, but my understanding is they worked out better for most artists than streaming royalties do.
I don't know what a potential analogy would be here with stable diffusion or dall-e or whatever, but I don't know that people were able to immediately identify the potential downsides with "winning" against piracy, either.
> We effectively didn't, though, at least as far as artists are concerned. Streaming revenue is abysmal for artists.
But that's not Napster's fault. Spotify pays a decent amount of money per play of a song; the artist only sees a tiny percentage of it because music middlemen are still trying to relive the '90s.
And that's why I buy music off Bandcamp whenever I can, and thankfully most of the music I listen to is on smaller labels, so usually even more money goes to artists.
I'm just saying that the solutions that pop up once you "win" are not necessarily ones that provide a win for the people you are trying to protect.
I distribute my music through CDBaby, and looking at transaction history I've been getting $3.65 per thousand streams. That's not nothing, and is much higher than I'd get from radio.
Spotify is taking in a lot of money and paying 70% of it to labels, which adds up to a lot of money for artists, depending on their agreement with their label/distributor. But the per-stream rate is still very low because there are trillions of streams annually.
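The arithmetic behind that per-stream rate is easy to sanity-check. Here's a rough sketch; the revenue and stream-count figures are illustrative assumptions, not Spotify's actual numbers:

```python
# Back-of-envelope: why per-stream payouts look tiny even when
# most of the platform's revenue is paid out.
# All figures below are illustrative assumptions, not real Spotify numbers.

annual_revenue = 12e9      # assumed gross platform revenue, USD/year
payout_share = 0.70        # share paid out to rights holders
annual_streams = 1.2e12    # assumed total streams per year

per_stream = annual_revenue * payout_share / annual_streams
per_thousand = per_stream * 1000

print(f"payout per stream:       ${per_stream:.4f}")    # $0.0070
print(f"payout per 1000 streams: ${per_thousand:.2f}")  # $7.00
```

A big total payout divided by an enormous stream count lands at fractions of a cent per play, which is where "per 1000 streams" figures like the one in the parent comment come from.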
I'm sorry, I don't want to sound dense but I'm not clear what your point is.
Are you saying that CD Baby is a better distribution technique than standard labels because you get good margins? I didn't know CD Baby until I just looked them up but they appear to be a distributor, so your $3.65 metric is still being paid by Spotify/Amazon/Apple. Please correct me if I'm wrong but that is much higher than the normal published numbers by 10-100x.
Is this an RIAA moment where labels are trying to make other people look like jerks rather than accepting what they do, or are people using the "per 1000 streams" poorly because they will always look worse on successful platforms?
I think the distribution of streaming revenues is generally reasonably fair, and people who say things like "Spotify pays artists nothing" are confused about either (a) how much money there is to divide up or (b) where it is going.
Anything is a lot of money if you offer no basis for comparison. A million dollars seems like a lot until you say that it's what you paid to build a downtown skyscraper.
The math here makes a flawed presumption: that you play a song only once after buying it from iTunes.
I obviously don't know your listening habits, so for you that may be the case. But people will listen to a single song far more often than once. Otherwise there would have had to be 77,946,027 new users on Spotify[1] last month, all playing Ed Sheeran once. Clearly nonsense.
If you play every $1 iTunes song eight times on Spotify, the costs (and therefore fees) will be on par: $10/month.
70% of Spotify revenue goes to the artists (content owners, to be precise); I doubt that was the case when you bought a CD (I have found numbers closer to 40%). It is not abysmal; revenue has arguably never been better.
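The "eight plays" parity claim can be checked with quick arithmetic. This sketch assumes a hypothetical $1 download, a $10/month subscription, and 80 streams per month; all numbers are illustrative:

```python
# Consumer-side cost-per-play comparison (illustrative assumptions).

song_price = 1.00     # USD for one download
plays = 8             # times the buyer replays that song
cost_per_play_bought = song_price / plays  # $0.125 per play

subscription = 10.00  # USD per month for streaming
monthly_streams = 80  # assumed plays/month (10 songs x 8 plays each)
cost_per_play_streamed = subscription / monthly_streams  # $0.125 per play

assert cost_per_play_bought == cost_per_play_streamed
```

So if your monthly listening amounts to roughly ten songs played eight times each, the effective per-play cost of the two models is the same; heavier listeners come out ahead on the subscription.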
It doesn't matter if it's Spotify's fault - I'm not saying they are the evil empire. I am saying that streaming is how we "beat" piracy, and it was not a panacea for the people it was supposed to protect - unless we consider the labels the people it was supposed to protect.
You're also comparing apples to oranges on revenue. 40% of the revenue of a $10-$18 CD sale is a lot different than 70% of a $10/mo subscription being split out over however many artists someone might listen to on spotify.
Lots of artists talk about how they simply can't make a living off streaming royalties - artists that were able to do so in the era of album sales. Obviously any sort of comfortable living requires merch sales and touring.
Comparing artists that could make record sales earlier to all current streaming artists is comparing apples to oranges.
I agree that Spotify's revenue split is not perfect (it is in fact worse than what you described), but it is still much fairer than record sales. In the '90s, just having a tape/CD sold in physical shops (i.e. not only at concerts) would have been a success on its own.
Now every artist can publish their work on Spotify and start to earn money, possibly getting noticed through it. It is much more feasible now to not be part of a record label.
> The weights are like distilled intellectual property. You're distributing an enormous body of other people's work, to enable derivative work without attribution? Huge harm to society, make it illegal.
The thing is that you're distributing only the instructions for making other people's work. There are art books and articles that explain the styles of certain artists and the techniques they use to achieve them; you could probably recreate "the Mona Lisa but in the style of Marc Chagall with cool glasses on" with real paint if you had previously stared at both the Mona Lisa and Marc's art for hours at a time. Are you infringing upon either of their copyrights by combining them? Probably not. But if you just recreated the Mona Lisa after having stared at it for hours, and it turned out nearly identical, then it would be. So where is the line?
Well, what if you took the left side of the Mona Lisa and combined it with Van Gogh's Starry Night? Would that be OK? Of course not.
But if you took 100 paintings from 100 artists and clipped all into small jigsaw pieces then recombined them randomly to create a new picture, that would probably not be considered "derived art".
What matters is how much creativity you put into the process. Does the creative aesthetic of the work derive from your efforts, or from the existing author's art?
But if I took 100 paintings from a single artist and combined them all into new works of art, that would probably be copyright infringement in my view, and is what seems to be happening here.
Consider the history of trademark lawsuits. You can be sued if you create something that somehow resembles an existing trademark, say use the exact same color as Coca-Cola for something similar.
So I think the guiding principle is or should be whether what you create can be confused with the work of some other highly original artist. It doesn't matter if you painted it all, if it looks similar enough that people could confuse it with the original artist's work, you are infringing.
>But if I took 100 paintings from a single artist and combined them all into new works of art, that would probably be copyright infringement in my view
What makes you say that? A work is considered derivative with regards to another work, not to an author. If we take your jigsaw example and accept that the final result would not be derivative of any of the works that contributed each individual piece, and then pretend as if in actuality all the sources were from the same artist, what would change that would suddenly make the result derivative from some or all of the original works?
You are probably right there, I was just assuming that courts would consider it a factor if all pieces came from the same author.
As I understand it copyright infringement is not just a pure "crime" in itself. It is about the financial harm caused. I think the word they use is "tort". It is always about violating somebody else's right(s).
Oracle sues Google for Java copyright infringement. It is not just about "Hey here's a copyright infringement ... put Google in jail".
It is about "We lost a billion dollars, because of your infringement". So Oracle claims not just a single copyright infringement of a single work, but billion dollars worth of infringement. It is not black and white, it is quantitative. How much there is of it determines the seriousness of the violation.
That's kind of hilarious, because ALL artists copy/imitate other artists' styles during their learning process before settling into a style all of their own.
The number of people who learned to draw by redrawing/imitating Disney stuff is countless.
The thing people aren't seeing with AI art is that it's the same as mass manufacturing: compare buying a mass-produced knife vs. a handmade artisanal knife. I think exactly the same thing applies to generating machine-made art in a given style vs. buying/commissioning an artist.
I think taking someone's work to train an AI is fine, as long as you obtained legal access to the material in the first place. There is no copyright for art styles; if there were, we would have no artists, because even the artist in question would've started out by imitating other artists' styles.
As an update after taking a closer look at the article rather than the discussion: her art style is 100% inspired by Disney (and a few others) and there is nothing wrong with that.
It seems very strange to use the existing rules of copyright as a defense of the use of this new technology.
The concept of copyright was created in response to the development of the printing press. It was a reaction to a disruptive technology. It was possible to laboriously copy written works before the printing press existed, but the new technology made it incomparably cheaper and faster to do so, and societies reacted by creating new protections for content creators.
We are now at the threshold of a new disruptive technology that is likely to bring about profound economic changes in the arts. It makes no sense to me to take the old rules and try to use them to justify this disruptive technology, when the old rules were initially created in response to a different disruptive technology.
It seems uncontroversial that this new generative technology is built on the backs of human artists. It only functions by drawing from their works. Is it so inconceivable that we might need a totally new set of protections for those human artists?
It is true that generative ai technology is often trained on human artists' work. But how is that different from human artists taking inspiration/learning and adapting the style of other human artists? I suppose the argument is that humans should get special treatment in the copyright domain?
I wonder if it is possible to get a machine to learn a style without input. Likely a room full of typewriter monkeys searching for Shakespeare scenario, but a human would still be involved in the loop to "confirm" the desired style - which is technically a creative decision in itself.
Which I guess shows the true nature: machines could generate stuff for machines without any external input. But we built them, so we've tasked machines to generate stuff for humans. And therein lies the answer I guess.
I 100% believe machines can be creative. Creativity isn't something unique to humans or to living things. For me it's a concept.
>It is true that generative ai technology is often trained on human artists' work. But how is that different from human artists taking inspiration/learning and adapting the style of other human artists?
It's different in the same way that making a copy of a book by hand, where it might take weeks or months to make a single copy, is different than making a copy with a printing press in a few minutes. It was the technological development of the latter process which led to the concept of copyright being created in the first place.
There is a fundamental difference between a human being taking years to acquire artistic skill, then using that skill to create individual works inspired by other artists, vs. using a generative AI system to "learn" a particular artist's style in minutes or hours, then create infinite iterations of that style nearly instantly.
There's a tendency for people in tech to search out broad, overarching, universal principles that can be applied to all behavior. But sometimes, simply being able to do something tens of thousands of times faster or tens of thousands of times more cheaply is enough of a difference to require new rules, new moral frameworks, new modes of thinking.
"The computer is just doing what a human could do" simply isn't a compelling enough argument, any more than "the printing press is just doing what a scribe could do" would be.
> The concept of copyright was created in response to the development of the printing press. It was a reaction to a disruptive technology.
Absolutely. One of the major factors was that it allowed individuals to benefit directly off someone else's work without having made substantial changes. The protection was intended for the original work itself and for derivatives too close to the original content.
> We are now at the threshold of a new disruptive technology that is likely to bring about profound economic changes in the arts.
This already happened with photography taking over portraits and tracing; the response wasn't to outright ban it, or really prevent it either. When technology made photography more accessible, to the point it was going to be disruptive to professionals in the field, the response again wasn't to outright ban it or prevent it. This is despite the fact that it literally destroyed a significant number of jobs to achieve conveniences that we now all enjoy.
I feel like the AI issue parallels the situation above. People are now given better tools to generate/create art themselves, and as long as the output isn't blatant copies, or derivatives too close to the original content, it should probably be governed by similar rules in my opinion.
> It only functions by drawing from their works.
You can train AI models by taking photos and then vectorizing/toonifying/paintifying them with various widely available non-AI filters, depending on what you're aiming for. Stylistic ideas can be implemented in these filters; I have some experience here, having written plugins for processing my photos. So human-made art isn't even a strict requirement for generation. Even if you ban AI from learning from human-made art, there are ways to train the models regardless and achieve a similar result.
There is another problem that hasn't been discussed: enforcement is going to be a very interesting challenge, considering how international borders for information/data are virtually non-existent now and it's becoming relatively difficult to even distinguish whether a piece was generated by an AI or by a person. The economic changes are likely coming regardless, from my point of view. People will either use it illegally if it's banned, or legally if it isn't -- I just do not see this changing either way.
> Taking someone's work to train an AI is fine, as long as you obtained legal access to the material
That is the big question here, what kind of legal access does Copilot etc. have to the training materials. When they use the training material they must copy it to their computer. According to most open source licenses they then also have to retain the copyright notice to wherever they copy it. But now it seems that Copilot skips that part. It copies everything else but not the copyright notice.
You can trademark a style; you can't copyright it. IANAL, but that is what my corporate IP compliance training tells me. As long as I am regurgitating non-legal advice: I suspect half Mona Lisa, half Starry Night might be considered a transformative work. If a single human artist painted both perfectly onto the same canvas, it could be construed as a statement about changes in the culture between the two contexts, so if you do it with Photoshop, it might very well get ruled the same way.
As for the morality of it? I don't like the idea of copilot replacing me, but I don't think it was wrong to make it. I'll eventually have to retrain myself to retrain copilot models I suppose. Or we'll have to decide to care for each other as we all go unemployed.
If you gave the co-pilot the license to copy your code, part of that license is they have to include your license and copyright notice in every derived work they make.
And Copilot, it doesn't just copy "style", it copies code.
Andy Warhol did something very similar to this with Avril Harrison's computer illustration of Venus. He just used the clone tool to add a third eye then called the result his own work. It even still had her signature on it.
>But if I took 100 paintings from a single artist and combined them all into new works of art, that would probably be copyright infringement in my view, and is what seems to be happening here.
If I look at 100 paintings by Pablo Picasso and then paint a new one in his style, did I commit copyright infringement?
That's a good question. My immediate answer would tend to be no.
But consider you produced a comic-book about Mickey Mouse where the character Mickey Mouse looked exactly like the one in the several Disney books and movies. You would probably get sued. Right?
Trying to take a strong form of OPs position, one obvious line would be the automation and mass reproduction aspects, in addition to how much unique creativity you specifically added to the process.
It gets hairier, of course, because what happens with experts in the field who are able to do that and then just use this as a boosting tool? Still, I don't think copyright law has clear bright lines so much as guidance that the courts just try to muddle through as best they can. Certainly one can make an argument that just stealing an artist's style like this could be considered a copyright violation, just like sampling even a few seconds of someone else's track can be in music.
Again, not saying these are net good for society, but clearly existing copyright laws do try to take this tack. I think the one thing working against her favor is that a) early days so laws haven’t caught up b) us laws generally favor corporate interests over individuals so she might never get any relief even if deeper pockets start to protect themselves as this becomes a bigger problem for them.
The process for a human to copy her style in an original work would be similar, and legal. I don't think it's a good idea to prevent the automation of human-capable tasks, because it's anticompetitive: it protects an industry (albeit a small one of starving artists) at the cost of consumers.
The harms to artists are obvious and immediate, but limited and small. The benefits of letting an ML model train in the same way as a human are vague and in the future, but might be capable of massive transformative changes in the way we work. I think it's right to be careful about "protecting" a limited number of people at the cost of enormous future potential.
Enormous future potential for derivative work gets created. Enormous future potential for original work gets erased.
Why would anyone in their right mind choose to put effort into creating original art if there is "one easy trick" to get around copyright by simply turning their art into a model that can be used to churn out things they could have produced?
>Why would anyone in their right mind choose to put effort into creating original art if there is "one easy trick" to get around copyright by simply turning their art into a model that can be used to churn out things they could have produced?
Why do some artists still paint on canvas instead of using photoshop or krita, where you can easily ctrl+z any mistake, never need to mix any paint, can move layers up and down, etc. etc. etc.?
Why do some photographers still shoot anything smaller than large format with film when medium format and full frame digital cameras exist?
Why do some people still use analog synths when Native Instruments' Komplete exists?
Why do some guitarists still use amplifiers when they could use an AxeFx/Kemper/Neural DSP?
Most of those options are also more expensive, on top of being more inefficient/difficult/generally burdensome, yet people still do them.
People do a lot of things that are not necessarily the most efficient way to do something. They like the minor differences, or enjoy the process, or many other things.
I also don't see how SD and similar get around copyright. Even if training these models on copyrighted images is legal, that doesn't mean that the output they produce necessarily is. It doesn't matter how I create a depiction of Iron Man, be it SD or a paintbrush and canvas, I do not have the rights to reproduce him. And for things that can't be protected by copyright, such as style, I am not hindered by it no matter if it is created with SD or colored pencils on a sketchpad.
If you think about future business cases, my guess is if I'm in the content creation business I'd hire some artist to create inputs for my ML model to train. And I'd be the only one with access to these inputs (in the beginning). Or think about it the other way around. If I'm an artist I buy a commodity AI-art-generation-engine and feed it with my work and I can create infinite items in my own style for (digital) sale.
It'll all be about time to market and brand building. I could even see a world where the originals of the input creator would sell quite well as classic modern artworks. Imagine for a second a world where 3D assets get created this way. I'm pretty sure fans of popular games would shell out good money for originals from "the artist behind the Witcher 7 asset engine" if the trajectory of human development goes as I see it going.
Also...artists are going to create art no matter if it makes financial sense or not. In fact I'd argue that's the difference between art and design :P
> if I'm in the content creation business I'd hire some artist to create inputs for my ML model to train
That's a reasonable way to go about things. The problem is that right now the status quo is that you just take artists' work without their consent and use it to train your model.
Because they want to do it? The motivation for creating art isn't purely financial.
Plus, we humans all built our skills and works on the shoulders of giants. Artworks and cultural artifacts are never created in a vacuum. Maybe it's time to acknowledge that.
> The motivation for creating art isn't purely financial.
Yeah, but getting financial compensation can certainly help.
The opportunity cost of putting bread on the table means that the output of most professional artists today would drop significantly, if they needed to pick up another profession (especially full time).
> Plus, we humans all built our skills and works on the shoulder of giants. Artworks and cultural artifacts are never created in a vacuum. Maybe it's time to acknowledge that.
Financial compensation does help. But certain industries become marginalised or relegated to history given enough time. People then keep them alive because they choose to.
Where are the tears for horseback couriers? Or blacksmiths? Or thatchers?
I guess you didn't get my point which was: those industries died apart from specialists keeping them alive today and that's just the nature of the world.
The same thing will happen to human generated creative content whereby it becomes something that people are involved in because they want to be, not because it's a necessity/it's the only way to do it.
Yes the potential for future art work done by a human today will be erased in the future when it can be performed by a machine, but that has always happened & yet somehow it's surprising to people.
An artist being indignant towards machine generated art yet using mass produced tools, eating food farmed by mechanised equipment, wearing clothing woven by automatic looms, taking a digital photo themselves instead of hiring a portrait painter, owning a car instead of a horse that supports many sub-industries, sending emails instead of letters is just hypocrisy.
Technology has always brought us forward and these new AI powered tools will assist us as the tools we produce have always assisted our species. And as always those who refuse to change will eventually be left behind.
And yes, if this was happening to the industry I'm in I would currently be going through the 5 stages of grief about it, too. But then I'd just have to change up what I'm doing to reflect the changing times. As she herself said, it still doesn't capture what she puts into her art & so there is still that avenue to pursue.
That's begging the question. I don't agree that a model is one easy trick to get around copyright, any more than paying another animator to draw in the same style would be.
In terms of creating original art, I think that in ten to twenty years artists will see models as another tool for creative expression; one that lets an individual artist be more productive but can produce a generic feel, like thin-line animation or sticking difference clouds everywhere or using a palette of pre-made drag and drop body parts.
> The process for a human to copy her style in an original work would be similar
It wouldn't be similar at all. It takes years to build skills good enough to even copy stuff like that. With AI, a person who has never done any art in their life can get hundreds of copies in a few hours.
The engineer stumbled onto the least sympathetic, least transformative, most obnoxious use case for the AI. He was trading on the artist's name, confusing people and even arguably devaluing her work by reproducing it in a clumsy and low-value way. Folks in the industry would do better to acknowledge, as he did, that this was wrong and establish standards so everyone knows this is not considered a proper practice.
It's already out there. On people's local computers and soon their mobile devices. People are tinkering with it at warp speed. This point addresses that it technically cannot be stopped.
I don't expect it to be possible to detect that the art is AI-generated. This becomes further impossible when using a personal input image as well as many follow-up edits or composite works. It blends into normal image creation. The only way to prove that it's not AI-generated is to record in-progress "human art" as is sometimes done in art contests, but this isn't reasonable to legally require of every single piece of art to be created.
As a society we have gone through great pains to protect the software developer's income and job: source code was given both copyright and patent protection - no other industry gets both protections at once.
Now you seek to deny others such protection while taking advantage of it yourself.
Let's not kid ourselves. Those protections exist for the benefit of corporations, not software developers. If those corporations could have robots write the software and copyright that software and patent it, they absolutely would.
Ironically, the high salaries the software industry is able to pay exist precisely because of the copyright protection afforded, which prevents the value of the software from being diluted by rampant copying.
This is also the same reason due to which open source projects often struggle with funding, and why many databases (among other OSS software) are moving towards stricter licenses such as the AGPL.
I don't think so. The highest paying companies don't distribute software, they provide access to a remote service. Even if copyright didn't exist, you couldn't copy the Google executable.
Pretty sure you could sue them using other means, such as through contract law, if they signed an appropriate contract. You really don't need copyright if you aren't broadly distributing information.
If Guy A copied your code, and I got it off him, you can sue him for violation of contract, but you can't stop me, a third party.
Your ability to sue him will be limited: he can't go to jail, he can declare bankruptcy, and you have to be specific about what is protected; "idea" is a vague term that can't be protected.
You're making my point though. The fact that Google is successful, in part, is because of the fact that you can't copy their trade secrets and methods; which is one reason no solid competitor has come up to challenge Google.
(There are infrastructure challenges as well, but this thread is about intellectual property.)
My understanding is that trade secrets are distinct from patents. For patents, you tell everyone how to do it but they're not allowed to for 20 years. For trade secrets, you don't tell anyone how to do it, but if someone else figures it out for themselves, it's fair game. Most of Google's search IP is protected as trade secrets rather than patent/copyright, I believe.
Your point is that you can't copy Google's secrets because of copyright, and therefore copyright is valuable to its employees. I'm saying that the reason you can't copy that information is different from copyright.
Or maybe they genuinely believe that the content they make today shouldn't be copyrighted for the next 100 years? Lifetime of the author +70 years is a very long time.
Given a binary choice between unemployability and extending patent protection to (even) snippets of code, I am quite confident that 90+% of salaried developers today will choose the latter.
That's not the argument we are discussing - the question is whether these protections should exist at all. We are talking about denying the artists all protection, so it's only fair to confront developers with the same dilemma.
Whether they last 3 years or 300 is a finer point, and is only worth discussing after the necessity and legitimacy of such protections is established.
What a load of nonsense. The last 40 years has made the barrier of entry to software development lower than it's ever been.
We have applications like Unity for game development, low-code solutions that let you generate a CRUD dashboard for a database in a few clicks, etc.
As a developer with an actual degree in comp sci, I can guarantee I'd be a helluva lot better paid if everyone had to do their software development in low level C.
That wasn't to protect software developers; that was to protect companies and corporations. Twisting the law and politics is what capitalism has always had companies try to do.
Software developers aren't protected at all, we are simply just in demand for the time being. There will probably be a time in the far flung future where our jobs are phased out, too.
It doesn't matter what any of us think. The genie is out of the bottle and cannot be put back because unlike pirating existing media, this new style pirates the whole style.
Visual arts as a career will be dead soon. Visual arts as a hobby will live on.
Put a fine of 10% of annual turnover if a company cannot prove that images used in its product/ads are human generated. Make payment to visual artists part of the process of proving it.
Boom, visual arts as a career saved.
It's one thing to say we shouldn't do it, it's quite another to say we cannot.
That's a profoundly stupid idea. There are low code solutions that generate fully compilable pieces of programs and applications, so let's make sure we ban those.
Plenty of music is procedurally generated so we gotta make sure we ban those as well.
What's profoundly stupid is to assume every piece of technology is good so we should do nothing about its proliferation.
Yes, we should ban low-code tools based on deep learning over "fairly used" (not) datasets and we should ban the same for music, writing, whatever.
AI bros can go cry in the corner, I don't care.
Think of it in a non-monetary way, ignoring job security for a moment. Why would anyone (artists/programmers) spend their time doing something a machine can do? It would be a terrible way to spend one's life. Perhaps the datasets can be licensed (if that's the sticking point) and the AI embraced?
>Why would anyone (artists/programmers) spend their time doing something a machine can do
Why do humans play chess after 1997?
Why do they play checkers after the 70s?
AI capability is destroying human enjoyment of activities because it also destroys the economic rationale for engaging in them and/or allows other humans to cheat.
The obvious conclusion of this position is that we should just all kill ourselves if strong AI ever starts to exist. No, thank you, I'd rather do everything possible to prevent it from being created.
It's the difference between a bus and hiking: With a bus, you can arrive at lots of places fast but you will never experience the place as the hiker does.
With the amount of power the copyright lobby has, who knows it might be true. However advertising is not the only industry where visual arts will be affected. Gaming, Comics, Animation, VR, Movies etc will be affected as well. And regulation isn't going to keep up across all the countries.
Even if we assume the above rule is made, there is no way to prove it because even if a human does it, he might still use the help of photoshop etc which are planning to integrate such tools currently. Most favorable outcome I see out of this is for famous/competent artists to license out their style of art to these generation companies which then train their models on them (which sounds win-win but won't be as profitable as it is today)
>Even if we assume the above rule is made, there is no way to prove it because even if a human does it, he might still use the help of photoshop etc which are planning to integrate such tools currently.
It's unclear to what extent tool producers will support AI in this context at all. Also, this is a problem for safeguarding the integrity of the product of artists, not for safeguarding their income.
The companies that pay artists' salaries won't be willing to secretly break the law to save money.
Microsoft won't legally be able to continue its abuse of GitHub. Copilot will be dead. OpenAI and Stability will not legally be able to profit from large-scale intellectual property theft. All these violations will end.
These are the most significant digits. The residual amateur piracy doesn't matter. It doesn't matter if random guy gets some leet neural net warez and uses it to make his desktop wallpaper.
First, you didn't address my point that you cannot detect AI output.
I already have Stability running on my PC. I generate an image with it. How do you know the output comes from Stability? Answer that, please.
Second, a worldwide draconian ban on AI image training and generation just isn't going to happen. Very few legal things are coordinated worldwide, and copyright law is incredibly low on the list.
Even training without consent can be addressed. Google trained some of their AI from Google Photos. Which they made free for unlimited use so that us fools would produce billions of images, accept the terms we don't even read and voila: AI legally trained.
I don't think there is any work on that yet, but if the model is known it should be possible to derive the probability that a particular image is the output from it.
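The idea can be sketched as a likelihood-ratio test, in toy form. This is a hedged illustration only: a 1-D stand-in where the "generator" is a known Gaussian and "organic" data is modeled as uniform; real image-attribution would need the actual model's density, which is far harder to evaluate. All names and numbers here are invented for the sketch.

```python
import math

def gaussian_log_likelihood(xs, mu, sigma):
    """Log-likelihood of samples under a Gaussian 'known generator' model."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
        for x in xs
    )

def uniform_log_likelihood(xs, lo, hi):
    """Log-likelihood under a flat 'could be anything' baseline."""
    return sum(-math.log(hi - lo) for _ in xs)

def ratio(xs):
    # Positive ratio: the sample looks more like generator output than baseline.
    return gaussian_log_likelihood(xs, 0.0, 0.2) - uniform_log_likelihood(xs, -1.0, 1.0)

# A sample that clusters tightly around the generator's mean...
generated_like = [0.1, -0.2, 0.05, 0.15, -0.1]
# ...versus one spread across the whole range.
organic_like = [0.9, -0.8, 0.7, -0.95, 0.85]

print(ratio(generated_like) > ratio(organic_like))  # True
```

The same logic scaled up is what "derive the probability that a particular image is the output" would mean in practice, with the open problem being that diffusion models don't expose an easily computed likelihood.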
I've produced over 5000 images on my local SD install. Currently, there are many dead giveaways if you produced the art with it. Specifically around hands, feet, holding things, pupil directions. Of course these things will get better with time, but currently there are many things that expose generative art.
It's inevitable that someone comes up with methods to detect generated images, since a lot of political (Edit: and financial) capital hinges on that. If AI image generation is inevitable, then methods to analyze images wrt. known generators are even more inevitable.
I'm sure I've still got a bunch of MP3 files from Napster on a hard drive somewhere, but yeah, that genie was put back in the bottle. This one can too.
It was put back in the bottle by Apple legitimating the piracy business model by cutting deals with the major labels to digitize their music.
The analogue here is AI art will be legitimized, and the artists who profit will be the ones who let their work be used as input for a cut of the profits. And nobody will be able to compete with them, and the owners of the machine will be able to set the profit rate as they choose.
... That does sound like a new stable equilibrium, actually.
Piracy is held at bay only by the ease & affordability of legally obtaining media and difficulty accessing the technical means to pirate.
... and well-packaged piracy solutions and modern broadband bandwidth likely sink the maximum price (the only remaining term) below the cost of production.
It took me 20 minutes to go from nothing to an entire season saved locally and streaming to a Roku. That's finding the software, installing, configuring, finding torrents, downloading, and then playing. And that's not having pirated in a decade or so.
Napster had single points of failure, and later p2p had poisoned seeds for tracking.
Stable Diffusion is math and cannot be stopped now the toothpaste is out. You can attempt to regulate, assign draconian requirements by force of law, but ultimately these are as unenforceable as regulating that pi=3.
Ironically, what could help is NFT type tech. Signed with a private artist key, your copy is "original". Even if knockoff generative copies are produced, the digitally signed produced-bys are still authentic.
>Ironically, what could help is NFT type tech. Signed with a private artist key, your copy is "original". Even if knockoff generative copies are produced, the digitally signed produced-bys are still authentic.
That solves a completely different problem, though. I don't think anyone is saying that the problem is one of false attribution, where people are claiming generated images are the work of a particular person. What's being discussed is artists having less work because people generate art computationally rather than commission artists to do it.
Aye, and on your concern about the different problem, the toothpaste is out of the tube never to truly be returned.
We can evolve the market (in my view, into luxury goods with NFT type tech) or we can wait for artists to truly starve. I'm a proponent for solving the problem that can be solved to help folks move forward.
We can try to evolve it, sure. I don't think that's an option that will interest enough people to matter.
While it's possible that these AI tools will leave some (certainly not all) artists without work, what I think is really going to happen is that artists will harness them to do new things that were simply impossible before, or to make their work easier. Technology rarely destroys jobs; it more frequently changes their nature. Just like how at some points animators needed to know how to use 3D tools when in previous decades they didn't, in the near future graphical artists will need to know how to use AI. It's possible that where there were previously two artists working there will then be only one, but such is life. Demand for art is finite.
I agree, traditional animation was better than modern CGI, but I don't think it's as simple as CGI being an inherently worse medium, but that films are produced more cheaply. Some weeks ago a friend and I were watching and comparing some scenes of Snow White and Cinderella in English and Spanish and were stunned by the singers in both languages. How often do you hear actual opera singers in modern Disney films?
So, yes, what you say may definitely happen, but it's a trend some graphical industries have been on for decades. It's why there are so many fewer professional animators anymore. I wouldn't be surprised if some techniques of traditional animation have been lost by now.
Yeah, sure, but that's because streaming was merely more convenient than Napster. No more downloading bad songs with bad metadata. No more lugging around and hand curating an mp3 library. And for a lot of people: no more having to choose what to listen to.
In two years people will be generating novel music of any style with any vocals and vocalists they want. That'll be even more of a fit to consumers' wants. They'll never run out of music that will appeal to them.
I'm currently working in this space and it's wild the things that are possible.
> It's a bit like saying we can't stop music piracy, now that Napster exists.
Curious choice of example, because it was never stopped. It just went somewhat out of the mainstream because the industry offered pricing options, like Spotify, which were acceptable for most people, so they no longer had an incentive to resort to piracy. Not wanting to pay at all was always a minority position; most people just found it ludicrous to pay full album price for one or two songs that they liked.
And still, if you do want song X for free you can still obtain it easily. The industry just no longer makes a fuss about it.
> There's nothing inevitable about it. Laws exist to protect people.
Amen.
I think one thing we're going to have to look at is having the expectation of a separate agreement for having ones work go into a training set. Maybe equity should even be the standard here.
And informed consent associated with it. People need to know they're training something else to do their job as well as doing the job, selling the cow instead of the milk.
Everything you can come up with as a "solution" is really just a stop gap measure. Instead of specifying the style by name, you could specify it by example image. Instead of training the AI on her images directly, you could train a second generation AI on images drawn in that style generated by another AI that was trained on her images. Thus your second generation AI would be free of any copyrighted work. And of course the whole copyright thing only comes into play when people redistribute the AI. If AI is easy to train yourself locally, even that doesn't matter anymore.
If you want to go all Butlerian Jihad on the world, you might be able to stop it. As long as AI is allowed, this ain't going away, it's only getting easier, cheaper and faster.
For all we know, this could already be happening. Every digital image produced yesterday could have been AI-generated for all anyone knows. The original artist in this story could already be using AI to create their own work. Of course, I don't actually believe that's what's happening in this case, but the fact that it could means it's probably impossible to return this genie to its lamp.
HN suddenly filled with a bunch of crazy luddites. Why don't we introduce the death penalty for artists who have taken inspiration from other artists while we're at it.
I know right? I think all those FBI warnings worked maybe, and the new generation of geeks think IP is actually a moral thing instead of a corporate money-grab.
Also, Herbert wasn't against AI I don't think. I suspect he simply recognized he couldn't comprehend the world that far in the future if AI was a part of it. Instead, he used space magic to explore his very present reality of resource wars, and went on to make a point I'm not sure I understand, about too much political order and resultant stagnation causing self annihilation.
I was joking, but HN at the same time is filled with people that believe regulation only stifles innovation.
So just because AI is inevitable doesn't mean that we should abandon all regulation. There would be good merit in slowing down some progress, so we can actually maintain a good transition to new industries.
> Use of the output of systems like Copilot or Stable Diffusion becomes a violation of copyright.
That really should depend upon the output.
Many, if not most, people learn an art by imitating the style of established artists. Some will carry on with that style. Others will develop their own, though it will probably always carry elements from those they imitated. Should injecting a machine into the process automatically make it illegal?
There are going to be clear cut cases where it should be, cases where so much is imitated that it goes beyond style and into substance. Yet that means we should have a human looking at the output to determine if it is too close to a copy, rather than banning AI generated art altogether. To do so would put the creative process in peril. This is not because machine learning reflects our definition of creativity. Rather it is because it is difficult to define what human creativity itself is.
(That said, I do believe that using the artist's name as a way of promoting their own work is stepping over a line.)
Except, in copyright law it depends on the _input_.
These models would not exist if they were not first fed the source material.
Until we have systems that are not trained on a pre-existing corpus this will remain true. No matter how clever the algorithm, without the source material you have no output. Zilch. Nada.
Now, when the source material is someone else's property this means that without - someone else's property - you would have had no output.
So, when you want to use someone else's property, which you do not own, the general rule is that you a) first ask them if you may and b) pay them for the right to use their property.
In this sense it is no different than using a photocopier.
It's the copyright ownership of the material you put into the machine that will interest the judge not the quality of the copy.
I'm really looking forward to the first court cases and predict that much hilarity will ensue!
Trained models don't have the actual images inside, they have summed up gradients. So what they are doing is far from a copy&paste job, it's more like decomposing and recomposing from basic concepts, something made clear by the "variations" mode.
Among the things the model learned are some un-copyrightable facts, such as the shapes of various objects and animals, how they relate, their colours and textures - general knowledge for us, humans. Learning this is OK because you can copyright the expression, but not the idea.
Trained models take little from each example they learn. The original model shrank 4B images into a mere 4GB file, so 1 byte/image worth of information learned from each example, a measly pixel. The DreamBooth finetuning process only uses 20-30 images from the artist; it's more like pinning the desired style than learning to copy. Without DreamBooth it's harder, but not impossible, to find and use a specific style.
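The back-of-envelope arithmetic above can be checked in a few lines (the figures are the comment's own round numbers, not exact dataset or model sizes):

```python
# Round numbers from the claim: ~4B training images, ~4 GB of weights.
num_images = 4_000_000_000
model_bytes = 4 * 10**9

bytes_per_image = model_bytes / num_images
print(bytes_per_image)  # 1.0 byte of capacity per training image

# Compare with one uncompressed 512x512 RGB training image:
raw_image_bytes = 512 * 512 * 3
retention_ratio = bytes_per_image / raw_image_bytes
print(f"{retention_ratio:.2e}")  # roughly one millionth of each image
```

So on this rough accounting the model can retain only about a millionth of each raw training image, which is the comment's point that it is not storing copies.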
And the new images are different, combining elements from the prompt, named artists and general world knowledge inside. Can we restrict new things - not copies - from being created, except in patents? Isn't such an open ended restriction a power grab? To make an analogy: can a writer copyright a style of writing, and anything that has a similar style be banned?
> Trained models don't have the actual images inside, they have summed up gradients. So what they are doing is far from a copy&paste job, it's more like decomposing and recomposing from basic concepts, something made clear by the "variations" mode.
Doesn't matter. A JPEG of the work is just a bunch of numbers fed into an equation; that doesn't change the fact that it's copyright infringement.
A digital photo of the Eiffel tower at night doesn't have the real Eiffel tower inside, only weights and pixels - still you don't have the rights to publish your photo of the Eiffel tower in France.
>• reproduction of the work in various forms, such as printed publications or sound recordings;
>• distribution of copies of the work;
>• public performance of the work;
>• broadcasting or other communication of the work to the public;
>• translation of the work into other languages; and
>• adaptation of the work, such as turning a novel into a screenplay
None of these rights, to me, indicate that copyright protects the input. The AI model is not reproducing any specific works, distributing copies of it, performing it in public, broadcasting it, translating it to another language, or adapting the work from one format to another.
>Now, when the source material is someone else's property this means that without - someone else's property - you would have had no output.
Exactly the same happens with artists. The only artists who can claim not to have been influenced by seeing the work of other artists lived tens of thousands of years ago. So what makes it okay to process artwork via some processes but not others, when the ultimate output may in some way copy the input anyway?
Personally, I'm on-board with protecting artists incomes, however, I think there's a middle-ground.
First, I'd like to correct a fact you omitted: Napster, Limewire and the like didn't come out of nowhere. They were created because artists and their recording companies forced consumers to buy entire CDs at inflated prices that kept rising. Now, what they got in return from their consumers after that wasn't fair either.
I don't think making AI generation illegal for everyone makes sense. That's how you get the Metallica's of the world bank-rolling professional grifters to hold people's grandmother's financially hostage.
I do think it makes sense to bar AI generated products from making money if the works used to train it did not belong to that company or individual. If you create a program using CoPilot, you should not earn money. If you make a comic using Stable Diffusion, you don't deserve money. This keeps the power players in check while allowing artists paths to use these AIs if they own their own work outright. Imagine if you could train CoPilot on your own code and then use it to help you. That to me sounds like the framing for a new and responsible form of innovation.
Yeah! I think AI tools must be transparent about their input, period.
It feels too unfair to pioneer a new art style and simply be copied with machine precision and speed… opting out of contributing to the neural network's training data should be a thing, but I do not know how reverse engineering of the output can be done.
Yes, there should be opt outs for ML training. They could take many forms - robots.txt rules, special HTML tags, http headers, plain text tags or a centralised registry. You can take any work out of the training set without diminishing the end result. But doing so would mean being left out of the new art movement. Your name will not be conjured, your style not replicated, your artistic influence thinning out.
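As a sketch of what such opt-out mechanisms might look like, none of these are standardized and the user-agent name, meta value, and header value below are all hypothetical:

```text
# robots.txt-style rule (hypothetical crawler name)
User-agent: ML-Trainer
Disallow: /

# HTML meta tag (hypothetical "noai" values)
<meta name="robots" content="noai, noimageai">

# HTTP response header (hypothetical value)
X-Robots-Tag: noai
```

Any of these would only work if dataset builders voluntarily honor them, the same way well-behaved crawlers honor robots.txt today.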
If an artist wants her works to have the fate of BBC archives, that removed millions of hours of radio and tv shows from the internet, then go ahead. The historic BBC content was never shared, liked, commented or had any influence since the internet became a thing. A cultural suicide to protect ancient copyrights.
Music piracy didn't stop because of the Napster shutdown. It just manifested itself in different ways. Now all you need to do is use youtube-dl to download the youtube video or soundcloud track or bandcamp album with the -x flag to extract audio. Both the software and the original media sources are legal. In fact, GitHub was forced to take a public stance on youtube-dl after a DMCA takedown request on the repo.
The biggest reason the laws can't possibly hope to stop the practice is:
> [MysteryInc152] told me the training process took about 2.5 hours on a GPU at Vast.ai, and cost less than $2.
As those costs are driven down and the software is accessible to more people, distribution of the weights will not be needed.
Modern intellectual property law, specifically copyright, is brazenly slanted to maximize profit for American corporations at the expense of your average person, with zero consideration for the rule of law or the democratic process.
Year after year entrenched media interests lobby the US government to make IP policy more corporate friendly and those policy changes are forced on the citizens of countries around the world through strong armed free trade agreements.
We don't get to discuss these things as citizens of sovereign states; they just happen to us.
Maybe we want to live in a world with a substantially shorter copyright term; is that so wrong? Maybe that would be better for individuals and society as a whole, but we'll never know, because American companies won't risk the chance of losing money or power to find out.
How long do you think copyright should last before it reverts to the public domain?
Sure, on a small scale, but Spotify, Apple Music, Tidal and I'm sure many other platforms exist and are quite successful.
I'm a big pirate myself, with multiple TB harddrives full of pirated music accrued over the years, but even I choose to use Spotify and Tidal a lot of the time, out of pure convenience.
Exactly though, the fears of the industry of the time were met, one way or the other. Spotify and others came around and basically destroyed the album / CD model, led to independent publishers having way more power than ever before. It is a record company hell that we're living in right now. Despite spending as much as they did to kill Napster, they weren't able to stop the "inevitable."
Napster distributed whole songs. If it sampled 1000's of songs then created original compositions that sounded kind of like the 'style' of those songs what would the legalities be? That is a huge difference. I'm a professional artist who has been able to make a good living and support a family, what does this mean when someone with an algorithm and some key words can produce good-enough work in a fraction of the time for pennies? There is a huge swath of professional artists whose livelihoods are at stake.
Is this like the stagecoach makers when automobiles were invented? Or is this like Napster stealing copyrighted material? This is new territory.
It's even more fundamental than stagecoach --> automobile. It is more like cipher --> RSA -- fundamental change based on basic math and ubiquitous, readily available technology.
At this point, I don't think the law can stop it. We're looking at a technology that can easily become illegal but ubiquitous, like Napster in the heady days of flouting audio copyright.
Even if the entire Western copyright sphere of influence unifies on it being an illegal system, Russia and China have no incentive to ban the tech. Especially if it makes their entertainment industries more competitive with the Hollywood machine.
>It's a bit like saying we can't stop music piracy, now that Napster exists.
Trivially, you still can't. Lawbreaking when it comes to copyright is enabled at scale by computers (like everything else); so unless you manage to win the war on general-purpose computing there's nothing you can do.
Sure, streaming has taken the place of piracy (growing the pie is better than strict conservatism), and Patreon (and its offshoots) has made it possible to be paid for recurring content that's inevitably going to be pirated, but file-sharing (torrents) and alt.binaries (abusing free storage sites as a backend for streaming video) still work just as well as they ever did.
The only reason people pay for content is that they want to, provided the price isn't usurious or infinite ("not sold in your region"); those that continue to work with said want prosper, those that fight it fail, and that's just the way it is.
If artists were required to exclusively sign with globe-spanning conglomerates that pay them in loans and take a cut every time they teach an art class (no good comparison for record companies taking cuts of concert revenue), you'd see a society-breaking, unjustifiable level of protection for an artist's "style."
As it is, artists don't have massive teams of lawyers and billions in assets, which makes their concerns irrelevant to the people who would normally be bribed to advocate for them.
For copilot, I'd like to see more models trained on stolen and leaked proprietary code from hacks, or an organized movement to leak code from businesses and feed it into a freely-shared model. If transformation into the model is enough to launder copyright, it ceases to be stolen code. I'm sure it would be helpful in cloning proprietary products.
How do you exactly define a single person's style vs a genre? While any artist might specialize, as is the case here, in a single style and distill it and create a large body of work in the specific style, do you think no one before created a similar work of art in the same style?
Naming a recognizable artist is the current "lazy" way of doing this instead of naming every possible visual style; and sure, we could ban name and surname, but should an artist own "dreamy flat pastel colored illustrations of cities or characters with high contrast, no lines, children's illustrations" in perpetuity? Definitely not; for the style itself there are likely hundreds of artists who have done something similar before and after.
I think the advantage of using them is too great, companies that use the networks will outcompete ones that don't -- even if it were made illegal in the US it won't be everywhere. I imagine when it comes down to it, the law's going to be pragmatic. What happens to US industries if we allow this, and what happens if we don't, and my guess is that it ends up being allowed.
I'm not saying that's good or right - I really don't know how I even feel about the AI networks morally... I just think money is going to win out.
Copyright exists to incentivize authors, artists, scientists, etc. to create original works by providing a temporary monopoly.
The arguments suggesting that people shouldn't benefit from their work on an individual level, and pointing to music piracy as an example of why we shouldn't try, strike me as arguments for general inaction and fatalism. Not sure what the goal is, there...
The goal is to get these people to face reality. The fact is we are in the 21st century, the age of information. Their creations are just data, and data can be copied, processed and transmitted worldwide at negligible costs. There is no controlling it.
The goal is to make them stop trying to control it. Because their attempts to control it are ruining computers for all of us. We already have harmful stuff like DRM on every chip because of these people. Platforms are getting more locked down, our freedom as users and programmers is decreasing. They will destroy free computing as we know it if this keeps going unchecked.
Because someone may see a version of reality where people are incapable of benefiting from their own work does not mean that it's by any means a settled issue or indicative of "Reality". I doubt these conversations would exist if it was. It is indeed the current year, but that doesn't mean that because things can be metaphorically distilled with false and reductionist equivalencies, that it should all be free for the taking to benefit a few people who outran regulation.
Regarding the concept of control, artists were first put in a defensive position by the individuals who started using their work without their consent, and who are trying to exercise their own control over the artwork produced by others through monetizing outputs. Are only companies like Stability.AI, OpenAI, and Midjourney exclusively permitted to use and control the artwork of others, and allowed to charge for access to models which use this artwork without compensation or accreditation to the original authors? Are those artists' computers not also being ruined? Do they not deserve representation?
We need to stop demonizing the idea that someone can benefit from their work because there are some companies that have fought to extend copyright for their own benefit.
Copyright REFORM is generally a much more supportable issue than the idea that everything should be free in perpetuity...
> does not mean that it's by any means a settled issue or indicative of "Reality". I doubt these conversations would exist if it was.
It is the reality of computing. Anyone trying to deny that is going to discover that bits are bits and there is no control unless you end computer freedom. It takes tyranny such as mandating that computers only run government signed software to change this reality. This is the sort of thing that will happen if this copyright insanity continues and it will also pave the way to absurdities like regulation of cryptography.
> We need to stop demonizing the idea that someone can benefit from their work
Nobody is doing that. They can benefit from their work as much as they want. Plenty of creators are benefiting right now from patronage via platforms like Patreon. They're getting paid for the act of creating, not for sales of an artificially scarce product. Copyright is not necessary.
The reality of physics and biology is that if someone is bigger and stronger than someone else, they can beat them up and take their things. Anyone trying to deny that is going to discover there is no control unless you end the freedom of unlimited violence. It takes tyranny such as mandating that beating people for no reason and taking whatever they have using the tool of your superior physical strength results in punishment imposed by collective agreement of society.
I don't think this is freedom - as long as some company with a million time the resources that I have can train a better model, I'm only ever using the models someone with power gives me, no matter how small a device the model runs on.
Having larger models and adapting the weights is one thing but the innovation is mostly on the side of large entities.
It's ideas (memes) copying themselves, making variations, evolving. Until now ideas could only jump from human to human, intermediated by various media. Now they can be unified and distilled into a model, a more efficient medium of replication for ideas, and more helpful in general because it can be adapted to new situations on the fly.
So the same argument should advocate for having no patents; the advantage of just stealing everyone's patents is too great, and not all countries enforce patents.
You can argue that patents are an INCENTIVE to progress, since people are INCENTIVIZED to create newer and better things knowing that they will be able to enjoy the results of their labor without copycats leeching on their work and ingenuity. I think the pharma model of short-time allowed patents is the best, something like a 10 year competition free period is completely fair to INCENTIVIZE people to create the iPhones and cancer cures of tomorrow.
There are so many arguments here about big-C copyright (Disney etc.), how it is evil and shouldn't be an argument - but what I'm seeing is that small artists and freelancers are mostly the ones getting hurt by the output at this point.
If this is about big C copyright, where is the Mickey Mouse dreambooth concept? Disney property is seen as property but the labor of some random freelancer is just seen as nothing.
No chance. IMO the best-case scenario I can see is artists getting together to lobby for some sort of labeling system, like the food industry's, to mark non-synthetic art for those interested in supporting bespoke human-created works. Then watch said artists get called out as fakers for using AI-assisted features like content-aware fill.
> It's a bit like saying we can't stop music piracy, now that Napster exists.
AI art is unknown territory. Comparing it to media piracy (e.g. copying music) is a false analogy.
Specifically: where does fair use stop?
And consider: good artists copy; great artists steal. Any art historian will be able to show you how true this is across all epochs and styles, and across types of art, including music[1].
Everything that follows is OT for the debate at hand. It is merely to point out that, besides not applying here (AI art is derivative/remixed work, not simple copying), the notion that the P2P crackdown and its legal repercussions of the early 2000s had anything to do with how much the people creating the music in the first place got paid is a myth perpetuated by the music industry. Specifically, by the part of the music industry that is not the artists.
> Remember the naive rallying cry among those who thought everyone should have the right to all music, without any compensation for the artist?
The only naivety is thinking that compensation of artists played a role in this. Piracy was never noticeable for musicians who weren't already stinking rich. And for those, while noticeable, it wasn't an issue. One may argue it was/is for people high up in the food chain of the music industry. But even that stands on feet of clay. From [2]:
> The main finding of the present study is that in 2014, the recorded music industry lost approximately €170 million of sales revenue in the EU as a consequence of the consumption of recorded music from illegal sources. This total corresponds to 5.2% of the sector’s revenues from physical and digital sales. These lost sales are estimated to result in direct employment losses of 829 jobs.
There are approximately two million people employed by this industry in the EU[3]. Go figure.
For further reading on the funny idea that artists got compensated before P2P and didn't after there is Courtney Love's classic debunking piece on musician's revenue around the time Napster was a thing[4].
And some comparable numbers on what this means for artists trying to make a living off digital music today[5][6].
[1] My father was an art historian. My opinion is mainly based on spending every holiday of my youth looking at art from all epochs across Europe, first hand. Nolens volens I may add. I.e. I'm saying: take my word for it. :]
I know it doesn't sound nice, but the harm to the artists is similar to the harm you do to hole diggers when they see you bringing an excavator to your plot instead of hiring them.
Art is not digging holes. But some of it is, and more of it will be in the future.
It's hardly comparable. The excavator does not owe its creative influence to the hole diggers. The quality of its work does not result from someone else's intellectual labor. It's 100% the digger doing the digging.
It’s a teacher-student relationship, except the teachers don’t do anything specifically for the students. Let our jobs be taken by Copilot. Are you that afraid?
> The excavator does not owe its creative influence to the hole diggers
But it does. While not as complex as AI art generation, the excavator is mimicking the hole digger. It takes a human action, generalizes it, and offers it in a more efficient manner.
As you allude to, this analogy works if you're creating art purely because there is a demand for it and you only put in the effort required by the customer.
But that is generally not the case with illustrators, and certainly not in this case.
Also creating a new model is dependent on artists (at least right now!) while excavators are not dependent on hole diggers.
I think the problem is that ethical frameworks kind of fall apart here. You can trivially make an argument both for and against this on ethical grounds. Legally, it seems pretty clear nothing wrong is being done here. A human can train on a particular artist's style and lift it just fine, which they regularly do. We just made it way, way easier now.
So sure, we can empathize with someone feeling kind of off about the situation. But at the same time it's kind of eh, that's how the world is.
> Legally it seems pretty clear nothing wrong is being done here.
It most certainly doesn’t. Just because a human can eyeball an art style and copy it eventually does not translate into “I can take your copyrighted work and feed it to a machine”. Ethically you may argue one way or the other, but legally you are using somebody else’s work without permission.
And besides, there's nothing illegal about "using" a copyrighted work without permission (for example, if an artist wanted to use the pixel values in an image for a color palette, that's totally fine), only reproducing it - which no image generation model does.
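As an aside, the color-palette example is trivially concrete: extracting dominant colors is just arithmetic over pixel values. A minimal sketch (the quantization approach and function name here are illustrative, not any particular tool's method):

```python
import numpy as np

def dominant_colors(pixels: np.ndarray, n: int = 5, bins: int = 8) -> np.ndarray:
    """Return the n most frequent colors after coarse quantization.

    pixels: (H, W, 3) uint8 RGB array.
    """
    step = 256 // bins
    # Snap each channel to the center of one of `bins` coarse buckets,
    # so visually similar colors count as the same palette entry.
    quant = (pixels.reshape(-1, 3) // step) * step + step // 2
    colors, counts = np.unique(quant, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1]  # most frequent first
    return colors[order[:n]]

# Tiny synthetic "image": mostly red with a small blue patch.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[..., 0] = 200          # red everywhere
img[:2, :2] = (0, 0, 200)  # blue corner
palette = dominant_colors(img, n=2)  # red bucket first, then blue
```

Nothing about this touches the expressive content of the work, which is presumably why it is uncontroversial.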
I just think the law hasn’t caught up with tech again here. This is derivative work, essentially by definition, and just because the styles are being created without “patching together existing IP,” doesn’t mean they are in the clear.
We can trust that a human creator who apes the style of another human creator will do so with a preponderance of flaws such that their works are distinguishable. AI doesn’t operate like that and the case can’t be made that somehow both the AI and the person spontaneously landed on a certain style like it could with two human beings. As the commenters say in the article, the AI couldn’t generate anything without the original works.
Apropos of nothing, her art style really isn’t that original. Her style itself clearly apes illustration styles of the 40s and 50s. I guess copying never goes out of style.
Wrong. Various regularization schemes are used in AI models which essentially introduce “flaws” and noise into the process. The flaws are more optimal than what the brain does, but they are there.
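For what it's worth, a concrete (if simplified) example of such noise injection is dropout. This is a generic sketch of the idea, not the specific regularization used by any particular image model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Inverted dropout: randomly zero activations, rescale the rest.

    During training this injects noise so the network cannot rely on
    memorizing exact inputs; at inference time dropout is disabled.
    """
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

acts = np.ones(10_000)
noisy = dropout(acts, p=0.5)  # roughly half zeros, survivors scaled to 2.0
```

The rescaling keeps the expected activation unchanged, which is why the network trained with this noise still behaves sensibly without it at inference time.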
This may only be the case right now but I find current AI art sits in the uncanny valley. At first glance it looks impressive but after you spend a few hours looking at the output of current algorithms you start to recognize the same quirks and shortcomings in every image. At this point if I needed art for a project I really cared about I'd still spend the money for a human artist.
See, I don't understand this. Training an AI on this content is legal, but how is copying it in the first place, in order to use it, not illegal? This is what I never understand about these AI cases. If I copy a YouTube video I'm breaking the law, but if I then use my illegal copy to train an AI, does that retroactively make my copying legal?
I think you're using the word "copy" incorrectly. What does it mean to "copy" a YouTube video? You are literally making a copy of it on your computer as you watch it; that's what watching a video is. You're also making a "copy" of it in your brain.
Right, but you don't train a model by pointing a camera at a screen. You download the video file. You deliberately bypass the copy protection. I'm not saying it should be illegal, I'm saying it is.
Hmm, is that how it works? We're mostly talking about images in the OP, not video, and her images are freely available to view / download from her website.
I'm not sure about literally downloading YouTube videos, you might be right about that.
> and her images are freely available to view / download from her website.
At least under US law, the person downloading the images has to be the same one who trains the model, because giving them to another person is copyright-violating distribution.
To be clear, I'm just sick of corporations being free and clear to do things that would get the rest of us stomped.
The LAION dataset, which SD was trained on, is just a list of URLs and textual descriptions. There's no illegal copying going on when StabilityAI trained SD. It's also not illegal for you to do the same thing.
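To make that concrete, a LAION-style record is essentially a (URL, caption) pair plus metadata; the dataset stores pointers, not the images themselves. A deliberately simplified sketch (these field names and records are hypothetical, not the real schema):

```python
# Hypothetical, simplified LAION-style records: the dataset itself
# contains URLs and textual descriptions, not pixel data.
records = [
    {"url": "https://example.com/a.jpg", "caption": "a watercolor fox"},
    {"url": "https://example.com/b.png", "caption": "city skyline at night"},
    {"url": "https://example.com/c.jpg", "caption": "a fox in the snow"},
]

def search(records, term):
    """Return records whose caption mentions the search term."""
    return [r for r in records if term in r["caption"]]

hits = search(records, "fox")  # matches the first and third records
```

Whoever trains a model then fetches the images behind those URLs themselves, which is where the legal questions actually live.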
No difference at all from how Disney, a giant corporate machine, has treated artists and art over the last few decades. This is just the next stage in the mindless machine's evolution.
Artists have had less and less influence on anything beyond mindless consumption during our generation. So no surprise where the story goes. Without influence, you can't control what the machine does.
If your art style can be ripped off by an AI looking at 32 images, your art style is too pop to be considered "yours" imo.
To take that criticism further, the original artist took Disney art subjects and applied a Nickelodeon animation style to them.
Hopefully the artist can appreciate the irony of claiming that it is _her_ style being ripped off when everything in the examples shown is clearly pop art, categorically defined by predominant social influences, and not something which came from her artistic perspective.
Is there any artist whose style cannot be imitated by an AI given selected images? I'm pretty sure an AI can correctly mimic Picasso if you give it 32 blue period paintings. Does that mean Picasso has no artistic value?
I very much sympathize with her because it isn't fair, but I do find it slightly hypocritical that, looking at the work on her website, much of it depicts other copyrighted work.
I suspect the only acceptable answer here is to disallow AI training of copyrighted material, but this only delays the models supplanting the actual artists (because people will contribute to and build up a pool of copyright-free training material), it doesn't prevent the ultimate issue of people being replaced by AI.
But her style is still defined/drawn from Disney/Nick/other cartoons anyway. At the end of the day she said that she didn't see herself in the AI images and felt distanced from it - isn't that good enough?
Her art still has value to her and her clients as long as the human in the loop still has something to offer buyers (i.e. being able to have a conversation / work on the design for something tailor-made).
When the AI is smart enough to generate art from an ongoing conversation about a piece, making adjustments etc., then she will have to draw art for herself and those who appreciate it, in the same way that there are still some blacksmiths around and people buy their works because they love that a piece is handmade / a piece of history.
And another important point that's buried, is that most of the art used to train the model is actually owned by her clients such as Disney. So, she could not, even if she wanted to, give permission to use that content.
IOW, the person training the model was just fine with stealing the artwork.
While the artist seems to not be litigious, it'll be interesting to see if the major rights-holders like Disney start going after the AI model companies and/or the people that train the models, if they find that there is output of their properties.
This automated generation of code, text, art, etc., is really nothing more than sophisticated sampling/mashup, and when you use snippets in your work output, they should be credited and properly compensated. This is rapidly amounting to automated creation-theft engines.
Worse yet, thinking ahead a bit: once they've all been trained on the available works, and all the writers, artists, and coders have been put out of work, progress will stagnate, because the "AI"s will generate nothing new, only continuously regurgitate bits and mashups of what is by then old stuff.
I appreciate the argument. It made me think that we might have a “photographs steal your soul” kind of moment here with new technology.
Nonetheless, the difference is pretty clear here, I think. An AI makes the artist’s style infinitely reproducible by all. A single artist copying another artist’s style is basically what copyright is all about, including the relative straightforwardness of enforcement through ordinary litigation.
Whether or not copyright is being breached by the AI, there’s a paradigm-shifting difference in the nature of said AI, perhaps not unlike downloadable digital music on the internet compared to physical media.
I'm not a moral philosopher, but I'd say the difference is effort.
I mean no shade on the illustrator herself, but her style looks derived itself from other styles.
Anyway, another illustrator would need to put in the effort to learn the style, and then X hours to create each piece. An AI, once trained, can churn out thousands of artworks in that style per second (with enough computing power); it makes the illustrator obsolete, and, like mass production of low-cost knockoff products, it creates competition and cheapens the brand / style.
Is that good or not? I don't know; again, I'm not a moral philosopher.
When it's using a lot of different types of work from different creators, the output is more a sum of its parts; it's a little bit more of a gray area. When it's specifically trying to copy one person's style, that's very personal, and very real to the person being copied. I think it's made weirder by how low-effort the copy is. Someone learning your art style and painting in it is a bit different from just making a computer do it.
My second thought is that this reminds me of the attitude toward data collection that was rampant back when I was getting into tech. The casual attitude toward consuming other people's information, be it private data or, in this case, work they've labored over, has led us down the path of exploitation and profiteering. I'm sure it will be no different here.
It's all fun and games when it's free and open and we're all just making toys, but the commercialization has already begun, and these precedents will end up being exploited by companies willing to profit off things that others had too strong a moral compass to touch.
Jaron Lanier's 'data dignity' idea really seems like the best solution here. Her work was indispensable to whatever value this algorithm produces in the future - it would make sense if she got partial ownership of some kind. It's the share of ownership she should have that we get hung up on. In some sense she's already won: she'll definitely get more traffic to her own site now, because she was part of an interesting story about the early days of artistic AI. But we intuit that for every Hollie Mengert or Metallica out there who benefits from the attention sluice, there are a number of other artists who don't get those benefits, and by definition we don't know who those artists are.
In an ecosystem we might say 'fit data is what makes it into the next generation no matter the species'. In a 20th century economy, we might say 'the creator should benefit from her work'. But we're not in either of those. We care more about the Hollie Mengerts of the world than their impact on the future evolution of art. Or more precisely, we care more about the right incentives being present for Hollie Mengert than how those incentives play out in this individual case. But that influence on future art is also undeniably part of the incentive structure for an artist today.
This seems like a classic wicked problem - does anyone know of a group engaging with it?
Maybe not ruthless enough. Society needs to evolve past this notion that people have control over data and information just because they created it. The faster this happens, the better. Are we seriously gonna have to put up with the good old copyright industry forever? They keep destroying perfectly good technology just because it threatens their existence. I say let them disappear.
If this were to happen, and we created a world with no form of copyright: why would somebody spend their life in a creative industry making new things?
Most of the entertainment I have consumed in recent years has been made by amateurs who at most got donations. Besides, there can be subsidies. The current copyright law is already a subsidy, but it’s selling off society’s natural rights instead of petty tax money.
Allowing creative output to be freely used, while forcing creators to subsist on the crumbs thrown back is a two-class system. It seems unfair to those doing the work in such a system.
Copyright is far from perfect (far far...) but it is still an improvement on patronage. At least a creator has control over the use of their work.
> It seems unfair to those doing the work in such a system.
It's the only thing that makes sense in the 21st century. Copyright is unenforceable in the age of ubiquitous networked computers.
In order to enforce copyright, every computer will have to be locked down so that they only execute "compliant" software. Surely everyone browsing this site can appreciate the unfairness of that outcome. I for one do not want such a future under any circumstances. If the copyright business model is killed, so be it.
> At least a creator has control over the use of their work.
An illusion. They have no control. Their copyrights are infringed every single day. Most of the time people don't even realize they are infringing someone's copyright.
> In order to enforce copyright, every computer will have to be locked down so that they only execute "compliant" software.
This is not true. While it is one possible approach to enforcing copyright, it is not the only one. Network surveillance of distribution is another possibility, and it has been used against P2P networks.
Copyright has never been completely enforceable. It has always been a partial solution aimed at preventing organized / profitable distribution, i.e. it is a legal fallback rather than a prevention. But a partially working solution is better than nothing.
What a joke. That "temporary" monopoly lasts centuries and gets extended whenever some rich company's imaginary property is about to enter the public domain. Copyright duration is functionally infinite, you will be long dead before your culture is returned to you.
Right. I doubt many people would disagree copyright REFORM is sorely lacking. Or are you suggesting that an artist is not allowed to benefit from their work because Disney has extended copyright?
Creators can benefit as much as they want. Just not through artificial scarcity. That ship sailed the second computers were invented and they need to stop trying to put that genie back in its bottle.
Either copyright remains unenforceable or computing as we know it today will be destroyed. There is no middle ground and I know which side I'm on. Computers are among the greatest inventions of humanity, they are too precious to be jeopardized because of such concerns as invalidated business models.
Well yeah. I don't know if people have noticed, but Disney has started using the classic Mickey Mouse animation at the start of all their works now, because they know their already-extended copyright is about to expire.
It should be illegal (if it isn't already) to use other people's work to train AI models. In the future, all artworks will have a license attached to them, with some fair-usage clause.
https://holliemengert.com/
https://www.youtube.com/watch?v=XWiwZLJVwi4
A kind and creative soul, who apparently is now worth 2 hours of GPU time.
I too believe AI art is inevitable and cannot be stopped at this point. Doesn't mean we have to be so ruthless about it.