It is immoral to create, from someone's labor, a device capable of replacing that labor, without compensating them.

Unless they agreed to provide their work for that purpose for free.

Postulate #1: Image generation models would not exist without large amounts of training data from current artists.

Postulate #2: Every major AI company either trained directly on public web scraped datasets or is murky about what they train on.

Theft at scale does not somehow make it not theft. Stealing 1/100th of a penny 10 billion times is still stealing (and it adds up to a million dollars).

And when you repackage the results of that theft in a profit-generating machine, and then label it not theft because "it's a whole new thing," you start to sound like a CDO apologist.

And look, I get it -- it's about money.

It's always about money.

You may not be making any off your work, but that's immaterial because lots of huge companies are making obscene amounts of money from doing this (or expect to be in the future).

At the same time, it is an excellent tool. Art without human time! It will eliminate a lot of artist jobs, but everyone as a whole will be better off (because we're swapping human labor for electricity).

However, the currently in-vogue "artists don't deserve anything" argument smacks more of "we don't want to share profits during the transition period" than of a cohesive moral argument.

We can have an AI future, but we should be honest about what enabled that. And we should compensate those people during the transition.

Hell, AI tax. Paid to everyone who created a work in a scraped dataset. Sunset in 30 years. Done.
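
To make the mechanism concrete, here is a minimal sketch of the pro-rata version of that idea (all names, rates, and dates below are hypothetical, just to show the shape of it): skim a fixed fraction of model revenue into a pool, split it across the creators whose works appear in the scraped dataset in proportion to their number of works, and stop paying once the sunset date passes.

    from datetime import date

    # Hypothetical parameters -- illustrative only, not a real policy.
    TAX_RATE = 0.05            # fraction of model revenue paid into the pool
    SUNSET = date(2055, 1, 1)  # roughly 30 years out; payouts stop after this

    def ai_tax_payouts(model_revenue, works_per_creator, today):
        """Split the tax pool pro rata by works in the scraped dataset."""
        total_works = sum(works_per_creator.values())
        if today >= SUNSET or total_works == 0:
            return {creator: 0.0 for creator in works_per_creator}
        pool = model_revenue * TAX_RATE
        return {creator: pool * n / total_works
                for creator, n in works_per_creator.items()}

    # Example: $10M revenue, three creators with works in the dataset.
    print(ai_tax_payouts(10_000_000,
                         {"alice": 120, "bob": 30, "carol": 50},
                         date(2025, 6, 1)))

The hard parts are obviously not the arithmetic but the registry (who is in the dataset, with how many works) and enforcement; the sketch just shows the payout rule itself is trivial.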



I disagree with you, simply for the fact that artists have been learning from one another for thousands of years.

We can see a clear timeline of art and its progression throughout human history, and it’s often clear how a later work took inspiration from an earlier period.

Art school teaches techniques and methods pioneered by earlier artists, for the express purpose of having their students learn to incorporate them into their own original work.

Yet no one is arguing that Van Gogh’s descendants should be paid a small royalty any time a variation of one of his paintings is produced, or even just when a painting in the style of one of his is produced.

Were all visual artwork to disappear from the world and collective human memory today, then the first new pieces produced by artists would look dramatically different - and likely much worse - than they do today.

What AI is doing is no different. Perhaps faster and on a larger scale than how humans learn from one another, but principally it’s the same.


> Perhaps faster and on a larger scale than how humans learn from one another, but principally it’s the same.

I like how you just tucked this in at the end there without any introspection on what kind of a paradigm shift that is. If you wanted a "Van Gogh style painting," you'd contract with a painter who specialized in it. And no, his descendants don't get royalties from that (which is an interesting discussion to have -- I'm not sure they should, but I haven't thought about it much -- but anyway). You are paying a human creative to exercise a vision you have, or, from another perspective, perhaps a person goes into creating this style of painting to sell as a business. Again, the idea of royalties isn't unreasonable here, but I digress.

Now, with these generative art algorithms, you don't need a person to spend time turning your/their idea into art: you say "I want a picture of a cat in Van Gogh's style" and the machine will make you dozens, HUNDREDS if you want, basically as many as you can stomach before you tell it to stop, and it will do it (mostly) perfectly, at least close enough that you can probably find what you're looking for pretty quickly.

Like, if you can't tell why that's a PROBLEM for working artists, I'm sorry but that's clearly motivated reasoning on your part.


I can tell why it’s a problem for working artists. I never suggested otherwise. What I disagreed with was the premise that it’s immoral or inherently wrong. A problem posing a difficulty to a certain group of people doesn’t have any bearing on its morality.


> A problem posing a difficulty to a certain group of people doesn’t have any bearing on its morality.

That's just... so very gross in terms of ethical statements.

Like just, hard disagree. Undercutting the value of a whole profession's labor by entire factors is incredibly immoral, especially when you couldn't have done it without the help of their previous works. Like... a very non-exhaustive list of problems I would say meet that definition:

- Generational/racial wealth inequality

- Police brutality

- The victims of the war on drugs

- Exploitation of overseas labor

I don't think we really have anything else to discuss.


> A problem posing a difficulty to a certain group of people doesn’t have any bearing on its morality.

A good point, but I think an argument can be made here from the good of all humans, not just of a specific group.

To sketch, I think we can all agree that the destruction of the human journalism profession negatively impacted public discourse for everyone?

Ergo, the destruction of the human artist profession seems like something we should consider carefully.


Alike in method is not alike in output, and it's output that matters.

A human takes ~4-20 years to become a good artist. They can then produce works at a single human rate.

A model takes ~30 days to become a good artist. It can then produce works at an effectively infinite rate, bounded only by how many GPUs and how much electricity can be acquired.

These are very different economic constraints and therefore require different solutions.


> These are very different economic constraints and therefore require different solutions.

This is often listed as the reason why it's OK for a human to learn from prior art, but not for an LLM. The question is: why? If the act of learning is stealing, then it is still stealing, no matter how small the scale, and every single human on earth has committed it.

The LLM vendor may benefit more than a mere mortal pupil because of the scale and reach. At the same time, the LLM may make the prior art more visible and popular, and may benefit the original creator, even if only indirectly.

Also, if content creators are entitled to some financial reward from LLM vendors, it is only appropriate that those creators should in turn pay the people they learned from, and so on. I fail to see how such a scheme could be set up.


Law exists to benefit humans.

Either directly (outlawing murder) or indirectly (providing for roads and bridges). And well (libraries) or poorly (modern copyright law).

But fundamentally, law benefits people.

Most modern economic perversions are a consequence of taking laws which benefit people (e.g. free speech) and overzealously applying them to non-people entities (e.g. corporations).

So "why [is it] ok for [a] human to learn from a prior art, but not for a LLM"?

Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.

Existing laws aren't the way they are because they encode universal truths -- they're instead the consensus reached between multiple competing interests and intrinsically rooted in the possible bounds of current reality.

"This is a fair copyright system" isn't constant with respect to varying supply and demand. It's linked directly to bounds on those quantities.

E.g. music distribution rights, when home network bandwidth suddenly increased enough to transfer large quantities of music files.

Or, to put it another shorter way, the current system and source-blind model output fucks over artists.

And artists are humans. And LLMs are not.


> Because a human has fundamental output limitations (parallel capacity, time, lifespan) and a machine does not.

Industrialization as we know it would never have happened if we had artificially limited progress just so that people could keep their jobs. I guess you could make the same kind of argument for the copyists when printing became widespread, for horses before the automobile, or for telephone operators before switching got automated. Guess what they have become now. Art made by humans can still exist, although its output will be marginal compared to AI-generated art.

LLMs are not humans but are used by humans. In the end the beneficiary is still a human.


I'm not making an argument for Luddism.

I'm making an argument that we need new laws, different than the current ones, which are predicated on current supply limitations and scarcity.

And that those new laws should redirect some profits from models to those whose work they were trained on during the temporary dislocation period.

And separately... that lobotomizing our human artistic talent pool is going to have the same effect that replacing our human journalism talent pool did. But that's a different topic.


For the AI/robot tax, the pessimistic view is that the legal state of the world is such that the tax can and will be evaded. Now not only do LLMs put humans out of a job because an LLM or an SD model mimics their work, but the financial gains have been hidden away in tax havens through tax-evasion schemes designed by AIs. And even if, through some counter-AIs, we manage to funnel the financial gains back to the people, what is then the incentive for capital owners to invest, and keep investing, in cutting-edge AI, if the profits are too meagre to justify the investment?


>> I disagree with you, simply for the fact that artists have been learning from one another for thousands of years.

They learn from each other and then give back to each other, and to everyone else, by creating new works of art and inventing new styles, new techniques, new art-forms.

What new styles, techniques or art-forms has Stable Diffusion created? How does generative AI contribute to the development and evolution of art? Can you explain?



