Honestly? I think it would result in a smarter image generator. Part of the problem with the "hope-Authors-Guild-v-Google-is-controlling-precedent" approach is that the data set is extremely noisy. In AI, the training set is gospel, and people are almost certainly overfitting their models. DALL-E 2 is suspiciously familiar with how to draw Getty Images watermarks, for example.
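To make the watermark point concrete: here's a toy sketch (my own illustration, nothing to do with DALL-E's actual architecture or training data) of why a spurious cue like a watermark gets learned. Every "stock photo" example carries the watermark feature, so any feature-selection or fitting procedure latches onto it over the genuinely noisy pixels:

```python
import random

random.seed(0)

# Toy dataset: each "image" is 8 noisy pixel features plus one
# watermark flag. The watermark appears on every positive example
# (stock photos) and never on negatives -- a spurious but perfect cue.
def make_example(has_watermark):
    pixels = [random.random() for _ in range(8)]  # uninformative noise
    return pixels + [1.0 if has_watermark else 0.0], int(has_watermark)

data = [make_example(i % 2 == 0) for i in range(1000)]

# Score each feature by |Pearson correlation| with the label: a model
# trained on this set will latch onto the one feature that perfectly
# separates the classes -- the watermark, not the image content.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

labels = [y for _, y in data]
scores = [abs(correlation([x[i] for x, _ in data], labels))
          for i in range(9)]
best = scores.index(max(scores))
print(best)  # index 8 -- the watermark feature wins by a wide margin
```

The feature names and setup here are invented for illustration; the point is just that when a cue co-occurs with a class across the whole training set, the model treats it as part of the concept, which is exactly how a generator ends up knowing how to draw the watermark itself.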
Man, if I knew how half this training software worked, I'd be downloading the whole image set today and shoving it straight through my poor aging 1080 Ti.