Hacker News

I wouldn't dismiss it so fast. I've seen SD generate some quite creative images, as original as I've been able to determine by searching the training dataset. One example was asking for a picture of someone riding a Vespa: in one of the images the rider was wearing the Vespa's fenders as a helmet, louvers and all. I don't see what else to call that but the AI's "own idea".


By deconstructing the "decisions" (to use a disgusting anthropomorphism) that led to either image, we can dismiss the "I don't understand it, so it must be doing something greater than it is" rhetoric.

The decisions leading up to the human art are the entire human experience leading up to the creation of that art (and possibly its context afterwards), which we as people tend to put value on.

The "decisions" leading up to the AI art are a series of iterative denoising steps that attempt to recover an image from noisy data by estimating how much the noise differs from the "good looking" image.
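To make that concrete, here is a minimal sketch of such an iterative denoising loop. The noise-prediction "model" is a toy stand-in (an assumption for illustration), not Stable Diffusion's actual text-conditioned U-Net:

```python
import numpy as np

def predict_noise(x, t):
    # Toy stand-in for a trained noise-prediction network.
    # It behaves as if the "clean" image were all zeros, so its
    # noise estimate is simply the current sample, scaled.
    return x * 0.9

def denoise(x, steps=50):
    # Reverse-diffusion-style loop: repeatedly subtract a small
    # fraction of the estimated noise from the current sample.
    for t in range(steps, 0, -1):
        eps = predict_noise(x, t)
        x = x - eps / steps
    return x

x = np.random.randn(8, 8)          # start from pure noise
img = denoise(x)
print(float(np.abs(img).mean()))   # magnitude shrinks toward the "clean" image
```

Each step only nudges the pixels toward whatever the network's noise estimate says a less-noisy version of the image looks like; there is no separate step where an "idea" is formed.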

So for your "Vespa fenders as a helmet" image, I don't think that constitutes the algorithm being "creative". If a human made the same picture, we could rationalize it as creative because we can imagine a path where their human experiences led to a new idea. Since the algorithm was only ever built to denoise an image toward its abstract feature-space representation, I don't see any way we could rationalize that it created a new idea. The algorithm never "thought" it should use a fender as a helmet; it only found that the best way to denoise the current image toward the one described in feature-space was to remove the pixels that produced that result.

Don't humanize algorithms. They're applied statistics, not a sum of human experiences.


If a calculator adds 2 and 2 and shows 4, is the word "add" a disgusting anthropomorphism? If we need a separate word for every informational process, things are going to get awfully messy.

When an idea "pops" into your head, how was it made? Couldn't that also be a similar denoising of patterns in synaptic potentials? We know from many experiments that what something feels like can be quite different from what it actually is.

Is it only that we don't know the exact brain process that makes humans special? And once we inevitably do figure it out, does all human art become meaningless too? I think we need to learn to disconnect process from result and just enjoy the result, wherever it came from.



