
I mean it’s definitely a weaker argument than 2 years ago.

AI may very well plateau, but it might not.



> I mean it’s definitely a weaker argument than 2 years ago.

Is it? I see no evidence that machines are any closer to "thinking creatively" than they ever have been. We certainly have been developing our capacity for computation to a great extent, but it's not at all evident that statistical methods and creativity are the same thing.


> is it?

It is. And I bet most people would agree with GP. Most people (including engineers building these systems) have experienced surprise with some of the outputs of these models. Is there anything better to gauge creativity by than perceived surprise?


> Is there anything better to gauge creativity by than perceived surprise?

I think there has to be, since such surprise can be generated through purely random mechanisms, and I don't think anyone would call a purely random mechanism "creative".


If it were purely random it would generate rubbish.


Not necessarily, but even in the cases where that's true, there will be the occasional result that surprises (in a good way).


This reminds me of when I would be surprised by AI bots I coded up for video games. Yeah, sometimes they beat me, and their strategies were often a mix of RNG and heuristics so I was certainly surprised by their behavior on many occasions. But would I consider them creative? No. Would they fool some people into thinking they had creativity and agency? Sure.
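
(A rough sketch of what such a bot can look like, with hypothetical names - the "surprising" behavior comes from a coin flip layered on a greedy heuristic, not from anything you'd call creativity:)

    import random

    def choose_move(legal_moves, score_move, epsilon=0.2):
        """Pick a move: usually the heuristic best, occasionally pure RNG.

        legal_moves: candidate moves (hypothetical game-specific objects)
        score_move:  heuristic scoring function, higher is better (assumed)
        epsilon:     probability of ignoring the heuristic entirely
        """
        if random.random() < epsilon:
            # Pure randomness: the source of most of the "surprising" play.
            return random.choice(legal_moves)
        # Otherwise a greedy heuristic: no search, no learning, no creativity.
        return max(legal_moves, key=score_move)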


A practical definition of "creativity" is "can create interesting things." It's pretty clear that machines have become more "creative" in that sense over the last few years.


I have yet to see ChatGPT or anything similar ask a follow-up to clarify the question. They just give you a "solution". That's the equivalent of a super bad junior dev who will cause more trouble than they solve.

That being said, I think we could make such a system. It just has to have training data that is competent...


Tell them to ask you follow up questions and they will.

Some systems built on top of LLMs have this built in - Perplexity searches, for example, usually ask a follow-up before running the search. I find it a bit annoying because it feels like about half the time the follow-up isn't necessary to answer my original question.


> Tell them to ask you follow up questions and they will.

That's rather missing the point. If your question makes no sense, it won't ask a follow-up; it will spit out garbage. This is pretty bad. If you are competent enough to tell it to ask follow-ups, then you are probably already competent enough to either not need the tool or to ask a good question in the first place.


I have had ChatGPT suggest that I give it more data/information pretty regularly. Although not technically a question, it essentially accomplishes the same thing: "If you give me this" vs. "Can you give me this?"


> I have yet to see ChatGPT or something similar ask a followup to clarify the question.

I don’t use it for dev, but for other things, and I get ChatGPT asking me follow-up questions multiple times a day.


I can ask my computer to write a backstory for my D&D character, give it a few details, and it makes one.

Sometimes it adds an extra detail or two even!

A few years ago that was almost unthinkable. The best we had was arithmetic over abstract language concepts (KING - MAN = QUEEN)

We don’t have a solid definition of “creativity” so the goalpost can move around a lot, but the idea that a machine can not create new prose, for example, is just not true anymore.

That’s not the same as creativity, sure, but it definitely weakens the “computers can’t be creative” argument.


Actually, KING - MAN is a neuter ruler. You need to add WOMAN to that vector to get QUEEN.

Sometimes. If the embeddings were trained well.
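
(For anyone who hasn't played with this: a minimal sketch of that vector arithmetic using gensim and a pre-trained embedding; the model name and the exact neighbour you get back depend on which embeddings you load and how well they were trained.)

    # Word-vector analogy: KING - MAN + WOMAN ~= QUEEN, if the embeddings cooperate.
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-100")  # pre-trained embeddings (assumed available via gensim-data)

    # most_similar combines the "positive" vectors, subtracts the "negative" ones,
    # and returns the nearest words to the result.
    print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # often something like: [('queen', 0.78)]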


This point is often brought up in threads about AI, and I don't think it's accurate.

The thing is that statistical models only need to be fed large amounts of data to exhibit what humans would refer to as "creativity", or "thinking" for that matter. The process a human uses to express their creativity is also based on training and input from other humans, with the difference that it's spread out over years instead of hours and is a much more organic process.

AI can easily fake creativity by making its output indistinguishable from its training data, and as long as it's accurate and useful, it would hold immense value to humanity. This has been improving drastically in the last few years, so arguing that they're not _really_ thinking creatively doesn't hold much water.


Not really. Large language models' output is creative only insofar as you prompt them to mix data. That lets you create combinations that nobody has seen before, but on its own an LLM is incapable of producing anything creative. Hallucinating due to a lack of data is the closest it comes to autonomous creativity, and happy accidents are an unreliable source of creativity. That kind of creativity is amusing, but it doesn't solve any existing problem today.


There's a simple irony in it all. AI's perceived value is based upon and built from human creativity - remove it (and its evolution) and it will end in grey/brown sludge.



