Hacker News | Davidzheng's comments

There's no objective measure for comparing intelligences; we can only say that LLM ability is jagged compared to humans'.

How can you tell?

How can I tell what? That current LLMs are not conscious or that AGI/ASI will not require consciousness?

How do you know they aren't conscious if we don't know what consciousness is, and have no test to tell whether anyone or anything is conscious?

This may seem like a joke, but your answer will likely be in the vein of "conscious things are obviously conscious", which gets us nowhere.

I mean, self-motivation and a desire not to be turned off can be programmed into even decades-old AIs.


Consciousness is a huge topic and beyond the scope of an HN comment, but: My answer to this is that they obviously lack a basic understanding of simple things that any continually conscious being would find trivial. I have spent a lot of time having long-form exploratory conversations on a particular topic with AI, and you begin to see how it doesn’t really understand what you’re talking about; it just makes a prediction about what you probably mean.

There is also apparently no real memory; if I tell it to stop doing something today, it’ll agree, then go back to doing it again tomorrow, with no memory of our conversation. This never changes, no matter how many times I ask.

Again we could debate consciousness forever, but in a simple sense, are there any other conscious beings without this sense of continuity? Not that I can think of. And so if everything we call “conscious” is different from an AI, then are we justified in extending it to AI?


So is a person suffering from amnesia conscious if they lack short-term and long-term memory?

Ruling out consciousness or qualia emerging from inference in an LLM is just as invalid a take as being 100% certain of its consciousness. We don’t know what consciousness really is, so the only thing we can say with certainty is that we do not know.


No, by continuity I mean literally moment to moment. Sorry if I didn’t clarify that. Even people with amnesia are still present moment to moment. As far as I know there are no things that we call conscious which have zero continuity.

I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious” in the way an animal is, and never will be. This is a failing of specific language. Maybe the machines can be aware, input data, mimic what we see as consciousness, etc. but the metaphor of consciousness really doesn’t fit. A jet can move faster than an eagle but it’s not moving in the same way. We simply lack a sophisticated enough language to easily differentiate the two.


Doesn’t the LLM experience discrete continuity every time it infers the next token?

> I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious”

This pretty much sums up most arguments for why LLMs aren’t conscious: ”I think” followed by assertions. The only real argument is: science hasn’t quantified consciousness, we cannot measure it, so let’s not be so certain that models clearly exhibiting intelligence are not conscious in some way, to some degree.


I don't think you really understood my point, because you didn't reply to it at all.

I am making a linguistic argument. AI may get as sophisticated as "traditional" consciousness. But this is only "real" consciousness if you are a functionalist and think the output is all that matters.

I disagree and think that "flying" is just a weak generic word that describes both planes and birds, and not some kind of ultimate Platonic Ideal in the world.

Ditto for AI consciousness: it may develop to be as complex as traditional animal consciousness, but I'm not a functionalist, and think it's merely a lack of our sophisticated language that makes us think it's the same thing. It's not. Planes PlaneFly through the air, while birds BirdFly.


I see it as: LLMs, AI, whatever, can be intelligent enough to emulate consciousness, to appear from the outside as if they were conscious. But that is not proof they really have qualia, an experience of existing.

All I am saying is that we should stop being so certain they are not conscious, since we lack a solid, quantifiable model of consciousness.


As a philosophical zombie myself[0], I'm well aware of how hard it is to define and test consciousness. That's why I tried to clarify what I meant with: desire for self-preservation and intrinsic motivation. Which LLMs clearly lack, don't you agree? Also, I'm not saying that those things couldn't be programmed in, just that so far, they don't seem necessary.

[0] I lack a conscious experience and qualia


How can you tell that you lack conscious experience and qualia?

They assert that they don't have them, in the same way you (presumably) assert that you do have them. Neither has any further evidence, and one is not a priori more likely than the other.

Yep, this basically. I tend to get along well with solipsists.

> desire for self-preservation and intrinsic motivation

I’d be curious about how you’re showing they lack either of those


They don't try to prevent you from deleting them and they don't output anything unless prompted.

> they don't output anything unless prompted

Unprompted they're not unlike a human sleeping or in a coma. Those states don't preclude consciousness in other states.


That's beside the point though.

It's important to remember that intelligence is not a singular thing; by the time the last gap is closed, most aspects of it will be highly superhuman.

Obviously yes, in the sense the comment you replied to refers to: the US would be much more careful striking a country with nuclear weapons. So while the invasion may not be caused by proximity, it can be allowed because Iran doesn't have one.


How much value is there in individual values?

Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, that their fiduciary duty was to humanity, and that the non-profit board would curtail the ambitions of the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?

The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward those who choose whatever accelerates AI the most over individuals who are more careful and act on their values; the latter simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statement, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation against AI progress will always be unpopular, because values warning of the future harms of AI are fighting against the value of saving people from disease and starvation today.


Election odds, chance of US bombing Iran, and many others


Then maybe Dario will realize that the moral superiority on which he bases his advocacy against Chinese open models is naive at best.


His stance against Chinese models is a smokescreen for their resistance to DOW; they are not even pretending.


Better naive than malicious.


At a certain level, ignorance IS malicious.

If you have more money than god, you no longer get to play the "I didn't know" game. You have the resources. If you don't know, you made a choice to not know.


The first one is definitely one we agree on and the second was one that I had not clued into so thank you.


You're saying that as if these two things are mutually exclusive.


Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.


Kid, time to grow up and face reality.

Chinese models are developed by Chinese corporations. They are free and open-weight because they are the underdog atm. They are not here for fun, they are here to compete.


The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.


The moment the Chinese create a model that is "good enough" they won't open source it


I will gladly switch to that one if their CEO is less of a sociopath than Altman and, god forbid, Amodei. In fact I use some of the new Chinese models at home, and compared to Opus 4.6 AGI the difference is getting smaller. Codex 5.3 xhigh is already better than Opus anyway.


“I don’t need to win, I just need you to lose”


Neither of these things is a useful signal. Other labs surely trained on similar material (presumably not even buying hard copies). Also, how "bothered" someone is about their predictions is a bad indicator; the prediction, taken at face value, is meant to ask people to prepare for something he could not stop even if he wanted to.

None of this means I am a huge fan of Dario. I think he over-idealizes the implementation of democratic ideals in Western countries and is unhealthily obsessed with the US "winning" over China because of it. But I don't like the reasons you listed.


> But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.

Is it possible for a character in a novel to have novel experiences? Or for you to experience a novel dream? I would argue yes. You can know the rules of the environment and the starting conditions, but with a bit of randomness (or not) you can generate unexpected, novel experiences from them; so too, from the data and distributions that AIs are already trained on, they can have new experiences.

Another source of novelty is a good verifier: recognizing a class of objects that is hard to construct but easy to verify. Here the AI can search, and thereby obtain novel solutions that were unthought of before.

N.B. novelty itself is basically trivial: just generate random strings. But both of the above are mechanisms for generating novel samples within some constraint of "meaningfulness".
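The trivial-novelty vs. verifier-constrained-novelty distinction can be sketched as a toy generate-and-test loop (hypothetical code; `is_meaningful` here is just a stand-in for any property that is cheap to check but has no obvious direct construction):

```python
import random
import string

def random_string(n=4):
    """Trivially 'novel': almost surely a string never produced before."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(n))

def is_meaningful(s):
    """Toy verifier (easy to check): a palindrome using only letters a-m."""
    return s == s[::-1] and all(c <= "m" for c in s)

def search_for_novelty(tries=200_000, n=4):
    """Generate-and-verify: sample blindly, keep what the verifier accepts."""
    return [s for s in (random_string(n) for _ in range(tries))
            if is_meaningful(s)]

random.seed(0)  # fixed seed so the run is reproducible
samples = search_for_novelty()
# Every sample is 'novel' in the trivial sense, but only the verified ones
# satisfy the meaningfulness constraint.
```

The point of the sketch: unconstrained generation produces novelty for free, while the verifier is what turns that novelty into something meaningful, without anyone needing to know how to construct the accepted samples directly.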


Not if they can leverage their superior abundance of compute/intelligence to invade other industries.

