
There's no connection between AI and AGI, apart from hopes. Besides which, if you're talking about AGI, you're talking about artificial people. That means:

• They don't really want to be servants.

• They have biases and preferences.

• Some of them are stupid.

• If you'd like to own an AGI that thinks for you, the AGI would also like one.

They are people with cognition, even if we stop being people who think for ourselves.



AGI just means what it says: Artificial General Intelligence. AGIs don't have to have selfish traits the way we do, and they don't have to follow the rules of natural selection; they just need to solve general problems.

Think of them like worker bees. Bees can solve general problems, though not at the level humans do, so they're like a primitive kind of AGI. They also live and die as servants to the queen, and they don't want to be queens themselves; the reason why is interesting, by the way, it comes down to genetics and game theory (see the sketch below).
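
To make that concrete, here's a minimal back-of-the-envelope sketch in Python, using the textbook relatedness coefficients for haplodiploid insects (nothing here is specific to any particular AGI claim): a worker bee shares on average 3/4 of her genes with a full sister but only 1/2 with her own offspring, so by Hamilton's rule (help when r × B > C) raising the queen's daughters can beat raising her own.

    # Hedged sketch: why a worker bee "wants" to serve the queen.
    # Haplodiploidy: males are haploid, so full sisters share all paternal
    # genes plus half the maternal ones, averaging relatedness 0.75.
    r_sister = 0.75      # worker's relatedness to a full sister
    r_daughter = 0.50    # worker's relatedness to her own offspring

    def hamilton(r, benefit, cost):
        # Hamilton's rule: an altruistic trait spreads when r * B > C.
        return r * benefit > cost

    # Raising one extra sister (B = 1) at the cost of one forgone
    # daughter (C = r_daughter, in expected gene-copy terms):
    print(hamilton(r_sister, benefit=1.0, cost=r_daughter))  # True

So the worker's "servitude" is itself the gene-level selfish strategy; that's the game-theory part.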

This is all highly theoretical anyway; we have no idea how to build an AGI yet, and LLMs are probably a dead end since they can't interact with the physical world.


You’re anthropomorphizing too much.


These postulated entities are by definition people. Not humans, because they lack the biology, but that's a detail.

If you think they're going to be trained on all the world's data, that's still supposing them to be an extension of today's AI. No, they'll have to pick up their knowledge culturally, the same way everybody else does, by watching cartoons - I mean, by interacting with mentors. They might have their own culture, but only in the same way that existing groups of people with a shared characteristic do, and they can't weave it out of thin air; it has to derive from existing culture. There's potential for an AGI to "think faster", but I'm skeptical about what that amounts to in practice or how much use it would be to them.


> These postulated entities are by definition people.

Why? Does your definition postulate that people are the only things in the universe that can measure up to us? Or the inverse: that every entity as sentient and intelligent as we are must be called a person?

My opinion is that a lot of what makes us like this is physiological. Unless the developers go out of their way to simulate these things, a hypothetical AGI won't be similar to us no matter how much human-made content it ingests. And why would they do that? Why would you want to implement physical pain, or fear, or human needs, or the biases and fallacies derived from our primal instincts? Would implementing all these things even be possible at the point where we find an inroad towards AGI? All of that might require a comprehensive simulation of the human brain, not just a self-learning machine.

I think it's almost certain that, while there would be some mutual understanding, an AGI would feel like a completely different species to us.


The latter: intelligence is one thing, and to imagine that an artificial intelligence would be some kind of beyond-intelligence, a beyond-person, is to needlessly multiply entities. The assumption should be that there's only (the potential to create) people like us, because to imagine beyond-people is to get mystical about it. "Beyond-rats" is what I say to that.

I have sympathy for the point about physiology, though: I think being non-biological has to feel very different. You're released from a lot of the human condition; you're not driven by hormones or genes, your plans aren't hijacked to get you to reproduce or eat more or do whatever other animal thing, and you don't have the same needs. That's all liable to alienate you from the meat-based folk. However, you're still a person.



