Hacker News

I think discussions about superintelligence are mostly pointless. They are storytelling at best, and often little more than bullshitting.

Language is an imperfect model of reality. The further a discussion deviates from observable reality, the less confident you can be that the conclusions you reach within the model are also valid in reality. Once you are talking about something as wildly hypothetical as superintelligence, words have basically lost their connection to the observable world.

If the history of science teaches us anything, it's that smart people are often stupid. They come up with all kinds of silly ideas, because they rely too much on reason and too little on hard work. And even the ones who ended up revolutionizing the world usually sound ridiculous in retrospect, if you listen to all of their ideas instead of just the successful ones.

Maybe there are some valuable ideas in the superintelligence discussion, but you can't identify them in advance.



The important part about the current AI craze is that it's the first realized approach that somewhat resembles all the theories, fantasies, and bullshit about it. Can it eventually lead to superintelligence? It seems like a step in the right direction, and we'll know soon enough if it's another dead end (possibly even within a few short years… which is a great leap from the time scales of decades until recently).


If you have a good enough model, you can identify anything.

To discuss super-intelligence, we should first define it. Wikipedia says that it is intelligence surpassing the brightest of human minds. Taken as a whole, the internet already has the knowledge of a super-intelligence: it contains more useful information than any individual human. But it is far more limited in its control and use of information.

LLMs rely heavily on inferences from their training data, meaning that they struggle to generalize to new situations. If you had a program that could use abstract reasoning to learn any topic, then it could solve any problem better than a human, given that a supercomputer can process and store more data than a human. This program would be a super-intelligence.

I expect that the development of intelligence (software) superior to humans will happen much faster than the development of superior hardware did, based on the timescales of human evolution (billions of years) compared to the evolution of civilization (thousands of years).


I’d like to push back, if you have the time to clarify. Are the following summaries accurate renditions of your points?

1. We shouldn't worry about SI because we haven't seen one yet.

2. We shouldn't listen to smart people, because they've been wrong before.

Because if so, I obviously don't find those convincing. The whole reason the SI cultists are so "alarmist" is that, by definition, this is the kind of problem we have to preempt, not run into and then wing it.

If someone responded to concerns about the atmosphere igniting into nuclear fire with "that's never happened before and the only people worried about it are scientists, so don't worry about it" instead of equations… well, I'd be damn well terrified.


How do you respond to concerns about fairies stealing your children?

For that matter, how do you respond to a calculation (there was one!) that railroad trains could not exceed 41 miles per hour, or all the air would be forced out of the cars and all the passengers would die?

I believe that jtsiren's point is that we're not at the point where we can even define intelligence. We can't calculate anything, because we can't sensibly define any of the variables in the equations. All we can do is make guesses about terms we can't even define. We're like stone-age people worrying that if their neighbors' cooking fire gets too hot, it is going to light the air on fire and we're all going to die, and nobody can reassure us because nobody even knows what burning really is, or what the air's made of. We're millennia away from being able to do the kind of calculation that you want.

So the only choices available are to proceed, or not. And if your answer is "not", you need to convince everybody, because I don't think we're going to (for example) nuke North Korea to stop their AI program unless you've got a really convincing case. Which nobody has now.


>I think discussions about superintelligence are mostly pointless. They are storytelling at best, and often little more than bullshitting.

That's like saying in the 1800s that we shouldn't ever investigate atoms because it's mostly pointless and storytelling at best.

Personally, I think we need to be discussing this now. Smart people need to be. We need to come up with models for AI intelligence, which might help us predict when and if superintelligence occurs.


Philosophers have been talking about atoms for thousands of years, and most of it turned out to be pointless bullshitting. Then, a few years before 1800, scientists started using atoms as an explanation for measurable phenomena in chemistry. And that's how scientific theories of atoms came to be: not as hypothetical constructs, but as explanations for something that could be observed.


And we have been talking about superintelligence since the '60s, haven't we?

It’s not like it’s completely out of the blue.



