
We need another major breakthrough first. As I've pointed out previously, so far nobody can even get squirrel-level AI to work. Even OpenWorm doesn't work. The big problem is "common sense", defined as getting through the next 30 seconds of life without a major screwup. There are animals with brains the size of a peanut that can do that.

The hard problems are down near the bottom. It's not about consciousness, souls, etc. It's about running along the branch without falling off, grabbing nuts along the way.

GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.



> GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.

That's the fate of all AI efforts: whenever we understand something well enough, it ceases to be seen as AI.

As a historic example, the A* algorithm hails from a time when searching through a graph was still seen as AI.
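As a concrete reminder of what that era's "AI" amounted to, here's a minimal A* sketch (the grid world and Manhattan heuristic are just illustrative choices, not anything from a specific textbook):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Find a shortest path from start to goal.

    neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node) must never overestimate the remaining cost.
    """
    # Priority queue of (estimated_total_cost, node)
    frontier = [(heuristic(start), start)]
    came_from = {start: None}
    cost_so_far = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the path by walking parent links backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt, step in neighbors(current):
            new_cost = cost_so_far[current] + step
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + heuristic(nxt), nxt))
    return None  # goal unreachable

# Example: shortest path across a 5x5 grid with unit step costs.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
print(len(path) - 1)  # 8 steps: 4 right + 4 down
```

Today this is a standard exercise in an algorithms course, not "artificial intelligence" — which is exactly the point.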


Which indicates to me that we still haven't identified the "secret sauce" of intelligence.


I studied AI at the Bachelor's level and have from time to time read up on the discoveries. I think the problem is still the same as a decade ago, despite all of the sparkling discoveries made in the meantime: we can't define the problem. We can make a really broad description of what the system is supposed to do, but that's not the same as defining the problem. Maybe that's not as relevant as I felt it would be (I was of the opinion back then, and still am, that AGI is not arriving in our or our children's lifetime). Perhaps we will stumble upon it. That is at least how we arrived at our faculties: nature tried a billion different combinations, and we are the current incarnation of matter trying to figure itself out.


There is none. We are all just a bunch of programmable monkeys outside of their original regulation loop.

AI might get to the same level (as we can already see with GPT-3, as it slowly accrues wisdom), but then it will need to get a digital notebook, a calculator and a drawing board.

The advantage it will have over us is that it won't have to sleep or eat, it will reproduce at factory production rates, and most importantly, it won't have emotions that hurt when people are shit.

It won't be a coherent superintelligence for quite some time. And if it becomes one, it will be slow. About the same latency as humans at the planetary level. Maybe even slower than our ~100ms.

Till then, there will be squabbling. Prepare for a literal digital ecosystem.


>There is none. We are all just bunch of programmable monkeys outside of their original regulation loop.

https://twitter.com/dmimno/status/949302857651671040

>Optimist: AI has achieved human-level performance!

>Realist: “AI” is a collection of brittle hacks that, under very specific circumstances, mimic the surface appearance of intelligence.

>Pessimist: AI has achieved human-level performance.


> The advantage it will have over us is that it won't have to sleep, eat, will reproduce at the factory production rate and most importantly, it won't have emotions that would hurt when people are shit.

I'm actually not sure about the last one.

Also, what makes you think AI will be slow?



Right now, we have robots and space probes working all over the Solar System with much more intelligence and reliability than any biological rodents.


I am not sure about that.

You see, those probes deal with harsh environments, yes. But they don't have to deal with antagonists. No one is out to eat or infect them. Mars won't adapt its storms, Venus won't adapt its chemistry. They are obstacles that don't care about our probes, and those obstacles don't adapt against them.

Those radically different kinds of environments give you radically different designs of probes vs rodents. So I don't think we can easily compare the intelligence of probes vs rodents.


Sure, but they have extremely limited autonomy. The vast majority of their behaviours are directly controlled, or custom programmed by us for the specific situation.


Most humans can't run along a branch grabbing nuts. I'm not sure that's a fair test. Here's a robot running along the ground - they're a lot better than they were a decade ago https://www.youtube.com/watch?v=vjSohj-Iclc


"Getting through life" is not the correct benchmark. An autonomous system that merely wipes out all of humanity is by definition a superior intelligence, and I would argue that no major breakthrough is needed to create such a thing; just resources. It doesn't matter how long that thing can self-sustain after annihilating us. A win is a win.


Are stars and other stellar phenomena a superior intelligence to humanity? They are autonomous systems that could easily wipe us all out


> An autonomous system that merely wipes out all of humanity is by definition a superior intelligence

I think you got stuck on semantics and are missing the forest for the trees, with all due respect.

>Getting through the next 30 seconds

It might not be "the final" or "best benchmark", but I'd argue it's a damn good problem to solve on the way to discovering true AI and AGI.


All the mammals have roughly similar brain architecture. The same components seem to be present, in different quantities. If we can get into the low-end mammal range of AI, we're most of the way there. So if we can get to squirrel level AI, we're getting close. From then on, it may just be scaling.



