
For me it comes down to signal vs noise.

I’m absolutely confident that AI/LLMs can solve things, but you have to sift through a lot of crap to get there. What's more, they tend to solve novel problems in very unconventional ways. It can be very hard to know if an attempt is doomed, or just one step away from magic.



At that point, is it really solving or is it just monkeys with typewriters?


"Monkeys with typewriters" is, in one sense, a uniform sampling of the probability space. A brute-force search, even one using structured proof assistants, takes a very long time to find any hard proof, because the possibility space is roughly (number of terms) raised to the power of (length of the proof).
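To put a rough number on that exponential blowup, here's a toy sketch (the term count and proof length are made-up illustrative values, not drawn from any real proof assistant):

```python
def candidate_proofs(num_terms: int, proof_length: int) -> int:
    """Rough size of the brute-force search space: every sequence of
    proof steps of the given length, with num_terms choices per step."""
    return num_terms ** proof_length

# Even a tiny formal language explodes quickly:
print(candidate_proofs(10, 5))   # 100000 candidates -- searchable
print(candidate_proofs(10, 20))  # 10^20 candidates -- hopeless to enumerate
```

At 10^20 candidates, even checking a billion per second would take thousands of years, which is why uniform sampling alone doesn't find hard proofs.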

But much as a chess engine uses heuristics to narrow a vast search space down to tractable options, LLMs have the potential to be a smarter way to narrow that search space and find proofs. The big question is whether these heuristics are useful enough, and the proofs they can find valuable enough, to make it worth the effort.
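A minimal sketch of that idea, with an entirely hypothetical "proof language": brute force would enumerate all 341 partial sequences below, while a best-first search guided by a heuristic score (here, a toy prefix-match score playing the role of an LLM's or chess engine's judgment of "promising") expands only a handful of nodes:

```python
import heapq

# Hypothetical toy proof search: the goal is to assemble TARGET from
# steps in ALPHABET. The heuristic scores how promising a partial
# sequence looks; best-first search expands high-scoring nodes first.
ALPHABET = ["intro", "apply", "rewrite", "exact"]
TARGET = ("intro", "rewrite", "rewrite", "exact")

def heuristic(partial):
    # Number of correct leading steps; higher is more promising.
    return sum(a == b for a, b in zip(partial, TARGET))

def guided_search():
    expanded = 0
    # Max-heap on heuristic score (negated, since heapq is a min-heap).
    frontier = [(-heuristic(()), ())]
    while frontier:
        _, partial = heapq.heappop(frontier)
        expanded += 1
        if partial == TARGET:
            return partial, expanded
        if len(partial) < len(TARGET):
            for step in ALPHABET:
                cand = partial + (step,)
                heapq.heappush(frontier, (-heuristic(cand), cand))
    return None, expanded

proof, n = guided_search()
print(proof, n)  # finds the target after expanding only 5 nodes
```

The heuristic here is a cheat (it peeks at the answer), but that's the structural point: the quality of the guidance, not the raw enumeration, is what makes the search tractable.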


I think the signal-to-noise ratio is demonstrably higher with AI than with a legion of monkeys on typewriters. An interesting philosophical question is: is there some threshold of signal-to-noise that by itself would qualify a system as "intelligent", or is "intelligence" some specific property of the search process itself? E.g., perhaps real intelligence avoids certain pitfalls, like getting stuck in local minima.


It's stochastic monkeys, but enhanced with a really good bias towards coherent prose, built upon a gigantic corpus.


So, it's monkeys specifically good at typing Shakespeare.


Or finding the solution hidden somewhere among the decimals of pi.



