"Monkeys with typewriters," is in one sense, a uniform sampling of the probability space. A brute-force search, even when using structured proof assistants, take a very long time to find any hard proof, because the possibility space is roughly (number of terms) raised to the power of (length of the proof).
But much as a chess engine uses heuristics to prune a vast search space down to tractable options, LLMs have the potential to narrow the proof search space in a smarter way. The big question is whether these heuristics are useful enough, and the proofs they can find valuable enough, to make the effort worthwhile.
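As a minimal sketch of what that narrowing could look like: a best-first search where a scoring function, standing in for an LLM's judgment about which partial proofs look promising, decides what gets expanded next. Everything here (the `score` oracle, the toy string-building domain) is invented for illustration:

```python
import heapq

def best_first_search(start, goal, expand, score, budget=10_000):
    """Expand the most promising states first, as ranked by `score`
    (a stand-in for an LLM suggesting likely next proof steps),
    rather than enumerating the whole (terms ** length) space."""
    frontier = [(score(start), start)]
    seen = {start}
    while frontier and budget > 0:
        budget -= 1
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for child in expand(state):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (score(child), child))
    return None  # budget exhausted without finding a proof

# Toy domain: "prove" the string "qed" by appending letters, guided by a
# heuristic that rewards matching a prefix of the goal and penalizes length.
GOAL = "qed"
expand = lambda s: [s + c for c in "abcdefghijklmnopqrstuvwxyz"]
score = lambda s: len(s) - sum(a == b for a, b in zip(s, GOAL))
print(best_first_search("", GOAL, expand, score))  # finds "qed" in a few pops
```

With a useless `score`, this degenerates into the monkeys-with-typewriters enumeration; the whole bet is that the heuristic concentrates the search budget on the branches that matter.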
I think the signal-to-noise ratio is demonstrably higher with AI than with a legion of monkeys on typewriters. An interesting philosophical question is whether some threshold of signal-to-noise would by itself qualify a system as "intelligent", or whether "intelligence" is some specific property of the search process itself. E.g., perhaps real intelligence avoids certain pitfalls, like getting stuck in local minima.