I’ve always found this kind of puzzle infuriating because it’s badly underspecified. You’re not trying to find a pattern; you’re trying to guess which pattern the test writer expects.
Most of the ARC tasks are intuitive and have one obvious answer. Both on IQ tests and the ARC challenge, people manage to guess what the test writer expects.
For an AI, that's more useful anyway. If a task were specified completely unambiguously, you wouldn't need AI. But if it can correctly guess what you want from a small number of obvious examples, that's much more valuable.
Countless problems in the world are underspecified in exactly this way; handling them is effectively what common-sense reasoning is. Or what Charles Sanders Peirce called abductive reasoning: making a sensible best guess under conditions of uncertainty.
Yes, real-world problems are often underspecified, but they also tend to come with much more context and to be much more interactive. These puzzles are deliberately minimal and abstract, meaning there's nothing for 'common sense' to work with.