LLMs make counting mistakes, like forgetting the number of columns halfway through. I won't say "much like humans", since that will probably trigger some. But the general tendency of LLMs to be "bad at counting" (this includes computing) is resolved by having them produce programs that do the counting, and then executing those programs instead. The LLMs that do that today are called agentic.
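To make that concrete, here's a minimal sketch (my own hypothetical example, not any particular agent framework) of the kind of program an agentic setup would emit and run instead of counting tokens itself, say for the "number of columns" case:

```python
import csv
import io

# Hypothetical helper: instead of the model counting columns across a long
# CSV (where it can lose track), the agent writes and executes code like
# this, and the exact result is fed back into the conversation.
def count_columns(csv_text: str) -> int:
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)  # first row is the header
    return len(header)

sample = "id,name,email,signup_date\n1,Ada,ada@example.com,2024-01-01\n"
print(count_columns(sample))  # → 4
```

The point being: the model only has to *write* correct code once, not stay consistent over thousands of tokens.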
The reality is that for any meaningful work automation, the currently available tooling is not meeting that expectation.
And 99% of us do not have the capability or the knowledge to build these SOTA models, which is why A. we are not at OpenAI making $10M+ TC, and B. we are application developers using off-the-shelf technology to build products and services.
As such, we have real-world experience with these technologies.
BTW, I use AI heavily every day in Cursor and whatever else.