
LLMs make counting mistakes, like forgetting the number of columns halfway through. I won't say "much like humans", since that will probably trigger some people. But the general tendency of LLMs to be bad at counting (and at computation more broadly) is resolved by having them produce programs that do the counting, and executing those programs instead. The LLMs that do that today are called agentic.
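A minimal sketch of what that looks like in practice: instead of asking the model to eyeball column counts in a table, the agent emits and runs a short script that counts deterministically. (The function name and sample data here are illustrative, not from any particular agent.)

```python
import csv
import io

def column_counts(csv_text: str) -> list[int]:
    """Count the columns in each row of CSV text programmatically,
    rather than having a model count them token by token."""
    reader = csv.reader(io.StringIO(csv_text))
    return [len(row) for row in reader]

# A row with a missing column is caught exactly, not "roughly".
data = "a,b,c\n1,2,3\n4,5\n"
print(column_counts(data))  # [3, 3, 2]
```

The counting never drifts halfway through, because the program, not the model, does it.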


Right. Except in many cases those agents stop working as expected when the files become more complicated.


I haven't tried working with very large files.

But Claude Code does read the entire file when it reads or writes anything.

Humans don't do anything close to that when the files get big.

So presumably what LLMs need is a finer context granularity than per-file.
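One way to sketch "finer than per-file" granularity (this is an illustration of the idea, not how Claude Code actually works): index a source file by its top-level definitions, so the agent can pull in only the function it's editing instead of the whole file.

```python
import ast

def function_chunks(source: str) -> dict[str, str]:
    """Split Python source into per-function chunks, keyed by name,
    so an agent can load just the relevant piece of a large file."""
    tree = ast.parse(source)
    chunks = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            chunks[node.name] = ast.get_source_segment(source, node)
    return chunks

src = "def foo():\n    return 1\n\ndef bar():\n    return 2\n"
print(sorted(function_chunks(src)))  # ['bar', 'foo']
```

That's roughly what a human does with a big file: jump to the one function that matters and ignore the rest.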


The promise is that we can automate work.

The reality is that for any meaningful work automation, the currently available tooling is not meeting that expectation.

And 99% of us have neither the capability nor the knowledge to build these SOTA models, which is why A) we are not at OpenAI making $10M+ TC, and B) we are application developers using off-the-shelf technology to build products and services.

As such, we have real world experience with these technologies.

BTW, I use AI heavily every day, in Cursor and whatever else.





