> Subagents can work very well, especially for larger projects.

Based on that statement, I suspect you're where I was early on with subagents, and that your mental model for using them effectively is still embryonic.
I've found that the primary benefit of subagents is context/focus management. For example, I'm doing auth using Stytch. What I absolutely don't want is to load https://stytch.com/docs/llms.txt and instructions for leveraging it into my CLAUDE.md. But it's perfect for my auth agent, and the quality of the output for auth-related tasks is far higher as a result.
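For concreteness, here's roughly what that looks like as a project-level subagent definition in Claude Code's .claude/agents/ format (a sketch: the frontmatter fields are the documented ones, but the agent name, tool list, and prompt wording here are my own invention):

```markdown
---
# Hypothetical example; stored as .claude/agents/stytch-auth.md
name: stytch-auth
description: Authentication specialist. Use for any task touching auth, sessions, or the Stytch integration.
tools: Read, Edit, Write, Grep, Glob, WebFetch
---
You are the authentication specialist for this codebase. All auth work goes
through Stytch.

Before starting a task, fetch https://stytch.com/docs/llms.txt and treat it as
your primary reference for the Stytch API. Do not guess at endpoints or
parameters the docs don't confirm.
```

The point is that the Stytch docs and usage instructions live only in this agent's context window; CLAUDE.md, and therefore every unrelated task, never pays for them.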
I'm not sure whether this also qualifies as incompetence/an embryonic understanding, but I've used LLMs for hundreds of hours on development tasks and have likewise found that subagents are not good at programming. They're better suited to research tasks: providing informed context to the parent agent while isolating it from the token consumption that retrieving that context incurs.
Zooming out, my finding on LLMs for programming is that they work well in specific patterns and quickly go to shit when left completely unsupervised by an SME:
* Prototyping
* Scaffolding (e.g. write an endpoint that does X, which I'll then refine into a sustainable implementation myself)
* Questions on the codebase that require open-ended searching
* Specific programming questions (e.g. "How do I make an HTTP call in ___?")
* Idea generation ("List three approaches for how you'd ____" or "How would you refactor this package to separate concerns?")
The LLMs all fuck up something in every task they perform, due to the intersection of operating on assumptions and working in large problem spaces. The effort it takes to completely eliminate assumptions from the agent makes the process slower than writing the code yourself, so people try to find the balance they're comfortable with.
> I've found that the primary benefit of subagents is context/focus management. For example, I'm doing auth using Stytch. What I absolutely don't want is to load https://stytch.com/docs/llms.txt and instructions for leveraging it into my CLAUDE.md.
> But it's perfect for my auth agent, and the quality of the output for auth-related tasks is far higher as a result.
What about just using a subagent specifically to fetch llms.txt and find the answer to the question for the parent agent, instead of handing a full task off to it?
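Something like this, say (same .claude/agents/ convention as above; the name, single-tool restriction, and prompt are my assumptions, not anything Anthropic prescribes):

```markdown
---
# Hypothetical retrieval-only agent: .claude/agents/docs-fetcher.md
name: docs-fetcher
description: Fetches a vendor's llms.txt and answers one documentation question. Returns the answer, never the raw docs.
tools: WebFetch
---
You answer exactly one documentation question per invocation.

1. Fetch the llms.txt URL given in the prompt.
2. Find the answer to the question you were asked.
3. Reply with a concise answer plus only the directly relevant snippets.
   Never return the full document; the parent agent should not pay those tokens.
```

That keeps the task itself with the parent and isolates only the token cost of retrieval.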
> You didn't bother reading my actual criticism of the subagent model:
Nope, I did. It's why I was under the impression that you hadn't yet figured out how to use them successfully, and why I posted a specific example of where a subagent is useful and why, hoping you and others might benefit from it.
If the subagent model does not work 90% of the time, why does the workflow you recommend in the other Reddit post you linked specifically delegate work to subagents throughout?
A recommended read: https://jxnl.co/writing/2025/08/29/context-engineering-slash...