
The LLM itself doesn't even need to do that. The system / front end that people interact with can wrap that step. Plandex does this, for example, and has been doing it since before integrated reasoning models existed.

I mean, it's nice when the models can integrate the step-by-step internally... but I feel people have been missing out on the complex interactions by expecting it all in one ad hoc prompt.
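To make the "wrap it in the front end" idea concrete, here is a minimal sketch of a two-pass wrapper that asks for a plan first and then answers using it. This is not Plandex's actual implementation; `call_model` is a hypothetical stand-in for any chat-completion API, stubbed here so the control flow is runnable without a real LLM.

```python
# Sketch: step-by-step reasoning wrapped OUTSIDE the model, in the caller.
# `call_model` is a hypothetical placeholder; a real system would call an
# LLM API here. The stub just returns canned text so the flow can run.

def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    if prompt.startswith("Plan"):
        return "1. Parse the question. 2. Work the steps. 3. State the answer."
    return "Final answer based on the plan."

def answer_with_plan(question: str) -> str:
    """Two-pass wrapper: elicit a plan first, then answer using it."""
    plan = call_model(f"Plan, step by step, how to answer: {question}")
    return call_model(f"Question: {question}\nPlan: {plan}\nAnswer concisely.")

print(answer_with_plan("What is 17 * 24?"))
```

The point is that the "reasoning" pass is just an extra round trip managed by the wrapper, so it works with any model, reasoning-tuned or not.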



I think the feeling is that for this to really be AGI, it has to take in a single prompt and then delegate behind the scenes to an enormous tree of sub-agents if needed.

One app that comes to mind is Google's Conversational Agents. Routing is done just by referencing another agent in the instructions; there's no need to explicitly link anything beyond the prompt.
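In that spirit, a toy version of prompt-based routing might look like the sketch below. This is an assumption about the mechanism, not Google's implementation: the router simply dispatches to whichever sub-agent is named in the instruction text, with the agent names and handlers invented for illustration.

```python
# Hypothetical sketch of routing by naming a sub-agent in the instructions.
# Agent names ("billing", "support") and handlers are made up for the demo.
from typing import Callable, Dict

AGENTS: Dict[str, Callable[[str], str]] = {
    "billing": lambda q: f"[billing agent] handling: {q}",
    "support": lambda q: f"[support agent] handling: {q}",
}

def route(instructions: str, query: str) -> str:
    """Dispatch to the first agent whose name appears in the instructions."""
    for name, agent in AGENTS.items():
        if name in instructions.lower():
            return agent(query)
    return f"[default agent] handling: {query}"

print(route("For payment issues, hand off to the billing agent.", "refund?"))
```

A real system would have the model itself pick the hand-off target, but the linkage is the same: the connection lives in the prompt text, not in explicit wiring.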



