Hacker News

I know at every company I'm working with, they've found pretty useful and interesting fits for this tech. It fills a space that's been impossible to fill to date: an abductive reasoning ability in an abstract semantic space. While LLMs can't actually reason - any more than our inductive or deductive reasoning systems of yore could - the missing piece in a lot of stuff has been the ability to navigate an abstract space, find a likely "meaning," and then produce a likely output that's semantically "accurate." The optimizing, constraining, and informing via goal-based agency, information retrieval, etc. - those are simply integrations, as this article discusses.

By looking at LLMs as a finished product, you miss the magic. It's not a product; it's a capability in a larger system, currently on display in demo-ware. The larger systems are where the magic happens. Don't take my word for it: while we will see more hype than we ever have before, we will also see systems that transcend what was possible by amazing leaps and bounds. The jaded are both right and profoundly wrong - as are the wild-eyed dreamers.
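To make "a capability in a larger system" concrete, here's a minimal sketch of one such integration: retrieval constraining a model before generation. Everything here is hypothetical - `llm()` is a stand-in for any text-completion call, and the toy word-overlap `retrieve()` stands in for a real search index or embedding store; the point is only that the value lives in the surrounding loop, not the model alone.

```python
def retrieve(query: str, corpus: dict[str, str]) -> str:
    """Toy information retrieval: pick the document sharing the most
    words with the query. A real system would use embeddings or a
    search index; this keeps the sketch self-contained."""
    q = set(query.lower().split())
    return max(corpus.values(),
               key=lambda doc: len(q & set(doc.lower().split())))

def llm(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call. A real
    integration would invoke a model here; we echo the retrieved
    context so the example runs without any external service."""
    return prompt.split("Context: ")[-1]

def answer(query: str, corpus: dict[str, str]) -> str:
    """The 'larger system': retrieve first, then constrain the model
    with the retrieved context before generating an answer."""
    context = retrieve(query, corpus)
    return llm(f"Question: {query}\nContext: {context}")

corpus = {
    "a": "the capital of France is Paris",
    "b": "water boils at 100 degrees Celsius",
}
print(answer("what is the capital of France", corpus))
```

Swap in a real model and a real retriever and the shape stays the same: the integration layer decides what the model sees and how its output is used.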


OK, could you please give a concrete example of a problem this tech solved that wasn't possible to solve before?




