Hacker News

But o3 is just a slightly less stupid idiot savant... it still has to brute-force solutions. Don't get me wrong, it's cool to see how far that technique can get you on a specific benchmark.

But the point still stands that these systems can't be treated as deterministic (i.e. reliable or trustworthy) for the purposes of carrying out tasks where you can't allow brute-forced attempts (e.g. anything where the desired outcome is a positive subjective experience for a human).

A new architecture is going to be needed that actually does something closer to our inherently heuristic-based learning and reasoning. We'll still have the stochastic problem, but we'll be moving further away from the idiot savant problem.

All of this being said, I think there's plenty of usefulness in current LLMs. We're just expecting the wrong things from them and therefore creating suboptimal solutions. (Not everyone is, but the most common solutions are, IMO.)

The best solutions need to rethink how we typically use software, since software has hinged on being able to expect (and therefore test) deterministic outputs from a limited set of user inputs.
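To make that concrete, here's a minimal sketch of the testing shift. Everything here is hypothetical: `generate_summary` is a stand-in for a nondeterministic model call, and the specific invariants are illustrative, not any real API. The contrast is between asserting one golden string and asserting properties that any acceptable output must satisfy.

```python
import random

# Hypothetical stand-in for a nondeterministic LLM call: a real model
# would return a different wording on each invocation.
def generate_summary(text: str, seed: int) -> str:
    rng = random.Random(seed)
    templates = ["Summary: {}", "In short: {}", "TL;DR: {}"]
    return rng.choice(templates).format(text[:20])

# Traditional deterministic testing: assert one exact output.
# This breaks as soon as the model's wording shifts at all.
def exact_match_test(output: str) -> bool:
    return output == "Summary: The quick brown fox "

# Property-based check: assert invariants any acceptable output
# must satisfy, rather than a single golden string.
def property_check(output: str, source: str) -> bool:
    return (
        len(output) > 0
        and len(output) <= len(source) + 20   # stays roughly summary-length
        and source[:10] in output             # preserves the opening content
    )

source = "The quick brown fox jumps"
outputs = [generate_summary(source, seed=s) for s in range(5)]
# Exact matching fails whenever the template varies; the property
# check holds for every sampled output.
print(all(property_check(o, source) for o in outputs))
```

The same idea scales up: instead of snapshot tests on model output, you test the envelope of acceptable behavior (length bounds, required content, format constraints) and accept that the exact string will vary run to run.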

I work for an AI company that's been around for a minute (we make our own models and everything). I think we're both in an AI hype bubble while simultaneously underestimating the benefits of current AI capabilities. I think the most interesting and potentially useful solutions are inherently going to be so domain-specific that we're all still too new at realizing we need to reimagine how to build with this new tech in mind. It reminds me of the beginning of mobile apps. It took a while for most of us to "get it".



Can you elaborate about your predictions for how the benefits of current capabilities will be applied? And your thoughts on how to build with it?




