
What's the advantage of that, exactly? Why would you want something very compute-intensive to run on your phone instead of just using an API to data centers with great economies of scale?


My assumption is that most users won't actually care whether the LLM runs in the cloud or on the device. That said, quite a few people have iPhones, and Apple's only way into the AI race is to play to its strength: 1B+ hardware devices for which it designs the silicon. They will produce a phone that runs a local LLM and market it as private and secure. People upgrade every couple of years (phones get lost or break), so this will drive adoption. I'm not saying people will vibe code on their iPhones.


Price, for one. I don't mind running a local model at half the speed if all it costs is electricity.

A local model basically allows me to experiment with running an agent 24x7, 365 days a year with continuous prompting.

SaaS won't be able to match that.
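The electricity-only cost argument can be sanity-checked with a back-of-envelope calculation. A sketch, where every figure (a 300 W GPU box, 15 ¢/kWh, 5M tokens/day of continuous prompting, $3 per million API tokens) is an illustrative assumption, not a measurement:

```python
def electricity_cost_per_month(watts: float, cents_per_kwh: float,
                               hours_per_day: float = 24.0) -> float:
    """Monthly electricity cost in dollars for hardware drawing `watts` continuously."""
    kwh = watts / 1000.0 * hours_per_day * 30
    return kwh * cents_per_kwh / 100.0

def api_cost_per_month(tokens_per_day: float,
                       dollars_per_million_tokens: float) -> float:
    """Monthly API bill in dollars for a given daily token volume."""
    return tokens_per_day * 30 / 1_000_000 * dollars_per_million_tokens

# Assumed: a 300 W box at 15 cents/kWh, running around the clock.
local = electricity_cost_per_month(watts=300, cents_per_kwh=15)
# -> about $32/month

# Assumed: 5M tokens/day of continuous prompting at $3 per million tokens.
api = api_cost_per_month(tokens_per_day=5_000_000, dollars_per_million_tokens=3.0)
# -> $450/month
```

Under these assumed numbers, round-the-clock local inference costs roughly an order of magnitude less than metered API usage, though the comparison ignores hardware purchase cost and the quality gap between local and frontier models.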



