Hacker News

Don't we already have LLaMA? The quantized version runs on top-end consumer hardware, which is within reach for many people, if a bit expensive. You can also run it on a rented GPU from a commercial vendor like vast.ai. Free AI on demand isn't an economically viable proposition for now, unless you can build a way to vouch for legitimate users and purposes, but at that point most people would rather just pay a bit. Maybe in a few years it will become economical.
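For context on why quantization is what makes this feasible: a toy sketch in plain Python (this is illustrative only, not llama.cpp's actual ggml format; all function names here are made up) of 4-bit weight quantization with per-group scaling, the basic idea that shrinks each weight from 16 or 32 bits down to roughly 4:

```python
# Toy sketch of 4-bit quantization: map float weights to 16 integer
# levels (0..15) with a per-group offset and scale. Names are
# illustrative; real implementations (e.g. llama.cpp) differ in detail.

def quantize_4bit(weights, group_size=4):
    """Quantize floats to 4-bit codes (0..15) with per-group lo/scale."""
    quantized = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # avoid div-by-zero for flat groups
        codes = [round((w - lo) / scale) for w in group]
        quantized.append((lo, scale, codes))
    return quantized

def dequantize_4bit(quantized):
    """Recover approximate float weights from the 4-bit representation."""
    out = []
    for lo, scale, codes in quantized:
        out.extend(lo + c * scale for c in codes)
    return out

weights = [0.12, -0.53, 0.98, 0.04, -0.88, 0.33, 0.71, -0.19]
restored = dequantize_4bit(quantize_4bit(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Each weight now costs 4 bits plus amortized group metadata instead of
# 16/32 bits, which is why large models fit in consumer GPU/CPU RAM.
```

The round-trip error per weight is bounded by half the group's scale, which is why quality degrades only modestly while memory drops roughly 4-8x.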


It's not legal to use LLaMA for commercial purposes. We need an open-source alternative, not something that was leaked.



