
As a newcomer to the area and a noob in most things ML, one thing that I still have a hard time evaluating is the minimum hardware needed to run these LLMs, either locally or at scale.

I often see a lot of detail on how the models were trained, but not much information on what is needed to actually run them. I found some information on /r/LocalLLaMA/ but it is still very sparse.

Does anyone have tips on how to figure this out besides actually running them (and needing to spend $$)?
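One way to get a ballpark figure before spending anything: estimate memory from parameter count and quantization level, since the weights dominate memory use. A rough sketch (the bytes-per-parameter table and the 20% overhead figure are assumptions; actual KV-cache use depends on context length and architecture):

```python
# Back-of-the-envelope VRAM/RAM estimate for running an LLM.
# Assumption: weights dominate; add ~20% for KV cache and activations.

QUANT_BYTES = {
    "fp16": 2.0,   # full half-precision weights
    "int8": 1.0,   # 8-bit quantization
    "q4": 0.5,     # 4-bit quantization (e.g. GGUF Q4 variants)
}

def estimate_vram_gib(params_billions: float, quant: str = "fp16",
                      overhead: float = 0.2) -> float:
    """Estimate memory in GiB: params * bytes/param, plus overhead."""
    weight_bytes = params_billions * 1e9 * QUANT_BYTES[quant]
    return weight_bytes * (1 + overhead) / 2**30

# Example: a 7B-parameter model at different quantization levels.
for quant in ("fp16", "int8", "q4"):
    print(f"7B @ {quant}: ~{estimate_vram_gib(7, quant):.1f} GiB")
```

By this estimate, a 7B model needs roughly 16 GiB in fp16 but fits in under 4–5 GiB at 4-bit, which is why quantized models are popular for consumer GPUs.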



poe.com has a pretty broad collection of LLMs, and for the most part it is free.


I think I was not very clear; I was talking about self-hosting.



