It's basically Docker or a package manager for LLMs: you can install models, discover new ones, and update them through a standardized, simple CLI. It also auto-updates itself effortlessly.
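For anyone who hasn't tried it, the day-to-day workflow is roughly this (a sketch; the model name is just an example from the Ollama registry):

```sh
# Fetch a model from the Ollama registry (model name is an example)
ollama pull llama3

# Chat with it interactively in the terminal
ollama run llama3

# List the models installed locally
ollama list
```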
I have the same question. I've noticed that Ollama got a lot of publicity and seems to be well received, but what exactly is the advantage over using llama.cpp directly (which nowadays also has a built-in server with OpenAI compatibility)?
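For context, the "directly" route I mean looks roughly like this (a sketch; the binary name, model path, and port are assumptions that depend on your build):

```sh
# Start llama.cpp's built-in HTTP server (model path is an example)
./llama-server -m ./models/llama-3-8b.Q4_K_M.gguf --port 8080

# Query its OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```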
Once you've tested to your heart's content, you'll deploy your model in production. So it looks like this is really just a dev use case, not a production one.
In production, I'd be more concerned about the possibility of it going off on its own, auto-updating, and causing regressions. FLOSS LLMs are interesting to me because I can precisely control the entire stack.
If Ollama doesn't have a CLI flag that disables auto-updating and networking altogether, I'm not letting it anywhere near my production environments. Period.