That's assuming Jellyfin ever fixes the mountain of bugs from the "upgrade". They aren't even acknowledging major bugs that make Jellyfin unusable for something like 20% of users.
Do not upgrade Jellyfin if you have a sizeable library. If you do, back up first.
Not sure popularity necessarily suggests it's good; it may just be what people have most heard of, or what's easiest to set up. This is going to be even more true now that Claude subscriptions are going to be essentially vendor-locked.
Given how far ahead OpenAI was in mindshare, monthly users, and revenue over Anthropic when Claude Code came out, I think we can conclude there's at least some substance behind claims of Claude Code's better product quality.
Comparing this project to is-odd seems very disingenuous to me. My understanding is that this was the only way you could use llama.cpp with Claude Code, for example, since llama.cpp doesn't support the Anthropic-compatible endpoint, and doing that translation yourself isn't anywhere near as trivial as your comparison suggests. Happy to be corrected if I'm wrong.
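To give a feel for what's involved, here's a hedged sketch of the simplest part of that translation: mapping an Anthropic Messages-style request body onto the OpenAI chat-completions shape that llama.cpp's server understands. The function name is mine, the field handling follows the two public API docs, and this covers only the trivial cases; the real work (streaming event formats, tool calls, multi-part content blocks) is far messier.

```python
def anthropic_to_openai(body: dict) -> dict:
    """Hypothetical sketch: convert an Anthropic Messages request
    into an OpenAI chat-completions request (happy path only)."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first message in the list.
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    for msg in body.get("messages", []):
        content = msg["content"]
        # Anthropic content may be a list of typed blocks; flatten text blocks.
        if isinstance(content, list):
            content = "".join(
                block["text"] for block in content if block.get("type") == "text"
            )
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": body.get("model", ""),
        "messages": messages,
        # max_tokens is required by Anthropic but optional for OpenAI.
        "max_tokens": body.get("max_tokens", 1024),
    }
```

Even this toy version has to make judgment calls (what to do with content blocks, where the system prompt goes), which is exactly the kind of nuance that makes a shared adapter worth having.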
That's a correct example, and I agree: it's disingenuous to dismiss this as a trivial `is-odd` project.
Back in the days of GPT-3.5, LiteLLM was one of the projects that provided a reliable adapter for communicating across AI labs' APIs. When providers drifted ever so slightly from the "OpenAI-compatible API", LiteLLM handled those nuances so developers didn't have to reinvent and debug them.
Nowadays, their gateway isn't just a funnel for centralizing API calls; it also serves other purposes, like applying guardrails consistently across all connections, tracking per-key token spend, dispensing keys without having to do so on the upstream platforms, etc.
There's also more to LiteLLM than being an inference gateway: it's a package used by other projects. If you have a project that needs to support multiple endpoints as fallbacks, there's a chance LiteLLM is powering that.
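The fallback pattern being described can be sketched in a few lines. This is not LiteLLM's actual API; the function and endpoint names here are hypothetical, and LiteLLM itself layers per-provider request translation, retries, and cooldowns on top of this basic idea.

```python
def complete_with_fallbacks(prompt, endpoints, call_endpoint):
    """Hypothetical sketch: try each configured endpoint in order
    and return the first successful result."""
    errors = []
    for name in endpoints:
        try:
            return call_endpoint(name, prompt)  # first success wins
        except Exception as exc:
            errors.append((name, exc))  # record the failure, try the next
    raise RuntimeError(f"all endpoints failed: {errors}")
```

The value of a shared library is that this loop, plus all the provider-specific quirks hidden inside `call_endpoint`, is maintained in one place instead of being re-debugged in every downstream project.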
Hence, supply chain attack. The GitHub issue is cross-referenced all over other projects' trackers because they're being urged to pin to safe versions, since they depend on it.
I would hazard a guess that it's because there have been many debates about contributed PRs being perceived as AI slop. Not saying that's the case here, but the fix might be a poor one, might not follow the project's guidelines, or might be one the contributor doesn't fully understand but doesn't care about because it fixed the issue. A better approach would probably be to submit a bug report with the same information the LLM used, and perhaps suggest the LLM's fix there. Unless this really was a tiny patch and none of those concerns applied.
The prompt processing times I've heard about have put me off going that high on memory with the M series (hoping that changes with the M5 series, though). What are the average and longest times you've had to wait when using opencode? Have any improvements to MLX helped in that regard?
The M5 Ultra is supposed to bring big gains in prompt processing: something like 3-4x, from what I've read. I'm tempted to swap out the M4 mini I'm using for this kind of stuff right now!
> These relationships were robust after adjusting for established risk factors for cardiovascular health, including physical activity, smoking, alcohol, diet, sleep duration, socioeconomic status, and polygenic risk.
It's so subtle that I've long wondered whether it's something most people experience and don't notice, or assume is normal, because unless I think about it, I don't really notice it either. I have been known to focus on details more than others do; I'm not sure if that also contributes to my seemingly heightened sense of smell. But not being able to experience what others experience makes me wonder if I'll ever know.
>It's so subtle I've for a long time wondered if it's something most people experience and don't notice, or they assume is normal
I've wondered this myself too. One thing I do know is that none of my friends or family could relate when I mentioned it. Visual snow syndrome (which, according to affected people online, can be very disabling) was only first described as late as 2015, according to Wikipedia. So at this pace, we may never know.
[1] https://jellyfin.org/posts/jellyfin-release-10.11.0/#the-lib...