It makes no sense to add an extra layer, and we definitely do not want to make ourselves and our users dependent on the Docker project.
There exist many OCI runtimes, and our container toolkit already provides a (ballparked) 90% feature overlap with them. Maintaining two stacks here is just needless extra work and asking for extra pain for us devs and our users, so no thanks.
That said, PVE is not OCI runtime compatible yet, which is why this is marked as tech preview, but it can still be useful for many who control their OCI images themselves or have an existing automation stack that can drive the current implementation. We plan to work more on this in the future, but for the mid-term it will not be that interesting for those who want a very simple hand-off approach (let's call it "casual hobby homelabber") or want to replace some more complex stack with it; but I think we'll get there.
People stuck with Docker for a reason, even after it became user-hostile. Almost every self-hosted project in existence provides a docker-compose.yml that's easy to expand and configure to get started immediately. None provide generic OCI containers to run in generic OCI runtimes.
I understand sticking with compatibility at that layer from an "ideal goal" POV, but that is unlikely to see a lot of adoption precisely because applications don't target generic OCI runtimes.
Correction: in Proxmox VE we're not using virsh/libvirt at all; rather, we have our own stack for driving QEMU at a low level. Without that, our in-depth integration, especially live local storage migration and our Backup Server's dirty-bitmap support (known as changed block tracking in the VMware world), would not be possible in the form we have it. The same goes for our own stack for managing LXC containers.
The web UI part is actually one of our smaller code bases relative to the whole API and lower level backend code.
Correct, sorry, I don't use the web UIs and was confusing it with oVirt; I forgot that you are using Perl modules to call QEMU/LXC.
I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E- and P-cores you can't stick to pinning for many use cases. While I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset proc system works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.
I do have customers who would be better served by a proxmox type solution, but need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.
IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf etc...
PVE itself is still made of a lot of Perl, but nowadays we actually do almost everything new in Rust.
We already support CPU sets and pinning for containers and VMs, but that can definitely be improved, especially if you mean something more automated/guided by the PVE stack.
If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.
While the input for QEMU affinity is called a "pve-cpuset"[0], it explicitly uses the taskset[1][3] command.
This is different from cpuset[2], or from how libvirt allows the creation of partitions[3] using systemd slices, which is what would apply in your case.
The huge advantage is that setting up basic slices can be done when provisioning the hypervisor, and you don't have to hard-code CPU pinning numbers as you would with taskset; plus, in theory, it could be dynamic.
As cpusets are hierarchical, one could use various namespace schemes that change per hypervisor, without exposing that implementation detail to the guest configuration. Think of migrating from an old 16-core CPU to something more modern, and how all those guests would otherwise stay pinned to just a fraction of the new cores without any user interaction.
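To make that concrete, here is a rough sketch of what a host-level partition could look like as a systemd slice (the slice name and the CPU/NUMA values are made-up examples, not a recommendation):

    # /etc/systemd/system/lowlat-guests.slice  (hypothetical name)
    [Unit]
    Description=CPU/memory partition for latency-sensitive guests

    [Slice]
    # cgroup v2 cpuset controller, managed by systemd
    AllowedCPUs=0-7
    AllowedMemoryNodes=0

Guest processes could then be launched into that slice (e.g. via systemd-run --slice=lowlat-guests.slice ...), and moving to a host with a different core layout only means adjusting AllowedCPUs once on that host; no guest configuration has to change.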
Unfortunately I am deep into podman right now and don't have a proxmox at the moment or I would try to submit a bug.
This page[5] covers how inter-CCD traffic, even on Ryzen, is roughly 5x the cost of local (intra-CCD) traffic. That is something that would break the usual affinity assumptions if you move to a chip with more cores per CCD, as an example. And you can't see CCD placement in the normal NUMA-ish tools.
To be honest, most of what I do wouldn't generalize, but you could use cpusets with a hierarchy and open up the option of improving latency without requiring each person launching a self-service VM to hard-code core IDs.
I do wish I had the time and resources to document this well, but hopefully that helps explain at least the cpuset part, and that's without even getting into the hard partitioning you could do to ensure, say, Ceph keeps running when you start to thrash, etc.
I’ll take a look. I’ve used some of those purpose-built tools before and was never much of a fan, usually due to how the furniture was handled.
Back in high school I had extensive experience with AutoCAD R14 (3 years with it, after 2 years of board drafting), and then in college I had some more experience with a couple other packages. But this was all a couple decades ago now.
Your CAD experience sounds similar to, but a bit beyond, mine (2 years hand drawing, 2 years CAD, some more hobbyist CAD & 3D modelling over the years for personal projects), so yeah, SweetHome3D might not be that much help for you over using some CAD software directly.
I found the furniture handling OK, but it certainly has its rough edges. The good thing is that one can just import 3D models and so create the relevant pieces of furniture oneself, or use the generic boxes that SH3D has if it's just about 2D space usage.
I did a few office space models with it, i.e. to get a feeling for how the space could best be used, and for that I found it quite OK. The result I got compared to the time invested was pretty good for my taste.
Obviously, but additionally, providing validation on the frontend can help UX a lot.
Doing that provides much quicker feedback than an error thrown at the user only after submitting a form, which gets especially annoying if the form loses (some of) its values on submission.
And one solution for that problem can be using a native picker.
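For example, a minimal sketch of that with a native date picker plus the browser's built-in constraint validation (the field names and action URL are made up):

    <form method="post" action="/book">
      <label for="start">Start date</label>
      <!-- native picker; "required" and "min" make the browser flag bad or
           missing input and block submission before anything is sent -->
      <input id="start" name="start" type="date" required min="2025-01-01">
      <button type="submit">Send</button>
    </form>

The user gets immediate, localized feedback, and the already-entered values are never lost to a round trip.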
I do understand what "client-side" validation is, but I wish it had a different name, because people think they can just validate client-side and they do not bother doing it on the server... for some reason, I do not know. It should be obvious though, right? Yet it is not.
FWIW, the few non-techie people in my life whose notebooks I care enough about to administer and support all run KDE on Debian happily.
While I had some reservations about acceptance when I made the switch from Windows 7, it turned out to be one of the better choices of my life, and it resulted in much less work for me compared to what Windows caused previously. And GNOME just did not work out well for most of these people and the workflows they are used to.
I like pass and use it a lot, especially as it provides a good and safe backup for the case my vaultwarden instance goes up in smoke.
There is also a drop-in replacement that has some extra features and a bit better UX in some parts; personally I only really use it for the better support for handling multiple GPG keys, as I have some physical backup keys, and it can also be nice for teams with a shared vault.
There are a myriad of companies that have thrived in "IP-locked" environments, and a host that have failed too. Equally, there are heaps that have thrived and failed in "IP-open" environments.
I think at best you could say it's more challenging or perhaps risky being a bit restricted with IP, but I'd call it miles away from a "graveyard".
You can hardly call Intel/AMD/Qualcomm etc. all struggling due to their architectures being locked down.
Look at PowerPC / the Power ISA. It's (entirely?) open and hasn't really done any better than x86.
Fundamentally you're going to be tied to backwards compatibility to some extent. You're limited to evolution, not revolution. And I don't think x86 has failed to evolve? (e.g. AVX10 is very new)
> Is it like writing frontend code in Rust and compiled to WASM ?
Exactly. It's actually quite lightweight and stable, plus mostly finished, so don't let the slower upstream release cadence discourage you from trying it more extensively.
We built a widget library around Yew and native web technologies, with our products as the main target; you can check out:
For code and a little bit more info. We definitely need to clean up a few documentation and resource things, but we tried to make it so that it can be reused by others without tying them to our API types or the like.
FWIW, the in-development Proxmox Datacenter Manager also uses our Rust/Yew-based UI; it's basically our first 100% Rust project (well, minus the Linux/Debian foundation naturally, but it's getting there ;-)
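In case you're curious what that looks like in practice, a minimal Yew component is roughly this (a generic sketch of the standard upstream Yew pattern, not code from our widget library):

    // Cargo.toml needs: yew = { version = "0.21", features = ["csr"] }
    use yew::prelude::*;

    #[function_component(App)]
    fn app() -> Html {
        // Plain Rust state; Yew re-renders the component when it changes.
        let counter = use_state(|| 0);
        let onclick = {
            let counter = counter.clone();
            Callback::from(move |_| counter.set(*counter + 1))
        };
        html! {
            <div>
                <button {onclick}>{ "+1" }</button>
                <p>{ *counter }</p>
            </div>
        }
    }

    fn main() {
        // Built to WASM (e.g. with trunk) and mounted into the page.
        yew::Renderer::<App>::new().render();
    }

The widget library builds on the same mechanism, just with ready-made components on top.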
The linked forum post has an FAQ entry on this; it was a carefully weighed decision with many factors playing a role, including having more staff available to manage any potential release fallout on our side. And in general we're pretty much self-sufficient for any need that should arise, always have been, and we provide enterprise support offerings that back our official support guarantees if your org has the need for that.
Finally, we provide bug and security updates for the previous stable release for over a year, so no user is in any rush to upgrade now; they can safely choose any time between now and August 2026.
We can manage anything, including package builds, ourselves if the need should arise; we also monitor Debian and its release-critical bugs closely. We see no realistic potential for any Proxmox-relevant package to disappear, at least nothing more likely than that happening after the 9th.
FWIW, we have staff members who are also directly involved with Debian, which makes things a bit easier.