Stripe does this in a cool way. Their REST API is versioned by date, and each time they make a breaking change they add a stackable compatibility layer, so your decade-old code will still work.
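A minimal sketch of that stacking idea (field names and dates here are made up, not Stripe's actual changelog): each breaking change ships with a "downgrade" that rewrites the current response into the shape the previous version used, and the server stacks every downgrade newer than the version the caller pinned.

```python
from datetime import date

# Hypothetical downgrades; each one maps the current response
# shape back to the shape the previous API version used.
def drop_latest_charge(r):
    r = dict(r)
    r["charges"] = [r.pop("latest_charge")] if "latest_charge" in r else []
    return r

def flatten_payment_method(r):
    r = dict(r)
    r["payment_method"] = r.pop("payment_method_details", None)
    return r

# Newest change first: (date the change shipped, downgrade to apply).
CHANGELOG = [
    (date(2022, 11, 15), drop_latest_charge),
    (date(2019, 12, 3), flatten_payment_method),
]

def render(response, pinned_version):
    """Stack every downgrade newer than the caller's pinned version."""
    for shipped, downgrade in CHANGELOG:
        if pinned_version < shipped:
            response = downgrade(response)
    return response

old_view = render({"id": "pi_1", "latest_charge": "ch_9"}, date(2017, 6, 1))
# the old client still sees the pre-2022 "charges" list shape
```

The nice property is that each layer only knows about the one change it reverses, so a decade of changes is just a longer chain.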
Ceph overheads aren't that large for a small cluster, but they grow as you add hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times to different machines, which leads to a large overhead compared with local storage.
Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
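The replication overhead is easy to put numbers on. A back-of-envelope sketch with the default 3x replication (the cluster sizes below are hypothetical):

```python
def usable_tb(raw_tb, replicas=3):
    # With size=3 replication every object is written to `replicas`
    # different OSDs/hosts, so usable space is raw divided by replicas.
    return raw_tb / replicas

raw = 3 * 4 * 4.0      # e.g. 3 nodes x 4 drives x 4 TB = 48 TB raw
print(usable_tb(raw))  # 16.0 TB usable, before Ceph's own metadata overhead
```

Erasure coding can improve that ratio on larger clusters, but for a three-node homelab you're realistically paying the full 3x.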
Arguably, "comma as a separator" is close enough to comma's usage in (many) written languages that it makes it easier for less technical users to interact with CSV.
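And the format's quoting rule covers the one awkward case, a value that itself contains a comma, which Python's stdlib parser handles out of the box:

```python
import csv
import io

# A quoted field keeps its embedded comma intact.
text = 'name,role\n"Doe, Jane",editor\n'
rows = list(csv.reader(io.StringIO(text)))
print(rows)  # [['name', 'role'], ['Doe, Jane', 'editor']]
```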
https://www.openssh.com/legacy.html - Legacy algorithms in OpenSSH, which explains a little about what they do. Then there is also your identity key, which you authenticate yourself with, and whose public half is placed in the server's authorized_keys.
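For reference, re-enabling one of those legacy algorithms is usually scoped to a single old device in `~/.ssh/config` rather than done globally (the host name and address here are made up):

```
Host old-switch
    HostName 192.0.2.10
    # Re-enable legacy algorithms only for this one host
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
    # The identity key you authenticate with; its .pub half
    # goes into the server's authorized_keys
    IdentityFile ~/.ssh/id_ed25519
```

The `+` prefix appends to the default list instead of replacing it, so your modern hosts keep their modern defaults.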
Usually smooth. But if you're running a production workload, definitely do your prep work: working and tested backups, upgrading one node at a time and testing, reading the release notes, waiting a week after major releases, etc. If you don't have a second node I highly recommend getting one; Proxmox can do ZFS replication for fast live migrations without shared storage.
Unfortunately clustered storage is just a hard problem, and there is a lack of good implementations. OCFS2 and GFS2 exist, but IIRC there are challenges in using them for VM storage, especially for snapshots. Proxmox 9 added a new feature that uses multiple QCOW2 files as a volume chain, which may improve this, but for now it's only used for LVM (making Proxmox 9 much more viable on a shared iSCSI/FC LUN).
If your requirements are flexible, Proxmox does have one nice alternative: local ZFS plus scheduled replication. This feature performs a ZFS snapshot + ZFS send every few minutes, giving you near-current copies on your other nodes. These snapshots can be used for manual failover, automatic HA, and even fast live migration. Not great for databases, but a decent alternative for a homelab or small business.
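Under the hood each replication run is roughly a snapshot-plus-incremental-send cycle; a sketch with made-up dataset and node names (the real tooling adds flags and bookkeeping around this):

```
# Take a new snapshot, then send only the delta since the last common one
zfs snapshot rpool/data/vm-100-disk-0@rep_2
zfs send -i @rep_1 rpool/data/vm-100-disk-0@rep_2 \
    | ssh node2 zfs recv -F rpool/data/vm-100-disk-0
# Drop the old snapshot once both sides share @rep_2
zfs destroy rpool/data/vm-100-disk-0@rep_1
```

Because only the blocks changed since the previous snapshot cross the wire, a few-minute interval stays cheap even for large disks.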
> IP does indeed have broadcast/multicast capabilities that cause the sender's egress traffic to remain independent of the number of recipients rather than being equal to the sum of recipients' ingress traffic, right?
Yes, multicast, although you can't do multicast over the public internet. In practice the technology is mainly used in production and enterprise scenarios (broadcast, signage, hotels, stadiums, etc.).
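The egress math in the quoted question checks out, and it's worth seeing how quickly unicast fan-out adds up (numbers below are just an illustration):

```python
def sender_egress_mbps(stream_mbps, viewers, multicast=False):
    # Unicast: the sender uploads one full copy per viewer.
    # Multicast: routers/switches fan the packets out, so the
    # sender's egress stays flat no matter the audience size.
    return stream_mbps if multicast else stream_mbps * viewers

print(sender_egress_mbps(8, 1000))                  # 8000 Mbit/s of unicast upload
print(sender_egress_mbps(8, 1000, multicast=True))  # 8 Mbit/s regardless of viewers
```

That 1000x difference is exactly why IPTV inside a single ISP's network uses multicast, and why internet-scale streaming has to fall back on CDNs instead.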
Instead, big streaming platforms like Netflix or Twitch use CDN boxes installed locally at major ISPs. And with so much hardware acceleration on modern NICs these days, it's surprisingly easy to handle gigabits of throughput for audio/video streaming.
They're probably referring to podman.socket, which isn't quite a daemon mode but lets podman emulate one pretty well. Unless there's some daemon mode I missed that got added, but I'd be rather surprised by that.
In places where you're doing a `dnf install podman`, all you typically need to do is start the socket-activated service and then point either the podman CLI or the docker CLI directly at it. On Fedora, for example, it's podman.service.
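Concretely, that setup is only a couple of commands (shown here for rootful podman on Fedora; the socket path differs for rootless):

```
# systemd socket activation: podman.service starts on first request
sudo systemctl enable --now podman.socket

# Point the docker CLI at podman's Docker-compatible API
export DOCKER_HOST=unix:///run/podman/podman.sock
docker ps

# Or use the podman CLI against the same socket
podman --remote ps
```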
I honestly prefer using the official docker cli when talking to podman.