Only slightly related, but the biggest increase in space I saw was after transitioning my 1.8 TB Postgres database to ZFS, which has compression turned on by default. Afterwards, the size needed was 310 GB, with no noticeable loss in speed.
Yes. This is likely not a typical workload. Our PG database is only used in research, in "burst" situations - e.g. big batch jobs written to the DB (400 million tuples over 2 days) and read back in big chunks (e.g. 400 million tuples exported to CSV/Postgres FDW in 2 hours). ZFS sits on spinning rust (SATA 6 Gb/s), 6x8TB drives in a RAIDZ2 pool; the ZFS dataset is both compressed and encrypted. In Proxmox, I do not see I/O limited in any way during these burst writes/reads - the bottleneck is the CPU. However, the CPU was the bottleneck before ZFS as well, so I cannot say how much impact the compression/encryption has. Other ZFS parameters are at their defaults (e.g. filesystem recordsize is 128KB - lower values will yield better read speed, but less compression, and we were aiming for a lot of compression).
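For reference, a dataset along the lines described above could be set up roughly like this - a sketch, not our exact commands; the pool name `tank`, dataset name `pgdata`, and the encryption options are placeholders:

```shell
# Hypothetical pool/dataset names - adjust to your environment.
# Create a compressed, encrypted dataset for the Postgres data directory:
zfs create -o compression=lz4 \
           -o encryption=aes-256-gcm -o keyformat=passphrase \
           tank/pgdata

# recordsize stays at the 128K default here; lowering it trades
# compression ratio for better read latency on small random reads:
zfs get recordsize tank/pgdata

# After loading data, check how well it actually compresses:
zfs get compressratio tank/pgdata
```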
Likely no impact, but we run Postgres in Docker inside an unprivileged LXC; the `/data` directory is mounted into the LXC from the host ZFS pool. Since LXC runs all processes directly on the host, the performance impact is negligible (unlike, e.g., running this in a full VM).
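In Proxmox, such a bind mount from a host ZFS dataset into a container is a one-line mount point entry - the container ID `101` and the paths below are hypothetical:

```
# /etc/pve/lxc/101.conf - illustrative excerpt
# mp0 bind-mounts the host path /tank/pgdata into the container at /data:
mp0: /tank/pgdata,mp=/data
```

The same can be configured from the host shell with `pct set 101 -mp0 /tank/pgdata,mp=/data`.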
I have the _feeling_ (not actually tested) that the Postgres database is faster with ZFS, since less data needs to be read, especially since we have a lot of sequential scans.
> LZ4 is lossless compression algorithm, providing compression speed > 500 MB/s per core (>0.15 Bytes/cycle). It features an extremely fast decoder, with speed in multiple GB/s per core (~1 Byte/cycle).