I generally see object storage systems advertise 11 9s of durability (not availability — availability is usually more like 4 9s). Commercial distributed file systems usually advertise less (obviously stuff like Ceph and Lustre will depend on your specific configuration), since they trade away some durability for performance.
In general, if you actually do the erasure coding math, almost any distributed storage system that uses erasure coding will have way more than 11 9s of theoretical durability.
S3's original implementation might have only had 11 9s, and it just doesn't make sense to keep updating the number; beyond a certain point it's meaningless.
Like "we have 20 nines" "oh yeah, well we have 30 nines!"
To give an example of why this is the case: if you go from a 10:20 sharding scheme (10 data shards, 20 shards total) to a 20:40 scheme, your storage overhead is roughly the same (2x), but you roughly double the number of nines, because losing data now requires 21 shards to fail concurrently instead of 11.
So it's quite easy to get a ton of theoretical 9s with erasure coding
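A quick back-of-the-envelope sketch of that math. Assuming shard failures are independent, data is lost only when more than n - k of the n shards fail, so the loss probability is a binomial tail. The 1% annual per-shard failure probability here is a made-up illustrative number, not anything a real system advertises:

```python
from math import comb, log10

def loss_probability(k: int, n: int, p: float) -> float:
    """Probability of data loss for a k:n erasure code, where each of the
    n shards independently fails with probability p. Data is lost only
    when more than n - k shards fail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

p = 0.01  # hypothetical annual per-shard failure probability
for k, n in [(10, 20), (20, 40)]:
    pl = loss_probability(k, n, p)
    # -log10 of the loss probability is the number of nines of durability
    print(f"{k}:{n}  loss probability {pl:.2e}  ~{-log10(pl):.0f} nines")
```

With these made-up numbers, 10:20 comes out around 17 nines and 20:40 around 31 — same 2x overhead, roughly double the nines, which is the point: theoretical nines are cheap.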