Those durability numbers for both Amazon and Wasabi are pure marketing and don't really mean anything even remotely important. The durability of data stored with a single company, even a company like Amazon, is actually quite low; you should be scared of how low it really is. You could get kicked off the service, lose data because of a bug or an operational mistake, be prevented from using the service by your government for political reasons, and so on.
You are absolutely correct. A simultaneous meteor strike on all of the company's data centers is far more likely than a 1-in-10^11 failure. It's complete marketing fluff that is not targeted at engineers.
Presumably Amazon made it up because people kept asking them "how likely are you to lose my data?" and Amazon needed to be able to say something other than "it's impossible to say due to human factors being the dominant likely cause but it's very unlikely."
Do you have any evidence for these claims of low durability?
(None of those issues, except for bugs in the service, would count against them, by the way.)
I think factors beyond the storage algorithm are pretty important to consider when thinking about storing data that's important to your business. To your specific point though:
1. Amazon claims 99.999999999% durability of objects over a year.
2. I store 1EB of data with an object size of 4MB for a year (so 250,000,000,000 objects).
3. At a 1-in-10^11 annual loss rate per object, I can expect to lose about 2.5 objects in a year, or roughly 10MB.
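That expected-loss arithmetic can be sanity-checked in a few lines (a sketch; the 1EB fleet size and 4MB object size are just the example figures above, not anything Amazon publishes):

```python
# Expected annual object loss at S3's advertised 11-nines durability.
durability = 0.99999999999            # per object, per year
total_bytes = 10**18                  # 1 EB stored
object_size = 4 * 10**6               # 4 MB per object

objects = total_bytes // object_size            # 250,000,000,000 objects
expected_losses = objects * (1 - durability)    # ~2.5 objects per year
expected_bytes_lost = expected_losses * object_size

print(objects, expected_losses, expected_bytes_lost)
```

The point of writing it out is that the expected loss scales linearly with the object count, so the advertised number only starts to bite at genuinely enormous fleets.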
Now to my experience:
I have stored in excess of that amount of data in S3, and I have lost considerably more data -- solely through losses internal to S3 -- than these numbers would suggest. It was a tolerable amount of loss, and I didn't curse Amazon's name or swear vengeance, but it was well beyond what the durability figure predicts.
The standard S3 SLA provides credits based only on uptime. There is no mention of durability whatsoever. That tells you Amazon is not willing to put their money where their mouth is on the 99.999999999% durability claim. The reality is that the number is a design target, not an operational guarantee.
All of my objects used standard redundancy. My recollection is that regardless of storage class, you get a 404 error if you try to fetch an object that has been lost.
I didn't use SNS notifications at the time (which might only work for reduced redundancy).
So that left two options: find out when attempting to fetch the object, or run bookkeeping jobs against the object catalog to periodically spider the data and ferret out any objects that are lost.
The second option may be a tad nicer, but it is also more complex and more expensive and the end result is the same either way.
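The second, bookkeeping-style option can be sketched generically. Here `object_exists` is a stand-in for whatever cheap existence check your store exposes (for S3 that would be a HEAD request against the object, with a 404 meaning it is gone); the function and the toy in-memory store are illustrative, not any real API:

```python
from typing import Callable, Iterable, List

def find_lost_objects(keys: Iterable[str],
                      object_exists: Callable[[str], bool]) -> List[str]:
    """Sweep a catalog of keys and return those the store can no longer serve.

    `object_exists` is a placeholder for a cheap per-object check
    (e.g. an HTTP HEAD); it should return False only when the store
    reports the object as missing or lost.
    """
    return [key for key in keys if not object_exists(key)]

# Toy example: a set standing in for the remote store.
store = {"a/1", "a/2", "b/3"}
catalog = ["a/1", "a/2", "a/9", "b/3"]
print(find_lost_objects(catalog, lambda k: k in store))  # prints ['a/9']
```

In practice the sweep would be batched and rate-limited, which is exactly the added complexity and expense being weighed against the lazy find-out-on-fetch approach.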
You're conflating technical availability (the amount of downtime) with organizational availability (your contract being abruptly terminated).
Even with 100% technical availability (no downtime ever), organizational and legal risks exist; no company can realistically be free of them. Amazon, compared to many smaller companies, may carry somewhat lower risks of this sort: they are hard to shut down.
If you really care about your data, you likely keep backups and/or mirror copies across several providers in multiple countries, and have a well-tested contingency plan for moving a complete copy of your production service to any of 2-3 other providers. (And most people likely have neither those risks nor the money to pay for all this.)
It doesn't make sense to me to think of durability for cloud storage services as anything other than the probability that a client retains their data, which is reduced by an abrupt contract termination too, though not by the amount of downtime.
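On that framing, per-client retention is roughly the product of the storage layer's durability and the probability that nothing organizational severs your access. A rough illustration (the 0.1%-per-year termination figure is an invented placeholder, not a measured risk):

```python
# Retention as seen by one client: storage durability times the
# probability that the business relationship survives the year.
storage_durability = 0.99999999999   # advertised 11 nines, per object/year
p_account_survives = 0.999           # assumed 0.1%/yr chance of losing access

effective_retention = storage_durability * p_account_survives
print(effective_retention)  # ~0.999: the organizational term dominates
```

Under any plausible guess for the organizational term, it swamps the eleven nines, which is the whole argument: the advertised durability is not the number that governs whether you still have your data.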