I've done it twice. If you're experiencing significant growth or a change in access patterns, you may, for example, go from Postgres to a KV store.
In one of the cases where I had to switch, we swapped from Cassandra to S3 for roughly 100x OpEx savings. C* couldn't scale cost-effectively to our needs, so we rolled a database on top of S3 instead, and it well outperformed C* for our use case (e.g. need to export a 3B-row CSV in a minute?).
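The commenter doesn't describe their system, but the usual reason S3 wins at bulk export is that data can be pre-sharded into many independent objects and fetched in parallel, so throughput scales with concurrency rather than with one node's scan speed. A minimal sketch of that idea, with a hypothetical in-memory dict standing in for the bucket (a real version would call `s3.get_object` per key):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an S3 bucket: each object is one pre-sharded
# CSV chunk, keyed by shard index. Names and sizes are illustrative.
FAKE_BUCKET = {
    f"export/shard-{i:04d}.csv": "\n".join(
        f"{i * 1000 + row},value-{i * 1000 + row}" for row in range(1000)
    )
    for i in range(8)
}

def fetch_shard(key: str) -> str:
    """Stand-in for an S3 GET; a real version would issue one GET per key."""
    return FAKE_BUCKET[key]

def export_csv(keys: list[str], workers: int = 8) -> str:
    """Fetch all shards concurrently and stitch them into one CSV body.

    Because the shards are independent objects, export time is bounded by
    aggregate GET throughput, not a single database node's scan speed.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(fetch_shard, sorted(keys)))
    return "\n".join(parts)

body = export_csv(list(FAKE_BUCKET))
print(body.count("\n") + 1)  # 8 shards x 1000 rows = 8000 lines
```

At 3B rows you'd stream shard-by-shard to the client instead of joining in memory, but the parallel-fetch structure is the same.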
I'm sure there are rare exceptions, but I'd imagine that if you dug deeply into the business rules behind "I need to export a 3,000,000,000-row CSV file" and into what the users are actually trying to accomplish at the end of that workflow, you could find a solution that meets those goals better while also obviating the need for the export.
> you may for example go from Postgres to a KV store.
If it's easy to do this, then you are using a tiny fraction of Postgres.
If you want it to be easy to switch your database, then you need to code to the lowest common denominator. I would rather use my databases to their fullest potential than purposefully handicap myself because I might have to change them in the future.
I used to design systems so this was possible, but eventually realised it just wasn't needed - I was adding more abstraction and complexity for no reason.
I also don't think it's a good idea. If you avoid database-specific features out of fear that you won't be able to switch later, you are probably leaving a lot of performance on the table.