I've worked on an 'eventually consistent' system built this way: each host had its own read/write SQLite db, a background worker replayed each host's change log into a central source-of-truth db, and other workers fanned updates back out to every host's SQLite instance.
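To make that concrete, here's a minimal sketch of the replay worker in spirit (Python, since SQLite's stdlib bindings keep it short). It assumes a hypothetical `changes` log table (id, stmt, args) on each host and a single-row `cursor` table in the central db tracking the last applied log id; the table names, paths, and polling loop are illustrative, not the actual service code.

```python
import json
import sqlite3
import time

LOCAL_DB = "local.db"    # per-host read/write db (illustrative path)
CENTRAL_DB = "truth.db"  # central source-of-truth db (illustrative path)

def replay_once(local, central):
    """Apply any log entries the central db hasn't seen yet; return count."""
    (last,) = central.execute("SELECT applied_id FROM cursor").fetchone()
    rows = local.execute(
        "SELECT id, stmt, args FROM changes WHERE id > ? ORDER BY id",
        (last,),
    ).fetchall()
    with central:  # one transaction per batch: a crash mid-replay rolls back cleanly
        for log_id, stmt, args in rows:
            central.execute(stmt, json.loads(args))
            central.execute("UPDATE cursor SET applied_id = ?", (log_id,))
    return len(rows)

def main():
    local = sqlite3.connect(LOCAL_DB)
    central = sqlite3.connect(CENTRAL_DB)
    local.execute(
        "CREATE TABLE IF NOT EXISTS changes "
        "(id INTEGER PRIMARY KEY, stmt TEXT, args TEXT)"
    )
    local.commit()
    central.execute("CREATE TABLE IF NOT EXISTS cursor (applied_id INTEGER)")
    if central.execute("SELECT COUNT(*) FROM cursor").fetchone()[0] == 0:
        central.execute("INSERT INTO cursor VALUES (0)")
    central.commit()
    while True:
        if replay_once(local, central) == 0:
            time.sleep(1)  # idle poll; our SLA was minutes, this settles in seconds

if __name__ == "__main__":
    main()
```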
It could have been made a lot faster. I think the replication SLA for the service was 10 minutes, though it was usually done in seconds. Since our specific workflow only progressed in one direction, replaying a step now and again wasn't a huge issue, and that was quite rare anyway. If you put a little more effort than we did into the replication layer and tuned your master db, it could be a really effective setup.
One of the best parts is that when instances were stopped or isolated, they were also mostly isolated from everything that used the service: if you go into a black box with your clients, you keep working as normal, and when connectivity or the other hosts come back up, they replay the db before accepting connections. We could take entire availability zones offline and the workers and clients would keep humming, updating their neighbors later.
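The "replay the db before accepting connections" gate is the simple part, sketched below under the same hypothetical schema as above: on startup the host pulls everything it missed from the central db, applies it locally, and only then binds its listener. Again a sketch of the idea, not the real service.

```python
import json
import socket
import sqlite3

def catch_up(local, central):
    """Replay everything the central db saw while we were dark."""
    (last,) = local.execute("SELECT applied_id FROM cursor").fetchone()
    for log_id, stmt, args in central.execute(
        "SELECT id, stmt, args FROM changes WHERE id > ? ORDER BY id", (last,)
    ).fetchall():
        with local:  # commit per entry so progress survives another interruption
            local.execute(stmt, json.loads(args))
            local.execute("UPDATE cursor SET applied_id = ?", (log_id,))

def serve():
    local = sqlite3.connect("local.db")
    central = sqlite3.connect("truth.db")
    catch_up(local, central)  # block here: take no traffic until caught up
    listener = socket.create_server(("0.0.0.0", 8080))  # only now accept connections
    while True:
        conn, _ = listener.accept()
        conn.close()  # real request handling would go here
```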