It depends a lot on the writes. If they can be batched, you can buffer them in a queue or in Redis until they reach a threshold, then write the update down to the RDBMS in one go. It won't work for all use cases, but it works more often than people think.
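A minimal sketch of the threshold-flush idea, with an in-memory list standing in for the Redis buffer (in practice you'd RPUSH onto a Redis list and flush when LLEN hits the threshold); `flush_fn` is a hypothetical callback representing one batched INSERT into the RDBMS:

```python
from typing import Any, Callable, List


class WriteBuffer:
    """Accumulate writes and flush them to the database in one batch
    once a threshold is reached. In-memory stand-in for a Redis list."""

    def __init__(self, threshold: int, flush_fn: Callable[[List[Any]], None]):
        self.threshold = threshold
        self.flush_fn = flush_fn  # e.g. one multi-row INSERT
        self.pending: List[Any] = []

    def write(self, row: Any) -> None:
        self.pending.append(row)
        if len(self.pending) >= self.threshold:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.flush_fn(self.pending)
            self.pending = []  # start a fresh batch


batches = []
buf = WriteBuffer(threshold=3, flush_fn=batches.append)
for i in range(7):
    buf.write(i)
# two full batches flushed automatically; one row still buffered
```

The RDBMS now sees one write per batch instead of one per request, at the cost of rows sitting in the buffer until the threshold is hit.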
Yes, microbatching is a great way to get a fixed write rate regardless of traffic; you can even do it right at the application layer. The trade-off is that you can theoretically lose one interval of data when your service goes down. This might not matter for analytics workloads that tolerate a margin of error, but some use cases require confirmed writes.
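A sketch of what that looks like at the application layer: writes accumulate between ticks, and a scheduler calls `tick()` once per interval, so the database sees at most one batched write per interval no matter how much traffic arrives. The scheduler itself and `flush_fn` are assumed; anything still in `pending` when the process dies is the one interval of data you can lose.

```python
import threading
from typing import Any, Callable, List


class IntervalBatcher:
    """Flush accumulated writes once per tick, giving the database a
    fixed write rate regardless of incoming traffic. Rows buffered
    since the last tick are lost if the process crashes."""

    def __init__(self, flush_fn: Callable[[List[Any]], None]):
        self.flush_fn = flush_fn  # e.g. one batched INSERT
        self.pending: List[Any] = []
        self.lock = threading.Lock()  # writes may come from many threads

    def write(self, row: Any) -> None:
        with self.lock:
            self.pending.append(row)

    def tick(self) -> None:
        # Called by a timer/scheduler once per interval.
        with self.lock:
            batch, self.pending = self.pending, []
        if batch:
            self.flush_fn(batch)


batches = []
b = IntervalBatcher(flush_fn=batches.append)
for i in range(5):
    b.write(i)
b.tick()  # the whole interval's traffic becomes one write
b.tick()  # empty interval: no write issued at all
```

Swapping the buffer out under the lock and flushing outside it keeps the write path cheap even while a flush is in progress.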