The most common approach is to add versions to the events. The good thing is that with event sourcing, the exact cutover and lifetimes of these schema versions can be known (and even recorded as events themselves).
Downstream apps and consumers that don't need to be compatible with the entire timeline can then migrate code over time and only deal with the latest version. You have to deal with schemas anytime you have distributed communications anyway, but event sourcing provides a framework for explicit communication and this is one area where it can make things easier.
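To make that concrete, here's a minimal TypeScript sketch of what I mean; the event names, fields, and the fallback currency are invented for illustration. The version travels with the event, and the cutover itself can sit in the stream as just another event:

```typescript
// Hypothetical example: each event carries its schema version explicitly.
type OrderPlacedV1 = { type: "OrderPlaced"; schemaVersion: 1; orderId: string; total: number };
type OrderPlacedV2 = { type: "OrderPlaced"; schemaVersion: 2; orderId: string; amountCents: number; currency: string };

// The cutover itself can be recorded as an ordinary event in the stream.
type SchemaCutover = { type: "SchemaCutover"; event: "OrderPlaced"; from: 1; to: 2; at: string };

type StoredEvent = OrderPlacedV1 | OrderPlacedV2 | SchemaCutover;

// Consumers that only care about "now" normalise old events to the latest shape.
function toLatest(e: OrderPlacedV1 | OrderPlacedV2): OrderPlacedV2 {
  if (e.schemaVersion === 2) return e;
  // Assumption for the example: v1 totals were dollars and currency defaulted to USD.
  return {
    type: "OrderPlaced",
    schemaVersion: 2,
    orderId: e.orderId,
    amountCents: Math.round(e.total * 100),
    currency: "USD",
  };
}
```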
For me, an issue that is not usually made explicit is that those benefits seem to require a pretty stable business and a stable application landscape.
Why? Because a changing business requires changes to the Events. Since a change to an Event requires all consumers to be updated, either immediately or by the time the old schema version is deprecated, the cost of change seems to grow faster than it would in an application landscape without event sourcing. At the same time, the cost of finding out who needs to change also grows, since "any" application can consume any Event.
A stable application landscape also seems to be required, because if the number of consumers of an Event grows quickly, the effort to roll out updates and deprecate old schemas grows with the number of Event consumers that need updating.
If your org is anything like mine, most things (data) are "additive" onto the existing structure. When you do want to deprecate something, you can notify all the consumers, the way a third party would, if you were going to change something significantly. But the latter happens much more rarely for us, though it tends to leave traces of technical debt...
> What about when event structures change? Now you’re having to push versions into your events and keeping every version of your serialisation format.
Sure. Events are like filled-out forms, and there is a reason that forms which are significant, modified over time, and where determining meaning from historical forms is an expected business need tend to be versioned (often by form publication date). If you ever need to reconstruct or audit history (whatever your data model), you need a log of what happened when that faithfully matches what was recorded at the time, and you need definitions of the semantics for how the aggregation of all of that maps to a final state. Event sourcing is a pretty direct reification of that need, and, yes, versioning is part of what is needed to make that work.
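In code terms, those "definitions of the semantics" end up being a fold over the log that interprets each record according to the version it was written under. A rough sketch (the account events and fields are made up):

```typescript
// Sketch: replaying the log into a final state, honouring the version each
// record was written under.
type Deposited =
  | { type: "Deposited"; version: 1; amount: number }       // v1: whole dollars
  | { type: "Deposited"; version: 2; amountCents: number }; // v2: cents

interface AccountState { balanceCents: number }

function replay(log: Deposited[]): AccountState {
  return log.reduce<AccountState>((state, e) => {
    // The semantics of the amount depend on which version of the "form" was filled in.
    const cents = e.version === 1 ? e.amount * 100 : e.amountCents;
    return { balanceCents: state.balanceCents + cents };
  }, { balanceCents: 0 });
}
```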
The protocol version only handles the version of the structure. It does not change when you change the meaning of the field. For example "this is still foo-index, but from now on it's an index in system X, not system Y". (Yeah, bad example, you could change the field name here, but sometimes it's not that clear)
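A sketch of what that forces on the reader: the structural version stays the same, but because the meaning of the field changed at some known point, the consumer has to branch on when the event was recorded. Everything here (names, the cutover date) is hypothetical:

```typescript
// Same structural version throughout; only the meaning of fooIndex changed.
interface ItemLinked { type: "ItemLinked"; schemaVersion: 1; fooIndex: number; recordedAt: string }

const MEANING_CUTOVER = "2021-06-01T00:00:00Z"; // invented cutover date

function resolveFooIndex(e: ItemLinked): { system: "X" | "Y"; index: number } {
  // Same field, same structure; the referent changed at the cutover.
  return { system: e.recordedAt >= MEANING_CUTOVER ? "X" : "Y", index: e.fooIndex };
}
```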
It's not that hard to deal with. Dealing with it through an ad-hoc "we'll come up with something as we go along" approach can be a bit of a pain, though. It is in fact fairly trivial to handle if some thought is put into it.
Typically you write an in-place migration for every version, yes. Or you snapshot your working-copy database and archive the event stream up to that point, and you play it back using the appropriate version of the processing code.
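The "migration per version" option usually ends up as a chain of small upgrade steps applied as events are read (or rewritten in place). Something like this, with invented names and version bumps:

```typescript
// Sketch: one small migration per version bump, chained until the event is current.
type VersionedEvent = { schemaVersion: number; [key: string]: unknown };

const CURRENT_VERSION = 3;

// Each entry upgrades an event from version N to N + 1.
const migrations: Record<number, (e: VersionedEvent) => VersionedEvent> = {
  1: (e) => ({ ...e, schemaVersion: 2, amountCents: (e.amount as number) * 100 }),
  2: (e) => ({ ...e, schemaVersion: 3, currency: "USD" }), // assumed default for old events
};

function upgrade(e: VersionedEvent): VersionedEvent {
  let current = e;
  while (current.schemaVersion < CURRENT_VERSION) {
    current = migrations[current.schemaVersion](current);
  }
  return current;
}
```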
It kinda sucks and there isn't a great answer. But there are a lot of use cases where there are real benefits to having that log go back to the start.
There is no easy way (on any platform, as far as I'm aware) of discovering and updating all clients that depend on some data structure if you plan to change that data structure in an incompatible way.
Better to "grow" your data structures than "change" them.
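That is, add optional fields and keep the old ones valid, rather than redefining what's already there. A small sketch with a hypothetical event:

```typescript
// "Growing" a structure: new fields are optional, so historical events stay valid.
interface ShipmentCreated {
  type: "ShipmentCreated";
  shipmentId: string;
  address: string;
  // Added later; absent on old events, so every reader has to tolerate that.
  carrier?: string;
  trackingUrl?: string;
}

function describe(e: ShipmentCreated): string {
  return e.carrier
    ? `Shipment ${e.shipmentId} via ${e.carrier}`
    : `Shipment ${e.shipmentId}`;
}
```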
Redux often does not have to keep track of versions, because the event stream is consistent for that session.
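Presumably because the action log only lives as long as the session and is produced and consumed by the same build of the code, there's nothing historical to migrate. A toy sketch of that situation (plain reducer, no Redux dependency):

```typescript
interface CounterState { count: number }

type Action = { type: "increment" } | { type: "decrement" };

function reducer(state: CounterState, action: Action): CounterState {
  switch (action.type) {
    case "increment": return { count: state.count + 1 };
    case "decrement": return { count: state.count - 1 };
    default: return state;
  }
}

// Replaying the session's actions reproduces the state; nothing outlives the session,
// so the actions never need a schema version.
const actions: Action[] = [{ type: "increment" }, { type: "increment" }, { type: "decrement" }];
const finalState = actions.reduce(reducer, { count: 0 });
console.log(finalState); // { count: 1 }
```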