This is far from transparent. It's a massive change in behavior. Fork may be very hard to get right, but every python developer using multiprocessing has already paid that cost - and expects it to keep working!
With fork, you could pass objects that can't be pickled (lambdas, local functions, file handles, database connections) because the child simply inherits them. With forkserver, everything sent to the child must be picklable. That alone breaks thousands of codebases.
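A minimal sketch of that difference: fork hands the child a copy of the parent's memory, so the target callable can be anything, while forkserver (like spawn) has to pickle the target and its arguments — and a lambda can't be pickled:

```python
import pickle

# Under "fork" the child inherits the parent's memory, so the target
# can be anything -- even a lambda. Under "forkserver" (or "spawn")
# the target and its arguments are serialized with pickle and sent to
# the child, so they must be picklable.
double = lambda x: x * 2

try:
    pickle.dumps(double)
    picklable = True
except Exception:  # pickle raises PicklingError for lambdas
    picklable = False

print(picklable)  # False: exactly what starts failing when the default flips
```

The same failure hits local (nested) functions, open file handles, and live database connections passed as arguments.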
You can no longer rely on module-level globals being inherited by the child process, so it fundamentally changes how state is shared between parent and child.
It launches a server process with extra machinery at runtime - startup cost and hidden complexity that just snuck into your app without you knowing.
forkserver may be technically a better choice. But that's irrelevant. Changing the default breaks existing code.
2. The fork option didn't disappear. It takes a single line of code to reconfigure it, and code that depends on the fork behaviour is better off for making that fact explicit.
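Opting back in really is a one-liner at the top of the entry point (a sketch, assuming a POSIX platform where "fork" is available; note `set_start_method` may only be called once per program):

```python
import multiprocessing as mp

if __name__ == "__main__":
    # One explicit line restores the old behaviour program-wide...
    mp.set_start_method("fork")
    print(mp.get_start_method())  # fork

    # ...or scope it to a single call site without touching the default:
    ctx = mp.get_context("fork")
```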
Eh, not letting the language ever evolve is a sure way to death, ymmv.
Forkserver is probably a better default: inheriting file handles, globals, and sockets leads to a bunch of subtle bugs. I'm not sure that was ever a good feature, also ymmv.
And fork() is still available, so if the change breaks things, the fix is to explicitly ask for fork(). For most casual uses of multiprocessing, a user won't know one way or the other - which is what I meant by transparent.