In Thunder[1], a PyTorch-to-Python JIT compiler for optimizing DL models, we maintain a bytecode interpreter covering Python 3.10-3.12 (and 3.13 soon) for our jit. That allows us to run Python code while redirecting arbitrary function calls and operations, but it is quite a bit slower than CPython.
While the bytecode changes between versions (and sometimes it is a back-and-forth, for example in the call handling), it is entirely manageable once you accept that there will be differences between Python versions.
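To give a flavor of what such an interpreter does, here is a toy sketch (not Thunder's actual interpreter; `run` and `const_func` are hypothetical names): it walks a function's instructions with `dis` and dispatches on opcode names, and even a trivial `return 42` compiles differently across versions (3.12 folds `LOAD_CONST` plus `RETURN_VALUE` into a single `RETURN_CONST`):

```python
import dis

def const_func():
    return 42

def run(fn):
    # Minimal sketch: walk the bytecode with a value stack, handling
    # only the opcodes a trivial function needs, across 3.10-3.13.
    stack = []
    for ins in dis.get_instructions(fn):
        if ins.opname == "RESUME":          # no-op marker since 3.11
            continue
        elif ins.opname == "LOAD_CONST":
            stack.append(ins.argval)
        elif ins.opname == "RETURN_CONST":  # 3.12+: fused load-and-return
            return ins.argval
        elif ins.opname == "RETURN_VALUE":
            return stack.pop()
        else:
            raise NotImplementedError(ins.opname)

print(run(const_func))  # → 42
```

A real interpreter also has to intercept calls (so they can be redirected into the interpreter again), which is exactly where the version-to-version back-and-forth mentioned above shows up.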
The largest change has been the new zero-cost (in the happy path) exception handling, but I can totally see why Python moved to that from setting up try-block frames at runtime.
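You can observe the change directly (a small sketch, not from Thunder): since 3.11, code objects carry a side table, `co_exceptiontable`, mapping instruction ranges to handlers, which is consulted only when an exception actually propagates; before that, a `SETUP_FINALLY` opcode pushed a try-block at runtime on every entry into the `try`:

```python
import dis
import sys

def boom():
    try:
        raise ValueError("x")
    except ValueError:
        return "caught"

if sys.version_info >= (3, 11):
    # Zero-cost scheme: no setup opcode runs in the happy path; the
    # compiler emits a side table mapping bytecode ranges to handlers.
    print(len(boom.__code__.co_exceptiontable) > 0)
else:
    # Pre-3.11: SETUP_FINALLY sets up the try-block frame at runtime.
    print(any(i.opname == "SETUP_FINALLY"
              for i in dis.get_instructions(boom)))
```

Both branches print `True`; an interpreter supporting 3.10 through 3.12 has to implement both mechanisms.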
I will say that I was happy not to have to support Python <= 3.9, as the changes were a lot more involved there (the bytecode format itself, etc.).
Of course, working on this also means knowing otherwise useless Python trivia afterwards. One of my favorites is how this works:
1. https://github.com/Lightning-AI/lightning-thunder/