>"For a short list of "IDT exception handling problems", I'll just enumerate some of them:
(a) IDT itself is a horrible nasty format and you shouldn't have to parse memory in odd ways to handle exceptions. It was fundamentally bad from the 80286 beginnings, it got a tiny bit harder to parse for 32-bit, and it arguably got much worse in x86-64.
(b) %rsp not being restored properly by return-to-user mode.
(c) delayed debug traps into supervisor mode
(d) several bad exception nesting problems (NMI, machine checks and STI-shadow handling at the very least)
(e) various atomicity problems with gsbase (swapgs) and stack pointer switching
(f) several different exception stack layouts, and literally hundreds of different entrypoints for exceptions, interrupts and system calls (and that's not even counting the call gates that nobody should use in the first place).
But I suspect I forgot some."
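
To make point (a) concrete, here is a minimal sketch of the 16-byte 64-bit IDT gate descriptor as the Intel SDM describes it (the struct and field names here are mine, not from the quote). The handler address is split across three non-contiguous fields, so recovering an entry point means stitching it back together:

```c
#include <stdint.h>

/* Sketch of the 16-byte x86-64 IDT gate descriptor (layout per the
 * Intel SDM; field names are mine). The handler address is scattered
 * across three non-contiguous fields. */
struct idt_gate64 {
    uint16_t offset_low;   /* handler address bits 15:0              */
    uint16_t selector;     /* code segment selector                  */
    uint8_t  ist;          /* bits 2:0 = Interrupt Stack Table index */
    uint8_t  type_attr;    /* present bit, DPL, gate type            */
    uint16_t offset_mid;   /* handler address bits 31:16             */
    uint32_t offset_high;  /* handler address bits 63:32             */
    uint32_t reserved;
} __attribute__((packed));

/* Reassembling the entry point from the three pieces: */
static inline uint64_t idt_gate_offset(const struct idt_gate64 *g)
{
    return (uint64_t)g->offset_low
         | ((uint64_t)g->offset_mid  << 16)
         | ((uint64_t)g->offset_high << 32);
}
```

The split exists because the original 16-bit 80286 gate format was widened twice while staying backward compatible: the 386 bolted bits 31:16 of the offset onto bytes 6-7, and x86-64 doubled the descriptor to hold bits 63:32. That lineage is the "80286 beginnings" the quote refers to.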
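Point (f) is also easy to illustrate: the hardware itself hands you two different stack layouts, because some vectors push an error code and most don't. A sketch of the two frames (again per the SDM, names mine):

```c
#include <stdint.h>

/* What the CPU pushes on delivery, per the Intel SDM (names mine).
 * Most vectors (#DE, NMI, external interrupts, ...) get this frame: */
struct hw_frame {
    uint64_t rip;
    uint64_t cs;
    uint64_t rflags;
    uint64_t rsp;
    uint64_t ss;
};

/* ...but #GP, #PF, #DF and a few others get an error code on top,
 * shifting every other field's offset from %rsp by 8: */
struct hw_frame_with_err {
    uint64_t error_code;
    uint64_t rip;
    uint64_t cs;
    uint64_t rflags;
    uint64_t rsp;
    uint64_t ss;
};
```

The usual way to cope (Linux does a variant of this) is a tiny per-vector stub that pushes a dummy error code for the vectors that lack one, so a single common handler sees one layout. Needing a stub per vector is part of where the "hundreds of different entrypoints" complaint comes from.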