I wonder whether "normal" RDIMM ECC would be enough to mitigate most of those radiation-induced bit flips. If so, it wouldn't really make a difference versus Earth-based servers, since most enterprise servers use registered ECC DIMMs too.
> Because caches hold the most recent and most relevant data to the current processing, it is critical that this data be accurate. To enable this, AMD has designed EPYC with multiple tiers of cache protection. The level 1 data cache includes SEC-DED ECC, which can detect two-bit errors and correct single-bit errors. Through parity and retry, L1 data cache tag errors and L1 instruction cache errors are automatically corrected. The L2 and L3 caches are extended even further with the ability to correct double errors and detect triple errors.
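The SEC-DED scheme the quote describes (correct any single-bit error, detect any double-bit error) is classically an extended Hamming code. Here's a minimal sketch in Python using a hypothetical Hamming(8,4) layout: 4 data bits, 3 Hamming parity bits, plus one overall parity bit that distinguishes single from double errors. Real cache ECC operates on much wider words and is done in dedicated logic, but the decoding logic is the same idea.

```python
def encode(nibble):
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 5, 6, 7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7
    overall = 0
    for b in bits:
        overall ^= b
    bits.append(overall)             # position 8: overall parity (enables DED)
    return bits

def decode(bits):
    """Return (data_nibble, status); corrects 1-bit, detects 2-bit errors."""
    # Hamming syndrome: XOR of 1-based positions of all set bits in 1..7.
    # For a valid codeword it is 0; a single flip makes it the flip position.
    s = 0
    for i in range(7):
        if bits[i]:
            s ^= i + 1
    overall = 0
    for b in bits:
        overall ^= b
    if s == 0 and overall == 0:
        status = "ok"
    elif overall == 1:
        # Odd overall parity => single-bit error, correctable.
        if s:
            bits[s - 1] ^= 1         # syndrome points at the flipped bit
        else:
            bits[7] ^= 1             # the overall parity bit itself flipped
        status = "corrected"
    else:
        # Even overall parity but nonzero syndrome => two flips, uncorrectable.
        status = "double-error detected"
    data = (bits[2], bits[4], bits[5], bits[6])
    return sum(b << i for i, b in enumerate(data)), status
```

A double flip leaves overall parity even but produces a nonzero syndrome, which is exactly how the decoder knows not to "correct" (and thereby corrupt) the word; plain Hamming without the extra parity bit would mis-correct it instead.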
Sun Microsystems famously had this problem with its UltraSPARC II-based servers, whose cache SRAM lacked ECC. Later versions of the processor added it.