Posit has solved similar problems with their Package Manager as well, the benefit being that it's hosted on-prem, though the user has to build wheels for their desired architecture (if they're not on PyPI).
I don't think the term "Microkernel Architecture" should be used in this context. I think "Modular Architecture" (or "Plug-in," as mentioned) gets closer to this extension-based pattern.
The reason being that there's no relevance to the kernel, and modular kernels also take this approach with replaceable plug-ins or extensions.
I think people gravitate to it because "kernel" feels like a cool word and some people have heard of OS microkernels being modular. As you say, "modular architecture" is a much clearer way to express the intent and conveys the purpose without being pretentious.
I don't think "Microkernel Architecture" is the same as "Modular Architecture". I've only heard the term "Microkernel Architecture" used for systems that have clear public extension points that enable users to choose which plugins to run or even add third-party plugins.
"Modular Architecture" is more broad in my opinion and rather a description of internal structure. A "modular monolith" for example is modular but doesn't necessarily have a "core" nor is it required to be extensible with plugins by users.
There is the "Yank" semantic from PEP 592 that can be used to mark vulnerable packages. Its adoption has been a little slow, but I agree: having these packages available and marked accordingly makes it easier for security scanning and future detection research.
Even better would be to allow their install, but have them start up with an immediate panic() sort of function (i.e., print("This package has been found to be malicious; please see pypi/evilpackagename for details"); sys.exit(99)) to force aborts of any app using those packages.
Just that it isn't as simple as prepending lines to the code that gets executed. I think I misunderstood you: instead of prepending the code, you are suggesting the entire compromised package get replaced with `throw "You got Hacked"` at import time.
Correct, when the program starts to run and imports the modules, as nothing will make admins more aware that something is really wrong here. Maybe raise an exception which, if not handled, executes sys.exit() with a predefined code.
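A minimal sketch of that idea: make the replacement package's only content an exception that subclasses SystemExit, so an unhandled import aborts the interpreter with a predefined code while test harnesses can still catch it. The class name, message, and exit code 99 here are just illustrative:

```python
class MaliciousPackageError(SystemExit):
    """Raised on import of a known-malicious package.

    Because it subclasses SystemExit, an uncaught instance makes the
    interpreter exit with the given code, so unattended apps abort loudly
    instead of silently running attacker code.
    """

    def __init__(self, message, code=99):
        self.message = message
        super().__init__(code)  # SystemExit stores this as .code


def abort_malicious_import(package_name):
    # This call would be the only statement in the replacement package's
    # __init__.py, so any `import evilpackagename` aborts the app.
    raise MaliciousPackageError(
        f"{package_name} has been found to be malicious; "
        f"see its PyPI project page for details."
    )
```

Monitoring can then alert on the distinct exit code rather than parsing stack traces.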
And some mechanism to detect this at install/build time as well, so that automated build systems can cleanly abort a build and issue a specific message, which can then be forwarded via email or SMS through some custom code.
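As a sketch of such a build-time gate: compare the installed distributions against a denylist and exit with a distinct code the build system can match on. The denylist is hardcoded here for illustration; in practice it would come from an advisory feed, and the exit code 97 is arbitrary:

```python
import sys
from importlib import metadata

# Hypothetical denylist; a real one would be fetched from an advisory source.
DENYLIST = {"evilpackagename"}


def check_installed_packages(denylist=DENYLIST):
    """Return the names of installed distributions that are on the denylist."""
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in metadata.distributions()
    }
    return sorted(installed & {name.lower() for name in denylist})


def main():
    bad = check_installed_packages()
    if bad:
        # A distinct exit code lets CI abort cleanly and trigger a
        # specific notification (email/SMS) via custom wrapper code.
        print(f"Aborting build: known-malicious packages installed: {bad}",
              file=sys.stderr)
        sys.exit(97)


if __name__ == "__main__":
    main()
```

Running this as an early build step gives a clean, machine-readable abort instead of a failure buried somewhere later in the pipeline.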
The entire package gets replaced by a standardized, friendly one. No harmful code gets downloaded.
It's not like an already running process will be affected by this.
This would only occur when the package gets updated or reinstalled, which shouldn't happen without supervision if the program is running in a sensitive context.
Otherwise, a denial of service is a good last-resort measure to prevent running a malicious service. Ideally this gets detected at install/build time.
Skimming through that link, it seems that `yank` is for pulling _broken_ packages, whereas the suggestion above is to explicitly mark them as malicious.
If you're working on a serious project, host it mainly on GitHub via Git, and don't already have a backup solution in place, I'm afraid you're late. But better late than never! Make sure you can always deploy when less reliable services are down, and GitHub has always been one of those. Git makes it incredibly easy as well, as long as you have your CI/CD externalized already.
I think if revenue or product quality is tied to a VCS, having an active-active or active-passive setup is the way to go.
Fortunately, I'm on an on-prem product so that investment hasn't seemed worth it yet.
This doesn't mean we don't escrow our code, but rather than try to rebuild from source, I just take a short coffee break and wait for the impacted service to come back up :)
It's possible that wasn't the default setting on Macs back then. I don't know that cgo would be a good choice either, if you're resolving a ton of domains at once. Early versions of Go would create new threads if a goroutine made a cgo call and an existing thread was not available. I remember this required us to throttle concurrent dial calls, otherwise we'd end up with thousands of threads, and eventually bring the crawler to a halt.
To make DNS resolution really scale, we ended up moving all the DNS caching and resolution directly into Go. Not sure that's how you'd do it today; I'm sure Go has changed a lot. Building your own DNS resolver is actually not so hard with Go, the following were really useful: