Hacker News | tylfin's comments

Posit has solved similar problems with their Package Manager as well, the benefit being that it's hosted on-prem; the drawback is that the user has to build wheels for their desired architecture (if they're not on PyPI).


I don't think the term "Microkernel Architecture" should be used in this context. I think "Modular Architecture" (or "Plug-in," as mentioned) gets closer to this extension-based pattern.

The reason is that there's no relevance to the kernel, and modular kernels also take this approach, with replaceable plug-ins or extensions.


I think people gravitate to it because "kernel" feels like a cool word and some people have heard of OS microkernels being modular. As you say, "modular architecture" is a much clearer way to express the intent and conveys the purpose without being pretentious.


I don't think "Microkernel Architecture" is the same as "Modular Architecture". I've only heard the term "Microkernel Architecture" used for systems that have clear public extension points that enable users to choose which plugins to run or even add third-party plugins.

"Modular Architecture" is broader in my opinion, and rather a description of internal structure. A "modular monolith," for example, is modular but doesn't necessarily have a "core," nor is it required to be extensible with plugins by users.


This is actually something they address in the Risk Factors section (here: https://www.sec.gov/Archives/edgar/data/1607939/000119312521...) if you're curious about what they have to say.


I'm impressed how much "plain English" there is there.


There is the "Yank" PEP 592 semantic that can be used to mark vulnerable packages. Its adoption has been a little slow, but I agree, having these packages available and marked accordingly makes it easier for security scanning and future detection research.

https://www.python.org/dev/peps/pep-0592/
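To illustrate PEP 592: the per-file "yanked" / "yanked_reason" fields show up in PyPI's JSON API. A minimal sketch of scanning for yanked releases, using a made-up sample payload (the package name, version numbers, and CVE string are fabricated for illustration):

```python
# Sketch: find yanked releases in a PyPI JSON API payload.
# The per-file "yanked"/"yanked_reason" fields come from PEP 592;
# the sample payload below is fabricated for illustration.
import json

sample = json.loads("""
{
  "releases": {
    "1.0.0": [{"filename": "pkg-1.0.0-py3-none-any.whl",
               "yanked": false, "yanked_reason": null}],
    "1.0.1": [{"filename": "pkg-1.0.1-py3-none-any.whl",
               "yanked": true, "yanked_reason": "CVE-2021-XXXX"}]
  }
}
""")

def yanked_versions(payload):
    """Return versions where every distribution file has been yanked."""
    return sorted(
        version for version, files in payload["releases"].items()
        if files and all(f.get("yanked") for f in files)
    )

print(yanked_versions(sample))  # ['1.0.1']
```

A scanner could combine this with the "yanked_reason" text to distinguish "broken" from "security" yanks, which is roughly what the detection-research point above is getting at.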


Even better would be to allow their install, but have them start up with an immediate panic()-style abort (i.e., print("This package has been found to be malicious; please see pypi/evilpackagename for details"); sys.exit(99)) to force any app using those packages to abort.


Python packages run arbitrary code at install/build time, so this isn't viable.


It's no longer arbitrary if the PyPI crew is the one who controls the code, or did I understand you wrong?


Just that it isn't as simple as prepending lines to the code that gets executed. I think I misunderstood you: instead of prepending the code, you're suggesting the entire compromised package be replaced with `throw "You got Hacked"` at import time.


Correct. When the program starts to run and imports the modules, nothing will make admins more aware that something is really wrong here. Maybe raise an exception which, if not handled, executes sys.exit() with a predefined code.

And some mechanism to detect this at install/build time as well, so that automated build systems can cleanly abort a build and issue a specific message which can then be forwarded via email or SMS through some custom code.

The entire package gets replaced by a standardized, friendly one. No harmful code gets downloaded.
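A minimal sketch of what such a standardized stub could look like, following the message and exit code 99 suggested upthread (the stub source and the way it's exercised here are illustrative assumptions, not an existing PyPI mechanism):

```python
# Sketch: a replacement __init__.py that aborts the process on import,
# per the suggestion above. Message text and exit code 99 come from
# the earlier comment; nothing here is a real PyPI feature.
import sys

STUB_SOURCE = '''
import sys
sys.stderr.write(
    "This package has been found to be malicious; "
    "please see pypi/evilpackagename for details\\n"
)
sys.exit(99)
'''

# Simulate an application importing the stub: sys.exit() raises
# SystemExit, which (if unhandled) terminates with the given code.
try:
    exec(compile(STUB_SOURCE, "<stub __init__.py>", "exec"))
except SystemExit as e:
    print("import aborted with exit code", e.code)  # 99
```

Because the abort happens via SystemExit, a build system could also catch it at install-time smoke tests and emit the specific message mentioned above.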


Denial of Service by panicking is also harmful for some processes.


It's not like an already running process will be affected by this.

This would only occur when the package gets updated or reinstalled, which shouldn't happen without supervision if the program is running in a sensitive context.

Otherwise, a Denial of Service is a good last-resort measure to prevent running a malicious service. Ideally this gets detected at install/build time.


.whl packages don't run arbitrary code, they're just zips.
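To illustrate the point: a wheel really is just a zip archive with a conventional layout, so it can be inspected without executing anything. A sketch building a tiny fake "wheel" in memory (the package name and contents are made up):

```python
# Sketch: a .whl is a zip archive; zipfile can inspect it without
# running any code. The tiny in-memory "wheel" here is fabricated.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("pkg/__init__.py", "VERSION = '1.0'\n")
    whl.writestr("pkg-1.0.dist-info/METADATA", "Name: pkg\nVersion: 1.0\n")
    whl.writestr("pkg-1.0.dist-info/RECORD", "")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as whl:
    print(whl.namelist())  # lists archive contents, no code executed
```

This is why wheel installs are safer than sdists: installation is unpacking, whereas an sdist can run arbitrary code in its build step.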


Skimming through that link, it seems that `yank` is for pulling _broken_ packages, whereas the suggestion above is to explicitly mark them as malicious.


Should we call the "mark them malicious" version "Yeet" or "Yoink"?


Good point. The keyword for uninstall and remove residual files should be Yeet indeed.

Downloading the latest, bleeding edge version should be Yoked

pip install yoked <package>


`npm yeet malicious-package`


I could imagine a civil or criminal case relying on shredded documents as a source of evidence.


Can't reproduce the issue after a few minutes? Sorry, wontfix, mercurial core.

Joking aside, it's really neat to see the scale and sophistication of error detection appearing in these data centers.


The last time this happened, it was after they shipped the phone app.

I wonder if they have a big feature underway or are just migrating more infrastructure to Azure?

EDIT: Either way, some postmortems would be appreciated before more customers have to look for a backup solution...


I'm still awaiting the promised post-mortem [1] of a retracted blog post [2][3] which had announced the deprecation of the GitHub Developer Program.

> There must be quite a story behind this - will you be putting up a post-mortem ? (Post mortems of business "outages" are usually more instructional)

> Yes. We will. Please stay tuned.

[1] https://news.ycombinator.com/item?id=21718171

[2] https://news.ycombinator.com/item?id=21718083

[3] https://web.archive.org/web/20191205225751/https://developer...


If you're working on a serious project, hosting it mainly on GitHub via Git, and don't already have a backup solution in place, I'm afraid you're late. But better late than never! Make sure you can always deploy when less reliable services are down, and GitHub has always been one of those. Git makes this incredibly easy, as long as your CI/CD is externalized already.


Yeah, this is very good advice.

I think if revenue or product quality is tied to a VCS, having an active-active or active-passive setup is the way to go.

Fortunately, I'm on an on-prem product so that investment hasn't seemed worth it yet.

This doesn't mean we don't escrow our code, but rather than try to rebuild from source, I just take a short coffee break and wait for the impacted service to come back up :)


From Google, I think this is the page you're looking for:

https://www.mastercard.us/en-us/vision/corp-responsibility/c...


> You are using a developer version of DXP, please register your application.

Nice...


There's more - the URL it points to, https://dxp.mastercard.com/index.html#/onboard/request, doesn't work as of now.


any idea of what that means?


Is it too early for an acquisition?


Yeah, I've never had to implement my own DNS cache for a language before...

If you're on a system with cgo available, you can use `GODEBUG=netdns=cgo` to avoid making direct DNS requests.

This is the default on macOS, so if it was running on four Mac Pros I wouldn't expect it to be the root cause.


It's possible that wasn't the default setting on Macs back then. I don't know that cgo would be a good choice either, if you're resolving a ton of domains at once. Early versions of Go would create new threads if a goroutine made a cgo call and an existing thread was not available. I remember this required us to throttle concurrent dial calls; otherwise we'd end up with thousands of threads and eventually bring the crawler to a halt.

To make DNS resolution really scale, we ended up moving all the DNS caching and resolution directly into Go. Not sure that's how you'd do it today; I'm sure Go has changed a lot. Building your own DNS resolver is actually not so hard with Go, and the following were really useful:

https://idea.popcount.org/2013-11-28-how-to-resolve-a-millio...

https://github.com/miekg/dns
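The caching half of that approach is language-agnostic: memoize resolver answers with a TTL so repeated lookups don't hit the network. A minimal sketch (in Python for brevity; the fake_resolve function and TEST-NET address stand in for a real DNS query):

```python
# Sketch of an in-process DNS cache with TTL expiry. fake_resolve is
# a stand-in for an actual DNS lookup; everything here is illustrative.
import time

class DNSCache:
    def __init__(self, resolve, ttl=60.0):
        self.resolve = resolve   # callable: hostname -> address
        self.ttl = ttl
        self._cache = {}         # hostname -> (address, expiry)

    def lookup(self, host):
        now = time.monotonic()
        hit = self._cache.get(host)
        if hit and hit[1] > now:
            return hit[0]        # fresh cache entry, skip the resolver
        addr = self.resolve(host)
        self._cache[host] = (addr, now + self.ttl)
        return addr

calls = []
def fake_resolve(host):
    calls.append(host)
    return "192.0.2.1"  # TEST-NET-1 address, for documentation use

cache = DNSCache(fake_resolve, ttl=60.0)
cache.lookup("example.com")
cache.lookup("example.com")
print(len(calls))  # 1: the second lookup is served from cache
```

A real crawler-scale resolver would also need negative caching, per-record TTLs from the DNS answers themselves, and a bound on concurrent in-flight queries, which is roughly what the throttling comment above is about.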

