Hacker News | m000's comments

Are there more tools like hexora?

GuardDog, but it's based on regexes

Nuclear weapons gave us global stability (i.e. no WW3). Hypersonics, hopefully, will also give us regional stability.

> Similarly a 41 million dollar weapon only costs that much until a wartime powers clause forfeits your factory to state production.

I seriously doubt such clauses still exist today. The entrenchment of the MIC in the US political structure runs so deep and has gone on for so long that they have probably managed to get such clauses removed by now. After all, that's their obligation to their shareholders.

Also, the more high-tech the weapon, the more complex and fragile its supply-chain logistics. So scaling up the production of high-tech weapons is much harder, especially in wartime.


> Regarding these cluster munitions though, other than very densely populated areas, do they inflict much damage ? Are they more powerful than a grenade, say ?

Also not an expert, but I get the feeling that "cluster munitions" is pretty much an umbrella term.

Because of the CCM [1], we tend to associate the term with the "lighter" variants, which are used as anti-personnel weapons. These variants probably wouldn't be much more destructive than a few grenades.

But what Iran is currently using, appears to be missiles with 500-1000kg payload. This puts each submunition in the 50-100kg range. This should deliver a lot more of a punch than a grenade. Also, because of their weight, they probably wouldn't be covered by CCM, had Iran ratified it.

And, yes, it is unsettling geeking out on this stuff, that may actually be killing people as we write our comment.

[1] https://en.wikipedia.org/wiki/Convention_on_Cluster_Munition...


I'm very sceptical of how well AI can "read the full diff and summarise the changes properly".

A colleague has been using Claude for this exact purpose for the past 2-3 months. Left alone, Claude just kept spewing spammy, formulaic, uninteresting summaries. E.g. phrases like "updated migrations" or "updated admin" were frequent occurrences for changes in our Django project. On the other hand, important implementation choices were left undocumented.

Basically, my conclusion was that, for the time being, Claude's summaries aren't worthy of inclusion in our git log. They missed most things that would make the log message useful, and included mostly stuff that Claude could generate on demand at any time. I.e. spam.


Same experience here, I see many people in the company (5-10k employees) pushing commits with Claude-generated comments that are absolutely useless.

I got praised for my commit messages by another team, they asked me how I was making Claude generate them, and I had to tell them I'm just not using Claude for that.

I like writing my own commit messages because it helps me as well, I have to understand what was done and be able to summarise it, if I don't understand quickly enough to write a summary in the commit message it means something can be simplified or is complex enough to need comments in the code.


Don't you find it problematic that the only reason Thiel can organize these lectures is because he is a billionaire? Is he a bona fide scholar on the subject? Would any tenured theology scholar be welcome to hold the same lectures at the Vatican?

I guess that's what you get for electing an American as the Pope. /s


The lectures were not given in the Vatican but somewhere nearby, and if you read the article you would see that all the Catholic institutions named denied involvement with the lectures.


He didn't give the lectures at the Vatican, not even at the Catholic university close to the Vatican, and even the Catholic University of America didn't have anything to do with it.


I am very much not a billionaire; but I can hire a village hall and give a lecture on the antichrist. I may have to work a little harder to get as much press coverage but that is not what is stopping me.


A key point that TFA misses (probably for the sake of story-telling) is that, unlike the 2006 iMac the author fondly remembers, the MacBook Neo is not a hand-me-down computer.

It is not the proverbial gift horse. You are paying fresh dollars for it, so it is only reasonable to have some baseline expectations of getting value out of it.

Also, an important point of the MacBook Neo criticism is that because of its cut-down features, a Neo may never graduate to a "hand-me-down computer", but instead head straight to the e-waste pile.


ARM macs are too new for us to know how the reuse/hand-me-down/legacy support world will shake out for them. There’ll be signs when the first M1 machines get axed but for now, I have no clue.


Apple wants to give me $250 trade-in for my M1 Air with 16GB, but it seems to be worth $500+ on the open market, so yeah, still above hand-me-down territory. It feels as good as the day I bought it, and literally the only reason I'm considering replacing it is now we have 2/3 laptops with magsafe and I'd like to start distributing those chargers around the house. So tempting to just swap for a used M2 for a couple hundred dollars, but the chore of moving to a new computer is holding me back.


The text merged to the documentation is more concise than the PR title:

> not actively developed anymore


Which is just as wrong.


I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.

There may be non-zero maintenance work happening, but a project that only maintains support for old versions and will never adopt new ones is functionally one that the ecosystem will eventually forget about. Maybe you call that "under active development" but my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."

On the other hand, if you don't support new minor versions day one, but you eventually support them, that's quite different.


More specifically, the Scientific Python community, through SPEC 0[0], recommends that support for a Python version be dropped three years after its release. Python 3.12 was released in October 2023[1], so that community will drop support for it in October 2026.

Considering that PyPy is only just now starting to seriously work on supporting 3.12, there's a pretty high chance that it won't even be ready for use before becoming obsolete. At that point it doesn't even matter whether you want to call it "in active development", it is simply too far behind to be relevant.

[0]: https://scientific-python.org/specs/spec-0000/

[1]: https://www.python.org/downloads/release/python-3120/
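The three-year arithmetic above is simple enough to sketch (the helper name below is mine, not something SPEC 0 defines):

```python
from datetime import date

def spec0_drop_date(release: date) -> date:
    """SPEC 0's rule of thumb: drop support ~3 years after release."""
    return date(release.year + 3, release.month, release.day)

py312_release = date(2023, 10, 2)  # Python 3.12.0 release date
print(spec0_drop_date(py312_release))  # 2026-10-02
```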


What's the point of a three year window? It seems like a weird middle-point. Either you are in a position to choose/install your own interpreter and libraries or you are not.

If you can choose your own versions and care at all about new releases, you can track latest and greatest with at the very most a few months of lag. Six months of "support" is luxurious in this scenario.

If you can't choose your own versions, you are most likely stuck on some sort of LTS Linux and will need to make do with what they provide. In that case three years is a cruel joke, because almost everything will be more than three years old when it is first deployed in your environment.


I guess the point of a three-year window is that the ecosystem, collectively, can at some point adopt new language features.

When you have some kind of ecosystem rule for that, you can make these upgrade decisions with a lot more confidence.

For example in my project I have a dependency on zstandard. In 3.14 zstandard was added to the standard library. With this ecosystem wide 3 year support cycle I can in good confidence drop the dependency in three years and use the standard lib from then on.

I feel like it just prevents the ecosystem from going stale because some important core library is still supporting a really old version, thus preventing other smaller libraries from using new language features as well, to not exclude a large user base still on an old version.
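The zstandard transition described above could look something like the following fallback import. This is only a sketch: it assumes the 3.14 stdlib module name `compression.zstd` (PEP 784) and the third-party `zstandard` package, whose APIs differ, so real code would need a small adapter layer on top.

```python
# Prefer the stdlib module (Python 3.14+), fall back to the
# third-party "zstandard" package on older interpreters.
try:
    from compression import zstd as _zstd  # stdlib since 3.14 (PEP 784)
    BACKEND = "stdlib"
except ImportError:
    try:
        import zstandard as _zstd  # third-party fallback
        BACKEND = "zstandard package"
    except ImportError:
        _zstd = None
        BACKEND = None  # no zstd support available

print(BACKEND)
```

Once the SPEC 0 window closes on all pre-3.14 versions, the `except` branches (and the dependency) can simply be deleted.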


This is silly. There's no killer feature for scientific computing being added to Python that would make an existing PyPy codebase drop that dependency; getting code validated takes a long time, and dropping something like PyPy would require re-validating the entire thing.


The phenomenon you're describing is why COBOL programmers still exist, and simultaneously, why it's increasingly irrelevant to most programmers.

The killer feature is ecosystem: easily and reliably reusing other libraries and tools that work out-of-the-box with other Python code written in the last few years. There are individually neato features motivating the efforts involved in upgrading a widely-used language & engine as well, but that kind of thinking misses the forest for the trees unfortunately.

It's a bit surprising to me, in the age of AI coding, for this to be a problem. Most features seem friendly to bootstrapping with automation (ex: f-strings that support ' not just "), and it's interesting if any don't fall in that camp. The main discussion seems to still be framed by the 2024 comments, before Claude Code etc became widespread: https://github.com/orgs/pypy/discussions/5145 .
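The f-string example mentioned in passing is PEP 701 (Python 3.12), which lets an f-string reuse the enclosing quote character. A sketch, wrapped in `eval()` so it also runs on pre-3.12 interpreters, where the syntax is rejected:

```python
d = {"key": "value"}
try:
    # Since Python 3.12 (PEP 701) the inner double quotes are legal
    # inside a double-quoted f-string; before 3.12 this is a SyntaxError.
    result = eval('f"{d["key"]}"')
except SyntaxError:
    result = None  # pre-3.12 interpreter
```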


The alternative is when you run a script that you last used a few years ago and now need it again for some reason (very common in research) and you might end up spending way too much time making it work with your now upgraded stack.

Sure, you could say you should have pinned dependencies, but that's a lot of overhead for a random script...


Most programmers aren't writing scientific software, which you can tell by claims that nicer f-strings is a pressing concern.


We can play that game - items like GIL-free interpreters and memory views are pretty relevant to folks on the more demanding side of scientific computing. But my point is this is a head-in-sand game when the community vastly outweighs any individual feature. My experience with the scientific computing community is that the non-pypy portion of it is much bigger.

I'm not a pypy maintainer, so my only horse in this race is believing cpython folks benefit from seeing the pypy community prove Things Can Be Better. Part of that means I'd rather pypy live on by avoiding unforced errors.


Unfortunately Python does add features in a drip-drip kind of way that makes being behind an experience with a lot of niggles. This is particularly the case for the type annotation system, which was retrofitted onto a language that obviously didn't have one originally. So it's being added slowly, in a very conservative way, and there are a lot of limitations and pain points that are gradually being improved (or at least progressed on). The upcoming lazy module loading will also immediately become a sticking point.


They appear to be talking about CPython releases, taking into account how long those versions continue to be supported (in the sense of security updates). That's irrelevant for PyPy, which clearly supports Python versions on a different schedule.


It's not irrelevant, because if SPEC 0 says that a particular Python version is no longer supported, then libraries that follow it won't avoid language or standard library features that that version doesn't have. And then those libraries won't work in the corresponding PyPy version. If there isn't a newer PyPy version to upgrade to, then they won't work in PyPy at all.


You might make a different decision if you were targeting PyPy.


> I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.

There is literally a Python 3.12 milestone in the bug tracker.

> my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."

It sounds a lot more like your actual response is "I don't care about pypy".

Which is fine, most people don't to start with. You don't have to pretend just to concern-troll the project.


Maybe it's high time for some regulation?

E.g. EU enforced mandatory USB-C charging from 2025, and pushes for ending production of combustion engine cars by 2035. Why not just make ECC RAM mandatory in new computers starting e.g. from 2030?

AMD is already one step away from being compliant. So, it's not an outlandish requirement. And regulating will also force Intel to cut their BS, or risk losing the market.


OMG no. Politicians have no business making technological decisions. They make it harder to innovate, i.e. to invent the next generation of ECC with a different name.


I would argue that in the present conditions, regulation can actually foster and guide real innovation.

With no regulations in place, companies would rather innovate in profit extraction than in improving technology. And if they have enough market capture, they may actually prefer not to innovate at all, if innovating would hurt profits.


ECC is like Ethernet. The name doesn’t have to change for the technology to update.


If companies are allowed to change the meaning of terms in legislation we are in even more trouble.


Ethernet was once carried over thick coax at like 2 then 3 megabits per second. By the time it was standardized as IEEE 802.3 it was at 10 megabits. 802.3 was thin coax. 802.3e took a step back in speed to 1 megabit, but over phone-type wire. 10 base T, Ethernet over twisted pair at 10 megabits per second, wasn’t until 802.3i in 1990. Then 10 base F (fiber) in 1992.

Then there are various speeds of 100 M, 1000 M / 1 G, 2.5 G, 5 G, 10 G, 25 G, 40 G, 50 G, 100 G, 200 G, and 400 G. Some of the media included twisted pair, single mode fiber, multimode fiber, twinax cable, Ethernet over backplanes, passive fiber connections (EPON), and over DWDM systems.

There have also been multiple versions of power over Ethernet using twisted pair cable. Some are over one pair, some two pairs, and some over the data pairs while other use dedicated pairs for power.

There are also standards for negotiation among multiple of these speeds. There have been improvements to timestamping. There have been standards to bring newer speeds to fewer pairs or current speeds over longer distances.

There’s currently work on 1.6 Tbps links up to 30 or possibly 50 meters. There has been work in the past on using plastic optical fibers instead of glass ones. Oh, and there are standards specific to automotive Ethernet.

Ethernet itself, the name and the first implementation of a network with that name, were from 1972 and 1973. It was on the market in 1980 and first standardized in 1983 as ECMA-82.

Ethernet supports in its different configurations direct host-to-host connections, daisy chains, hubbed networks, switched networks, tunnels over routed protocols like TCP or UDP, bridges over technologies like MOCA or WiFi, and even being tunneled across the open Internet.

All of these are Ethernet. They have a common lineage. They are all derived from the same origin. Token Ring, FDDI, ATM, and SONET have all been more than one thing over time too. So has WiFi. 802.11a is very little like 802.11be, but those are also similar enough to carry the same family name.

The IEEE 802.3 series has a lot of history buried in those documents.


Politicians don’t have to be dumb.


Reading this again, did you forget your trailing /s?


Cost. You would be making computers 10-20% more expensive.

Computers also aren't used much these days, and phones and tablets don't have ECC.


ECC only adds 10-15% to the transistor count, so you're only making one component of the computer about 15% more expensive. This should have been a no-brainer, at least before the recent DRAM price hikes.

Also, while computers may not be used enough for cosmic rays to be a risk factor, they're still susceptible to rowhammer-style attacks, which ECC memory makes much harder.

Finally, if you account for the current performance loss due to rowhammer counter-measures, the extra cost of ECC memory is partially offset.
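For reference, the transistor-count claim can be sanity-checked against the classic SECDED scheme, which is what standard ECC DIMMs use (a rough back-of-the-envelope, not a full cost model):

```python
# SECDED ECC stores 8 check bits for every 64 data bits,
# i.e. a 72-bit-wide ECC DIMM carries 9 chips where a
# non-ECC DIMM carries 8.
data_bits, check_bits = 64, 8
overhead = check_bits / data_bits
print(f"{overhead:.1%}")  # 12.5%
```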


It's still weird. Why not just use an effing install.sh script like everybody else? And don't tell me "security". Because after installation you will be running an unknown binary anyway.

