FWIW, this figure looks to be the fraction of 20–69 year olds in the entire population who are unemployed[0]. Referencing the official definitions[1], the standard unemployment figure of 2.6% (as of 2026-02) narrows that denominator to people who are earning wages or actively looking for work.
> which naturally could be shifted by incentives like money or training.
From the above, 18% seems like the wrong number to look at. Heck, why not quote 38.1%, since it captures everyone who can legally work (including 15- and 90-year-olds)?
IMO, the base population we want to look at is people who actually want a job, which is captured by various Labor Underutilization (LU) metrics. These all hover around 2.5–6.0% according to public records[2], and are also defined in the official docs[1].
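To make the denominator point concrete, here's a toy calculation with made-up head-counts (these are NOT Japan's actual statistics, just illustrative round numbers): change what you count as "jobless" and what base population you divide by, and the headline rate swings by an order of magnitude.

```python
# Hypothetical head-counts, for illustration only -- NOT real statistics.
actively_seeking = 2_000_000    # "unemployed" in the official, narrow sense
not_working      = 14_000_000   # includes students, homemakers, retirees
labor_force      = 70_000_000   # employed + actively seeking
age_20_69        = 78_000_000   # everyone in the age band, working or not

def rate(jobless, base):
    """Jobless share of the chosen base population, in percent."""
    return round(100 * jobless / base, 1)

print(rate(actively_seeking, labor_force))  # 2.9  -- headline-style number
print(rate(not_working, age_20_69))         # 17.9 -- "18%"-style number
```

Same country, same people, wildly different "rates" — which is why the choice of numerator and denominator has to match the question you're actually asking.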
> Hit is muchel to seggen all þat pinunge hie on me uuroȝten, al þar sor and al þat sorȝe. Ne scal ic nefre hit forȝeten, naht uuhiles ic libbe!
My reading was "It is much to tell all the torment they wrought on me, all the pain and all the sorrow. Never shall I forget it, not while I live!" ("hie" is plural, so "they" rather than "he".)
Busy Beaver gets a lot of love, but the fast growing hierarchy is both constructive and can go way, way, waaaaay beyond current known BB bounds. This makes their size much more viscerally apparent than gesturing vaguely at BB(BB(BB(100))) or whatever, IMHO.
David Metzler has this really cool playlist "Ridiculously Huge Numbers" that digs into the details in an accessible way:
By the end, you're thinking about functions that grow so fast TREE is utterly insignificant. Surprisingly, getting there just needs a small bit of machinery beyond Peano Arithmetic [0].
Then you can ponder doing all that but making a tiny tweak: replacing successorship with BB. Holy cow...
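For concreteness, the finite levels of the hierarchy (and the first diagonalization at ω) fit in a few lines of Python; the function names here are mine, not standard notation.

```python
# A minimal sketch of the fast-growing hierarchy (FGH), for illustration.
#   f_0(n)     = n + 1
#   f_{k+1}(n) = f_k applied n times, starting from n
#   f_omega(n) = f_n(n), diagonalizing via the fundamental sequence omega[n] = n

def fgh(k, n):
    """Evaluate f_k(n) for a finite index k."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):              # iterate the previous level n times
        result = fgh(k - 1, result)
    return result

def fgh_omega(n):
    """First transfinite level: f_omega(n) = f_n(n)."""
    return fgh(n, n)

# f_1 doubles, f_2 is roughly n * 2^n, f_3 already builds towers of exponents:
print(fgh(1, 3))   # 6
print(fgh(2, 3))   # 24
print(fgh(3, 2))   # 2048
```

Already `fgh(3, 4)` won't finish in the lifetime of the universe, and we're still at the very bottom of the hierarchy — which is the viscerally-apparent-size point made above.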
Only the part for which we have well-defined fundamental sequences is constructive. As far as I know, no such system of fundamental sequences has been defined up to PTO(Z_2), the Proof Theoretic Ordinal of second-order arithmetic, even though growth at that ordinal can be programmed in under 42 bytes.
> waaaaay beyond current known BB bounds
I have to disagree here. The Proof Theoretic Ordinal of ZFC + infinitely many inaccessibles can be reached with a program under one kilobyte in size, and that is already extremely high up into the FGH.
By default, you get snapshots every minute for the last hour, every hour for the last day, and every day into perpetuity. This is configurable. You can set as many cadences as you wish, with the ability to configure their frequency and lifetimes.
Actually, snapshots are like btrfs volumes in many ways, meaning they can be mounted, read from, and written to as desired. This allows the filesystem root to just be another snapshot with a default backup cadence as described above.
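The tiered cadence described above (minutely for an hour, hourly for a day, daily forever) can be sketched as a generic retention predicate. This is my own illustration of the policy's logic, NOT gefs's actual code or configuration syntax.

```python
# Hedged sketch: which snapshots survive under a tiered retention policy
# like the default described above. Boundaries and lifetimes are illustrative.
from datetime import datetime, timedelta

def keep(snap, now):
    """Should a snapshot taken at `snap` still be retained at `now`?"""
    age = now - snap
    minutely = snap.second == 0                # on a minute boundary
    hourly   = minutely and snap.minute == 0   # on an hour boundary
    daily    = hourly and snap.hour == 0       # on a day boundary
    if minutely and age <= timedelta(hours=1):
        return True
    if hourly and age <= timedelta(days=1):
        return True
    return daily                               # daily snapshots live forever
```

Adding a cadence is just another boundary/lifetime pair, which matches the "as many cadences as you wish" behavior.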
The gefs(4) manpage [0] has more info for those interested. It's a short and sweet read. The source [1] is under 12k lines of well-written code, comments and whitespace included. The author is also extremely responsive to issues and a pleasure to talk shop with.
Anyway, given the parsimony of the OS and the small community size, I find 9front to be a really nice incubator for playing around with new ideas.
FWIW that snapshot system sounds almost the same as zrepl with zfs. The only difference being the zfs snapshots are read-only so you'd have to do a zfs send+recv to a dataset to mount it read/write. That and the defaults for the tiered snapshotting are different (mine is set to a snapshot every hour for the past day and every day for the past 2 weeks but I don't know what the default is).
zfs clone creates a writable dataset initially backed by a snapshot (and it will prevent that snapshot from being deleted until the clone is deleted first, even if the clone has been entirely ship-of-Theseus'd out of sharing any data with it)
IIRC, K unifies arrays and dictionaries with functions: you index arr[x;y] and call fun[x;y] with the same syntax. Interestingly, this also highlights the connection between currying and projection, i.e. we can project/curry either one as arr[x] and fun[x].
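Not K, but the idea can be mimicked in Python to show the indexing/application and currying/projection parallel. The `KLike` wrapper and its names are purely illustrative:

```python
from functools import partial

class KLike:
    """Illustrative wrapper: treat a 2-argument function the way K treats
    arrays and functions uniformly -- a full index applies it, and a
    partial index projects (curries) it."""
    def __init__(self, f):
        self.f = f
    def __getitem__(self, key):
        if isinstance(key, tuple):
            return self.f(*key)        # arr[x, y]: full application
        return partial(self.f, key)    # arr[x]:    projection / curried form

# An "array" is just a function from indices to values, and vice versa:
data = [[1, 2], [3, 4]]
arr = KLike(lambda i, j: data[i][j])
add = KLike(lambda x, y: x + y)

print(arr[0, 1])    # 2 -- indexing
print(add[1, 2])    # 3 -- application, same syntax
row1 = arr[1]       # projection: fixes the first index
print(row1(0))      # 3 -- the projected "row" is itself a function
```

Once indexing and application share one syntax, projection and currying collapse into the same operation, which is the observation above.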
Googol, googolplex, Graham's number, Ackermann's function, TREE, BB, Rayo's number ... For some reason big numbers seem to tickle some childlike fascination in me.
I randomly stumbled across this series that incrementally walks through the construction of truly gargantuan (computable) numbers. The fact that this touches on deep areas of math feels unexpected and fascinating.
> This is by design. ... The federal government was never intended to lord over everyone's lives.
So the behavior of the system fails to meet its design goals? Honestly, it sounds like you kind of agree with the excerpt you quote.
> The expansion of the federal government ... [is] what needs to change
What are you proposing though? Even assuming the premise here, achieving said goals requires changes to lots of little details and incentives. It's not like there's a single potentiometer controlling Gov't Size™. So what are you actually suggesting?
Certainly, the details of fundamental electoral structure engage deeply with the operation of our government, and the legal scholars in the article seem to be honestly pointing out levers (and big ones at that) we could possibly pull to create a less expansive federal government, or whatever the goal may be.
Imagine a plane crashes and analysts start attempting a root cause analysis, discussing control system specifics and whatnot. To me, your stance reads like "This is by design. Plane parts are united but independent. Control systems were never intended to lord over every part of the plane. The expansion of control systems is what needs to change."
I mean... maybe? But even if we agree on that point, any random contraction of the control system seems unlikely to make a plane that flies better. We have to actually engage with the details of what's going on here.
It actually has quite good UX affordances. More than that, however, I find the code eminently hackable, even as someone with very little Prolog experience. Reading through the plwm code really demystified the apparent gap between toy and practical Prolog for me. Heck, even the SWI-Prolog codebase itself is quite approachable!
I'm also mildly surprised at some of OG's gripes. A while back, I ran through Triska's The Power of Prolog[0], which crisply grounds Prolog's mental model and introduces standard conventions. In particular, it covers desugaring syntax into normal predicates, e.g. -/2 as pairs, [,]/2 as special syntax for ./2 cons cells, etc. Apparently, I just serendipitously stumbled into good pedagogical resources!
I'd be interested in ways that people bring logic-programming concepts and techniques into non-LP languages.
Last year I picked up a bamboo Hemmi and worked through the (70-year-old!) workbook. The trigonometric scales are cool: making a single slide setting to find all the sides of a triangle is surprisingly satisfying. It made me realize that a slide rule with the right scales can solve any three-variable relation for whichever variable is unknown. I guess this is why there was a proliferation of industry-specific slide rules back in the day.
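The trick underneath all those scales is that a slide rule turns multiplication into addition of physical lengths. A tiny sketch of the principle (my own illustration, not specific to any Hemmi model):

```python
import math

# A slide rule's C/D scales place each number x at distance log10(x) from
# the index. Sliding one scale along the other adds lengths, i.e. adds
# logs, which multiplies the numbers. Other scale pairings (squares, trig,
# log-log) encode other two-term relations the same way.
def slide_multiply(a, b):
    distance = math.log10(a) + math.log10(b)   # physically: slide, then read
    return 10 ** distance

print(round(slide_multiply(2, 3), 6))     # 6.0
print(round(slide_multiply(2.5, 4), 6))   # 10.0
```

Any relation of the form f(x) + g(y) = h(z) can be read off the same way, which is why one well-chosen pair of scales solves a whole family of three-variable problems.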
More generally, aren't simple, well-engineered analog tools so satisfying?
That's so cool. Like mathematical primitive archeology. The history of these sorts of analog computing devices that physically encode non-linear mathematical relations is fascinating.
With much tutoring, I learned to use a sextant, and doing so gives you some sense of the "sorcery" and power achievable with blue-water navigation.
Boyer and Merzbach cover some of the development of these tools in their "History of Mathematics". Highly recommended.
[0]:https://www.stat.go.jp/data/roudou/sokuhou/tsuki/pdf/gaiyou....
[1]:https://www.stat.go.jp/data/roudou/pdf/hndbk5_2.pdf
[2]:file:///var/folders/96/k0p95wxn7sg5_xjnv5n233bc0000gn/T/gaiyou-1.pdf