I think the punctuation makes it clear -- imagine "How I invented Facebook. In 2001." The full stop in the middle of the sentence breaks it and makes you realise he's speaking figuratively.
The article by mathematician John Kemeny, who amongst other things was an assistant to Albert Einstein at the IAS, describes four methods of applying mathematics to problems that are not innately about numbers (algebraic) or space (geometric). He divides such methods along two axes: whether they a) avoid numbers or b) introduce artificial numbers, and whether they use 1) algebra or 2) geometry.
For geometry not using numbers, he shows how graph theory can be applied to the problem of social balance as defined by psychologist Fritz Heider. This example is based on work by Dorwin Cartwright and Frank Harary.
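To make the balance criterion concrete, here is a minimal sketch (my own, not from the article) of the Cartwright-Harary result: a signed network is balanced exactly when the people can be split into two camps, with all positive ties inside a camp and all negative ties running between the camps.

    # Hypothetical illustration: check whether a signed friendship graph
    # is balanced in the Cartwright-Harary sense, i.e. whether people can
    # be split into two camps with "+" edges only inside camps and "-"
    # edges only between them.
    from collections import deque

    def is_balanced(people, edges):
        # edges: frozenset({a, b}) -> +1 (friends) or -1 (enemies)
        adj = {p: [] for p in people}
        for pair, sign in edges.items():
            a, b = tuple(pair)
            adj[a].append((b, sign))
            adj[b].append((a, sign))
        camp = {}
        for start in people:
            if start in camp:
                continue
            camp[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v, sign in adj[u]:
                    want = camp[u] if sign > 0 else 1 - camp[u]
                    if v not in camp:
                        camp[v] = want
                        queue.append(v)
                    elif camp[v] != want:
                        return False  # cycle with an odd number of "-" edges
        return True

    # Two friends sharing a common enemy is a balanced configuration:
    print(is_balanced("ABC", {frozenset("AB"): +1,
                              frozenset("AC"): -1,
                              frozenset("BC"): -1}))  # True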
For algebra not using numbers, he chooses the theory of group actions, and applies it to a way of preventing incestuous relationships that was used in some cultures, which works by assigning each child a group that they are exclusively allowed to marry in. This example is based on work by André Weil and Robert R. Bush.
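For a feel of how the group machinery enters, here is a toy version (my own; the four types and both permutations are invented, not taken from the article or from any real culture): every marriage has a type, a child's future marriage type is a fixed permutation of the parents' type (one permutation for sons, another for daughters), and marriage is only permitted between people of matching type.

    # Toy sketch with made-up permutations. A son's marriage type is
    # SON[t] where t is his parents' type; a daughter's is DAUGHTER[t].
    SON = {0: 1, 1: 0, 2: 3, 3: 2}
    DAUGHTER = {0: 2, 1: 3, 2: 0, 3: 1}

    def may_marry(type_a, type_b):
        return type_a == type_b

    # A son and a daughter of the same marriage always end up with
    # different types, so sibling marriages are blocked in every case:
    assert not any(may_marry(SON[t], DAUGHTER[t]) for t in range(4))

Roughly speaking, questions like "which cousins may marry?" then reduce to asking which compositions of the two permutations have fixed points, which is where the group theory earns its keep.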
For geometry using numbers, he uses an adjacency matrix to show how you can find out how many ways there are to send a message from one person to another in a network.
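Concretely (with made-up numbers, not Kemeny's): if A is the adjacency matrix of the network, the (i, j) entry of A^k counts the walks of length k from i to j, i.e. the number of ways to relay a message in exactly k steps.

    # Entry (i, j) of the k-th matrix power counts length-k walks.
    import numpy as np

    # Three people who can all message each other directly:
    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

    # Ways to relay a message from person 0 to person 2 in exactly 3 steps:
    print(np.linalg.matrix_power(A, 3)[0, 2])  # 3, e.g. 0 -> 1 -> 0 -> 2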
For algebra using numbers, he defines axioms for a distance function on rankings with ties, which can be shown to be unique (probably up to some isomorphism), and which can be used to derive a consensus ranking from a set of rankings. This appears to be the centerpiece of the article, as it is an example he developed himself together with J. L. Snell and which had yet to be published.
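A rough sketch of the kind of distance those axioms pin down (my reading; the article's exact definitions may differ): encode each ranking as pairwise preferences (+1, -1, or 0 for a tie), take the summed absolute differences as the distance, and pick as consensus a ranking minimizing the total distance to all the given rankings.

    # Sketch of the Kemeny-Snell idea; details may differ from the article.
    # A ranking with ties is a list of tiers, e.g. [["a"], ["b", "c"]]
    # means a is ranked above b and c, which are tied with each other.
    from itertools import permutations

    def pref(ranking, x, y):
        # +1 if x ranked above y, -1 if below, 0 if tied
        tier = {item: i for i, group in enumerate(ranking) for item in group}
        return (tier[x] < tier[y]) - (tier[x] > tier[y])

    def distance(r1, r2, items):
        return sum(abs(pref(r1, x, y) - pref(r2, x, y))
                   for x in items for y in items if x < y)

    items = ["a", "b", "c"]
    votes = [[["a"], ["b"], ["c"]],
             [["a"], ["c"], ["b"]],
             [["b", "c"], ["a"]]]

    # Brute-force a consensus over strict rankings (a full search would
    # also consider rankings with ties):
    best = min(permutations(items),
               key=lambda p: sum(distance([[x] for x in p], v, items)
                                 for v in votes))
    print(best)  # ('a', 'b', 'c'), tied here with a > c > b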
Because HN readers can't know if the summary is an accurate representation of the original article, nor what detail or nuance has been winnowed out in the summarizing process. But if there is a summary that seems "good enough" to form an opinion, then the discussion on HN will be based on the summary, not on the complete article. We see the same thing with editorialized titles.
A better way to get a taste of the article is to look over the HN discussion. The top comment(s) should give people a hint as to what it's about and whether it's worth the time to read the whole thing. Otherwise just reading the HN discussion should be a good way to get the gist of it. But that only works if enough of the commenters have actually read the whole article rather than a summary.
Aren’t many algebraic results dependent on counting/divisibility/primality, etc.?
Numbers are such a fundamental structure. I disagree with the premise that you can do mathematics without numbers. You can do some basic formal derivations, but you can’t go very far. You can’t even do purely geometric arguments without the concept of addition.
Addition does not require numbers. It turns out, no math requires numbers. Even the math we normally use numbers for.
For instance, here is commutativity defined on addition over non-numbers a and b:
a + b = b + a
What if you add a twice?
a + a + b
To do that without numbers, you just leave it there. Given commutativity and associativity, you probably want to normalize (or standardize) expressions so that equal expressions end up looking identical. For instance, moving references to the same element together, ordering different elements in a standard way (a before b):
i.e. a + b + a => a + a + b
Here I use => to mean "equal, and preferred/simplified/normalized".
Now we can easily see that a + b + a and b + a + a are equal, since both normalize to a + a + b.
You can go on, and prove anything about non-numbers without numbers, even if you normally would use numbers to simplify the relations and proofs.
Numbers are just a shortcut for dealing with repetitions, by taking into account the commonality of say a + a + a, and b + b + b. But if you do non-number math with those expressions, they still work. Less efficiently than if you can unify triples with a number 3, i.e. 3a and 3b, but by definition those expressions are respectively equal (a + a + a = 3a, etc.) and so still work. The answer will be the same, just more verbose.
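A toy way to see that normalization at work (my own illustration): treat a sum as a sorted list of symbols, so that equality is just comparison of normal forms and no counting ever happens.

    # Normalize number-free sums by sorting the summands; two sums are
    # equal iff their normal forms coincide.
    def normalize(expr):
        # "a + b + a" -> "a + a + b"
        return " + ".join(sorted(expr.split(" + ")))

    print(normalize("a + b + a") == normalize("b + a + a"))  # True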
Lena Söderberg expressed her wish for her image to be "retired from tech" in 2019 (see the end of this clip, https://vimeo.com/372265771), when the above alternative image was published.
According to that blog post (https://security.googleblog.com/2024/09/eliminating-memory-s...), the vulnerability density for 5 year old code in Android is 7.4x lower than for new code. If Rust has a 5000 times lower vulnerability density, and if you imagine that 7.4x reduction repeating itself every 5 years, you would have to "wait" (work on the code) for... about 21 years to get down to the same vulnerability density as new Rust code has. 21 years ago was 2004. Android (2008) didn't even exist yet.
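The arithmetic, for anyone checking (my own back-of-the-envelope):

    # How many 7.4x reductions does it take to reach 5000x?
    import math
    periods = math.log(5000) / math.log(7.4)  # ~4.26 five-year periods
    print(periods * 5)                        # ~21.3 years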
If you want to keep XSLT in browsers alive, you should develop an XSLT processor in Rust and either integrate it into Blink, WebKit, or Gecko directly, or provide an API compatible with what they use now (libxslt for Blink/WebKit, apparently; Firefox seems to have its own processor).
But that would apply to any app that deals with files like this one does.
This one is open source and we can run some code analysis on it, compile locally, etc. I am not well versed in security checks but I guess you get the idea.
What really grinds my gears is that I have devices that only work with AFT and not OpenMTP, like my Hisense A9. That's a problem because AFT will crash if you try to transfer hundreds of files. I wish I could get rid of AFT, but I can't.
I also have a USB-C flash drive for copying.
Amazon has a great MTP app but it only works with Kindles.
I used it a year ago with macOS 14 or 15 and it worked. I had problems copying too many files at once (I don't remember the exact problem), which is why I only copy about 100 at a time.
Are you speaking of chroma subsampling, or is there a property of the discrete cosine transform that makes it more effective on luma rather than chroma?
Probably chroma subsampling - storing color at lower resolution than luminance to take advantage of the aforementioned sensitivity difference. Since chroma is stored at 1/4 resolution, that alone can almost halve the file size.
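The arithmetic behind "almost halve" (my own illustration, counting raw samples before any DCT or entropy coding):

    # 4:2:0 keeps full-resolution luma but quarter-resolution chroma,
    # so samples per pixel drop from 3 to 1.5.
    w, h = 1920, 1080
    full = 3 * w * h                              # 4:4:4: Y, Cb, Cr all full size
    subsampled = w * h + 2 * (w // 2) * (h // 2)  # 4:2:0: Y + quarter-size Cb, Cr
    print(subsampled / full)  # 0.5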
Saying it’s the insight that led to JPEG seems wrong though, as DCT + quantization was (don’t quote me on this) the main technical breakthrough?
TFA itself has an incorrect DOCTYPE. It’s missing the whitespace between "DOCTYPE" and "html". Also, all spaces between HTML attributes were removed, although the HTML spec says: "If an attribute using the double-quoted attribute syntax is to be followed by another attribute, then there must be ASCII whitespace separating the two." (https://html.spec.whatwg.org/multipage/syntax.html#attribute...) I guess the browser accepts it anyway. This was probably done automatically by an HTML minifier. Actually, the minifier could have generated fewer bytes by using the unquoted attribute value syntax (`lang=en-us id=top` rather than `lang="en-us"id="top"`).
Edit: In the `minify-html` Rust crate you can specify "enable_possibly_noncompliant", which leads to such things. They are exploiting the fact that HTML parsers have to accept this per the (parsing) spec even though it's not valid HTML according to the (authoring) spec.
For anyone else furiously going back and forth between TFA and this comment: they mean the actual website of TFA has these errors, not the content of TFA.
Maybe a dumb question, but I have always wondered: why does the (authoring?) spec not consider e.g. "doctypehtml" valid HTML if compliant parsers have to support it anyway? Why allow this situation where non-compliant HTML is guaranteed to work anyway on a compliant parser?
It's considered a parse error [0]: it basically says that a parser may reject the document entirely if it occurs, but if it accepts the document, then it must act as if a space is present. In practice, browsers want to ignore all parse errors and accept as many documents as possible.
> a parser may reject the document entirely if it occurs
Ah, that's what I was missing. Thanks! The relevant part of the spec:
> user agents, while parsing an HTML document, may abort the parser at the first parse error that they encounter for which they do not wish to apply the rules described in this specification.