> The Sloot Digital Coding System is an alleged data sharing technique that its inventor claimed could store a complete digital movie file in 8 kilobytes of data [1]
8 kilobytes? Rookie numbers. I'll do it in 256 bytes, as long as you're fine with a somewhat limited selection of available digital movie files ;)
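In case the trick isn't obvious, here's a sketch in Python (the catalog contents are made up): 256 bytes isn't storing the movie, it's just an index into a list both sides already have.

```python
# "Compression" via a pre-shared catalog: the 256-byte "file" is just
# an index into a list of movies the receiver already possesses.
CATALOG = ["big_buck_bunny.mkv", "sintel.mkv"]  # the "limited selection"

def compress(movie: str) -> bytes:
    return CATALOG.index(movie).to_bytes(256, "big")

def decompress(code: bytes) -> str:
    return CATALOG[int.from_bytes(code, "big")]
```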
I can shrink any file down to just 32 bits using my unique method.
I call it the High Amplitude Shrinkage Heuristic, or H.A.S.H.
It is also reversible, but only safely to the last encoded file, due to quantum hyperspace entanglement of ionic bonds. H.A.S.H.ing a different file will disrupt the bonds, preventing recovery of the original data.
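A minimal sketch of H.A.S.H. in Python, with the "entanglement" implemented the only way it plausibly could be: the codec just remembers the last input (CRC32 stands in for the 32-bit code).

```python
import zlib

_ENTANGLED = None  # the "quantum hyperspace entanglement": the last input

def hash_compress(data: bytes) -> int:
    """Shrink any file down to 32 bits."""
    global _ENTANGLED
    _ENTANGLED = data                    # entangle with the new file
    return zlib.crc32(data)

def hash_decompress(code: int) -> bytes:
    """Reversible, but only safely to the last encoded file."""
    if _ENTANGLED is None or zlib.crc32(_ENTANGLED) != code:
        raise ValueError("entanglement disrupted; original data lost")
    return _ENTANGLED
```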
This claim itself is probably a hoax and not relevant to the article at hand; but these days with text-to-image models and browser support, you could probably do something like <img prompt="..."> and have the browser render something that matches the description, similar to the "cookbook" analogy used in the Wikipedia article.
That's an interesting concept, although it would burn a ton of bogomips, since each client has to generate the image itself instead of it being generated once on the server.
You'd also want "seed" and "engine" attributes to ensure all visitors see the same result.
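A Python sketch of why those attributes pin things down (the <img prompt> element is hypothetical, and a seeded PRNG stands in for the real model): anything that is a deterministic function of (prompt, seed, engine) gives every visitor identical pixels.

```python
import hashlib
import random

def render_placeholder(prompt: str, seed: int, engine: str,
                       size: int = 64) -> bytes:
    # Stand-in for a hypothetical on-device model. Deriving everything
    # from (prompt, seed, engine) means two visitors running the same
    # engine get byte-identical output for the same markup.
    key = hashlib.sha256(f"{engine}|{seed}|{prompt}".encode()).digest()
    rng = random.Random(key)                      # deterministic PRNG
    return bytes(rng.randrange(256) for _ in range(size * size * 3))
```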
Unless you don't actually care whether everyone sees the same result. As long as the generated image roughly matches the prompt, and it's decorative content that doesn't need to be a specific, accurate representation of anything, it's fine to show a different picture to every user.
One of the best uses of responsive design I've ever seen was a site that looked completely different at different breakpoints: different theme, font, images, and content. It was beautiful, and creative, and fun. Lots of users saw different things and had no idea the other versions were there.
You could at least push the work closer to the edge, by having genAI servers on each LAN, and in each ISP, similar to the idea of a caching web proxy before HTTPS rendered them impossible.
That pushes the work closer to the edge, but it also multiplies it by quite a lot: each image gets generated many times. Why would we want that? It seems like the opposite of caching, in a sense.
That would require gigabytes of libraries in the browser and a lot of client CPU to render an image so unimportant that it doesn't really matter whether it shows exactly what the author intended. To summarize that in three words: a useless technology.
That idea is only cool in theory.
Currently, we're definitely not there in terms of space/time tradeoffs for images, but I could imagine at least parameterized ML-based upscaling (i.e. ship a low-resolution image and possibly a textual description, have a local model upscale it to display resolution) at some point.
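A sketch of what that client side might look like, with Pillow's bicubic resize standing in for the hypothetical local model (which is what would actually consume the description):

```python
from io import BytesIO
from PIL import Image

def show_image(lowres_png: bytes, description: str,
               target: tuple[int, int]) -> Image.Image:
    # The page ships only a tiny image plus a caption; a local model
    # would invent plausible detail while upscaling, conditioned on
    # `description`. Plain bicubic resampling stands in for it here.
    img = Image.open(BytesIO(lowres_png))
    _ = description  # consumed by the real model, unused by bicubic
    return img.resize(target, Image.Resampling.BICUBIC)
```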
AFAIK it's never been proved that every combination does exist in π; that would follow from π being normal, which is widely conjectured but unproven.
By comparison, you could easily define a number that provably contains everything: 0.123456789101112131415… (Champernowne's constant) and use indexes into that number. However, the index would typically be at least as large as what you're trying to encode.
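You can see it with the occurrence you get "for free", where the integer itself appears in the concatenation; a quick Python sketch:

```python
def champernowne_index(n: int) -> int:
    # 1-indexed position where the integer n starts in the digits of
    # 0.123456789101112... (all positive integers concatenated).
    d = len(str(n))                       # digits in n
    pos = 1
    for k in range(1, d):                 # digits used by shorter integers
        pos += 9 * 10 ** (k - 1) * k
    return pos + (n - 10 ** (d - 1)) * d  # offset within the d-digit block

print(champernowne_index(123456))  # 629626: a 6-digit index for 6 digits of data
```

The pointer costs about as many digits as the payload before you've even stored the payload's length.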
Huh, I presumed that any non-repeating irrational number would include all number sequences, but I think you're right. Even a sequence of just 1s and 0s can be non-repeating: 0.101001000100001… never repeats, yet obviously never contains a 2.
I am curious what the compression ratios would be. I suspect the opposite, but the numbers are at a scale where my mind falters, so I wouldn't say that with any confidence. Just 64 bits of index can reach roughly 2×10¹⁹ digits into the number, and the "reach" grows exponentially with bits. I would expect that the smaller the file, the more common its sequence is.
Well, with my number you can at least calculate the digit at any position without needing to compute all the digits before it.
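A sketch of that random access in Python, skipping over whole blocks of equal-length integers instead of generating every digit:

```python
def champernowne_digit(k: int) -> str:
    # Digit at 1-indexed position k of 0.123456789101112...
    d, count = 1, 9                     # `count` integers with d digits
    while k > d * count:
        k -= d * count                  # skip the entire d-digit block
        d += 1
        count *= 10
    n = 10 ** (d - 1) + (k - 1) // d    # the integer containing position k
    return str(n)[(k - 1) % d]

print("".join(champernowne_digit(k) for k in range(1, 16)))  # 123456789101112
```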
Let's do it for a similar number but in binary format… a 1 MB file has 2²³ binary digits, and there are 2^(2²³) possible such files, so any index that can distinguish them all needs at least 2²³ bits. Even if we optimize the indexing to point to the "number" instead of the "digit" so that the index is smaller… magically, the index is as long as the file!
The real version of this is Nvidia's web conferencing demo, where they make a 3D model of your face and then transfer only the wireframe movements, which is super clever.
You can really feel that "compute has massively outpaced networking speed" when this kind of thing is actually practical. Maybe I'll see 10G residential in my lifetime.
Reminds me of when I was about 13 and learned about CRC codes for the first time. Infinite compression, here we come! Just calculate the 32-bit CRC of, say, 64 bits of data, transmit the CRC, then on the other end just loop over all possible 64-bit numbers until you get the same CRC. So brilliant! Why wasn't this already being used?!
Of course, the downsides became apparent once the euphoria had faded.
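For anyone who hasn't had the same epiphany yet: there are 2⁶⁴ possible payloads but only 2³² CRC values, so distinct payloads inevitably share a code and the brute-force "decompressor" can't know which one was sent. A toy Python demo finds a collision almost instantly via the birthday bound:

```python
import random
import zlib

# 2**64 payloads, 2**32 CRC values: collisions are guaranteed by
# pigeonhole, and a few hundred thousand random payloads almost
# surely already contain two with the same CRC32.
random.seed(0)
seen: dict[int, bytes] = {}
for _ in range(300_000):
    payload = random.getrandbits(64).to_bytes(8, "big")
    crc = zlib.crc32(payload)
    if crc in seen and seen[crc] != payload:
        print(f"{seen[crc].hex()} and {payload.hex()} both give {crc:#010x}")
        break
    seen[crc] = payload
```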
Time Lords, probably, saving us from the inevitable end of this technology path, where all the data in the universe is compressed into one bit, which leads to an information-theoretic black hole that destroys everything.
[1] https://en.wikipedia.org/wiki/Sloot_Digital_Coding_System