On many Linux desktop environments it's the default - or can be configured: hold the Windows key ('meta') and left-mouse-drag a window around from _anywhere inside the window_! No need to get the mouse into the title bar!
Additionally, meta+middle-mouse-drag lets you resize a window from anywhere inside it (it picks the closest corner when the drag starts). Being able to resize a window without needing to aim the mouse at a usually-very-thin window border is extremely liberating in my opinion! To the point where I really miss it on sub-windows where the app handles the resizing itself!
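The "closest corner" rule is simple enough to sketch. This is a hypothetical illustration of the behaviour described above (the function name and coordinate convention are mine, not from any particular window manager):

```python
def closest_corner(click_x, click_y, win_w, win_h):
    """Given where the drag starts inside a win_w x win_h window
    (origin at the top-left), pick which corner the resize grabs."""
    horiz = "left" if click_x < win_w / 2 else "right"
    vert = "top" if click_y < win_h / 2 else "bottom"
    return vert + "-" + horiz

# Dragging from near the lower-right of an 800x600 window:
corner = closest_corner(700, 500, 800, 600)  # -> "bottom-right"
```
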
There's a Windows app I used to use that supports the same kind of thing (with a different key, I think). No idea if there's one for Mac, I'm afraid - or whether macOS can be configured to work that way - but there probably is one, so it's worth investigating if this sounds useful to you, I'd say!
I rarely use Windows but any box I do need to use for a while, I put Taekwindow on it. I only want the Linux feature of middle-clicking the titlebar to send to the back, myself, I don't want or need moving or resizing, but they're there.
Yes, Alt+drag (it's always Alt, not Mod4/Super, by default on the systems I use) has been and still is a killer feature for me. On desktops that don't support it, like Windows, I feel like my hands are tied.
When there's physical access to the device, it's nearly impossible to make any system unhackable, I think, at least with current tech. In this case it's a deliberately injected (twice!) hardware fault, and reproducing the privilege escalation requires intervention at the hardware level.
Yeah, Apple does have a "secure enclave" on some devices, and maybe in many cases it would wipe itself before you got in, but maybe that just means a more careful hand is needed? (Again, physical access plus extreme care/caution when debugging/investigating the chip should work eventually, I think!) I'm not a hardware hacker, I've just read about it quite a bit!
Having played a bit with the discrete FFT (with FFTW on 2D images, in a Shake plugin we made at work ages ago), the DCT coefficients make so much more sense to me! I really wonder whether the frequency decomposition could happen at multiple scale levels though? That sounds slightly like wavelets - maybe that's how JPEG 2000 works?.. Yeah, I looked it up: it uses the DWT, so it kinda is! Shame it hasn't taken off so far!? Or maybe there's an even better way?
The discrete wavelet transform (DWT) compresses an image by repeatedly downscaling it, and storing the information which was lost during downscaling. Here's an image which has been downscaled twice, with its difference images (residuals): https://commons.wikimedia.org/wiki/File:Jpeg2000_2-level_wav.... To decompress that image, you essentially just 2x-upscale it, and then use the residuals to restore its fine details.
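The "downscale and keep the residuals" idea can be shown in a few lines. This is a toy 1D sketch of that decomposition (a Haar-style average/difference split of my own devising, not JPEG 2000's actual 5/3 or 9/7 wavelets):

```python
def decompose(signal):
    """One level: split into a half-size 'downscaled' signal plus residuals."""
    coarse = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return coarse, detail

def reconstruct(coarse, detail):
    """2x-upscale the coarse signal and use the residuals to restore detail."""
    signal = []
    for c, d in zip(coarse, detail):
        signal.extend([c + d, c - d])
    return signal

signal = [10, 12, 14, 20, 20, 18, 4, 2]
coarse, detail = decompose(signal)
assert reconstruct(coarse, detail) == signal  # lossless round-trip
```

Applying `decompose` to `coarse` again gives the multi-level pyramid; the lossy part comes from quantising or discarding the small `detail` values.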
Wavelet compression is better than the block-based DCT for preserving sharp edges and gradients, but worse for preserving fine texture (noise). The DCT can emulate noise by storing just a couple of high-frequency coefficients for a 64-pixel block, but the DWT would need to store dozens of coefficients to achieve noise synthesis of similar quality.
The end result is that JPEG and JPEG 2000 achieve roughly the same lossy compression ratio before image artefacts show up. JPEG blurs edges, JPEG 2000 blurs texture. At very low bitrates, JPEG becomes blocky, and JPEG 2000 looks like a low-resolution image which has been upscaled (because it's hardly storing any residuals at all!)
FFmpeg has a `jpeg2000` codec; if you're interested in image compression, running a manual comparison between JPEG and JPEG 2000 is a worthwhile way to spend an hour or two.
I found a jpeg2000 reference PDF somewhere. It may as well have been written in Mandarin.
I got as far as extracting the width and height. It's much more advanced than JPEG. Forget about writing a decoder.
Both formats are DCT-based (except for lossless JPEG XL). JPEG 2000's use of the DWT was unusual; in general, still-image lossy compression research has spent the last 35 years iteratively improving on JPEG's design. This is partly for compatibility reasons, but it's also because the original design was very good.
Since JPEG, improvements have included better lossless compression (entropy coding) of the DCT coefficients; deblocking filters, which blur the image across block boundaries; predicting the contents of DCT blocks from their neighbours, especially prediction of sharp edges; variable DCT block sizes, rather than a fixed 8x8 grid; the ability to compress some DCT blocks more aggressively than others within the same image; encoding colour channels together, rather than splitting them into three completely separate images; and the option to synthesise fake noise in the decoder, since real noise can't be compressed.
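One of those improvements, the deblocking filter, is easy to sketch. This is my own toy 1D version of the idea (no real codec's actual filter, which adapts its strength to the coefficients): nudge the pixels on either side of each 8-pixel block boundary toward each other.

```python
def deblock(row, block_size=8, strength=0.5):
    """Blur a row of pixels across block boundaries to hide block edges."""
    out = list(row)
    for b in range(block_size, len(row), block_size):
        left, right = row[b - 1], row[b]
        avg = (left + right) / 2
        out[b - 1] = left + strength * (avg - left)
        out[b] = right + strength * (avg - right)
    return out

row = [100] * 8 + [140] * 8   # a hard edge exactly on an 8x8 block boundary
smoothed = deblock(row)       # boundary pixels move toward each other
```
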
You might be interested in this paper: https://arxiv.org/pdf/2506.05987. It's a very approachable summary of JPEG XL, which is roughly the state of the art in still-image compression.
Thanks. The paper is fascinating. I only skimmed around so far and it is full of interesting details. Even beyond compression. They really tried hard to make the USB of image formats, by supporting as many features and use cases as possible. Even things like multiple layers and non-destructive cropping. I like the section where they talk about previous image formats, why many of them failed and how they tried to learn from past mistakes.
Regarding algorithms: searching for "learned image compression" turns up a lot of research papers which use neural networks rather than analytic transforms like the DCT. The compression rates already seem to outperform conventional compression. I guess the bottleneck is slow decoding speed rather than compression rate - at least that's the issue with neural video compression.
As I understand it, very small neural networks have already been incorporated into both VVC and AV2 for intra prediction. You're correct that this strategy is limited by decoding performance, especially when predicting large blocks.
In general, I'm pessimistic about prediction-and-residuals strategies for lossy compression. They tend to amplify noise; they create data dependencies, which interfere with parallel decoding; they require non-local optimisation in the encoder; really good prediction involves expensive analysis of a large number of decoded pixels; and it all feels theoretically unsound (because predictors usually produce just one value, rather than a probability distribution).
I'm more optimistic about lossy image codecs based on explicitly-coded summary statistics, with very little prediction. That approach worked well for lossy JPEG XL.
Everything after JPEG is still fundamentally the same, but individual parts of the algorithm are supercharged.
JPEG has 8x8 blocks, modern codecs have variable-sized blocks from 4x4 to 128x128.
JPEG has RLE+Huffman, modern codecs have context-adaptive variations of arithmetic coding.
JPEG has a single quality scale for the whole image, modern codecs allow quality to be tweaked in different areas of the image.
JPEG applies block coefficients on top of a single flat colour per block (the DC coefficient); modern codecs use a "prediction", made by smearing the previous couple of blocks, as the starting point.
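That last point can be sketched in miniature. This is my own simplification of flat (DC-style) intra prediction, not the exact H.26x/AV1 mode set: predict a block's starting value from the already-reconstructed pixels above and to the left, then code only the residual on top.

```python
def dc_predict(above_row, left_col):
    """Flat prediction: average of the neighbouring reconstructed pixels."""
    neighbours = list(above_row) + list(left_col)
    return sum(neighbours) / len(neighbours)

above = [50, 52, 54, 56]  # reconstructed row just above the block
left = [48, 50, 52, 54]   # reconstructed column just left of the block
pred = dc_predict(above, left)
actual_dc = 55
residual = actual_dc - pred  # only this (small) value needs to be coded
```

Because neighbouring blocks tend to be similar, the residual is usually much smaller than the raw DC value, which is what makes it cheaper to entropy-code.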
I just put on a Venetian Snares album (Rossz Csillag Alatt Született) and thought I'd come back here to say the same as you have!
I'd also add: It's like Aaron's whole career is slightly resting atop Amen Break, at least as far back as (first I heard and still my favorite) The Chocolate Wheelchair Album! Amazing detailed work with Amen and similar samples that's for sure!
This worked in Opera on my (Android) phone, but on both Firefox and Chrome on Linux it seemed to not animate the rotation, and the rotation slider didn't seem to do anything either, so it was not showing off the real beauty of the MRS Fractal. I still think it's awesome but I wanted to check it out on a bigger screen and haven't been able to yet.
Very cool! Reminds me a bit of some formulas that are available in UltraFractal (and Visions of Chaos by the looks) called 'ducky' or 'ducks' fractals, here's a blogpost about them from Softology:
I really like how Ducks fractals produce detail across the whole image (a pet-favourite feature of mine in fractal renderings), but I find that the Ducks' huge abundance of symmetries makes them feel a bit less natural, with less variety than, say, the cloudy 'inside' of a Nova-family fractal when relaxation is turned up.
Thanks heaps! I very much love the 'old-school' jungle/uk-hardcore sound and didn't know about these more recent Suburban Base releases, and your other recs were great too! The Amen break went soooo far!
My particular favourite is in demoscene tracker music where Amen also went all over the place (and sampling more generally too!)
I'm not sure if the below is actual Amen-break (need to ask BrothomStates probably!) but it's certainly in the spirit of it and this is definitely near or at the top of my favourite demos ever, I just find it so damned cool! "The Day the Earth was Born" by TPOLM:
The ratio of computer minutes per programmer-minute has indeed gone to an amazing number nowadays! I work in VFX (at RSP) and this fact is vividly illustrated for me all the time by the millions of thread-hours we go through on the renderfarm each week!
Despite all the astounding developments in AI/ML though, I still think there's a critical need for the application of human/biological imagination and creativity. Sure, the amount of leverage between thoughts and CPU cycles can be utterly giant now, but that doesn't seem to diminish the need (where performance or correctness/fewer bugs matter) for a full understanding of what the computer actually gets up to in the end.
For what it's worth, we do have an ML department at RSP and they are doing great! But I'm not sure we'd get very far if we tried to vibe-code the underlying pipeline, as it really requires full understanding of many interlocking pieces.
Agree, but can't we just include both the mean _and_ the median? And maybe min/max while we're at it? Seems like that could give a much clearer picture (without even needing a graph!?)
Min & max are also close to meaningless for most distributions, so you should probably look at P1 and P99 or something instead - and all of a sudden you're talking about 5 numbers when all you wanted was a single quick one.
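For what it's worth, those 5 numbers are a few lines of stdlib Python (a quick sketch; the synthetic `data` stands in for whatever samples you're summarising):

```python
import statistics

data = list(range(1, 1001))              # stand-in for e.g. latency samples
q = statistics.quantiles(data, n=100)    # 99 cut points: q[0]=P1, q[98]=P99

summary = {
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "p1": q[0],
    "p99": q[98],
}
```
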
I totally loved the plasma effect from the first time I saw it, and implementing it myself in Pascal/DOS was one of the first times I really started to understand a 'shading'-like context: you come up with a value for every pixel; the pixels can be given 2D 'coordinates' (even though they are actually a 1D chunk of VRAM -> modulo to the rescue!); and you can transform the 'space' so that you feed in the coordinates (including time) and evaluate different-enough sine functions (then sum them, in this case) to create a beautiful soft-waves-evolving-over-time result! It was definitely an eye-opener about how to make it have nice colours as well! Great to see things like this being documented in this way!
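The recipe described above fits in a few lines. This is a minimal sketch (constants and wave choices are mine, any classic plasma uses its own): a flat framebuffer indexed as 2D via modulo/division, with a sum of sines of position and time mapped to a palette index.

```python
import math

W, H = 16, 8

def plasma_frame(t):
    frame = [0] * (W * H)        # flat "VRAM"-style 1D buffer
    for i in range(W * H):
        x, y = i % W, i // W     # 1D index -> 2D coordinates via modulo
        v = (math.sin(x * 0.4 + t)
             + math.sin(y * 0.3 - t * 0.7)
             + math.sin((x + y) * 0.25 + t * 1.3))
        frame[i] = int((v + 3) / 6 * 255)  # map [-3, 3] to a 0..255 palette
    return frame

frame = plasma_frame(t=0.0)      # advance t each frame to animate
```

Stepping `t` each frame gives the soft evolving waves; the palette mapping at the end is where the nice-colours part comes in.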