Hacker News | jeffbee's comments

What's counter-intuitive about this outcome?

Maybe that was too strongly worded, but there was an expectation for zstd to outperform, so the fact that it didn't means the result was unexpected. I generally find it helpful to understand why something performs better than expected.

Isn't zstd primarily designed to provide decent compression ratios at amazing speeds? The reason it's exciting is mainly that you can add compression to places where it didn't necessarily make sense before because it's almost free in terms of CPU and memory consumption. I don't think it has ever had a stated goal of beating compression ratio focused algorithms like brotli on compression ratio.

I actually thought zstd was supposed to be better than Brotli in most cases, but a bit of searching reveals you're right... Brotli at its highest compression levels (10/11) often exceeds zstd at its highest compression levels (20-22). Both are very slow at those levels, although perfectly suitable for "compress once, decompress many" applications, of which the PDF spec is obviously one.

Yep.

Are you sure? Admittedly I only have 1 PDF in my homedir, but no combination of flags to zstd gets it to match the size of brotli's output on that particular file. Even zstd --long --ultra -22.

On max compression (Brotli's 11 vs zstd's 22) of text, Brotli will be around 3-4% denser... and a lot slower. Decompression-wise, zstd is over 2x faster.
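If you want to check this on your own files, here's a minimal sketch; it assumes the third-party Brotli and zstandard Python packages (which nobody in this thread mentions), and it does not enable zstd's long-distance matching, so it isn't an exact stand-in for `zstd --long --ultra -22`:

    # Size comparison of brotli quality 11 vs zstd level 22 on one file.
    # The package names ("Brotli", "zstandard") are assumptions, not from the thread.
    import sys
    import brotli
    import zstandard

    data = open(sys.argv[1], "rb").read()
    br = brotli.compress(data, quality=11)                   # brotli's max quality
    zs = zstandard.ZstdCompressor(level=22).compress(data)   # zstd's max level

    print(f"original: {len(data):>12}")
    print(f"brotli  : {len(br):>12}  ({len(br) / len(data):.1%})")
    print(f"zstd    : {len(zs):>12}  ({len(zs) / len(data):.1%})")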

The PDFs you have are already compressed with deflate (zip).
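A crude way to see that, using nothing beyond the Python standard library: most PDFs label their deflate-compressed streams with the /FlateDecode filter, so simply counting that token in the raw bytes gives a rough, non-parsing check:

    # Rough heuristic: count /FlateDecode filter markers in the raw PDF bytes.
    # Not a real PDF parser; object streams and exotic filters are ignored.
    import sys

    raw = open(sys.argv[1], "rb").read()
    print(raw.count(b"/FlateDecode"), "streams declare FlateDecode (deflate)")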


That mentions zstd in a weird incomplete sentence, but never compares it.

They don’t seem to provide a detailed comparison showing how each compression scheme fared at every task, but they do list (some of) their criteria and say they found Brotli the best of the bunch. I can’t tell if that’s a sensible conclusion or not, though. Maybe Brotli did better on code size or memory use?

Hey, they did all the work and more, trust them!!!

> Experts in the PDF Association’s PDF TWG undertook theoretical and experimental analysis of these schemes, reviewing decompression speed, compression speed, compression ratio achieved, memory usage, code size, standardisation, IP, interoperability, prototyping, sample file creation, and other due diligence tasks.


I love it when I perform all the due diligence tasks. You just can't counter that. Yes, but they did all the due diligence tasks. They considered all the factors. Every one. Think you have one they didn't consider? Nope.

But they didn't write "all". They wrote "other", which absolutely does not imply full coverage.

Maybe read things a bit more carefully before going all out on the snide comments?


In fact, they wrote "reviewing […] other due diligence tasks", which doesn't imply any coverage! This close, literal reading is an appropriate – nay, the only appropriate – way to draw conclusions about the degree of responsibility exhibited by the custodians of a living standard. By corollary, any criticism of this form could be rebuffed by appeal to a sufficiently-carefully-written press release.

"Intel CPUs were downclocking their frequency when using AVX-512 instructions due to excessive energy usage (and thus heat generation) which led to performance worse than when not using AVX-512 acceleration."

This is an overstatement so gross that it can be considered false. On Skylake-X, for mixed workloads that only had a few AVX-512 instructions, a net performance loss could have happened. On Ice Lake and later this statement was not true in any way. For code like ChaCha20 it was not true even on Skylake-X.


This was written in the past tense, and it was true in the last decade. Only recently did Intel come up with proper AVX-512.

"Recently" is 6 years ago, so not so recent.

The real Intel mistake was segregating the desktop/laptop CPUs from the server CPUs by ISA, removing AVX-512 from the former soon after providing decent AVX-512 implementations. This doomed AVX-512 until AMD provided it again in Zen 4, which has forced Intel to eventually reintroduce it in Nova Lake, expected by the end of this year.

Even the problems of Skylake Server and its derivatives were not really caused by their AVX-512 implementation, which still had much better energy efficiency than their AVX2 implementation, but by their obsolete mechanism for varying the supply voltage and clock frequency of the CPU. That mechanism was far too slow, so it had to use an inappropriate algorithm in order to guarantee that the CPUs were not damaged.

The bad algorithm for frequency/voltage control was what caused the performance problems of AVX-512: just a few AVX-512 instructions could preemptively lower the clock frequency for times on the order of a second, because the CPU had to assume that if more AVX-512 instructions arrived later, it would be impossible to lower the voltage and frequency fast enough to prevent overheating.
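To make that concrete, here's a toy model of the behaviour being described. It is emphatically not Intel's actual governor; the frequencies and the roughly-one-second hold time are made-up placeholder numbers:

    # Toy model of "preemptive downclock with a slow recovery" -- an illustration
    # of the comment above, NOT Intel's real mechanism. All numbers are invented.
    BASE_GHZ = 3.5
    AVX512_LICENSE_GHZ = 2.8
    HOLD_SECONDS = 1.0   # how long the lower license is held after the last heavy op

    class ToyGovernor:
        def __init__(self):
            self.last_heavy_op_at = float("-inf")

        def on_instruction(self, now, is_heavy_avx512):
            if is_heavy_avx512:
                # Downclock immediately: the chip cannot react fast enough later,
                # so it assumes the worst as soon as it sees one heavy instruction.
                self.last_heavy_op_at = now

        def frequency(self, now):
            held = (now - self.last_heavy_op_at) < HOLD_SECONDS
            return AVX512_LICENSE_GHZ if held else BASE_GHZ

    gov = ToyGovernor()
    gov.on_instruction(now=0.0, is_heavy_avx512=True)   # a single AVX-512 op...
    print(gov.frequency(now=0.5))   # ...keeps the core at 2.8 GHz
    print(gov.frequency(now=1.5))   # recovers only after the hold time expires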

The contemporaneous Zen 1 had a much more agile mechanism for varying supply voltage and clock frequency, which was matched by Intel only recently, many years later.


It wasn't. My comment covers the entire history of the ISA extension on Intel Xeon CPUs.

Yeah I would have loved benchmarks across generations and vendors.

I netted huge performance wins out of AVX512 on my Skylake-X chips all the time. I'm excited about less downclocking and smarter throttling algorithms, but AVX512 was great even without them -- mostly just hampered by poor hardware availability, poor adoption in software, and some FUD.

"We will simply access the index" has always struck me as wild hand-waving that would instantly crumble at first contact with technical reality. "At marginal cost" is doing a huge amount of work in this article.

It depends on which "this" you meant, but in general the ways of netbooting an OS are many and varied. You'd have to declare what kind of root device you ultimately want, such as root on iSCSI.

Personally, I feel that "smartOS does not support booting from a local block device like a normal, sane operating system" might be a drawback and is a peculiar thing to brag about.


There was a brilliant incident back in the Joyent days where they accidentally rebooted an entire datacenter and ended up DoSing their DHCP server ;)

SmartOS can, of course, boot from a local zfs pool, but it treats it logically as just another source for the bootable image. See the piadm(8) command.

What I'm looking to achieve is three identical Proxmox host boxes. As soon as you finish the install, you have three snowflakes, no matter how hard you try.

In the case of smartOS (which I've never used) it would seem like that is achieved in the design because the USB isn't changing. Reboot and you are back to a clean slate.

Isn't this how game arcades boot machines? They all netboot from a single image for the game you have selected? That is what it seems SmartOS is doing, but maybe I'm missing the point.


It doesn't look like it's achievable with vanilla Proxmox.

I think if you really, really want declarative host machines, you'd need to ditch Proxmox in favor of Incus on top of NixOS.

There is also https://github.com/SaumonNet/proxmox-nixos, but it's pretty new and therefore full of rough edges.


The number of points actually being rendered doesn't seem to warrant the webgpu implementation. It's similar to the number of points that cubism.js could throw on the screen 15 years ago.

I feel this sub thread can keep going if we introduce the complication of the whole-house vacuum system.

We can also spin off a subthread about pets, and another one about using vacuum cleaners on surfaces other than floor/carpet.

Training is pretty much irrelevant in the scheme of global energy use. The global airline industry uses the energy needed to train a frontier model, every three minutes, and unlike AI training the energy for air travel is 100% straight-into-your-lungs fossil carbon.
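For anyone who wants to sanity-check the "every three minutes" claim, here's a rough back-of-envelope; the fuel and training-run figures are ballpark public estimates I'm assuming, not numbers from the comment:

    # Back-of-envelope only; every constant here is a rough assumption.
    jet_fuel_litres_per_year = 360e9      # ~global jet fuel burn, pre-pandemic
    mj_per_litre = 35                     # approximate energy density of Jet A
    joules_per_year = jet_fuel_litres_per_year * mj_per_litre * 1e6

    gwh_per_year = joules_per_year / 3.6e12         # 1 GWh = 3.6e12 J
    minutes_per_year = 365 * 24 * 60
    print(gwh_per_year / minutes_per_year * 3)      # ~20 GWh per three minutes

That lands around 20 GWh per three minutes, which is in the same ballpark as published estimates of tens of GWh for a single frontier training run, so the claim is at least plausible.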

Not to mention doesn't aviation fuel still make heavy (heh) use of lead?

I think that's only true for propeller planes, which use leaded gasoline. Jet fuel is just kerosene.

Piston engines, rather than all propeller planes. Basically, imagine a really old car engine: simplicity is crucial for reliability and ease of maintenance, so all those "fancy" features your car had by the 1990s aren't available; however, instead of turning wheels, the piston engine turns a propeller. Like really old car engines, these piston engines tend to be designed for leaded fuel. Because this is relatively cheap to do, all the cheapest planes aimed at GA (General Aviation, i.e. you just like flying a plane, not for pay) are like this.

Propellers are a very common means to make aeroplanes work, though. Instead of a piston engine, which is cheap to make but relatively unreliable and expensive to run, you can use a turbine engine, which runs on Jet A, aka kerosene; the rotary motion of the turbine drives the propeller, making a turboprop. In the US you won't see that many turboprops in passenger service, but in the rest of the world they're a very common choice for medium-distance routes, while the turbofan planes common everywhere in the US would in most places be focused on longer distances between bigger airfields, because they deliver peak efficiency when they spend longer up in the sky.

Jet A, whether for a turbofan or a turboprop, does not have lead in it, so to a first approximation no actual $$$ commercial flights spew lead. They're bad for the climate, but they don't spew lead into the atmosphere.

