In practice you see noticeable degradation of streaming-read performance for large files written after the pool is about 85% full. Files that used to read at 500+ MB/s can drop to 50 MB/s. It's fragmentation, and it's fairly scale invariant, in my experience.
I scrub once a quarter because scrubs take 11 days to complete. I have an 8x 18TB raidz2 pool, and I keep a couple of spare drives on hand so I can start a resilver as soon as an issue crops up.
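A minimal sketch of that routine, assuming a pool named `tank` (the pool name and device paths are placeholders, not from the original post):

```shell
# Quarterly scrub, e.g. from cron: 0 3 1 */3 * root zpool scrub tank
zpool scrub tank

# Check progress; a scrub of a large raidz2 pool can run for days
zpool status tank

# When a drive shows errors, swap in a cold spare straight away
zpool replace tank /dev/disk/by-id/ata-FAILING_DRIVE /dev/disk/by-id/ata-SPARE_DRIVE
```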
In the past, I've gone for a few years between scrubs. One system had a marginal I/O setup and was unreliable for high streaming load. When copying the pool off of it, I had to throttle the I/O to keep it reliable. No data loss though.
Scrubs are intensive. IMO they provoke failures in drives sooner than not scrubbing would. But they're the kind of failures you want to bring forward if you can afford the replacements (and often the drives are under warranty anyway).
If you don't scrub, you eventually start seeing one of two things: delays in reads and writes, because the drive's error recovery is reading and rereading to recover data; or, if you have that disk behaviour disabled via firmware flags (and you should, unless you're resilvering and on your last disk of redundancy), you see zfs kicking a drive out of the pool during normal operations.
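The firmware setting in question is typically SCT Error Recovery Control (ERC, called TLER on some drives). A hedged sketch with smartmontools, assuming a drive at `/dev/sda` (your device path will differ, and not every drive supports SCT ERC):

```shell
# Show the drive's current SCT ERC read/write recovery timeouts
smartctl -l scterc /dev/sda

# Cap error recovery at 7 seconds (the value is in tenths of a second),
# so the drive reports the error quickly and lets ZFS reconstruct the
# block from redundancy instead of stalling the whole pool
smartctl -l scterc,70,70 /dev/sda
```

On many drives this setting does not survive a power cycle, so it is usually reapplied from a boot script.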
If I start seeing unrecoverable errors, or a drive dropping out of the pool, I'll disable scrubs if I don't have a spare drive on hand to start mirroring straight away. But it's better to have the spares. At least two, because often a second drive shows weakness during resilver.
There is a specific failure mode that scrubs defend against: silent disk corruption that only shows up when you read a file, but for files you almost never read. This is a pretty rare occurrence - it's never happened to me in about 50 drives worth of pools over 15 years or so. The way I think about this is, how is it actionable? If it's not a failing disk, you need to check your backups. And thus your scrub interval should be tied to your backup retention.
That's a fine fit of pique - and I once had an awkward file on one of my zfs pools, about three pools ago - but how does it leave you better off, if you want what zfs offers?
If by block by block you mean you stop using an IDE and spend most of your time looking at diffs, sure. Because in a well structured project, that's all you need to do now: maintain a quality bar and ensure Claude doesn't drop the ball.
I'm like you. I get on famously with Claude Code with Opus 4.5 2025.11 update.
Give it a first pass from a spec. Since you know how it should be shaped you can give an initial steer, but focus on features first, and build with testability.
Then refactor, with examples in prompts, until it lines up. You already have the tests, the AI can ensure it doesn't break anything.
> focus on features first, and build with testability.
This is just telling me to do this:
> To use it the way you are using it we would instead have to allow it to replace the part that happens (or can happen) away from the keyboard: the mental processing of the code.
I feel like some of these proponents act like a poet has the goal to produce an anthology of poems and should be happy to act as publisher and editor, sifting through the outputs of some LLM stanza generator.
The entire idea of using natural language for composite or atomic command units is deeply unsettling to me. I see language as an unreliable abstraction even with human partners that I know well. It takes a lot of work to communicate anything nuanced, even with vast amounts of shared context. That's the last thing I want to add between me and the machine.
What you wrote further up resonates a lot for me, right down to the aphantasia bit. I also lack an internal monologue. Perhaps because of these, I never want to "talk" to a device as a command input. Regardless of whether it is my compiler, smartphone, navigation system, alarm clock, toaster, or light switch, issuing such commands is never going to be what I want. It means engaging an extra cognitive task to convert my cognition back into words. I'd much rather have a more machine-oriented control interface where I can be aware of a design's abstraction and directly influence its parameters and operations. I crave the determinism that lets me anticipate the composition of things and nearly "feel" transitive properties of a system. Natural language doesn't work that way.
Note, I'm not against textual interfaces. I actually prefer the shell prompt to the GUI for many recurring control tasks. But typing works for me and speaking would not. I need editing to construct and proof-read commands, which may not come out of my mind and hands with the linearity the command buffer assumes. I prefer symbolic input languages where I can more directly map my intent into the unambiguous, structured semantics of the chosen tool. I also want conventional programming syntax, with unambiguous control flow and computed expressions for composing command flows. I do not want the vagaries of natural language interfering here.
Well if you walk backwards 10 paces and look at the big picture here, what MS did enables anti-cheat attestation via TPM, and that in turn can act as a feature that structurally - via the market - reduces the appeal of Linux.
Signing your own custom-built kernel (if you need to adjust flags etc., like I do) won't result in a certification chain that will pass the kind of attestation being sketched out by the OP article here.
Yes because you’re trying to communicate that trust to other players of the game you’re playing as opposed to yourself.
It’s why I hate the term “self-signed” vs “signed” when it comes to tls/https. I always try to explain to junior developers that there is no such thing as “self-signed”. A “self-signed” certificate isn’t less secure than a “signed” certificate. You are always choosing who you want to trust when it comes to encryption. Out of convenience, you delegate that to the vendor of your OS or browser, but it’s always a choice. In practice, though, it’s a very different equation.
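To make that concrete: a "self-signed" certificate is just one where you choose the trust anchor yourself instead of inheriting a vendor-shipped CA bundle. A quick sketch with OpenSSL (the CN and filenames are illustrative):

```shell
# Generate a private key and a self-signed certificate in one step
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=internal.example"

# "Trusting" it simply means passing it as the CA; verification
# succeeds because we chose this anchor ourselves
openssl verify -CAfile cert.pem cert.pem
```

Distributing `cert.pem` to clients that should trust it is exactly the same move a browser vendor makes when it ships its root store, just at a smaller scale.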
When I had a problem with video handoff between one Linux kernel and the next with a zfsbootmenu system, only Gemini was helpful. ChatGPT led me on a merry chase of random kernel flags that didn't have the right effect.
What worked was rebuilding the Ubuntu kernel with a normally disabled config flag enabled, but it took too long to get that far.
The shorthand makes inline style more ergonomic, so you can see the wood for the trees, rather than long strings of style attributes in your markup.
Inline style is the thing. That's what tailwind is enabling in a readable way. And inlined style is what makes style more maintainable and less susceptible to override rot.
The separation between form and function is always somewhat illusory, but particularly so with CSS. Almost all markup is written to look a specific way, not a configurable way.