Some time ago I wanted the original MS solitaire playing card files. It wasn't too hard to find a copy of the binary, but the interesting thing to me is that the files appeared to be handwritten: a couple of possible typos in the colors, and not a single byte longer than they needed to be.
You don't.
The normal procedure here is to have multiple unique keys, each with its own unique secret. If one breaks, that's it: it's broken. This also lets you revoke a key without removing all the keys.
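As a rough sketch of that idea, here is a toy in-memory key store (all names here are hypothetical, just to illustrate the pattern): each client gets its own key ID and secret, so a single compromised key can be revoked without invalidating the others.

```python
import secrets

class KeyStore:
    """Toy illustration: one secret per key ID, individually revocable."""

    def __init__(self):
        self._keys = {}  # key_id -> secret

    def issue(self):
        # Each issued key gets a unique ID and its own unique secret.
        key_id = secrets.token_hex(8)
        secret = secrets.token_hex(32)
        self._keys[key_id] = secret
        return key_id, secret

    def verify(self, key_id, secret):
        stored = self._keys.get(key_id)
        # Constant-time comparison to avoid timing leaks.
        return stored is not None and secrets.compare_digest(stored, secret)

    def revoke(self, key_id):
        # Removing one key leaves every other key working.
        self._keys.pop(key_id, None)

store = KeyStore()
a_id, a_secret = store.issue()
b_id, b_secret = store.issue()
store.revoke(a_id)                    # key A is compromised: revoke it
assert not store.verify(a_id, a_secret)
assert store.verify(b_id, b_secret)   # key B keeps working
```

A real system would persist the keys and store only hashes of the secrets, but the revocation property is the same: one key breaking never forces a reset of all of them.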
It's not even just that: with component selection you have a handful of datasheets that give you (ideally) fairly truthful information about the device. You can look at these and compare them rather deterministically.
Regular consumer products? Good fucking luck. Anywhere an LLM pulls from is probably going to be mostly SEO'd listicles.
Better than ever! Getting some awareness raised. This whole post should be flagged, and people need to quit crapping on the people of Iran by trying to make this a Trump thing.
The only thing that seems to have gotten a lot worse is the trend of AI articles, which isn't Kagi's fault, but it would be nice if they could figure out how to filter them. They all follow the same pattern: "specific thing you want" with a table of contents, loads of repeated chapters and unrelated information, spattered with effectively random images.
They’re starting to with their SlopStop initiative. Sites that are mostly AI content get flagged and deranked. It's still not perfect, and I think they've only just started working through the backlog of reports, so hopefully it keeps helping.
I don't remember exactly what I wrote and how the logic works, but I generally remember the broad flow of how things tie together, which makes it easier to drop in on some aspect and understand where it is code-wise.
There's code structure but then there's also code philosophy.
The worst code bases I have to deal with have either no philosophy or a dozen competing and incompatible philosophies.
The best are (obviously) written in my battle tested and ultra refined philosophy developed over the last ~25 years.
But I'm perfectly happy to be working in code bases written even with philosophies that I violently disagree with. Just as long as the singular (or at least compatible) philosophy has a certain maturity and consistency to it.
I think this is well put. A cohesive philosophy, even if flawed, is a lot easier to work with than a patchwork of out-of-context “best practices” stitched together by an LLM.
Almost anything I write in Python I start in Jupyter just so I can roll it around and see how it feels, which determines how I build it out and, to some degree, how easy it is to fix issues later on.
I recently accepted-ish a position at a very AI-forward company, where manual programming was more or less discouraged entirely.
I've used AI tools in the past for math I didn't understand or errors I couldn't make sense of, and wrote the bulk myself, but now, as mentioned, we have Opus/Sonnet 4.5, which work great.
As part of this, I had to integrate two new APIs. Normally, when I write an API wrapper, I end up learning a lot about how the API feels, what leads to what, how it smells, etc. This time? I just asked Claude to read its docs, then gave suggestions about how I wanted it laid out. As a result, I have no idea how these APIs feel, what their models are, etc. If I want to interact with them, I ask Claude how to do a thing with the library it made.
Mind you, the library is good. I looked over everything; it's fairly thin and it's exactly how I would write it, as I suggested it do. But I have no deep understanding of it, much less an understanding of how it got integrated.
Like, normally when I integrate something, I learn a bit about the codebase I'm integrating it into. Do that enough times and I understand the codebase in depth, how things plug in. This time? Nada.
It's... deeply uncomfortable to know so little but still be able to do so much. It doesn't matter if I get it to explain things; that's just information that washes off when I move on to the next thing. The reflexive memory isn't built.