A binary format that is only readable by some very specific version of the program writing it. The old .xls format comes to mind, but there must be thousands of examples.
Lots of sites publish outages, incidents, and downtime over RSS/Atom. It works great for monitoring: post the entries into Slack with a bot and you can start a discussion thread about an incident right where you first hear about it.
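A minimal sketch of the feed-to-Slack idea, using only the stdlib. The feed URL, webhook, and message format here are made up for illustration; a real bot would also track which entries it has already posted.

```python
import json
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_entries(atom_xml: str):
    """Extract (title, link) pairs from an Atom status feed."""
    root = ET.fromstring(atom_xml)
    entries = []
    for entry in root.findall(f"{ATOM_NS}entry"):
        title = entry.findtext(f"{ATOM_NS}title", default="(untitled)")
        link_el = entry.find(f"{ATOM_NS}link")
        link = link_el.get("href", "") if link_el is not None else ""
        entries.append((title, link))
    return entries

def slack_payload(title: str, link: str) -> str:
    """Build the JSON body for a Slack incoming webhook. One message
    per incident, so replies to it become the discussion thread."""
    return json.dumps({"text": f"Incident: <{link}|{title}>"})

# Hypothetical feed content; in practice you'd fetch it with urllib.request.
sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Status</title>
  <entry>
    <title>API outage</title>
    <link href="https://status.example.com/incidents/123"/>
  </entry>
</feed>"""

for title, link in parse_entries(sample):
    # POST this body to your webhook URL to create the Slack message.
    print(slack_payload(title, link))
```

The webhook POST itself is just an HTTP request with that JSON body, so any HTTP client works.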
The curl alias in PowerShell is not compatible with real curl, so it is an inconvenience. It must be one of the worst decisions to make it into Windows, which is saying a lot.
The worst part is that Windows does ship cURL as a binary at `C:\Windows\System32\curl.exe` (it may depend on some optional feature, I don't know). Nowadays typing curl does invoke this binary on my system, but I don't remember whether I did something to make that happen.
Most of the aliases are for convenience when working in an interactive shell, which generally means the more basic uses of a command. For scripting, it is best practice to use the full cmdlet names.
Also, you can schedule it a bit off. Every hour? Delay it a few seconds. You can't do that with a chat message. You can also batch up a bunch of them and maybe save some compute that way; latency is not an issue.
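A small sketch of that jittered, batched polling schedule. The interval, jitter bound, and batch size are arbitrary values chosen for illustration.

```python
import random

POLL_INTERVAL = 3600  # poll the feed hourly; latency doesn't matter here

def next_delay(base: int = POLL_INTERVAL, jitter: int = 30) -> float:
    """Offset each poll by a few random seconds so every consumer
    doesn't hit the feed at exactly the top of the hour."""
    return base + random.uniform(0, jitter)

def batch(items, size: int = 10):
    """Group pending feed entries so they can be processed in one pass
    instead of one request per entry."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

A polling loop would then sleep for `next_delay()` seconds between runs and hand each `batch()` to the poster in one go.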
He seems to think his time is better spent on software than science. I take it he didn't really crack anything of worth on the physics side, then?
It's not the focus and not very performant, but you can have it spill to disk if you run out of memory. I wouldn't suggest building a solution on this approach, though; the sweet spot is staying within memory.
I honestly don’t think the models are as important as people tend to believe. More important is how the models are given tools - find, grep, git, test runners, …
> I honestly don’t think the models are as important as people tend to believe.
I tend to disagree. While I don't see meaningful differences in _reasoning power_ between frontier models, I do see differences in how they interact with my prompts.
I use exclusively Anthropic models because my interactions with GPT are annoying:
- Sonnet/Opus behave like a mix of a diligent intern and a peer: they do the work, don't talk too much, give answers, etc.
- GPT is overly chatty, it borderline calls me "bro", tends to brush off issues I raise with "it should be good enough for general use", etc.
- I find that GPT hardly ever steps back when diagnosing issues. It picks a possible cause and goes down a rabbit hole of increasingly hacky / spurious solutions. Opus/Sonnet will often step back when the complexity gets too high and dig for an alternative.
- I find Opus/Sonnet to be "lazy" recently. Instead of systematically doing an accurate search before answering, it tries to "guess", and I have to spot it and directly tell it to "search for the precise specification and do not guess". Often it tells me "you should do this and that", and I have to reply "no, you do it". I wonder if this was done to reduce the number of web searches or the compute it uses unless the user explicitly asks.