> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
I have a PhD in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography; most did that manually.
Money is never backed by nothing, or it's worthless. It may not be backed by anything physical, but it's always backed by some form of trust. National currencies are backed by trust in the corresponding government and institutions.
But that trust is often backed by nothing. Especially if you don't own assets: from that perspective, money is really working against you and is backed by pure coercion... But coercion is not an asset, and it doesn't have net positive value, at least not to the victim.
It has value from the perspective of the oppressor I guess... I think this is where it derives its value.
Reserves matter even if reserve ratios are zero. If Bank A lends too much money, then when its customers spend that money, a lot of it will end up deposited at other banks. These banks will then ask Bank A for reserves (as in, central bank money) to clear the inter-bank transfers, which Bank A will need to borrow from the central bank, at a cost.
This probably won't make you feel any better, but banks don't really loan out money that's not theirs. When they lend money, they literally create it out of thin air. Creating that money has a cost, which is what ultimately limits how much they can lend, and having more deposits can lower that cost somewhat, but there's no direct connection between the money you deposit in your account and the money that the bank lends to someone else.
If I were to do this (and I might give it a try, this is quite an interesting case), I would try to run a detection model on the image, to find bounding boxes for the planets and their associated text. Even a small model running on CPU should be able to do this relatively quickly.
On the professional side, they also often let you interact with their experts and architects directly, as part of your support contract. With most other companies, you either have to go through front-office support exclusively, or pay extra for Professional Services.
> I’m downplaying because I have honestly been burned by these tools when I’ve put trust in their ability to understand anything, provide a novel suggestion or even solve some basic bugs without causing other issues.
I've had that experience plenty of times with actual people...
LLMs don't "think" like people do, that much is pretty obvious. But I'm not at all sure whether what they do can be called "thinking" or not.
> In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.
So I tried to test this (my intuition being that indexes wouldn't change much, at best you could just do an index scan instead of a seq scan), and I couldn't understand the plans I was getting, until I realized that the query in the blog post has a small error:
> AND c1.category_id = c1.category_id
should really be
> AND p.category_id = c1.category_id
otherwise we're doing a cross-product on the category. Probably doesn't really change much, but still a bit of an oopsie. Anyway, even with the right join condition an index only reduces execution time by about 20% in my tests, through an index scan.
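For context, here's a rough sketch of the corrected query shape (the table and column names are my guesses from the snippets above, not the post's actual schema):

    -- Hypothetical reconstruction: with the corrected condition, category
    -- actually constrains the join instead of cross-joining every category row.
    SELECT p.color_id, COUNT(*)
    FROM product p
    JOIN category c1 ON p.category_id = c1.category_id
    GROUP BY p.color_id;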
Interestingly, "aggregate first, join later" has been the standard way of joining fact tables in BI tools for a long time. Since fact tables are typically big and also share common dimensions, multi-fact joins for drill-across are best done by first aggregating on those common dimensions, then joining on them.
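As a sketch (hypothetical table names), drill-across over two fact tables sharing a date dimension looks roughly like this:

    -- Aggregate each fact table down to the shared grain first...
    WITH s AS (
        SELECT date_id, SUM(amount) AS sales_amount
        FROM sales
        GROUP BY date_id
    ), r AS (
        SELECT date_id, SUM(amount) AS returns_amount
        FROM returns
        GROUP BY date_id
    )
    -- ...then join the much smaller aggregates, instead of joining the raw
    -- fact tables row by row and aggregating the fanned-out result.
    SELECT s.date_id, s.sales_amount, r.returns_amount
    FROM s
    JOIN r ON r.date_id = s.date_id;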
Makes you wonder how many cases there are out there of optimizations that feel almost second nature in one domain, but have never been applied to other domains because no one thought of it.
Something I really appreciate about PostgreSQL is that features don't land in a release until they are rock solid.
I don't think it's that nobody thought of it for PostgreSQL - I think it's that making sure it worked completely reliably across the entire scope of existing PostgreSQL features to their level of required quality took a bunch of effort.
It's not that nobody thought of it. Group pushdown has been a thing in papers for ~10 years at least, but it's hard to plan; your search space (which was already large) explodes, and it's always hard to know exactly how many rows come out of a given grouping. I have no idea how Postgres deals with these. Hopefully, they're doing something good (enough) :-)
Next up would hopefully be groupjoin, where you combine grouping and hash join into one operation if they are on the same or compatible keys (which is surprisingly often).
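For example (made-up tables), this is the kind of shape where the grouping key and the join key line up, so the aggregation could in principle be folded into the hash join's own hash table instead of hashing twice:

    -- Grouping key = join key (customer_id): a groupjoin can accumulate the
    -- SUM per hash bucket while probing, rather than joining first and then
    -- building a second hash table for the GROUP BY.
    SELECT c.customer_id, c.name, SUM(o.amount) AS total
    FROM customer c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.name;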
I wonder if PG will ever implement plan caching like MSSQL so that the speed of the optimizer is less of a concern and it can take more time finding better plans rather than replanning on every execution of the same statement.
Postgres used to have plan caching inside the same session, and that was so disastrous that it was limited severely by default.
Plan caching is very much a two-edged sword: cache too aggressively, and the cached plan can be a bad fit when conditions differ between runs; cache too little, and your hit rate is too low for the cache to be worth anything.
Not sure how that makes sense; if the stats change significantly, then the cached plans would be evicted when statistics are gathered.
I believe popular connection poolers and clients attempt to do plan caching by using prepared statements and keeping the connection open.
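In Postgres terms that's a session-level prepared statement; a minimal sketch (table and column names are made up), where, if I recall correctly, the server may switch to a cached generic plan after a few executions (see plan_cache_mode):

    -- The plan is tied to this session; repeated EXECUTEs let Postgres decide
    -- whether to reuse a cached generic plan instead of replanning each time.
    PREPARE products_by_color (int) AS
        SELECT COUNT(*) FROM product WHERE color_id = $1;

    EXECUTE products_by_color(42);
    EXECUTE products_by_color(7);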
My understanding is that it's not easy to do in PG, since connections are process-based instead of thread-based and query plans are not serializable between processes, so they cannot be shared between connections.
MSSQL has been doing statement plan caching for at least 20 years and it did stored procedure plan caching before that.
It's not about not knowing about an optimization. The challenge is knowing when to apply it, so that it doesn't cause regressions for cases that can't benefit from it. It may be less risky in specialized systems; BI systems, for example, typically don't need to worry about regressing OLTP workloads. Postgres absolutely needs to be careful of that.
I believe that's one of the reasons it took ~8 years (the original patch was proposed in 2017).
> - It says it's done when its code does not even work, sometimes when it does not even compile.
> - When asked to fix a bug, it confidently declares victory without actually having fixed the bug.
You need to give it ways to validate its work. A junior dev will also hand you code that doesn't compile, or that was supposed to fix a bug but doesn't, if they never actually compile the code and test that the bug is truly fixed.
Believe me, I've tried that, too. Even after giving detailed instructions on how to validate its work, it often fails to do it, or it follows those instructions and still gets it wrong.
Don't get me wrong: Claude seems to be very useful if it's on a well-trodden train track and never has to go off the tracks. But it struggles when its output is incorrect.
The worst behavior is this "try things over and over" behavior, which is also very common among junior developers and is one of the habits I try to break from real humans, too. I've gone so far as to put into the root CLAUDE.md system prompt:
--NEVER-- try fixes that you are not sure will work.
--ALWAYS-- prove that something is expected to work and is the correct fix, before implementing it, and then verify the expected output after applying the fix.
...which is a fundamental thing I'd ask of a real software engineer, too. Problem is, as an LLM, it's just spitting out probabilistic sentences: it is always 100% confident of its next few words. Which makes it a poor investigator.