This is artisans vs industry.

You can get a hand-crafted, beautiful, solid chair made by a guy who knows all the tricks of carpentry. He's spent his whole life perfecting the technique of how to drill the holes, how to fit everything together, how to balance the chair.

It used to be the only way.

Then someone invents a machine that can make you a hundred crappy chairs. Sometimes the legs don't fit in the seat, sometimes the chair isn't balanced. But it's close enough often enough.

On top of that, the new tool is not in the lineage of the old tools. It doesn't FEEL like you are crafting a chair.


> This is artisans vs industry. ... But it's close enough often enough.

If I had a nickel every time I heard the equivalent of "close enough often enough" on root-cause analysis bridge calls during prod outages...


Except that chairs (or anything that's automated today, even cars) don't rapidly evolve and aren't dynamic the way software is.

Not the OP, but I've been thinking about why LLMs feel different, and I think it's closer to the chair analogy than I initially thought. I'm not able to fully articulate it, but here's my try.

Conventional programming needed you to know your tools (language, framework, OS, etc.) pretty well. There was a divergent set of solutions depending on your needs and the craftsmen (programmers/engineers) you went to. There were many variables you needed to know to produce something useful. You needed to know your raw materials, like your wood.

With LLMs it's weirdly convergent. Now there are so many ways to get the same thing, because you just have to ask in language. It's like mass-produced furniture: you get the most common patterns and solutions it's been trained on. Like someone took all the wood in the world, ran it through some crazy processing, and now you are just the assembler of IKEA-like pieces that mostly look the same.

There's a loss of necessity in craft. It helps to know the underlying craft, but it's been industrialized, and most people would be happy enough with that convergent solution.


I have a similar background, and I largely agree.

What's dying is the programmer-first job. That guy whose main value is that he knows how computers work, and secondarily that he is a human who can understand how some business works and do the translation.

The other type of programmer is the business programmer. I started on this end before an incredibly long rabbit hole swallowed up my life. This is the person who thinks he's a finance guy, or an academic, or an accountant, or any number of things, who realizes that he can get a computer to help him.

This type of person is grounded in the specific business they come from, and has business-level abstractions for what he wants the computer to do.

AI is still imperfect, so it is still in your interest to know how the computer works, especially as you dive into things where your model of the machine actually matters. But it allows the person with the business view to generate code that would previously have been their second job. He can QA the code on a business level. This used to just be called Excel, which would generate horrors for anyone who could actually program, but it is still the glue behind a huge number of business systems, and it still works because ugly often works.

I liken this to previous revolutions in IT. At one point schools had begun churning out literate people, and they started spilling out into the business world as clerks. You could learn how to read and write, and that would get you a job sending correspondence to India, that sort of thing. And that would be your way into the organization, and maybe you'd eventually learn the business itself.

People who typed stuff had a similar fate. There used to be rooms of people who would type letters and send them. Now the executive just types the letters and sends them off by email.

If you're a translator first, AI is not great for you. If you managed to turn your translation skills into executive skills, then you are happy to pull the ladder up.


> the business programmer

I work in ERP. It is full of people like this. Accountants who learned SQL and some VB, and you can get incredibly far with that.

They're also smart enough to know when they need an actual programmer, just like I am smart enough to call them when it's time to do year-end close / financial reporting.


By and large, I agree with the article. Claude is great and fast at doing low-level dev work: getting the syntax right in some complicated mechanism, executing an edit-execute-readlog loop, making multi-file edits.

This is exactly why I love it. It's smart enough to do my donkey work.

I've revisited the idea that typing speed doesn't matter for programmers. I think it's still an odd thing to judge a candidate on, but I appreciate it in another way now. Being able to type quickly and accurately reduces frustration, and people who foresee less frustration are more likely to try the thing they are thinking about.

With LLMs, I have been able to try so many things that I never tried before. I feel that I'm learning faster because I'm not tripping over silly little things.


> I feel that I'm learning faster

Yes, you are feeling that. But is it real? If I take all LLMs from you right now, is your current you still better than your pre-LLM you? When I dream, I feel that I can fly, and as long as I am dreaming, this feeling is true. But what the feeling is about never was.


If you use coding agents as a black box, then yes you might learn less. But if you use them to experiment more, your intuition will get more contact with reality, and that will help you learn more.

For example, my brother recently was deciding how to structure some auth code. He told me he used coding agents to just try several ideas and then he could pick a winner and nail down that one. It's hard to think of a better way to learn the consequences of different design decisions.

Another example is that I've been using coding agents to write CUDA experiments to try to find ways to optimise our codegen. I need an understanding of GPU performance to do this well. Coding agents have let me run 5x the number of experiments I would be able to code, run, and analyse on my own. This helps me test my intuition, see where my understanding is wrong, and correct it.

In this whole process I will likely memorise fewer CUDA APIs and commands, that's true. But I'm happy with that tradeoff if it means I can learn more about bank conflicts, tradeoffs between L1 cache hit rates and shared memory, how to effectively use the TMA, warp specialisation, block swizzling to maximise L2 cache hit rates, how to reduce register usage without local spilling, how to profile kernels and read the PTX/SASS code, etc. I've never been able to put so much effort into actually testing things as I am learning them.


I feel like my calculator improves my math solutions. If you take away my calculator, I'll probably be worse at math than I was before. That doesn't mean I'm not better off with my calculator, however.

That's a pretty interesting take on it; I hadn't considered it like that before when wondering whether my coding skills were atrophying from LLM usage.

Your calculator doesn't charge per use

If calculators were invented today, they’d only be sold with a monthly subscription

If it did, would it change its usefulness in terms of the value it outputs? (Though agreed, if I had to pay money it would increase the cost, and so change the tradeoff.)

One guy I work with has little formal training (and mid-level experience), but does a lot with LLMs. But in every situation where he has to do anything without an LLM, he heavily struggles or isn't able to do anything at all (say, a basic SQL query). There is no way someone with his experience and position would still be at that level.

I guess people differ on whether that is a good or a bad thing. I think it amounts to a huge risk, as he can't really judge good from bad code (or architecture), but his supervisors have put him in a position where he should be able to.


It's a bit like the shift from film to digital in one very specific sense: the marginal cost of trying again virtually collapsed. When every take cost money and setup time, creators pre-optimized in their heads and often never explored half their ideas. When takes became cheap, creators externalized thought: they could try, look, adjust, and discover things they wouldn't otherwise. Creators could wander more. They could afford to be wrong: no longer constantly paying a tax for being clumsy or incomplete, they became more willing to follow a hunch, and that's valuable space to explore.

Digital didn't magically improve art, but it let many more creatives enter the loop of idea, attempt, and feedback. LLMs feel similar: they don't give you better ideas by themselves, but they remove the friction that used to stop you from even finding out whether an idea was viable. That changes how often you learn, and how far you're willing to push a thought before abandoning it. I've done so many little projects myself that I would never have had time for, and I feel I learned something from them; of course not as much as if I had gone through all the pre-LLM friction, but it should still count for something, as I would never have attempted them without this assistance.

Edit: However, the danger isn’t that we’ll have too many ideas, it’s that we’ll confuse movement with progress.

When friction is high, we're forced to pre-compress thought, to rehearse internally, to notice contradictions before externalizing them. That marination phase (when doing something slowly) does real work: it builds mental models, sharpens our taste, and teaches us what not to bother trying. Some of that vanishes when the loop becomes cheap enough that we can just spray possibilities into the world and see what sticks.

A low-friction loop biases us toward breadth over depth. We can skim the surface of many directions without ever sitting long enough in one to feel its resistance. The skill of holding a half-formed idea in our head, letting it collide with other thoughts, noticing where it feels weak, atrophies if every vague notion immediately becomes a prompt.

There’s also a cultural effect. When everyone can produce endlessly, the environment fills with half-baked or shallow artifacts. Discovery becomes harder as signal to noise drops.

And on a personal level, it can hollow out satisfaction. Friction used to give weight to output. Finishing something meant you had wrestled with it. If every idea can be instantiated in seconds, each one feels disposable. You can end up in a state of perpetual prototyping, never committing long enough for anything to become yours.

So the slippery slope is not laziness, it is shallowness, not that people won’t think, but people won’t sit with thoughts. The challenge here is to preserve deliberate slowness inside a world that no longer requires it: to use the cheap loop for exploration, while still cultivating the ability to pause, compress, and choose what deserves to exist at all.


> Being able to type quickly and accurately reduces

LLMs can generate code quickly. But there's no guarantee that it's syntactically, let alone semantically, accurate.

> I feel that I'm learning faster because I'm not tripping over silly little things.

I'm curious: what have you actually learned from using LLMs to generate code for you? My experience is completely the opposite. I learn nothing from running generated code, unless I dig in and try to understand it. Which happens more often than not, since I'm forced to review and fix it anyway. So in practice, it rarely saves me time and energy.

I do use LLMs for learning and understanding code, i.e. as an interactive documentation server, but this is not the use case you're describing. And even then, I have to confirm the information with the real API and usage documentation, since it's often hallucinated, outdated, or plain wrong.


> I'm curious: what have you actually learned from using LLMs to generate code for you?

I learn whether my design works. Some of the things I plan would take hours to type out and test. Now I can just ask the LLM, it throws out a working, compiling solution, and I can test that without spending my waking hours on silly things. I can just glance at the code and see that it's right or wrong.

If there are internal contradictions in the design, I find that out as well.


> LLMs can generate code quickly. But there's no guarantee that it's syntactically, let alone semantically, accurate.

This has been a non-issue with self-correcting models and in-context learning capabilities for so long that saying it today highlights highly out-of-date priors.


You're referring to tools that fetch content from the web, read my data on disk, and feed it to the models?

I can see how that would lead to a better user experience, but those are copouts. The reality is that the LLM tech without it still has the same issues it has had all along.

Besides, I'll be damned if I allow this vibe-coded software to download arbitrary data from the web on my behalf, scan my disk, and share it with companies I don't trust. So when, and if, I can do so safely and keep it under my control, I'll give it a try. Until then, I'll use the "dumb" versions of these models, feed them context manually myself, and judge them based purely on their actual performance.


The 'copouts' are what the frontier models are designed to do. If you aren't using the tools as they're intended, you'll get poor results, obviously.

If in order to use a product as intended I have to punch myself in the face, I'll take the poor results, obviously.

You can trade punching yourself in the face for shaving with an angle grinder if you want these kinds of analogies.

But the benefit might not be speed, it might be economy of attention.

I can code with Claude when my mind isn't fresh. That adds several hours of time I can schedule, where previously I had to do fiddly things when I was fresh.

What I can attest is that I used to have a backlog of things I wanted to fix, but hadn't gotten around to. That's now gone, and it vanished a lot faster than the half a year I had thought it would take.


Doesn't that mean you're less likely to catch bugs and other issues that the AI spits out?

No, you are spending less time on fixing little things, so you have more time on things like making sure all the potential errors are checked.

Not a problem! Just ask the AI to verify its output and make test cases!

nah, you rely on your coworkers to review your slop!

Code you never ship doesn't have bugs by definition, but never shipping is usually a worse state to be in.

I'm sure people from Knight Capital don't think so.

Even there, they made a lot of money before they went bust. Like if you want an example you'd be better off picking Therac-25, as ancient an example as it is.

The bit about why not OOP seems a bit dated. I think we're past the point where people go for OOP as the default shape of code.

Overall, it makes sense. C is a systems language, and a DB is a system abstraction. You shouldn't need to build a deep hierarchy of abstractions on top of C, just stick with the lego blocks you have.

If the project had started in 2016, maybe they would have gone for C++, which is a different beast from what it was pre-2011.

Similarly, you might write SQLite in Rust if you started today.


> The bit about why not OOP seems a bit old. I think we're past a point where people are going for OOP as the default shape of code.

That section was probably written 20 years ago when Java was all the rage.


I think they would have still gone with C. I still do.

The author of ZMQ had an article about regretting choosing C++ over C. Picking Rust for a "modern" version of SQLite could easily go down a similar route in the end.

https://web.archive.org/web/20250130053844/https://250bpm.co...


Regret is possible with any language, but I'd be surprised if someone regretted choosing Rust for the reasons in the article you linked:

* Error handling via exceptions. Rust uses `Result` instead. (It has panics, but they are meant to be strictly for serious logic errors for which calling `abort` would be fine. There's a `Cargo.toml` option to do exactly that on panic rather than unwinding.) (btw, C++ has two camps here for better or worse; many programs are written in a dialect that doesn't use exceptions.)

* Constructors have to be infallible. Not a thing in Rust; you just make a method that returns `Result<Self, Error>`. (Even in C++ there are workarounds.)

* Destructors have to be infallible. This is about as true in Rust as in C++: `Drop::drop` doesn't return a `Result` and can't unwind-via-panic if you have unwinding disabled or are already panicking. But I reject the characterization of it as a problem compared to C anyway. The C version has to call a function to destroy the thing. Doing the same in Rust (or C++) is not really any different; having the other calls assert that it's not destroyed is perfectly fine. I've done this via a `self.inner.as_mut().expect("not terminated")`. They say the C version only has two states: "Not initialised object/memory where all the bets are off and the structure can contain random data. And there is initialised state, where the object is fully functional". The existence of the "all bets are off" state is not as compelling as they make it out to be, even if throwing up your hands is less code. (There's a minimal sketch of this explicit-terminate pattern after the list.)

* Inheritance. Rust doesn't have it.
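
To make the constructor and destructor points concrete, here's a minimal Rust sketch of the patterns described above. It's illustrative only: the `Conn` type, its `open`/`send`/`terminate` methods, and the `Error` type are hypothetical stand-ins, not code from ZMQ, SQLite, or any real library.

    // Hypothetical resource type; all names here are stand-ins.
    #[derive(Debug)]
    struct Error(String);

    struct Conn {
        // `Some` while live, `None` once terminated. This replaces C's
        // "all bets are off" uninitialised state with a checked one.
        inner: Option<Vec<u8>>,
    }

    impl Conn {
        // Fallible "constructor": no exceptions, just a `Result`
        // the caller has to handle.
        fn open(addr: &str) -> Result<Self, Error> {
            if addr.is_empty() {
                return Err(Error("empty address".into()));
            }
            Ok(Conn { inner: Some(Vec::new()) })
        }

        fn send(&mut self, byte: u8) -> Result<(), Error> {
            // Every call asserts the object hasn't been destroyed yet.
            let buf = self
                .inner
                .as_mut()
                .ok_or_else(|| Error("terminated".into()))?;
            buf.push(byte);
            Ok(())
        }

        // Fallible "destructor": an explicit method that can report
        // errors, mirroring the C destroy-function approach.
        fn terminate(&mut self) -> Result<(), Error> {
            match self.inner.take() {
                Some(_buf) => Ok(()), // a real flush/close could fail here
                None => Err(Error("already terminated".into())),
            }
        }
    }

    // `Drop` stays infallible: it only cleans up if the caller never
    // called `terminate()`, and it swallows any error.
    impl Drop for Conn {
        fn drop(&mut self) {
            let _ = self.terminate();
        }
    }

    fn main() -> Result<(), Error> {
        let mut c = Conn::open("tcp://localhost:5555")?;
        c.send(42)?;
        c.terminate()?; // explicit, fallible teardown
        Ok(())
    }

The point is that nothing here needs exceptions or a fallible `Drop`: failure travels through ordinary return values, and the "destroyed" state is explicit and checked rather than undefined.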


I'm almost there. I also have Tailscale/SSH/Claude sessions on a VM.

The thing I'm missing is a phone that makes it comfy. I could just SSH from my standard S23, but what I've got my eye on is one of those foldable things.

Has anyone used one like a laptop? Keyboard on the bottom half, terminal on the top? Does it work decently?


I also work from home, together with my wife. So even though we have kids, there is no necessity of leaving the house, save for 15 minutes a day on weekdays to drop off and pick up.

The main thing people have to get over is passivity. You want to see your friends? Invite a bunch of people to come out. Nowadays it takes very little time to book a restaurant.

I do this every few months. I just think of three or four other people I want to have dinner with, arrange a time, and then invite everyone else I come across. Dinner ends up being anywhere from 4 to 12 people, out of maybe 20 invites. As for who to invite, just invite your friends, and your "friend seeds".

Everyone has a few peripheral people they know, whose bio seems to fit the template of your actual friends: live near you, studied with you, worked with you. People who in all likelihood have the same values as you, except you haven't hung out together due to lack of opportunity. We all know that guy: you know his name, you know he does what you do, you don't know anything else. So you bring that seed along and you and your existing friends water the relationship.

A more modern way to not be lonely is to play an MMO. This isn't quite like real friends, but it also isn't quite the same as being lonely. The big benefit of course is that you can do this at home.

These games are all about cooperating, sharing knowledge and experience. It's not really all that different from cooking a meal together, you're just in your PJs as you're slaying a dragon. You can also end up learning a fair bit about your online friends from just hanging around. Life stories, that kind of thing, they are a basic part of friendship.


IMHO online 'friendships' are dangerous in this context: they would fill part of the need for socialization, thus making you more complacent about not seeking out 'real' friendships.

Not to mention the time sink and addiction issues with some of the MMOs.


I think there's something to this idea in the article. I remember my childhood well, perhaps because the cast is still somewhat intact, and we all had a good time. The time after finishing school is more blocky: a few years working in certain places, meeting my wife and having kids. My adult life takes up more calendar time, but less "experienced time". My cousin was on a chat last night, explaining that his day is taken up by taking three kids to different schools, then picking them up again. Over and over, but somehow it is one experience. Plenty of people will tell you the same about going to work.

By contrast, you remember things in your youth that happened only once, like when I sprained my ankle at a crossing with a train oncoming (it was less dramatic than it sounds lol), or going to a music festival, or finishing high school.

One thing that maybe needs to be talked about is that you can simply relive your life. This works best if you had a good time. So the answer to the question is not just that you should look for new firsts, you can replay some old tapes.

I'm lucky enough that I know people from every time in my life. I have a chat group with three other guys that I met when we were 4 years old, over 40 years ago. They sent messages last night. I got a message from my first grade teacher, and my high school English teacher. I have a chat with all my buddies from school, where we exchange messages that are about as mature as when we were teenagers. People I worked with, I keep in touch with.

I have an online photo album that is basically the only data I care to have a backup of. Now and again, I flip through it, and I see what I was up to, and have nice thoughts about that.

It might sound a bit weird for a mid-40s guy to be so resigned to being old. But I was talking to one of the mentioned buddies from nursery, and it turns out the big milestones have happened already. We already finished school, got jobs, had kids. There's a lot of little things to tick off, but they are little things: visiting various interesting sites, going to some concert, and so on.


The cynic in me thinks that people on their deathbed are about as well considered as people are in general: if you ask most people about some ordinary thing, and life regrets are indeed such a thing, they will give you a canned response that is the zeitgeist regurgitated, and they can't explain why they think what they think, or any related opinions they might have thought instead.

Of course nowadays we have memes to help us completely avoid thinking at all. Ask someone what is best in life, and see how far you get!

Only rarely does one get a considered response. That would be a response that

1) Acknowledges existing thought on the issue. "Socrates mentions regret..." "The Mongols thought the open steppe was very important for the good life."

2) Adds personal experience. This can be totally banal, since we don't all live exceptional lives. "I met a girl at the bakery in 1975..." But being banal doesn't mean you can't use the experience to reflect on what regrets actually are, and whether you agree with some POV.

With someone on their deathbed I guess it can be a bit jarring to subject them to the full Oxford tutorial grilling, so I can understand why it can end up being a bit bland.


Probably more to do with HN being a tech community, and the US being the largest highly developed country where a lot of kids can learn tech stuff. So you get a disproportionate number of programmers and related professionals who are American.

If Silicon Valley were in the UK, I think most HN contributors would still be American.

