
I read somewhere (in the myriad blog posts dealing with this Cambrian LLM explosion) that software developers could be put into two camps: those that just want the thing to exist, and those that want to build and understand the thing (in addition to wanting it to exist).

those in the first camp are having a great time.

those in the second camp (which is how you're describing yourself, and how I'd describe myself) are wary and suspicious.

it is somewhat paradoxical: we've watched/read sci-fi/cyberpunk for years and dreamed of this kind of world. after all, when did you ever see a member of the Enterprise crew writing code? they just asked the computer to "write a subroutine" and that was that. what a world!

but here we are, with the craft in danger, not entirely impressed by the idea of "just ask and walk away".

i, too, fear losing my critical thinking, raw skills, and design sense, and i wonder what it will mean to be one of the few (in 2, 3, 5, 10 years) who didn't abdicate their cognition, their craft, to the tech overlords.

but i wonder if it will matter anyway. i wonder if "source code" will become a deep abstraction that nobody thinks about, similar to how 99% of us don't care (or need to care) what the machine code we eventually emit does or looks like.

in any case, i'll keep my thinking for now.


> I read somewhere (…) that software developers could be put into two camps (…)

Surely you read it more than once, because it has become a talking point. It’s a false dichotomy that, you’ll notice, is most often used by the people who put themselves in the first camp to steer the conversation. By framing it as “there are two camps, they’re just different, neither is better”, it lends legitimacy to their position.

You don't have to pick one camp over the other. Good, high quality craft makes good products.

> after all, when did you see any members of the Enterprise writing code?

When did you see anyone in any media taking a dump, or sleeping, or doing any of the boring bits? Rarely, because if it’s not relevant to the story they don’t show it, but it doesn’t mean it didn’t happen.

I’m more of a DS9 fan, and I remember them having computer problems all the time. O’Brien, despite being highly competent and the chief of engineering with a team, was constantly overworked.

And their computers were infinitely superior to the LLMs we have now. When they gave you an answer, you could be confident it was correct. And if they didn’t know, they’d tell you!


I think a notable difference is that the AIs portrayed in most sci-fi (that I have read/watched, anyway) tend to be "logical machines" that act deterministically on the data available to them.

What we got are "statistical machines" that tend to do the right thing under the right conditions, but can go completely off the rails every now and then.

The former are more akin to a generalization of computers as we typically think of them, whereas the latter is something else. Maybe that something else is closer to human behavior in some ways, but it is also very different: unlike with humans, where you get to know people, build relationships, and learn who to trust in what ways, you can never really trust an LLM with any critical task without close supervision.


I kinda like the woodworking analogy of this.

I, in theory, can plane a piece of wood with a hand planer. But I'll never do it again: we did it at school, in ye olden times before the millennium, and it was as boring then as it is now.

I know people who get satisfaction from it. They take one sliver off with the hand planer, feel the wood with their hand, and figure out the perfect angle for the next tiny sliver of wood to come off, repeating the process over and over again.

I, personally, will just feed the damn plank to a mechanical planer with the exact specs of the resulting board set up. I just want the board smooth so I can get to the next step of the process. I'm not doubting the "wood-slop" the machine produces, I can see and measure if it's good enough or not. I don't need to be involved in the process.

We're both making a table, mine will be done faster. It might not be hand-crafted to perfection, but it will hold the stuff I intend to put on it just fine. If I find out it sucks later on, I can make a new one that's slightly better or fix the existing one. My goal was a functional product, not a piece of handcrafted art.


I don't think the analogy works. You're focusing on the "how", not the "what". Using a mechanical planer, you still need to dial in numbers yourself. You design your own table, the more modern tools just make it easier to realize your vision.

Another example: I enjoy writing with a good pen. But whether I write by pen or on a keyboard, it's still me writing it.

However, AI does basically all the real work, only leaving you to guide it. Make a table? AI gives you one with 2 legs. More legs? Guess I can live with 5 legs.

And you wouldn't be making that table, AI is. You cannot have pride in something that you never made yourself. It's the same as 3D-printing something from Thingiverse and claiming you made it.

People who create AI blog posts are not writing. Those that prompt their way to a piece of software are not doing software engineering. The ones that generate AI images are not being artists.


Yea, it's not a perfect analogy =) I've been trying to figure out the perfect one, but the one that hits hardest would need to be in comic strip format - and I can't draw for shit and refuse to use AI for it. Maybe one day.

It all depends on the view you take on the thing. What real problem are you solving? If the problem is "I need a table for X", both ways solve it. How the problem is solved is secondary.

I don't need pride in "making it myself", what I get pride in is "I solved this problem". Printing something out of Thingiverse still solves the problem, as does buying something ready-made. For me, personally, the means doesn't matter - I get zero dopamine in doing something the hard way, quite the opposite.

As for the writing, there are actual studies showing that writing by hand activates different parts of your brain than typing does.


On this: I think we may just have to let go of pride and kudos and their connection to our identity.

Your analogy is not really apples to apples though, is it?

Closer would be: if there were a table-making machine where you just push a button and something like a table comes out, would you still be a woodworker? You haven't planed, measured, cut, or jointed anything; you've only pressed "make me a table".


And I wouldn't claim I was a woodworker. But I might be a "furniture designer", if I can adjust the parameters of what kind of table it plops out.

I'm in the second camp.

Part of it is that the whole point of going into this industry is that I love coding and have been doing it since I was 8. Part of it is that I'm a control freak, and it makes me uncomfortable to have to trust AI-generated code. Sure, I already trust interpreters and compilers, but those are much more deterministic, and they don't generally do anything I have to be wary of. Part of it is that anytime I've used Claude to write stuff (using Opus 4.7 via an API key), I've had to handhold it through simple things (telling it repeatedly that a given column doesn't exist in Snowflake's task history table and eventually just giving up and taking it out by hand) and had to remove tons of completely pointless Python code it generates.

The big difference is that the people in the first camp don't seem to care enough to check. Someone at my company used Claude to write 20k lines of code this past Friday. There is no way he read and scrutinized all of that in one day.

The other big thing I've noticed is that a lot of the people using it extensively seem to just be spitting out API endpoint after endpoint. Just doing endless CRUD with some light business logic. Yeah, it's not too hard to automate that with AI without any major issues. Hell, back when Ruby on Rails was hot, it was so fast to write those kinds of things with it that I could spin up things as fast as AI is doing now. Full websites or APIs in an hour or two because its syntactic sugar and scaffolding did what AI does with the FastAPI codebases I see these days. You could go from an ER diagram to a working app in minutes sometimes. I don't care that much if that kind of work is automated.


I was in the second camp until last summer, having been hand-writing code since 1979.

Wow! I never thought of this perspective. For most people, procedural is the first concept. If they ever start looking at APL, I would wager it's much more tenable than what you had to do!


For comparison, I heard that people who start with functional programming find it quite intuitive. The hard part isn't learning a new paradigm, but "unlearning" the old one.

(Also I hear they're more than a bit sad about how crude procedural programming is! But unfortunately I came at it the other way around, so my standards are permanently lowered ;)
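Not from the thread, but a minimal sketch (my own example, in Python) of the gap between the two paradigms: the same computation written first in the procedural style most of us learned first, then as a functional fold.

```python
from functools import reduce

nums = [1, 2, 3, 4, 5, 6]

# Procedural: mutate an accumulator step by step.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Functional: describe the result as a fold over a
# filtered/transformed stream, with no mutation.
total_fp = reduce(lambda acc, n: acc + n * n,
                  (n for n in nums if n % 2 == 0),
                  0)

assert total == total_fp == 56  # 4 + 16 + 36
```

Someone who starts with the second form never has to "unlearn" the accumulator habit; someone who starts with the first often finds the fold version alien at first.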


Similar shit happening in North Korea. Should the US go there next?

Regime change was NOT the goal, right? Wasn't that the party line?


No one goes after NK because they have nukes. That's exactly the situation the US/Israel are trying to prevent with Iran.


This is Saddam's WMDs all over again.


Regime change isn't the goal per se, but disarmament is. Angry mullahs without missiles and nukes are harmless.


Whoever told you this was lying to you. Trump released a statement on the first night of the war explicitly stating that regime change was the goal. Disarmament is the new goal he fabricated when the first one didn't work.


I don't think there's any point in digging into soil to implant the goal posts anymore, because they'll be moved in 6 hours. Best to just use a couple of shills to hold them up.


and N Korea is sidelined by the USA because N Korea does not have anything we 'want', i.e. oil, gold, silver, rare earths...


Is it?? I need to update my calibration then. What tipped you off?


I know we're supposed to assume good faith comments here on HN, but god damn...


It's like being a contender in the Jordan age, but this is arguably worse because of Carlson's longevity.


you couldn't have missed GP's point any more if you tried. ignoring the ad-hominems about SWE greed:

these tools have been trained on decades of people "obsessing over every last detail". what GP is arguing is that we're detaching from that: you prompt, you get something that works, it doesn't matter how it got there. we're now entering a world where the majority of code will be vibed. so whatever our foredevelopers came up with will be the final chapter of craftsman-produced, understood code. whatever the previous generation actually learned about software engineering, that's at an end too, because why bother learning when i can prompt.

there's no stopping this transition, obviously. the next generation of tools will be trained on the current generation of tools' generated code. we're past the "termination shock" of software understanding.


Oh I got it just fine. I was knocking their point that artisanal software will make a comeback.

Am an EE and have argued against all the developer gibberish and self aggrandizement for years. It's just electromagnetic geometry of the machine to me.

Most software out there is all the gibberish devs need to do their job. Clinging to it burns a lot of resources, and it's completely useless to how most users will actually use a computer.

Vectors as a uniform layer of abstraction, rather than arbitrary namespaces a programmer finds cheeky, will obsolete a bunch of gibberish.


What shit did he talk about the team's leader? "That project is going to fail" is talking shit? Nothing could be more objective than that.


Yuck. I don't know if it's just me, but something feels completely off about the GH issue tracker. I don't know if it's the spacing, the formatting, or what, but each time it feels like it's actively trying to shoo me away.

It's whatever the visual language equivalent of "low signal" is.


Still, GH issues are better than some random Discord server. The fact that forums got replaced by Discord for "support" is a net loss for humanity, as Discord is not searchable (to my knowledge). So instead of a forum where someone asks a question and you get n answers, you have to visit the Discord, talk to the Discord people, join a wave channel first, hope the people are there, hope the person who knows is online, and so on.


Yeah, I suspect that a lot of the decline represented in the OP's graph (starting around early 2020) is actually discord and that LLMs weren't much of a factor until ChatGPT 3.5 which launched in 2022.

LLMs have definitely accelerated Stackoverflow's demise though. No question about that. Also makes me wonder if discord has a licensing deal with any of the large LLM players. If they don't then I can't imagine that will last for long. It will eventually just become too lucrative for them to say no if it hasn't already.


Discord isn’t just used for tech support forums and discussions. There are loads of completely private communities on there. Discord opening up API access for LLM vendors to train on people’s private conversations is a gross violation of privacy. That would not go down well.


Veritasium is in a league of its own. Just take a look at their videos from the past year. The production value is second to none.

They have enough of a following now that they can dedicate 55 minutes to a topic and not worry about the algorithm, which usually dictates much shorter form factors.


This was the first of their videos that impressed me. Looking back, I have watched a few of their videos per year. Their previous videos tended to have much less content density and quality.

I really enjoyed the segments where they let ASML's (now former) CTO Martin van den Brink just talk.

