
From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.

For programmers, that means losing the power to command a huge salary writing software and to "bully" the non-technical people in the company.

Traditional programmers are no longer some of the highest paid tech people around; it's AI engineers/researchers. Obviously many software devs can transition into AI devs, but it involves learning, starting from the bottom, etc. For older, entrenched programmers, it's not always easy to move away from something they're familiar with.

Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.

When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

/signed as someone who writes software





> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!

The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.

What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.

The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.


Don't get me wrong, I didn't say software devs are now useless. You still need software devs to actually make it work and connect everything together. That's why I still have a job and am still getting paid as a software dev.

I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.


This is happening already and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is producing the right amount of code and understanding what is happening. This requires experience and taste. My prediction is that in the near future there will be piles of unmaintainable, bloated AI-generated code that nobody understands, and the failure rate of software will go to the moon.

People have forgotten so many of the software engineering lessons that have been learned over the last four decades, just because now it’s a computer that can spit out large quantities of poorly-understood code instead of a person.

> The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.

I believe we only need to organize AI coding around testing. Once testing takes a central place in the process, it acts as your guarantee of app behavior. Instead of just "vibe following" the AI with our eyes, we could be automating the validation side.
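
For instance, a minimal sketch of what I mean, in pytest-style Python (the module and function names are purely hypothetical): a human writes the behavioral contract as tests up front, and the AI-generated implementation is only accepted once the whole suite passes, so validation is automated instead of eyeballed.

  import pytest
  from invoicing import total_with_tax  # hypothetical AI-generated module

  # The human-authored contract: these tests exist before any implementation.
  def test_applies_standard_tax_rate():
      assert total_with_tax(100.0, rate=0.2) == pytest.approx(120.0)

  def test_zero_amount_stays_zero():
      assert total_with_tax(0.0, rate=0.2) == 0.0

  def test_negative_amounts_are_rejected():
      with pytest.raises(ValueError):
          total_with_tax(-5.0, rate=0.2)

  # The loop: prompt the AI, run pytest, and only merge when everything is green.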


He's mainly talking about environmental & social consequences now and in the future. He personally is beyond reach of such consequences given his seniority and age, so this speculative tangent is detracting from his main point, to put it charitably.

>He's mainly talking about environmental & social consequences

That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.


The issue with this line of argumentation is that unlike gen AI, all of the things you listed produce actual value.

Maybe you're forgetting something, but genAI does produce value. Subjective value, yes, but still value to others who can make use of it.

At the end of the day, your current prosperity is built on advances in energy and technology. It would be disingenuous to deny that, and to deny others the freedom to progress in their field of study.


Just because somebody believes Gen AI produces value doesn't make it true.

You definitely didn't read what I said. It is subjective value; it will be true to some.

> Then why not stop driving

You mean we should all drive, oh I don't know, electric-powered cars?


[flagged]


> You criticize society and yet you participate in it. How curious.

I didn't criticize society though?

Ah... the old "all or nothing" fallacy, which in this case quickly leads to "save the planet, kill yourself". We need more nuance.

What is the nuance? Let us know.

What are his hobbies? Let’s pit them against training LLMs in value and pollution rate.


No, you're just deflecting his points with an ad hominem argument. Stop pretending to know what he 'truly feels'.

I don't even know who Rob Pike is to be honest. I'm not attacking him.

I'm not pretending to know how he feels. I'm just reading between the lines and speculating.


Maybe you should do some basic research instead of speculating. Rob Pike is not just some random software developer who might worry about his job.

I was just accused of ad hominem. Now you want me to get accused of appeal to authority?

No, the point is that your speculations simply do not make sense for someone like Rob. He is not a random software engineer at some company, and he is also retired.

I’m basing this purely on what he said, not who he is. I think that’s the best way to judge this thread. Regardless, I was accused of ad hominem and you want me to appeal to authority.

Sometimes HN is weird.


You've made baseless assumptions about his "true" feelings. If you did some basic research, you would have quickly realized that your speculations were way off. This is about context, not about authority.

I already said many times that I was reading between the lines and it was speculation.

You keep asking me to appeal to authority. No thanks.

It is what it is. To me, it's clear that he wants things to go back to how they were pre-ChatGPT, because that's the world he's familiar with and the world where he has the most power.

Otherwise, he wouldn’t make such idiotic claims.


> You keep asking me to appeal to authority.

I don't. I just asked to do some research instead of indulging in wild speculation.

> because that’s the world he’s familiar with and that’s the world he has most power.

Again, just baseless speculation. Rob had a very prolific career where he worked on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?

Would you at least consider the possibility that his ethical concerns might be sincere?


  I don't. I just asked to do some research instead of indulging in wild speculation.
You are. https://en.wikipedia.org/wiki/Argument_from_authority

  An argument from authority[a] is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument.[1] The argument from authority is often considered a logical fallacy[2] and obtaining knowledge in this way is fallible.[3][4]

  Again, just baseless speculation. Rob had a very prolific career where he worked on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?
Clout? Historical importance? Feeling like people are forgetting him? If he didn't care about any of this, he wouldn't have a social media account.

I'm not saying that Rob is right because of his achievements. I'm only saying that your speculations in your original post are ridiculous considering Rob's career and personal situation.

> Clout? Historical importance? Feeling like people are forgetting him?

Even more speculation.

Just in case you are not aware: there are many people who really think that what the big AI companies are doing is unethical. Rob may be one of them.


Stop appealing to authority. Just argue about facts and what was said.

You also keep accusing me of speculation, but I already mentioned multiple times that it's speculation. I never said it's not speculation. It's you who can't make a coherent comeback argument, except to tell me to do research and then respect him.


It's you who didn't look up some facts before posting. Please read your original post and then tell me how it possibly relates to Rob's situation.

> You also keep accusing me of speculation but I already mentioned multiple times that it’s speculation.

Yes, you mentioned it yourself, but you don't seem to understand the problem with it.


You mean you can't criticize certain parts of society unless you live like a hermit?

> Obviously, it's just what I'm seeing.

Have you considered that this may just be a rationalization on your part?


I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really understanding what they're doing. As such, there will always be a need for humans who can actually read and understand code when a billion unforeseen consequences pop up from deploying code without oversight.

I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months' time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.

Yeah; I saw a 29,000-line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.

Prior to generative AI I was (correctly) criticized once for making a 2,000-line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.


That’s the fault of the human who used the LLM to write the code and didn’t test it properly.

Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.

An example I use to talk about hidden edge cases:

Imagine we have this (pseudo)code

  fn doSomething(num : int) {
    if num % 2 == 0 {
      return Math.sqrt(num)
    } else {
      return Math.pow(num, 2)
    }
  }
Someone might see this function, and unit test it based on the if statement like:

    assert(doSomething(4) == 2)
    assert(doSomething(3) == 9)
These tests pass, it’s merged.

Except there’s a bug in this; what if you pass in a negative even number?

Depending on the language, you will either get an exception or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply, just make the type an unsigned integer.
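
For what it's worth, a guarded version might look something like this (a rough Python sketch of the pseudocode above, plus the test the happy-path suite was missing; not meant as the one true fix):

  import math
  import pytest

  def do_something(num: int) -> float:
      # Guard the hidden edge case: a negative even number would otherwise
      # reach math.sqrt and raise (or yield a complex result in some languages).
      if num < 0:
          raise ValueError("num must be non-negative")
      if num % 2 == 0:
          return math.sqrt(num)
      return float(num ** 2)

  def test_rejects_negative_even_numbers():
      with pytest.raises(ValueError):
          do_something(-4)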

Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.


> I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don't leave and because he doesn't have insights into how the software is written.

It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.

The new screwdriver salesmen are selling them as if they are the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x the work, while the new screwdriver's effectiveness is nowhere close to the sales pitch; at worst it creates fragile items or more work. And then people accuse the workers of complaining about screwdrivers only because the screwdrivers can potentially replace them.


I really think it's entirely wrong to label someone as a bully for not conforming to current, perhaps bad, practices.

I'm a programmer, and am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. My distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn't heavily geared towards reinforcing various structural inequalities, I would be all for it. AI does not produce good code, and pretty much all the uses I've seen are trying to give people with power even more advantage and leverage over people without, so I remain against it.

There's still a lot of confusion about where AI is going to land. There's no doubt that it's helpful, much the same way spell checkers, IDEs, linters, Grammarly, etc., were.

But the current layoffs "because AI is taking over" are pure BS. There was overhiring during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing).

That correction is what's affecting salaries (and "power"), not AI.

/signed someone actually interested in AI and SWE


When I see actual products produced by these "product managers who are writing detailed specs" that don't fall over and die at the first hurdle (see: every vibe-coded, outsourced, half-assed PoS on the planet), I will change my mind.

Until then "Computer says No"


If you don't bend your knee to a "king", you are a bully? What sort of messed up thinking is that?

I keep reading bad sentiment towards software devs. Why exactly do they "bully" business people? If you ask someone outside of the tech sector who the biggest bullies are, it's the business people who will fire you if they can save a few cents. Whenever someone writes this, I read deep-rooted insecurity and jealousy about something they can't wrap their head around, and I genuinely question whether that person really writes software or just claims to for credibility.

I understand that you are writing your general opinion, but I have a feeling Rob Pike's feelings go a little bit deeper than this.

Grandparent commenter seems to be someone who'd find it heartwarming to have a machine thank him with "deep gratitude".

Maybe evolution will select autistic humans as the fittest to survive living with AI, because the ones who find that email enraging will blow their brains out, out of frustration...


I realize you said "many" and not "all" but FWIW, I hate LLMs because:

1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".

2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.

3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.

4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.

5. And then there are the impacts to society:

5a. OpenAI just made every computer for the next couple of years significantly more expensive.

5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.

5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).

5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?

5e. Astroturfing is becoming significantly cheaper and widespread.

/signed as I also write software, as I assume almost everyone on this forum does.


After bitcoin this site is full of people who don't write code.

I was not here before bitcoin. But wouldn't the "non-technical" founders also be the type who don't write code? And to them, fixing the "easy" part is very tempting...

People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.

I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or the planet. Or if it's used to generate a bunch of shit code that makes the state of software even worse than it is today.


And this is different from outsourcing the work to India for programmers who work for $6000 a year in what way exactly?

You can go back to the 1960s and COBOL was making the exact same claims as Gen AI today.


> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI

GenAI is also better at analyzing telemetry, designing features and prioritizing issues than a human product manager.

Nobody is really safe.


I'm at a big tech company and our org has its sights set on automating product manager work. Idea generation grounded in business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.

Agreed.

Hence, I'm heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.


> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?


You're absolutely right.

But no one is safe. Soon the AI will be better at CEOing.


That's the singularity you're talking about. AI takes every role humans can do and humans just enjoy life and live forever.

There's nothing about the singularity which would guarantee that humans enjoy life and live forever. That would be the super optimistic, highly speculative scenario. Of course the singularity itself remains a speculative scenario, unless one wants to argue the industrial and computer revolutions already ushered in their own singularities.

Nah, they will fine-tune a local LLM to replace the board and be always loyal to the CEO.

Elon is way ahead, he did it with mere meatbags.


Don't worry I'm sure they'll find ways to say their jobs can only be done by humans. Even the Pope is denouncing AI in fear that it'll replace god.

CEOs and the C-suite in general are closest to the money. They are the safest.

That is pretty much the only metric that matters in the end.


Honestly middle management is going to go extinct before the engineers do

Why, more psychopathic than Musk?

What does any of this have to do with what Rob has written?

Nope, and I wholeheartedly share Pike's disgust for these companies, especially for what they are doing to the planet.

Many people have pointed out that if AI gets better at writing code and doesn't generate slop, then programmers' roles will evolve into something like that of a project manager. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.

Very true... AI engineers are earning $100mn; I doubt Rob Pike ever earned that. Maybe $10mn.

This is the reality, and it has started happening at a faster pace. A junior engineer is able to produce something interesting faster, without too much attitude.

Everybody in the company envies the developers and the respect they get, especially the sales people.

The golden era of devs as kings has started crumbling.


Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was still a junior and I was neither unique nor special. Any idiot could always go to a book store and buy a book on C++ or JavaScript and write software to build something interesting. High-school me was one such idiot.

"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.

It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly, and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason that most of those demos weren't immediately put onto store shelves without revision.


This is very true. And similarly for the recently-passed era of googling, copying and pasting, and gluing together something that works: the easy 80% of turning specs into code.

Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.

With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company's AI prowess by reducing staff.

At least in my company, none of this has actually increased revenue.

So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.


Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify having engineering teams as large as they do, but they have to have all these engineers because OtherBigCo has a lot of engineers, and if OtherBigCo has that many then it must be important.

Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.




