From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.
Programmers lose the power to command a huge salary writing software and to "bully" the non-technical people in the company.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
Don't get me wrong, I didn't say software devs are now useless. You still need software devs to actually make it work and connect everything together. That's why I still have a job and am still getting paid as a software dev.
I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
This is happening already and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is producing the right amount of code and understanding what is happening. This requires experience and taste. My prediction is that in the near future there will be piles of unmaintainable, AI-generated bloat that nobody understands, and the failure rate of software will go to the moon.
People have forgotten so many of the software engineering lessons that have been learned over the last four decades, just because now it’s a computer that can spit out large quantities of poorly-understood code instead of a person.
> The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.
I believe we only need to organize AI coding around testing. Once testing takes a central place in the process, it acts as your guarantee of app behavior. Instead of just "vibe following" the AI with our eyes, we could be automating the validation side.
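To make that concrete, here's a minimal sketch in Python with pytest. The parse_total_cents function and the invoice format are invented for illustration, not taken from any real project; the only point is that the human writes the behavioral contract first and the model's output is accepted only once it passes.

    # test_totals.py: the tests are written by a human before any AI-generated code exists.
    import pytest

    def parse_total_cents(line: str) -> int:
        """Candidate implementation: imagine this body came from the model."""
        amount = line.rsplit("$", 1)[1]
        cents = round(float(amount) * 100)
        if cents < 0:
            raise ValueError("negative total")
        return cents

    def test_parses_total_in_cents():
        assert parse_total_cents("ACME-001  Total: $12.50") == 1250

    def test_rejects_negative_totals():
        with pytest.raises(ValueError):
            parse_total_cents("ACME-002  Total: $-3.00")

With the tests as the gate, "did the model do the right thing" stops being a vibe check and becomes something CI can answer.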
He's mainly talking about environmental & social consequences now and in the future. He personally is beyond reach of such consequences given his seniority and age, so this speculative tangent is detracting from his main point, to put it charitably.
>He's mainly talking about environmental & social consequences
That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.
Maybe you're forgetting something but genAI does produce value. Subjective value, yes. But still value to others who can make use of them.
End of the day your current prosperity is made by advances in energy and technology. It would be disingenuous to deny that and to deny the freedom of others to progress in their field of study.
No, the point is that your speculations simply do not make sense for someone like Rob. He is not a random software engineer in some company and also he is retired.
I’m basing this purely on what he said, not who he is. I think that’s the best way to judge this thread. Regardless, I was accused of ad hominem and you want me to appeal to authority.
You've made baseless assumptions about his "true" feelings. If you did some basic research, you would have quickly realized that your speculations were way off. This is about context, not about authority.
I already said many times that I was reading between the lines and it was speculation.
You keep asking me to appeal to authority. No thanks.
It is what it is. To me, it's clear that he wants things to go back to how they were before ChatGPT, because that's the world he's familiar with and the world where he has the most power.
I don't. I just asked to do some research instead of indulging in wild speculation.
> because that’s the world he’s familiar with and that’s the world he has most power.
Again, just baseless speculation. Rob had a very prolific career, working on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?
Would you at least consider the possibility that his ethical concerns might be sincere?
An argument from authority is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument. The argument from authority is often considered a logical fallacy and obtaining knowledge in this way is fallible.
> Again, just baseless speculation. Rob had a very prolific career, working on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?
Clout? Historical importance? Feeling like people are forgetting him? If he didn't care about any of this, he wouldn't have a social media account.
I'm not saying that Rob is right because of his achievements. I'm only saying that your speculations in your original post are ridiculous considering Rob's career and personal situation.
> Clout? Historical importance? Feeling like people are forgetting him?
Even more speculation.
Just in case you are not aware: there are many people who really think that what the big AI companies are doing is unethical. Rob may be one of them.
Stop appealing to authority. Just argue about facts and what was said.
You also keep accusing me of speculation, but I already mentioned multiple times that it's speculation. I never said it's not speculation. It's you who can't make a coherent comeback argument, except to tell me to do research and then to respect him.
I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that, without really understanding what they're doing. As such, there will always be a need for humans who can actually read and understand code when a billion unforeseen consequences pop up from deploying code without oversight.
I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months' time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.
Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.
Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.
Except there’s a bug in this; what if you pass in a negative even number?
Depending on the language, you will either get an exception or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.
Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.
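The code being discussed isn't shown here, so purely to illustrate, here's a made-up Python stand-in with the same flavor of bug: every input a casual test suite is likely to try works fine, and only a negative even number trips it.

    import math

    def spacing(n: int) -> float:
        """Hypothetical helper, invented for this example."""
        if n % 2 == 0:
            # Breaks for n = -2, -4, ...: math.sqrt raises ValueError on a negative
            # argument (and (n // 2) ** 0.5 would silently return a complex number).
            return math.sqrt(n // 2)
        # Odd inputs, including negative ones, happen to be fine.
        return math.sqrt(abs(n) / 2)

    # A "thorough-feeling" test run that never notices the problem:
    assert spacing(0) == 0.0
    assert spacing(2) == 1.0
    assert spacing(9) == math.sqrt(4.5)
    assert spacing(-7) == math.sqrt(3.5)
    # spacing(-4)  # ValueError: the case nobody thought to write down.

The fix is exactly what the parent comment says: a guard clause up front, or an unsigned integer parameter in a language that has one.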
> I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, and because he had no insight into how the software was written.
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they were the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x the work, while the new screwdriver's effectiveness is nowhere close to the sales pitch: it produces fragile items or, at worst, more work. And then the workers are accused of complaining about screwdrivers because the screwdrivers could potentially replace them.
I'm a programmer, and am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. My distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn't heavily geared towards reinforcing various structural inequalities, I would be all for it. AI does not produce good code, and pretty much all the uses I've seen try to give people with power even more advantages and leverage over people without, so I remain against it.
There's still a lot of confusion over where AI is going to land. There's no doubt that it's helpful, much the same way spell checkers, IDEs, linters, Grammarly, etc., were.
But the current layoffs "because AI is taking over" are pure BS. There was overhiring during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing).
That correction is what's affecting salaries (and "power"), not AI.
When I see actual products produced by these "product managers who are writing detailed specs" that don't fall over and die at the first hurdle (see: every vibe-coded, outsourced, half-assed PoS on the planet), I will change my mind.
I keep reading bad sentiment towards software devs. Why exactly do they "bully" business people? If you ask someone outside of the tech sector who the biggest bullies are, it's the business people who will fire you if they can save a few cents.
Whenever someone writes this, I read deep-rooted insecurity and jealousy toward something they can't wrap their head around, and I genuinely question whether that person really writes software or just claims to for credibility.
Grandparent commenter seems to be someone who'd find it heartwarming to have a machine thank him with "deep gratitude".
Maybe evolution will select autistic humans as the fittest to survive living with AI, because the ones who find that email enraging will blow their brains out, out of frustration...
I realize you said "many" and not "all" but FWIW, I hate LLMs because:
1. My coworkers now submit PRs with absolutely insane code. When asked why they created that monstrosity, the answer is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
I was not here before Bitcoin. But wouldn't the "non-technical" founders also be the type that doesn't write code? And to them, fixing the "easy" part is very tempting...
People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.
I'm at a Big Tech company and our org has its sights set on automating product manager work. Idea generation grounded in business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?
There's nothing about the singularity which would guarantee that humans enjoy life and live forever. That would be the super optimistic, highly speculative scenario. Of course the singularity itself remains a speculative scenario, unless one wants to argue the industrial and computer revolutions already ushered in their own singularities.
Many people have pointed out that if AI gets better at writing code and doesn't generate slop, then programmers' roles will evolve into something closer to a project manager's. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.
Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was still a junior and I was neither unique nor special. Any idiot could always go to a book store and buy a book on C++ or JavaScript and write software to build something interesting. High-school me was one such idiot.
"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.
It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason most of those demos weren't immediately put onto store shelves without revision.
This is very true. And the same goes for the recently passed era of googling, copying and pasting, and gluing together something that works: the easy 80% of turning specs into code.
Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.
With LLMs, the speed of development is increasing, but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control, and while dealing with shrinking staff caused by layoffs prompted either by the 2020-22 overhiring or simply by peacocking from CEOs who want to demonstrate their company's AI prowess by reducing headcount.
At least in my company, none of this has actually increased revenue.
So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.
Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI. I think a lot of companies cannot actually justify having engineering teams as large as they do, but they have to have all these engineers because OtherBigCo has a lot of engineers, and if OtherBigCo has them, then they must be important.
Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.
> Programmers lose the power to command a huge salary writing software and to "bully" the non-technical people in the company.
> Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
> Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software