
The problem is that now, when your colleagues/contributors push out shitty code that you have to fix... you don't know if they even tried or just put in bad code from an LLM.


Yep, this is a hell I've come to know over the past year. You get people trying to build things they really have no business building, not without at least doing their own research and learning first, but the LLM in their editor lets them spit out something that looks convincing enough to get shipped, unless someone with domain experience looks at it more closely and realizes it's all wrong. It's not the same as bodged-together code from Stack Overflow because the LLM code is better at getting past the sniff test, and magical thinking around generative AI leads people to put undue trust in it. I don't think we've collectively appreciated the negative impact this is going to have on software quality going forward.


> you don't know if they even tried or just put in bad code from an LLM

You don't need to. Focus on the work, not the person.

I recall a coworker screaming at a junior developer during a code review, and I set up a meeting later on to discuss his reaction. Why was he so upset? Because the junior developer was being lazy and not trying.

How did he know this?

He didn't. He used very dubious behavioral signals.

This was pre-LLM. As applicable now as it was then. If their code is shitty, let them know and leave it at that.

On the flip side, I often review code that I think has been written by an LLM. When the code is good, I don't really care if he wrote it or asked GPT to write it. As long as he's functioning well enough for the current job.


That's a weird take. I am not blaming people for trying, I am blaming people for cheating. I have no problem with people being inexperienced, but if the number of seemingly inexperienced colleagues I have to deal with goes up, and it's because a growing fraction of them are cheating, it is natural to feel frustrated. Especially because you can't know which fraction tried their best and deserve patience and mentoring, and which fraction are lying and wasting your time.

I work for a university. I have literally caught student workers pasting LLM output into the code without reading it, because they did it during screen sharing. You can't teach somebody who does not want to learn (because they just want to pass the semester and have no incentive to try hard). Now it's just way easier to pretend to have put in effort.


> but if the amount of seemingly inexperienced colleagues I have to deal with goes up, and it's because a growing fraction of them are cheating, it is natural to feel frustrated.

It's also natural to ask why you think the growing fraction is because they cheated.

Look, I don't know how old you are, but the quality of the average developer has been dropping for decades. Prior to GPT, I routinely had to deal with developers who refused to learn anything (e.g. version control) and needed hand holding.[1] They didn't want to read man pages, etc. It's silly to suddenly get upset because LLMs are yet another reason for this phenomenon.

If there's a problem, it's hiring practices. Not LLMs.

> Especially because you can't know which fraction tried their best and deserve patience and mentoring, and which fraction are lying and wasting your time.

Given your comments, it's trivial to see why they would lie to you. They wouldn't to me, because I'm not going to discourage LLM usage.

> I work for a university.

Even before I read your comment, I was about to point out how your stance is valid in the domain of education, and not the workplace. And you just validated it :-)

The concept of cheating in education is vastly different from cheating in the workplace. The goal of teaching is to learn, and even more relevant, for teachers to evaluate how well someone is learning. If students are using LLMs (or calculators, for that matter), then that evaluation becomes harder. The goal is not to get something done, but to learn. That's why the school environment is very constrained, and very different.

The worst jobs I've had are when people failed to realize that they're now in a different environment and they continue to apply those principles where they don't make sense. The main goal at work is to get something done, regardless of the tools.[2]

If someone tries to put good code in, stop caring if it comes from an LLM.

If someone tries to put bad code in, stop caring if it comes from an LLM.

[1] And yes, today's LLMs do a better job than many crappy developers I've had to put up with. Had LLMs existed back then and those developers used them, I'd be a happier man.

[2] As long as you're not violating laws, copyright, licenses, patents, etc.


Does it matter where the bad code came from?


The pace with which ai slop can overwhelm the resources we use to review said code is pretty concerning.


Yeah, but you can teach your colleagues to get better. An LLM will give you the same crap every time.


You missed my point, there is no use teaching those who are cheating. And there is no use learning if you don't get called out.


Thanks for pointing that out. I missed that, indeed. That’s a stellar point.


> You missed my point, there is no use teaching those who are cheating.

It's not a valid point. Perhaps in your life you've never encountered people who cheat and later change their ways, but it's fairly common for them to do so.


The fact they might change their way does not erase the fact that I am wasting the time I spend mentoring cheaters, time I could spend mentoring non-cheaters.

Would you also say it's ok for people to steal because they might one day not-steal? So an increase in robberies does not hurt? I am not sure I understood you correctly.


> The fact they might change their way does not erase the fact that I am wasting the time I spend mentoring cheaters, time I could spend mentoring non-cheaters.

Several people have asked you, and you still haven't explained why it's OK to waste time with "non-cheaters" and not with "cheaters".

> Would you also say it's ok for people to steal because they might one day not-steal?

No. Unless stealing is permitted, which it usually isn't. LLMs, OTOH, are merely a tool, and their use has no morality attached to it. It's you who seems to attach some morality to LLM use.

(Of course, if your company disallows LLM usage, that's another story).

> I am not sure I understood you correctly.

Your comments seem to make the assumption that LLM based code is shittier than what a typical poor developer would write, which I pointed out in another comment is often not true.

Just treat the code based on its merit. If it's utter crap, tell the developer it's utter crap. Just stop second guessing whether he wrote it or an LLM did. There is a threshold below which even I won't review code.


Their comment is in response to the article. The article straight up advocates for starting from LLM generated code. This is a perfectly valid counterpoint.

Aside: it’s confusing that parent commenter used “cheating” in a completely different way than the title of the article.


I did yeah, apologies. Thanks for pointing that out.



