
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation

Red flag. In other words, you don’t understand the implementation well enough to know whether the AI has done a good job. So the work you have committed may work, or it may have subtle artefacts/bugs that you’re not aware of, because doing the job properly isn’t of interest to you.

This is ‘phoning it in’, not professional software engineering.



Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform, and are probably going to insert even more footguns than the AI.

At least when the AI does it you can review it.


No, you cannot. Without understanding the technology, at best you can "vibe-review" it, and determine that it "kinda sorta looks like it's doing what it's supposed to do, maybe?".


> Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform

Which is why you spend time upfront becoming familiar with whatever it is you need to implement. Otherwise it’s just programming by coincidence [1], which is how amateurs write code.

> and are probably going to even insert more footguns than the AI.

Very unlikely. If I spend time understanding a domain then I tend to make fewer errors when working within that domain.

> At least when the AI does it you can review it.

You can’t review something you don’t understand.

[1] https://dev.to/decoeur_/programming-by-coincidence-dont-do-i...


> Learning an unfamiliar aspect and doing it by hand will have the same issues.

I don't think so. We gain proficiency by doing, not by reading.

If all you are doing is reading, you are not gaining much.


It sounds like you've never worked a job where you aren't just supporting 1 product that you built yourself. Fix the bug and move on. I do not have the time or resources to understand it fully. It's a 20 year old app full of business logic and MS changed something in their API. I do not need to understand the full stack. I need to understand the bug and how to fix it. My boss wants it fixed yesterday. So I fix it and move onto the next task. Some of us have to wear many hats.


> It sounds like you've never worked a job where you aren't just supporting 1 product that you built yourself

In my 40 years of writing code, I’ve worked on many different code bases and in many different organisations. And I never changed a line of code, deleted code, or added more code unless I could run it in my head and ‘know’ (to the extent that it’s possible) what it will do and how it will interact with the rest of the project. That’s the job.

I’m not against using AI. I use it myself, but if you don’t understand the scope fully, then you can’t possibly validate what the AI is spitting out, you can only hope that it has not fucked up.

Even using AI to write tests will fall short if you can’t tell if the tests are good enough.

For now we still need to be experts. The day we don’t need experts, the LLMs should start writing in machine code, not human-readable languages.

> I do not need to understand the full stack.

Nobody said that. It’s important to understand the scope of the change. Knowing more may well improve decision making, but pragmatism is of course important.

Not understanding the thing you’re changing isn’t pragmatism.


Either you're a true 100x coder who can get a full understanding of every single project and every effect it will have through the full end to end stack.

Or you were never under time pressure and always had enough time to do it.

Either way, I'm jealous of you. For me it's "here's code that Bob wrote 10 years ago, it's not working. Customers are complaining and this needs to be fixed yesterday".

"Sorry I need to understand what it will do and how it will interact with the rest of the project, that'll take a few days and I can't fix it before that" wasn't an option. You fix the immediate issue, run whatever tests it may have and throw it to QA for release approval.

Most likely the fix will work and nobody has to touch that bit in a few years. Should we spend time to understand it fully and document it, add proper and comprehensive tests? Yep. But the bosses will never approve the expense.

If I'd had an AI agent at that point, it could "understand" the codebase in minutes and give me clues as to the blast radius of the possible fix.


> Either you're a true 100x coder who can get a full understanding of every single project and every effect it will have through the full end to end stack.

It's hard to state how good I am without sounding like an arsehole, so here goes... I am certainly a very experienced engineer. I've coded from the age of 10, and now at 50 I'm 'retired' after selling the company that I founded. I started in the 8-bit era doing low-level, to-the-metal coding and ended up building an internationally used healthcare SaaS app (with a smattering of games engineering in between). I've been a technical proof-reader for two Manning books, have at least one popular open-source project, and I still write code for fun and am working on my next idea around data sovereignty in my now infinite free time... so yeah, I'm decent, and I feel like I've gained enough experience to have an opinion on this.

But also you're not reading what I wrote. I never said "a full understanding of every single project and every effect it will have through the full end to end stack", which I explicitly dealt with in my last reply, when I said: "It’s important to understand the scope of the change. Knowing more may well improve decision making, but pragmatism is of course important."

If the scope is small, you don't need "a full understanding of every single project and every effect it will have through the full end to end stack". But in terms of what it does touch, yeah, you should know it, especially if you want to become a better software engineer, and not just an engineer with the same one year's worth of experience repeated 30 times.

It should also not take "a few days" to investigate the scope. If it's taking you that long then you're not exercising the capability that allows you to navigate around unfamiliar code and understand what it's doing. That knowledge accumulates too, so unless you're working on a completely different project every single day, you're going to get quicker and quicker.

I have seen pathological cases where a dev that worked for me went so far down the rabbit hole that he got nothing done, so it has to be a pragmatic process of discovery. It should entirely depend on the extent to which your change could leak out into other areas of the project. For example, if you had a reusable library that had some core functionality that is used throughout the project and you wanted to change some of its core behaviour, then I'd want to find all of the usages of that library to understand how that change will affect the behaviour (if at all). But equally, if I was updating a UI page or control that has limited tentacles throughout the app, then I'd be quite comfortable not doing a deep dive.

> "here's code that Bob wrote 10 years ago, it's not working. Customers are complaining and this needs to be fixed yesterday".

I've been in that exact situation. You need to make a decision about your career. Are you just going to half-arse the job, or are you going to get better? If you think continuing as you are is good for your career, because you've made your idiot boss happy for 5 minutes before they give you the next unreasonable deadline, then you're wrong.

The fact is the approach you're taking is slower. It's slower because you and the team of engineers you're in (assuming everyone takes the same approach) are accumulating bugs, technical debt, and are not building institutional knowledge. When those bugs need dealing with in the future, or that technical debt causes the application to slow to a crawl, or have some customer-affecting side-effects, then you're going to waste time solving those issues and you're sure as hell gonna want the institutional knowledge to resolve those problems. AI doesn't "understand" in the way you're implying. If it did understand then we wouldn't be needed at all.

> Most likely the fix will work and nobody has to touch that bit in a few years. Should we spend time to understand it fully and document it, add proper and comprehensive tests? Yep. But the bosses will never approve the expense.

So you work for a terrible boss. That doesn't make my argument wrong, that makes your boss wrong. You can obviously see the problem, but instead of doing something about it, you're arguing against good software development methodology. That's odd. You should take it up with your boss.

The best engineers I have worked with in my career were the ones that fully understood the code base they were working on.


>Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job.

Red flag again! If your protection is to "understand the implementation", that means buggy code. What makes code worthy of trust is passing tests: well-designed tests that cover the angles. LGTM is vibe testing.

I'd go as far as saying it does not matter whether the code was written by a human who understands it or not; what matters is how well it is tested. Vibe testing is the problem, not vibe coding.
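To make the "well-designed tests that cover the angles" point concrete, here is a toy sketch (names and example are hypothetical, not from the thread): a happy-path assertion that a naive implementation would also pass, versus a small suite with one case per branch of the rule.

```python
def leap_year(year: int) -> bool:
    """Gregorian leap-year rule: divisible by 4, except centuries
    unless also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A "vibe test": one happy-path case. A broken `year % 4 == 0`
# implementation would pass this just as easily.
assert leap_year(2024)

# Designed tests: one case per branch of the rule.
assert leap_year(2000)       # divisible by 400 -> leap
assert not leap_year(1900)   # century, not divisible by 400 -> not leap
assert not leap_year(2023)   # not divisible by 4 -> not leap
assert leap_year(2024)       # divisible by 4, not a century -> leap
```

The first assertion alone tells you almost nothing; the last four pin down every clause of the specification.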


> What makes code worthy of trust is passing tests

(Sorry, but you set yourself up for this one, my apologies.)

Oh, so this post describes "worthy code", okay then.

https://news.ycombinator.com/item?id=18442941

Tests are not a panacea. They don't care about anything other than what you test. If you have no way of testing for maintainability and readability, only that it "works", you end up like the product in that post.

Ultimate example: Biology (and everything related, like physiology, anatomy), where the test is similarly limited to "does it produce children that can survive". It is a huuuuuge mess, and trying to change any one thing always messes up things elsewhere in unexpected and hard or impossible to solve ways. It's genius, it works, it sells - and trying to deliberately change anything is a huge PITA because everything is interconnected and there is no clean design anywhere. You manage to change some single gene to change some very minor behavior, suddenly the ear shape changes and fur color and eye sight and digestion and disease resistance, stuff like that.


I wonder if, for a large class of jobs, simple unit tests will be enough of a target for the LLM output to match. Test-driven delegation, in a way. That said, I share the same worries as you. The fact that the LLM can wire up multiple files/classes/libs in a few seconds to pass your tests doesn't guarantee a good design. And the people who love vibe coding the most are the ones who never valued design in the first place, just quick results.


> If your protection is to "understand the implementation", that means buggy code.

Hilarious. Understanding the code is literally the most important thing. If you don't understand the code then you can't understand any unit tests you write either. How could you possibly claim test coverage for something you don't understand?

I suspect you primarily develop code in dynamic languages, where you're reinventing type systems day in, day out to test your code. Personally, I try to minimise the need for unit tests by using well-defined types and constraints. The type system is a much better unit-tester than any human with a poor understanding of the code.
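One way to read "types and constraints instead of unit tests" is the parse-don't-validate pattern. A hypothetical Python sketch (Python can only enforce the invariant at construction time rather than at compile time, but the shape is the same): a constrained type checks its invariant once at the boundary, so downstream code needs neither defensive checks nor per-call-site tests.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percentage:
    """A value constrained to [0, 100], checked once at construction.
    Every function that accepts a Percentage can skip range checks."""
    value: float

    def __post_init__(self):
        if not 0.0 <= self.value <= 100.0:
            raise ValueError(f"percentage out of range: {self.value}")

def apply_discount(price: float, discount: Percentage) -> float:
    # No defensive check needed: the type guarantees the invariant.
    return price * (1 - discount.value / 100)

assert apply_discount(200.0, Percentage(25)) == 150.0

try:
    Percentage(120)       # invalid state rejected at the boundary, once
except ValueError:
    pass
```

In a statically typed language the compiler would reject many invalid states before the program even runs, which is presumably what the commenter means by the type system doing the unit-tester's job.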



