
I have a feeling that people who got bogged down in step 3 were the kind of people who write a lot of wordy corporate boilerplate with multiple levels of abstraction for every single thing. AKA "best practices" type coding.

For me the most important part of a project is working out the data structures and how they are accessed. That's where the rubber meets the road, and is something that AI struggles with. It requires a bit too high a level of abstract thinking and whole problem conceptualization for existing LLMs. Once the data structures are set the coding is easy.



> Once the data structures are set the coding is easy.

I don't always find this, because there's a lot of "inside baseball" and accidental complexity in modern frameworks and languages. AI assist has been very helpful for me.

I'm fairly polyglot and do maintenance on a lot of codebases. I'm comfortable with several languages and have been programming for 20 years but drop me in say, a Java Spring codebase and I can get the job done but I'm slow. Similarly, I'm fast with TypeScript/CDK or Terraform but slow with cfndsl because I skipped learning Ruby because I already knew Python. I know Javascript and the DOM and the principles of React but mostly I'm backend. So it hurts to dive into a React project X versions behind current and try to freshen it up because in practice you need reasonably deep knowledge of not just version X of these projects but also an understanding of how they have evolved over time.

So I'm often in a situation where I know exactly what I want to do, but I don't know the idiomatic way to do it in a particular language or framework. I find for Java in particular there is enormous surface area and lots of baggage that has accumulated over the years which experienced Java devs know but I don't, e.g. all the gotchas when you upgrade from Spring 2.x to 3.x, or what versions of ByteBuddy work with blah maven plugin, etc.

I used to often experience something like a 2x or 3x hit vs a specialised dev but with AI I am delivering close to parity for routine work. For complex stuff I would still try to pair with an expert.


This matches my experience. In practice there's just a lot of stuff (libraries, function names, arguments that go in, library implementation details etc.) you need to remember for most of the programming I do day to day, and AI tools help with recalling all that stuff without having to break out of your editor to go and check docs.

For me this becomes more and more relevant as I go into languages and frameworks I'm not familiar with.

Having said that, you do need to be vigilant. LLMs seem to love generating code that contains injection vulnerabilities. It makes you wonder about the quality of the code they've been trained on...
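A minimal sketch of the pattern to watch for, using Python's sqlite3 (the table and values are made up for illustration):

```python
import sqlite3

# Toy schema and values, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice"

# Injection-prone: the value is interpolated into the SQL text,
# so a crafted `name` like "' OR '1'='1" rewrites the query.
unsafe_query = f"SELECT role FROM users WHERE name = '{name}'"

# Safe: a parameterized query keeps the value out of the SQL text.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchone()
assert row == ("admin",)
```

LLM output tends toward the first form; it usually takes an explicit prompt (or a review pass) to get the second.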


> I don't always find this, because there's a lot of "inside baseball" and accidental complexity in modern frameworks and languages. AI assist has been very helpful for me.

My use of esoteric C++ has exploded. Good thing I will have even better models to help me read my code next week.

The much lowered bar to expanding one’s toolkit is certainly noticeable, across all forms of tool expansion.


Can't express it more clearly than this. Data structures are just one part of the story, though, not the only spot where the rubber meets the road, IMO. But going back to the top of the thread: for new projects it is indeed steps 1 and 2 that consume most of the time, not step 3.


ByteBuddy is atrocious.

>In October 2015, Byte Buddy was distinguished with a Duke's Choice award by Oracle. The award appreciates Byte Buddy for its "tremendous amount of innovation in Java Technology". We feel very honored for having received this award and want to thank all users and everybody else who helped making Byte Buddy the success it has become. We really appreciate it!

Don't misread me. It's solid software, and an instance of a well-structured object-oriented code base.

But it's impossible to do anything without a deep and wide understanding of the class hierarchy (which is just as deep and wide). Out of 1475 issues on the project's GitHub page, 1058 are labelled as questions. You can't just start with a few simple bricks and gradually learn the framework. The learning curve is super steep from the get-go; all of the complexity is thrown in your face as soon as you enter the room.

This is the kind of space where an LLM would shine.


> I have a feeling that people who got bogged down in step 3 were the kind of people who write a lot of wordy corporate boilerplate with multiple levels of abstraction for every single thing. AKA "best practices" type coding.

Or they're the kind of people who rushed to step 3 too fast, substantially skipping steps 1 and/or 2 (more often step 2). I've worked with a lot of people like that.


You mean, move fast and break things? This was usually seen as a good thing in a certain culture. Maybe the whole current discussion (here and everywhere) is the two cultures clashing?


> You mean, move fast and break things? This was usually seen as a good thing in a certain culture.

I mean "I don't know what I'm doing, but gotta start now." If that's "move fast and break things," it's even dumber than I thought.


It also often requires knowledge the LLM doesn't contain: the internal historical knowledge of a long-running business. Many businesses have a "person", an oracle of sorts, without whose input you would never be able to deliver a good outcome. Their head is full of years of business operations history and knowledge unique to that business.


While probably not useful for everyone, the best method for myself actually leverages that.

I am using a modified form of TDD's red/green refactor, specifically with an LLM interface independent of my IDE.

While I err on the side of good code over prompt engineering, I use the need to submit a prompt to refine both the ADT and the domain tests; after creating a draft of those, I submit them to the local LLM and continue on with my own code.

If I finish first, I quickly review the output to see whether it produced simpler code or whether my domain tests or ADT are problematic. For me this avoids rat holes and head-of-line blocking.

If the LLM finishes first, I approach the output as a code base needing a full refactor, keeping myself engaged with the code.

The produced code is rarely "production ready", and it tends to struggle precisely where I haven't done my job.
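A minimal sketch of the "red" artifact in that loop, with a hypothetical Money ADT (all names invented for illustration): the ADT contract and domain tests are drafted first, then handed to the LLM while I implement in parallel.

```python
from dataclasses import dataclass

# Hypothetical ADT stub, just enough to make the tests runnable.
# In the workflow, this signature plus the tests below are the
# draft submitted to the LLM before either implementation exists.
@dataclass(frozen=True)
class Money:
    cents: int
    currency: str

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.cents + other.cents, self.currency)

def test_add_same_currency():
    assert Money(100, "USD") + Money(50, "USD") == Money(150, "USD")

def test_mixed_currency_rejected():
    try:
        Money(100, "USD") + Money(50, "EUR")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_add_same_currency()
test_mixed_currency_rejected()
```

Comparing the LLM's implementation of the same contract against your own is where the "pair programming" benefit shows up.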

You get some of the benefits of pair programming without the risk of demoralizing some poor Jr.

But yes, tradeoff analysis and choosing the least-worst option is the part that LLMs/LRMs will never be able to do, IMHO.

Horses for courses, and nuance: "best practices" are nothing more than reasonable defaults to adjust for real-world needs.


Same. The amount of work I have to put into thinking about what to say to the LLM is as much as or more than just telling the compiler or interpreter what I want (in a language I'm familiar with), and the actual coding is the trivial part. In fact I get instant feedback with the code, which helps change my thinking. With the LLM there's an awkward cycle of translating for it, getting the code, checking that it might do what I want, and then still having to run it and find the bugs.

The balance only shifts with a language/framework I'm not familiar with.


I think it’s useful to think of LLMs as performing translation from natural language to a programming language. If you already speak the programming language fluently, why do you need a translator?


> If you already speak the programming language fluently, why do you need a translator?

And if you don't speak the language, please spare us your LLM-generated vibe-coding nonsense.


The only way to learn a programming language (beyond the basics) is to use it, gain familiarity, and see code that others wrote in it. Assuming you don't just spin the LLM wheel until you get lucky with something that works, it's a valid strategy for learning a language while also producing working (though imperfect) code.


> The only way to learn a programming language (beyond the basics) is to use it

I don't quite agree.

This may seem like splitting hairs, but I think the only way to learn a programming language is to write it.

I don't think any amount of reading and fixing LLM code is sufficient to learn how to code yourself.

Writing code from scratch is a different skill.


> spin the LLM wheel until you get lucky with something that works

Isn't that exactly what "vibe coding" is supposed to be?

(BRB, injecting code vulnerabilities into my state actor LLM.)


I agree for method-level changes, but the more you’re willing to cede control for larger changes, even in a familiar language, the more an LLM accelerates you.

For me, I give Gemini the full context of my repo, tell it the sweeping changes I want to make, and let it do the zero to one planning step. Then I modify (mostly prune) the output and let Cursor get to work.


> For me, I give Gemini the full context of my repo, tell it the sweeping changes I want to make

If the full context of your repo (which I assume means more or less its entire git history, since that is what you usually need for sweeping changes) fits into Gemini's context window, you're working on a very small repo, so your problems are easy to solve, and LLMs are OK at solving small, easy problems. Wait till you get to more than a few thousand lines of code and more than two years of Git history, and then see if this strategy still works well for you.


> I agree for method-level changes, but the more you’re willing to cede control for larger changes, even in a familiar language, the more an LLM accelerates you.

Another way to phrase this is:

  I agree for method-level changes, but the more you’re 
  willing to cede *understanding* for larger changes, even in 
  a familiar language, the more an LLM accelerates you *to an 
  opaque change set*.
Without understanding, the probability of a code generation tool introducing significant defects approaches 1.


Enterprise code with layers of abstraction isn't best practice. It's enterprise code.


I would imagine that's why they had "best practices" in quotes. Lots of enterprisey things get pushed as a "good practice" to improve reuse (of things that will never be reused) or extensibility (of things that will never be extended) or modularity (of things that will never be separated).


Enterprise development has particular problems you won't find in other environments, for instance having hundreds of different developers with widely varying levels of skill and talent, all collaborating together, often under immense time and budget pressure.

The result ain't going to be what you get from a focused group of 10x geniuses working on everything, but I think a lot of the aspects of "enterprise development" that people complain about are simply the result of making the best of a bad situation.

I like Java, because I've worked with people who will fuck up repeatedly without static type checking.


I can attest to that and see it as the reason why Angular is still so popular in the enterprise world - it has such a strong convention that no matter the rate of staff rotation the team can keep delivering.

Meanwhile no two React projects are the same because they typically have several dependencies, each solving a small part of the problem at hand.


> for instance having hundreds of different developers with widely varying levels of skill and talent

That's a management problem, meaning you assess that risk and try to mitigate it. A good solution, as you say, is a language with good type-checking support. Another is code familiarity and reuse through frameworks and libraries. A third may be enforcing written tests to speed up code review (and checklist rules like that).

It's going to be boring, but boring is good at that scale.


Mindless repetition of something you've internalized and never think about and never get any better at is "Best Reflex" not "Best Practice".


Nah, these are the people who don't know the difference between a variable and a function call and who think FizzBuzz is a "leetcode interview problem".


I hate it when variables won't and constants aren't and functions don't.


100% this. No matter how quick a developer, or their AI assistant, is at spitting out React frontends (I find the tools relatively useful in this case), sooner or later you will hit the problem of data structures and their interrelations, i.e. the logic of the program. And not just the logic, but also the simplicity of the relations; often a week spent refining the data structures saves a year's worth of effort down the road.



