I've felt this too as a person with ADHD, specifically difficulty processing information. Caveat: I don't vibe code much, partially because of the mental fatigue symptoms.

I've found that if an LLM writes too much code, even if I specified what it should be doing, I still have to do a lot of validation myself that would have been done while writing the code by hand. This turns the process from "generative" (haha) to "processing", which I struggle a lot more with.

Unfortunately, the reason I have to do so much processing on vibe code or large generated chunks of code is simply because it doesn't work. There is almost always an issue that is either immediately obvious, like the code not working, or becomes obvious later, like poorly structured code that the LLM then jams into future code generation, creating a house of cards that easily falls apart.

Many people will tell me that I'm not using the right model or tools or whatever, but it's clear to me that the problem is that AI doesn't have any vision of where your code will need to organically head. It's great for one-shots and rewrites, but it always, always, always chokes on larger/complicated projects eventually, ESPECIALLY ones that aren't written in common languages (like JavaScript) or built on common packages/patterns, and then I have to go spelunking to find out why things aren't working or why it can't generate code to do something I know is possible. It's almost always because the input for new code is my ask AND the poorly structured code, so the LLM will rarely clean up its own crap as it goes. If anything, it keeps writing shoddy wrappers around shoddy wrappers.

Anyways, still helpful for writing boilerplate and segments of code, but I like to know what is happening and have control over how my code is structured. I can't trust the LLMs right now.



Agreed. There are some strategies that seem to help, though. Write extensive tests before writing the code; they serve as guidance. Commit tests separately from library code, so you can tell the AI didn't change the tests. Specify the task with copious examples. Explain why you do things, not just what to do.
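
For example, a minimal test-first sketch of what "tests as guidance" can look like (the mylib.text module and slugify function are hypothetical, plain Python unittest; adapt to whatever stack you're actually using):

    # test_slugify.py -- written and committed before any implementation exists,
    # so it fails ("red") until the generated code satisfies it.
    import unittest

    from mylib.text import slugify  # the module the LLM will be asked to write


    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_strips_punctuation(self):
            self.assertEqual(slugify("ready, set, go!"), "ready-set-go")

        def test_collapses_repeated_separators(self):
            self.assertEqual(slugify("a  --  b"), "a-b")


    if __name__ == "__main__":
        unittest.main()

Committed on its own, a file like that doubles as the spec you hand to the LLM, and the red/green run tells you whether the generated code actually meets it.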


Yeah, this is where I start side-eyeing people who love vibe coding. Writing lots of tests and documentation and fixing someone else's (read: the LLM's) bad code? Those are literally the worst parts of the job.


I also get confused when I see it taken for granted that "vibe coding" removes all the drudgery/chores from programming, when my own experience of heavily using Claude Code/etc. every day routinely involves a lot of unpleasant cleanup of accumulated LLM slop and "WTF" decisions.

I still think it saves me time on net and yes, it typically can handle a lot on its own, but whenever it starts to fuck up the same request repeatedly in different ways, all I can really do is sigh/roll my eyes and then it's on me alone to dig in and figure it out/fix it to keep making progress.

And usually that consists of incredibly ungratifying, unpleasant work I'm very much not happy to be doing.

I definitely have been able to do more side projects for ideas that pop into my head thanks to CC and similar, and that part is super cool! But other times I hit a wall where a project suddenly goes from breezy and fun to me spending hours reading through diffs/chat history, trying to untangle a pile of garbage code I barely understand 10% of, and having to remind myself I was supposed to be doing this for "fun"/learning, and that I'm accomplishing neither while not getting paid for it.


Absolutely. Honestly some days I'm not sure the AI saves me any time at all.

But on the other hand, writing thorough tests before coding the library is good practice with or without an assistant.


Interesting, I haven't tried tests outside of the code base the LLM is working on.

I could see other elements of isolation being useful, but this kind of feels like a lot of extra work and complexity which is part of the issue...


The way I do it is write tests, then commit just the tests. Then, when you have an agent running and generating code, before committing/reviewing you can check the diff for any changes to files containing tests. The commit panel in JetBrains IDEs, for example, will enumerate any changed files, and I can easily take a peek there and see if any test files were changed in the process. It's not necessarily about having a separate codebase.
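
If you'd rather not rely on eyeballing the commit panel, the same check can be scripted. A rough sketch (the tests/ directory and test_*.py naming are assumptions about project layout, not part of the workflow above; adjust to yours):

    # check_tests_untouched.py -- fail if the agent's uncommitted changes touch test files.
    import subprocess
    import sys


    def changed_files():
        # Files changed in the working tree and index relative to HEAD.
        out = subprocess.run(
            ["git", "diff", "--name-only", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line.strip()]


    def looks_like_test(path):
        # Naive heuristic; adjust to your project's conventions.
        name = path.rsplit("/", 1)[-1]
        return path.startswith("tests/") or name.startswith("test_")


    if __name__ == "__main__":
        touched = [p for p in changed_files() if looks_like_test(p)]
        if touched:
            print("Test files were modified:")
            for p in touched:
                print("  " + p)
            sys.exit(1)
        print("No test files touched.")

You could run it before reviewing the agent's diff, or wire it into a pre-commit hook so a modified test fails loudly instead of slipping by.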


Also: a detailed planning phase, cross-LLM reviews via subagents, tests, functional QA, etc. There are more (and complementary) ways to ensure the code does what it should than combing through every line.



