You're probably vibe coding wrong (and that's why things spiral) (genie-ops.com)
1 point by Shabamed 4 days ago | 2 comments




I’ll say it straight:

Most people aren’t failing with AI because it’s weak. They’re failing because they treat it like magic instead of engineering.

I’ve built production apps this way. Real users. Real traffic. Real consequences. Mostly with Cursor, with very little manual intervention.

But first… this is likely your current flow:

You open your editor. You type “build me X”. The AI starts strong… then drifts. One fix breaks another. You restart. Again. That’s not building. That’s rolling dice.

Here’s the system I use. It’s boring. It’s structured. And it works every single time.

Step 1: architecture first (before a single line of code)

Before touching Cursor, open ChatGPT and ask for structure, not code.

Describe the product in painful detail. What it does. Who it’s for. What matters. What doesn’t. Then ask for:

- the full architecture
- folder and file structure
- what each part is responsible for
- where state lives
- how things talk to each other

Nothing fancy. Markdown only.

Save this as architecture.md and drop it into an empty project folder.

This document is the spine of the app. If this is vague, everything downstream will be vague too.
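For a concrete picture, here’s a minimal sketch of what architecture.md might look like for a hypothetical notes app. The stack, names, and structure are illustrative, not prescriptive:

    # Architecture: notes-app

    ## Stack
    - Next.js (App Router), TypeScript, Postgres

    ## Folder structure
    - app/          -> routes and pages
    - lib/db.ts     -> single Postgres client; all queries live here
    - lib/auth.ts   -> session helpers; nothing else touches cookies
    - components/   -> stateless UI only

    ## State
    - Server is the source of truth; client state is ephemeral UI state only

    ## Boundaries
    - Components never call the database directly; they go through lib/

The point isn’t the specific choices. It’s that every responsibility has exactly one home, so the AI has no gaps to fill with guesses.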

Step 2: turn the architecture into small, boring tasks

Next, ask the AI to convert that architecture into a task list.

Not “build auth”, but “create auth schema”, “wire session state”, “protect route X”. Each task must:

- be small enough to test
- have a clear start and end
- touch one concern only

The key detail: tell the AI these tasks will be executed one by one, with testing in between.

This becomes tasks.md. At this point, you still haven’t written code, but the chaos is already gone.
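Continuing the hypothetical notes app, tasks.md might start like this (again, a sketch, not a template to copy verbatim):

    # Tasks

    1. Create the notes table schema and migration
       Done when: migration runs cleanly against a fresh database
    2. Add lib/db.ts with a getNotes() query
       Done when: getNotes() returns seeded rows in a test
    3. Wire session state in lib/auth.ts
       Done when: a signed-in request carries a session object
    4. Protect the /notes route
       Done when: anonymous requests are redirected to /login

Notice each task has an explicit “done when”. That’s what makes the test-between-tasks rhythm possible.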

Step 3: now I let Cursor work (with rules)

Only now open Cursor.

Tell it: “You’re an engineer joining this project. You’ve been given architecture.md and tasks.md. Read them carefully. No guessing.” Then add strict rules:

- minimal code only
- no refactors unless asked
- no unrelated changes
- don’t break existing behavior
- stop after each task so I can test
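Put together, the opening message might read something like this (a sketch; phrase it however fits your project):

    You're an engineer joining this project. You've been given
    architecture.md and tasks.md. Read them carefully. No guessing.
    Rules: minimal code only. No refactors unless asked. No unrelated
    changes. Don't break existing behavior. Do task 1 only, then stop
    so I can test.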

One task. Run it. Test it. Commit it. Repeat.
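In practice, that loop can be as plain as this (assuming a Node project with an npm test script; swap in whatever your stack uses):

    # one branch per task keeps rollback trivial
    git checkout -b task-03-wire-session-state
    # let Cursor implement exactly one task from tasks.md, then stop
    npm test
    git add -A
    git commit -m "task 3: wire session state"

If the tests fail, you know the breakage is inside one small task, not somewhere in a thousand-line dump.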

This sounds slower. It’s not.

Why this works (and vibe coding usually doesn’t)

Most vibe coding fails for one reason: intent isn’t frozen.

When intent is fuzzy, the AI fills gaps with guesses. Those guesses compound. That’s how you get “it worked yesterday” bugs.

This workflow fixes that. You’re not dumping everything into the IDE and hoping. You’re giving the AI a map. You’re keeping it on rails. You stay the one making decisions. The AI becomes a fast, obedient engineer, not a creative wildcard.

This is how you ship clean, testable, AI-assisted code: without the spiral, without rewrites, and without fear of touching things later.

I’d normally say “follow me for the playbook”, but f it… just use it.


What do you do when managing context takes longer than doing everything by hand? For example, on an existing project where reading a Jira ticket may not suffice and you need to examine an unbounded amount of something else, which usually lives in a developer’s head? Do you start treating that as a new kind of code/documentation debt that only matters with these new tools, which are always amnesiac? How well do Markdown files really scale?


