Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I’ve been a paying cursor user for 4-5 months now and feeling the same. A lot more mistakes leaking into my PRs. I feel a lot faster but there’s been a noticeable decrease in the quality of my work.

Obviously I could just better review my own code, but that’s proving easier said than done to the point where I’m considering going back to vanilla Code.



There's this concept in aviation of "ahead of or behind the plane". When you're ahead of the plane, you understand completely what it's doing and why, and you're literally thinking in front of it, like "in 30 minutes we have to switch to this channel, confirm new heading with ATC" and so forth. When you're behind the plane, it has done something expected and you are literally thinking behind it, like "why did it make that noise back there, and what does that mean for us?"

I think about coding assistants like this as well. When I'm "ahead of the code," I know what I intend to write, why I'm writing it that way, etc. I have an intimate knowledge of both the problem space and the solution space I'm working in. But when I use a coding assistant, I feel like I'm "behind the code" - the same feeling I get when I'm reviewing a PR. I may understand the problem space pretty well, but I have to basically pick up the pieced of the solution presented to me, turn them over a bunch, try to identify why the solution is shaped this way, if it actually solves the problem, if it has any issues large or small, etc.

It's an entirely different way of thinking, and one where I'm a lot less confident of the actual output. It's definitely less engaging, and so I feel like I'm way less "in tune" with the solution, and so less certain that the problem is solved, completely, and without issues. And because it's less engaging, it takes more effort to work like this, and I get tired quicker, and get tempted to just give up and accept the suggestions without proper review.

I feel like these tools were built without any sort of analysis if they _were_ actually an improvement on the software development process as a whole. It was just assumed they must be, since they seemed to make the coding part much quicker.


That's a great analogy. For me it is a very similar feeling that I get ripped out of "problem solving mode" into "code review mode" which is often a lot more taxing for me.

It also doesn't help reviewing such code that sometimes surprisingly complex problems are solved correctly, while there's surprisingly easy parts that can be subtly (or very) wrong.


Yes great analogy!

A hard pill to swallow is that a lot of software developers have spent most of their careers "behind the code" instead of out ahead of it. They're stuck for years in an endless "Junior Engineer" cycle of: try, compile, run, fix, try, compile, run, fix--over and over with no real understanding, no deliberate and intentional coding, no intimacy, no vision of what's going on in the silicon. AI coding is just going to keep us locked into this inferior cycle.

All it seems to help with is letting us produce A Lot Of Code very quickly. But producing code is 10% of building a wonderful software product....


This is such a great analogy! Exactly how I feel when using AI tools. I have had some incredibly productive conversations about high-level design where I explain my goals and the approaches I'm considering. But then the actual code will have subtle bugs that are hard to find.


Also very much in the spirit of "children of the magenta line" https://www.computer.org/csdl/magazine/sp/2015/05/msp2015050...


Unlike an airplane you can stop using the assistant at any time and catch up. Those who learn to leverage AI will have an advantage.


Same result - I tried it for a while out of curiosity but the improvements were a false economy: time saved in one PR is time lost to unplanned work afterwards. And it is hard to spot the mistakes because they can be quite subtle, especially if you've got it generating boilerplate or mocks in your tests.

Makes you look more efficient but it doesn't make you more effective. At best you're just taking extra time to verify the LLM didn't make shit up, often by... well, looking at the docs or the source.. which is what you'd do writing hand-crafted code lol.

I'm switching back to emacs and looking at other ways I can integrate AI capabilities without losing my mental acuity.


> And it is hard to spot the mistakes because they can be quite subtle

aw yeah; recently I spent half a day pulling my hair debugging some cursor-generated frontend code just to find out the issue was buried in some... obscure experimental CSS properties which broke a default button behavior across all major browsers (not even making this up).

Velocity goes up because you produce _so much code so quickly_, most of which seems to be working; managers are happy, developers are happy, people picking up the slack - not so much.

I obviously use LLMs to some extent during daily work, but going full-on blind mode on autopilot gotta crash the ship at some point.


Can you elaborate on the mistakes you see? What languages are you working with?


Just your run-of-the-mill hallucinations, e.g. mocking something in pytest but only realising afterwards that the mock was hallucinated, the test was based on the mock, and so the real behaviour was never covered.

I mean, I generally avoid using mocks in tests for that exact reason, but if you expect your AI completions to always be wrong you wouldn't use them in the first place.

Beyond that, the tab completion is sometimes too eager and gets in the way of actually editing, and is particularly painful when writing up a README where it will keep suggesting completely irrelevant things. It's not for me.


> the tab completion is sometimes too eager and gets in the way of actually editing

Yea, this is super annoying. The tab button was already overloaded between built-in intellisense stuff and actually wanting to insert tabs/spaces, now there are 3 things competing for it.

I'll often just want to insert a tab, and end up with some random hallucination getting inserted somewhere else in the file.


Seriously, give us our tab key back! I changed accept suggestion to shift TAB.

But still there is too much noise now. I don't look at the screen while I'm typing so that I'm not bombarded by this eager AI trying to distract me with guesses. It's like a little kid interrupting all the time.


I just turned tab off. If I'm writing myself, if I'm in the flow, I don't need any help. If I want the tool to write for me, I'll ask it to.


Can tell it to check it before and after if it doesn't do something and it can improve.

Also telling it not to code, or not to jump to solutions is important. If there's a file outlining how you like to approach different kinds of things, it can take it into consideration more intuitively. Takes some practice to pay attention to your internal dialogue.


I feel like this is also related to cursor getting worse, not better, over the past few months.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: