AstroBen's comments | Hacker News

> skimming through an alien looking codebase, scratching your head trying to figure what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then

This is exactly how you learn to create better abstractions and write clear code that future you will understand.


You are right about the learning part. But I’ve been at this for 20 years. Even the best, most pristine and organized code I’ve seen has not been “clear”. The average LLM code today is a lot more clear than the average developer code.

I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.

But yeah my hunch is "the old way" - although not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).


I've had a lot of enjoyment flipping the agentic workflow around: code manually and ask the agent for code review. Keeps my coding skills and knowledge of the codebase sharp, and catches bugs before I commit them!

if it catches a lot of bugs maybe you’d be better off letting it write it in the first place :)

It also writes lots of bugs which it'll catch some of, in an independent review chat.

This is bogus. If you think LLMs write less buggy software, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.

But why not just use the AI? Because you can still use the AI once you're seriously good.


> But why not just use the AI? Because you can still use the AI once you're seriously good.

Perhaps because the jury is still out on whether one can become “seriously good” by using AI if they weren’t before.


This is definitely not correct in my opinion. You’re essentially saying, instead of a person actually getting better at the craft, just give up and let someone else do it.

I was joking :)

Nono, that is the reverse centaur. Structure your own thoughts, that's the human work.

IME, not really. When you prompt it to review its own written code, it will end up finding a bunch of things that should have been done differently. And then you can add different "dimensions" in your prompt as well, like performance, memory safety, idiomatic code, etc.

Statistically LLMs generate more bugs for the same feature.

Man, same here, those early days of Cursor were mindblowing; but since then autocomplete has stagnated, and even the new Cursor version is veering agentic like everything else.

I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.

On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.

[0] https://www.inceptionlabs.ai/


This matches my experience. When I write the boring glue code myself, I get a map of the project in my head.

When I let an agent write too much of the structure, the code may work, but a week later every small change starts with "where did it put that?"


AI autocomplete sucked. Everyone quickly moved on because it is not a useful interface.

LLM auto-complete is the most useful experience I've had with LLMs by quite a margin, and those were the early GitHub Copilot versions as well. In terms of models and cost it overperformed. It wasn't always good but it was more immediately useful than vibecoding and spec-driven development (or vibecoding-in-a-nice-dress).

I think most people "moved on" because they both thought the agent workflow is cooler and were told by other people that it works. The latter was false for quite some time, and is only correct now insofar as you can probably get something that does what you asked for, but executed exceedingly poorly no matter how much SpecLang you layer on top of the prompting problem.


> AI autocomplete sucked

> Everyone moved on

> it is not a useful interface

You've made three claims in your brief comment and all appear to be false. Can you elaborate on what you mean by any of this?


Who's "everyone"?

In some codebases, autocomplete is the most accurate and efficient way to get things done, because "agentic" workflows only produce unmaintainable mess there.

I know that because there are several times where I completely removed generated code and instead coded by hand.


Why? I thought it was pretty good: it just completes the rest of your function a lot of the time, with no context switching to type to an agent or whatever. It happens immediately, and if it's wrong you just keep typing till it isn't. You can still use an agent for more complex things.

I just wish I knew of a good Emacs AI auto complete solution.


It’s wildly useful. Type out a ridiculously long function name that describes what you want it to do and often… there it is.

I can see the logic behind "manual coding" but it feels like driving across country vs taking the airplane. Once I've taken the airplane once, it's so hard to go back...

Airplanes are good for certain types of journey, but they're vastly inefficient for almost all of them.

I only see this being the case for throwaway code and prototypes. For production code you want to keep long term it's not so clear cut.

I’m writing production quality code with agents; it was the development ‘harness’ that took time to get right.

It's more like driving across country vs firing a missile with you being the warhead...

Can't understand this mentality. If I had the time I would much rather never set foot in an airport again. I would drive everywhere. And I would much rather write my own code than pilot an LLM too

You’re describing an extremely valid approach for a hobby. Less so for a business.

The fact so many people think businesses need to do do do, faster faster faster, now now now, at all costs is a major reason everything sucks, everything is fucked up, everyone is exploited.

No, they are not. Even ignoring businesses where using AI would have consequences for you (medical is one example), there are plenty of "normal" software companies that value quality over slop.

How?

I usually code faster with good (next-edit) autocomplete than writing a prompt and waiting for the agent.


Real-life measurements show a 25 percent improvement in coding speed when using AI at best. And this is before you take technical debt into account!

Yes, AI unlocks coding for people who fail FizzBuzz. This isn't really relevant to making software though.


Design is very hard to verbally describe, and AI doesn't have good judgement on what is easy to use or attractive.

I think it's because it's non-deterministic too. You can't iteratively improve design the same way you can code.

If they wanted couldn't they do something like RLHF? Instead of humans picking the best of 2 text outputs, they pick the best rendered design

I'd be very surprised if they're not already doing this.
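A minimal sketch of the pairwise-preference idea in the comment above (this is not any lab's actual pipeline; the function names and scores are made up for illustration): a reward model assigns each rendered design a scalar score, and the human's pick of the better of two outputs trains those scores via a Bradley-Terry-style loss.

```python
import math

def prefer_prob(score_a: float, score_b: float) -> float:
    # Bradley-Terry model: probability a rater prefers design A over
    # design B, given scalar reward-model scores for each.
    return 1.0 / (1.0 + math.exp(score_b - score_a))

def preference_loss(score_winner: float, score_loser: float) -> float:
    # Negative log-likelihood of the human's choice; minimizing it
    # pushes the winning design's score above the losing one's.
    return -math.log(prefer_prob(score_winner, score_loser))

# With equal scores the model is indifferent (probability 0.5),
# so the loss sits at its "uninformed" value of ln 2.
assert abs(prefer_prob(1.0, 1.0) - 0.5) < 1e-9
```

The same comparison loss works whether the judged artifact is text or a rendered design; only the scoring model's input changes.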

Yeah I'm not a huge fan of it, either. Well organized CSS is much nicer to work with. On the other hand, I'd prefer Tailwind to badly organized CSS.

Figma's stock has been on a sharp downward trend over the last year. This isn't a noticeable change to their stock price at all. They're down 30% just in the last month, with many days being -5% to -10%.

They're down 80% over the last year. Ouch.


"attractive things work better"

There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.


Surely they weren't trying to be deceptive... surely.

Anthropic is the exact same way, I think they're just trying to avoid having 5 different subscription tiers visible. Probably needing 20x is very niche

seems like this $100 replaced the $200 plan

So.. cheaper?


No, the same $200 plan is still there. They hid it behind the $100 click-through.

This just adds a $100 plan that's 1/4 the usage of the $200 plan..
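Working through the arithmetic in the comment above (usage units are arbitrary; only the price points and the 1/4 ratio come from the thread):

```python
# The $100 plan reportedly gets 1/4 the usage of the $200 plan.
usage_200 = 4.0            # arbitrary usage units for the $200 plan
usage_100 = usage_200 / 4  # 1.0 unit

per_dollar_200 = usage_200 / 200  # 0.02 units per dollar
per_dollar_100 = usage_100 / 100  # 0.01 units per dollar

# Half the price but a quarter of the usage: the $100 plan delivers
# half the usage per dollar, so it's a cheaper entry point, not a
# better deal per unit.
assert per_dollar_100 == per_dollar_200 / 2
```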


Without a human control group, how do I put these results into context? It doesn't matter that they "hoped" for $2.50/lead.

This was due to Claude Code, the agent harness. 4.6 was trained to use tools and operate in an agent environment. This is different from there being a huge bump in the underlying model's intelligence.

The takeaway here I think is that the "breakthrough" already happened and we can't extrapolate further out from it.

