Hacker News — scuff3d's comments

I recently left this comment on another thread. At the time I was focused on planning mode, but it applies here.

Plan mode is a trap. It makes you feel like you're actually engineering a solution. Like you're making measured choices about implementation details. You're not, you're just vibe coding with extra steps. I come from an electrical engineering background originally, and I've worked in aerospace most of my career. Most software devs don't know what planning is. The mechanical, electrical, and aerospace engineering teams plan for literal years. Countless reviews and re-reviews, trade studies, down selects, requirement derivations, MBSE diagrams, and God knows what else before anything that will end up in the final product is built. It's meticulous, detailed, time consuming work, and bloody expensive.

That's the world software engineering has been trying to leave behind for at least two decades, and now with LLMs people think they can move back to it with a weekend of "planning", answering a handful of questions, and a task list.

Even if LLMs could actually execute on a spec to the degree people claim (they can't), it would take as long to properly define as it would to just write it with AI assistance in the first place.


I think of "plan" mode as a read-only mode where the LLM isn't chomping at the bit to start writing to files. Rather than being excitable and over-active, it is receptive and listening.

Oh boy, if anyone thought productivity hacks, ultra optimized workflows, and "personal knowledge management" systems could get ridiculous, they haven't seen anything yet. This is gonna be the new thing people waste time on now instead of their NeoVim config.

But it's not really increasing anymore, and the increase has been almost entirely tied to subsidies. When Germany and America pulled back on EV subsidies, sales dropped significantly.

The adoption curve hasn't been nearly as steep as predicted, and the political landscape is unstable. Other manufacturers are also pulling back on their EV investments.

I'm not saying Honda isn't overdoing it, but a retreat from EVs isn't surprising.


> But it's not really increasing anymore

EVs are a half-trillion-dollar market (20 million cars annually, average selling price $25K) that increased by 20% in 2025.

That's a massive increase in a massive market.

It's not the 50% per annum we were seeing earlier, but 20% of a big number is often more impressive than 50% of a small one.


It's not that simple, some markets are slowing down and others are accelerating.

Two of Honda's biggest markets are Japan and the US. The US is cooling on EVs with incentives and regulation changes making adoption less urgent. Japan already has an extremely low adoption rate. So the incentives for Honda to invest heavily just aren't there right now.

Other manufacturers are also pulling back. Ford is cutting way back on the Lightning for example.


It's too soon to tell on America. In Germany sales pulled back temporarily after the loss of subsidies -- most people who were looking at buying an EV pulled their purchase forward to before the subsidy went away but then after a while growth resumed. 2025 EV sales in Germany without subsidies were higher than 2023 EV sales with subsidies after being down in 2024. I expect the same thing to happen for 2027 US EV sales.

In Japan, it's more a matter of not having good domestic options. Japanese people don't buy non-Japanese cars. When the Leaf was selling well world-wide, it sold well in Japan. But it's been a few years since the Leaf sold well anywhere. Now with good Toyota options and spiking gas prices I expect EVs to pick up in Japan. Nowhere is more dependent than Japan on the Strait of Hormuz.


Code duplication is normally referring to duplicate code within the same code base, not writing something yourself instead of using a library.

That's fair, but I suspect the underlying mechanism is the same -- the models prefer re-writing code from scratch rather than looking around for reusable abstractions, which may exist just a few modules over, or -- for smaller models -- sometimes even in the same file. They're not copy-pasting the code for sure, just regenerating de novo.

This is the most common issue I find, even with the latest models. For normal logic it's not too bad, the real risk is when they start duplicating classes or other abstractions, because those tend to proliferate and cause a mess.

I don't know if it's the training or RL or something intrinsic to the attention mechanism, but these models "prefer" generating new code rather than looking around for and integrating reusable code, unless the functionality is significant or they are explicitly prompted otherwise.

I think this is why AGENTS.md files are getting so critical -- by becoming standing instructions, they help override the natural tendencies of the model.
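For illustration, a hypothetical AGENTS.md entry of the kind people use to push back on that tendency (the wording and the `src/` and `shared/` paths are made up, not from any spec or real repo):

```markdown
## Code reuse

- Before writing a new helper, search the repo for an existing one
  (grep for similar names under src/ and shared/).
- Prefer extending an existing abstraction over duplicating it.
- If duplication is unavoidable, leave a comment pointing at the original.
```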


Yeah I agree that it's not copy/pasted the way a dev would, but I think the end result is the same. The more it needlessly duplicates code, the more brittle things will become. Changes will get harder and harder to implement as the number of sites that have to change increases.

On the other hand, I think driving down the need for external dependencies can be a net win. In my experience you usually need a very tiny slice of what a dependency actually offers, and often you settle for making design compromises to fit the dependency into your system, because the cost of writing it yourself is too high. LLMs definitely change that calculus.

I've found agent.md files are more of a bandaid than anything. I've seen agents routinely ignore/forget them, and the larger the code base/number of changes they're making, the more frequently they forget.


To add to the person who quoted the relevant part of the study: they also point out that the velocity increase disappears after a month or two.

I've gone back and forth on it a lot myself, but lately I've been more optimistic, for a couple of reasons.

While the final impact LLMs will have is yet to be determined (the hype cycle has to calm down, we need time to see impacts in production software, and there is inevitably going to be some kind of collapse in the market at some point), it's undeniable that they will improve overall productivity (though I think it's going to be far more nuanced than most people think). But with that productivity improvement will come a substantial increase in complexity and demand for work. We see this play out every single time some tool comes along and makes engineers in any field more productive. Those changes will also take time, but I suspect we're going to see a larger number of smaller teams working on more projects.

And ultimately, this change is coming for basically all industries. The only industries that might remain totally unaffected are ones that rely entirely on manual labor, but even then the actual business side of the business will also be impacted. At the end of the day I think it's better to be in a position to understand and (even to a small degree) influence the way things are going, instead of just being along for the ride.

If the only value someone brings is the ability to take a spec from someone else and churn out a module/component/class/whatever, they should be very very worried right now. But that doesn't describe a single software engineer I know.


The only part of AI autocomplete I've found I really like is when I have a function call that takes like a dozen arguments, and the autocomplete can just shove it all together for me. Such a nice little improvement.
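A minimal sketch of the pattern it handles well (the function and names here are hypothetical, purely for illustration) -- when the in-scope variables already match the parameter names, the completion can fill the whole call:

```python
# Hypothetical many-parameter function, for illustration only
def render_chart(title, x_label, y_label, width, height, dpi, grid, legend):
    return f"{title} ({x_label}/{y_label}) {width}x{height}@{dpi}"

# Locals whose names mirror the parameters
title, x_label, y_label = "Sales", "Month", "Revenue"
width, height, dpi = 800, 600, 96
grid, legend = True, False

# The autocomplete effectively just matches in-scope names to parameter
# names and emits the full argument list in one go:
chart = render_chart(title, x_label, y_label, width, height, dpi, grid, legend)
```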

My least favourite part of the auto complete is how wordy the comments it wants to create are. I never use the comments it suggests.

I have been begging Claude not to write comments at all since day 1 (it's in the docs, Claude.md, i say the words every session, etc) and it just insists anyway. Then it started deleting comments i wrote!

Fucking robot lol


I find it writes them like a boring neighbour who hasn't talked to anyone for a few days; it just seems to reiterate the same thing three times, worded slightly differently, but not adding anything extra with each sentence, like there's a word count it's aiming for.

Do you mean suggesting arguments to provide based on name/type context?

Yeah, it usually gets the required args right based on various pieces of context. There's big variation between extensions, though. If the extension can't pull context from the entire project (or at least parts of it) it becomes almost useless.

IntelliJ platform (JetBrains IDEs) has this functionality out of the box without "AI" using regular code intelligence. If all your parameters are strings it may not work well I guess but if you're using types it works quite well IME.

Can't use JetBrains products at work. I also unfortunately do most of my coding at work in Python, which I think can confound things since not everything is typed

... you can't use JetBrains? What logic created a scenario where you can't use arguably the best range of cross platform IDEs, but you can somehow use spicy autocomplete to imitate some of their functionality, poorly?

I work in an extremely security minded industry. There are strict guidelines about what we can and can't use. JetBrains isn't excluded for technical reasons, but geopolitical ones.

The AI models we use are all internally hosted, and any software we use has to go through an extensive security review.


> JetBrains isn't excluded for technical reasons, but geopolitical ones.

This makes perfect sense. Who could possibly trust a company run from... the Netherlands.

I get that you don't make the rules you're working under, but Jetbrains of all companies seems like a bizarre "risk" factor, given their history and actions.


Quit your palantir job, spook.

Can you imagine the absolute mayhem at Fox News if Obama had declared himself the greatest president of all time?

Or declared a US company a supply chain risk after trying to weasel out of a contract.

Or, you know, incited a terrorist attack on the US Capitol...


That's different, as Fox News is an entertainment channel and not news, despite the name.

And I'll never understand that. If I sell horsemeat in my grocery store, but label it as A5 wagyu beef, there will be legal consequences. When Fox sells "entertainment" (meaning lies) labeled as "news," there are none.

So why do they get all the breaks? False advertising and fraud aren't covered by the First Amendment.


Unfortunately, there is a fairly large part of the population who disagrees with that.

It's almost like the stuff right wing media falsely claimed Obama and Biden were doing.

Like they were preparing for someone to actually do it, because it already happened with the last guy, right?


Go and Java/C# (if you forgo all the OOP nonsense) aren't much harder to write than Python, and you get far better performance. Not all the way to Rust level, but close enough for most things with far less complexity.

As an AI engineer I kinda wish the community had landed on Go or something in the early days. C# would also be great, although it tends to be pretty verbose.

Python just has too-strong network effects. In the early days it was between Python and Lua (anyone remember torchlua?). Go was very much still getting traction and in development.

There's also the strong association of Go with Google, C# with Microsoft, and Java with Oracle...


Yeah that bothers me too, but it's damn hard to get away from these days. Most language projects have significant corporate involvement one way or the other.

Go is criminally underrated in my opinion. It ticks so many boxes I'm surprised it hasn't seen more adoption.


It ticks many boxes for me on the surface, but I've read a few articles that critique some of its design choices.

Rust really ticks the "it got all the design choices right" boxes, but fighting the borrow checker and understanding smart pointers, lifetimes, and dispatch can be a serious cognitive handicap for me.


No languages are perfect, they all make tradeoffs. I just like a lot of the ones Go made.

Go and Rust try to solve very different problems. Rust takes on a lot of complexity to provide memory safety without a garbage collector, which is fine, but also unnecessary for a lot of problems.


Why Go? That language has absolutely no expressive power, while you surely at least want to add together two vectors/matrices in AI with the + operator.
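A tiny sketch of what's meant, using a hypothetical `Vec` class: Python lets you make `+` mean elementwise addition via `__add__`, which Go's lack of operator overloading rules out.

```python
class Vec:
    """Hypothetical vector type, just to illustrate operator overloading."""

    def __init__(self, *xs):
        self.xs = list(xs)

    def __add__(self, other):
        # '+' does elementwise addition; in Go you'd need an Add() method
        return Vec(*(a + b for a, b in zip(self.xs, other.xs)))

print((Vec(1, 2, 3) + Vec(10, 20, 30)).xs)  # [11, 22, 33]
```

This is the mechanism numeric libraries like NumPy and PyTorch build on, which is a big part of why Python stuck in AI.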

We're 6 months away from some company's app/infrastructure/whatever going down and staying down, because literally nobody knows how the 500,000 line code base works and Claude is stuck in a loop.

Lol, just press escape then tell it to roll back to the last stable release.

Right, because the same people who vibe coded their applications are the same people who take the time to set up robust infrastructure to allow for easy rollbacks.
