Hacker News | crakhamster01's comments

This was a great comment. I don't know if it's common knowledge, but this really helped clarify how the shift happens.

I also remember half coding and half prompting a few months back, only to be frustrated when my manual changes started to confuse the LLM. Eventually you either have to make every change through prompting, or be ok with throwing away an existing session and adding the relevant context back into a fresh one.


> One thing we know for sure is LLMs write code differently than we do.

Kind of. One thing we do know for certain is that LLMs degrade in performance with context length. You will undoubtedly get worse results if the LLM has to reason through long functions and high LOC files. You might get to a working state eventually, but only after burning many more tokens than if given the right amount of context.
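This is why most LLM coding tools try to pack only the most relevant snippets into the prompt rather than whole files. A minimal sketch of that idea, where the token budget and the 4-characters-per-token heuristic are illustrative assumptions, not any particular tool's behavior:

```python
def select_context(snippets, budget_tokens=2000):
    """Greedily pack relevance-sorted snippets into a rough token budget.

    Assumes `snippets` is already sorted most-relevant-first, and
    approximates token count as len(text) // 4 -- a common rule of
    thumb, not an exact tokenizer.
    """
    chosen, used = [], 0
    for text in snippets:
        cost = len(text) // 4
        if used + cost > budget_tokens:
            continue  # skip anything that would blow the budget
        chosen.append(text)
        used += cost
    return "\n\n".join(chosen)
```

The greedy skip (rather than stopping at the first oversized snippet) lets smaller relevant pieces still make it in after a huge file is rejected.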

> The worst outcome I can imagine would be forcing them to code exactly like we do.

You're treating "code smells" like cyclomatic complexity as mere stylistic preference, but these best practices are backed by research. They became popular because teams across the industry analyzed code responsible for bugs/SEVs, and all found high correlation between these metrics and shipping defects.
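For readers unfamiliar with the metric: cyclomatic complexity roughly counts independent paths through a function, one per decision point. A toy approximation using Python's standard `ast` module (a simplification of McCabe's definition for intuition only; real tools like radon handle more node types):

```python
import ast

def approx_cyclomatic_complexity(source):
    """Rough McCabe-style count: 1 + one per decision point.

    Counts if/for/while/except/boolean-operator branches; this is a
    simplified illustration, not a full implementation of the metric.
    """
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp))
        for node in ast.walk(tree)
    )
    return 1 + decisions

flat = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x += i\n"
    "    return x\n"
)
print(approx_cyclomatic_complexity(flat))     # 1
print(approx_cyclomatic_complexity(branchy))  # 4
```

The intuition behind the defect research: each added path is one more case a reviewer (or an LLM) has to hold in their head, and one more case tests can miss.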

Yes, coding standards should evolve, but... that's not saying anything new. We've been iterating on them for decades now.

I think the worst outcome would be throwing out our collective wisdom because the AI labs tell us to. It might be good to question who stands to benefit when LLMs aren't leveraged efficiently.


> They became popular because teams across the industry analyzed code responsible for bugs/SEVs, and all found high correlation between these metrics and shipping defects.

Yes, based on research of human code. LLMs write code differently. We should question whether the human research applies to LLMs at all. (You wouldn't take your assumptions about chimp research and apply them to parrots without confirming first)

> I think the worst outcome would be throwing out our collective wisdom because the AI labs tell us to.

We don't have to throw it out. But our current use of LLMs is a dramatic change from what came before. We should be questioning the assumptions and traditions that come from a different way of working and a different kind of intelligence. Humans have a habit of trying to force things to be how they think they should be, rather than allowing them to grow organically, when the latter is often better for a system we don't yet understand.


They write code differently but that doesn't mean that's the kind of code they prefer to read. Don't ascribe too much intention to a stochastic process.

Their coding style is above all else a symptom of their very limited context window and complete amnesia for anything that's not in the window.


I don't think there's intention. And yes, its output is defined by its limits. But it's not just the context, is it? Their coding style is, above all else, a result of an algorithm and input. The training data, the reinforcement, the model design, the tuning, the prompt, the context. Change any one of those things and the code changes. They are a system, like an ecosystem. Let water flow and it finds its own path. But try to dam it and it creates unintended consequences. I think what we're going to find is some of our rules apply more to a human world than an LLM world.


I can maybe see this argument being valid for OSS - as Carmack says, by nature it should be "no strings attached".

I don't think that's all anti-AI activists care about though. Honestly, I would say most activists don't talk about the use of OSS? The most prominent anti-AI sentiment seems to come from creatives. Artists, musicians, designers, etc.

They didn't publish their works with the same notion as OSS developers, but it was scraped up by corporations all the same. In many cases, these works were protected by copyright law and used anyway.

To me that feels like the equivalent of training on "private repos", which Carmack would call a violation [1].

[1] https://x.com/ID_AA_Carmack/status/2031769354401091988


I had a similar reaction to OP for a different post a few weeks back - I think it was some analysis of the health economy. Initially as I was reading I thought - "Wow, I've never read a financial article written so clearly". Everything was in layman's terms. But as I continued to read, I began to notice the LLM-isms: oversimplified concepts, "the honest truth", "like X for Y", etc.

Maybe the common factor here is not having deep/sufficient knowledge on the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content.

LLMs are very capable at simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of - "if you couldn't be bothered to write it, I shouldn't bother to read it".


Alternate theory... a few months into the LLMism phenomenon, people are starting to copy the LLM writing style without realizing it :(


This happens to non-native English speakers a lot (like me). My style of writing is heavily influenced by everything I read. And since I also do research using LLMs, I'll probably sound more and more like an AI as well, just by reading its responses constantly.

I just don't know what natural writing is supposed to look like anymore. It's not in the books, it's disappearing from the internet - what's left? Some old blogs, for now, maybe.


The wave of LLM-style writing taking over the internet is definitely a bit scary. Feels like a similar problem to GenAI code/style eventually dominating the data that LLMs are trained on.

But luckily there's a large body of well written books/blogs/talks/speeches out there. Also anecdotally, I feel like a lot of the "bad writing" I see online these days is usually in the tech sphere.


Books definitely have natural writing, read more fiction! I recommend Children of Time by Adrian Tchaikovsky


> taste scales now.

Not having taste also scales now, and the majority of people like to think they're above average.

Before AI, friction to create was an implicit filter. It meant "good ideas" were often short-lived because the individual lacked conviction. The ideas that saw the light of day were sharpened through weeks of hard consideration and at least worth a look.

Now, anyone who can form mildly coherent thoughts can ship an app. Even if there are newly empowered unicorns, rapidly shipping incredible products, what are the odds we'll find them amongst a sea of slop?


I think this advice is pretty apt for small to medium sized companies. We're all invested in the company succeeding, but you don't want to become known as the person that always says "no".

At large companies, I've rarely found a reason to speak out on a project. Unless it has a considerable effect on my team/work (read: peace of mind), it just doesn't make sense to be the person casting doubt. There's not much ROI for being "right".

If you manage to kill the project before it starts, no one will ever know how bad of a disaster you prevented. If the project succeeds despite your objections, you look like an idiot. And if it fails - as the author notes, that doesn't get remembered either.

As a senior IC, the only real ROI I've found in these situations is when you can have a solution handy if things fail. People love a fixer. Even if you only manage to pull this off once or twice, your perception in the org/company gets a massive boost. "Wow, so-and-so is always thinking ahead."

A basic example I saw at my last company was automated E2E testing in production. My teammate had suggested this to improve our ability to detect regressions, but it was ultimately shot down as not being worth the investment over other features.

A few months later, we had seen multiple instances of users hitting significant issues before we could catch them. My teammate was able to whip out the test framework they had been building on the side, and was immediately showered with praise/organizational support (and I'm sure a great review as well).
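For anyone unfamiliar with the pattern: production E2E tests ("synthetic monitoring") are just real user flows run on a schedule against the live system. A bare-bones sketch of a single check, using only the standard library; the URL, timeout, and latency threshold are placeholders, not the framework described above:

```python
import time
import urllib.request

def check_endpoint(url, timeout=5.0, max_latency=2.0):
    """Hit a live endpoint and flag regressions: errors, non-200s,
    or responses slower than the latency budget."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        # Network/DNS/HTTP errors all count as a failed check
        return {"url": url, "ok": False, "error": str(exc)}
    latency = time.monotonic() - start
    return {
        "url": url,
        "ok": status == 200 and latency <= max_latency,
        "status": status,
        "latency": latency,
    }
```

A real setup would run checks like this on a cron/scheduler, exercise multi-step flows, and page someone when `ok` flips to false.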


The effort required to be ready with the fix is often SO much less than what you need to convince folks the problem exists in the first place. I find it's frequently the only viable option on an individual or team scale.


I've realized that climbing the corporate ladder doesn't make any sense. You put more effort, you take responsibility for stupid people's decisions, and then you get a disproportionately small reward. The smartest move is to find a bottom-tier position where they pay you enough to sustain your desired lifestyle, but where you cannot really be blamed for failures of the management.


Relevant: https://en.wikipedia.org/wiki/Dilbert_principle

> You put more effort, you take responsibility for stupid people's decisions, and then you get a disproportionately small reward

On that I disagree. Managers might have to take responsibility for bad decisions, sure, but get a disproportionately larger reward than those under them. It's certainly less stressful at the bottom of the ladder, but don't expect to get much praise or monetary reward, and you're the first to go as soon as something goes wrong. There's a reason why late-stage companies are full of middle managers, and few people actually doing the work.


> don't expect to get much praise or monetary reward

Yeah so I figured out that if I have a bullshit busyjob for €100k and my option is to actually start working my ass off and maybe double the salary in absolute best-case scenario, then fuck that. But I admit that my position might be exceptional.

> and you're the first to go as soon as something goes wrong.

I live in Europe so I assume I'd survive even a big fuckup as long as I'm following my manager's orders, even if HQ is American. Also, when there are bigger layoffs, by law they must let people go in order from newest hires to oldest, which means I'm not in immediate danger even if they cut the workforce.

The biggest danger is someone discovering that I mostly play video games at work and then giving me lots of useless tasks just to keep me occupied.


It still makes little sense to be a line level manager. You can make just as much as a senior+ IC at the right company.


My kind of approach as well. I don't care if it comes across as not being career oriented, as long as there are options to work elsewhere, even outside IT.


> At large companies, I've rarely found a reason to speak out on a project.

That's true. And it is currently one of the main reasons why startups are so efficient compared to MegaCorps.

In small companies, it takes a few engineers saying "this is bullshit" to stop a disaster.

In large corps, it takes 2 years, 10M USD, and a team in burnout to reach the same result.

And the main reason is the usual source of all sins: *Politics*.


I feel like both of these examples are insights that won't be relevant in a year.

I agree that CC becoming omniscient is science fiction, but the goal of these interfaces is to make LLM-based coding more accessible. Any strategies we adopt to mitigate bad outcomes are destined to become part of the platform, no?

I've been coding with LLMs for maybe 3 years now. Obviously a dev who's experienced with the tools will be more adept than one who's not, but if someone started using CC today, I don't think it would take them anywhere near that time to get to a similar level of competency.


I base part of my skepticism about that on the huge number of people who seem to be unable to get good results out of LLMs for code, and who appear to think that's a commentary on the quality of the LLMs themselves as opposed to their own abilities to use them.


> huge number of people who seem to be unable to get good results out of LLMs for code

Could it be they're using a different definition of "good"?


I suspect that's neither a skill issue nor a technical issue.

Being "a person who can code" carries some prestige and signals intelligence. For some, it has become an important part of their identity.

The fact that this can now be said of a machine is a grave insult if you feel that way.

It's quite sad in a way, since the tech really makes your skills even more valuable.


It's funny that you mention moving outside the city when Zohran's tax plan is centered on bringing the corporate tax rate in line with our neighboring state's.

I'll also caveat that any parallels you might see with Seattle don't really apply to NYC. Besides the low car ownership rates, wealthy individuals choose to live in NYC for its convenience and culture, which really are unique in the US.


> and now, what screen you’re on, what do you see?

There's a "follow me" feature to see what other users are doing. It's been around for several years.


I was referring to prototype viewing, not viewing the design itself.


There's undoubtedly a cohort of tourists that come to Japan with the "Disneyland" mindset, and I agree that some sort of government-level change is needed to curb abuse. But I would like to believe these folks are in the minority.

I think a greater proportion of the tourist population are individuals that visit Japan and maybe haven't done enough research, or are just unaware of norms here. Not understanding where to queue, how to order, navigate public transport, what to do at a temple, onsen, etc. This group isn't the 15% of "Best in Class tourists" Craig writes about, but rather the 75% that want to be respectful and don't know any better.

Many locals/expats will see this group and look down in disdain (or lament about them in a blog post...), but why don't more people just ask if they need help? It takes little effort to point someone in the right direction, and if it helps them better understand the country it's a win-win for both tourists and residents alike.

I feel like people love to talk about how considerate Japanese culture is, but don't care to practice it themselves when given the chance.

