jeppester's comments | Hacker News

Can this be revoked after the midterms? In that case I guess the EU can wait it out.

The tariffs are claimed to be a national security emergency, imposed without the approval of Congress, so the composition of Congress won't matter unless the Supreme Court judges otherwise.

But the Supreme Court is going to judge, sooner rather than later. I sincerely hope they will rule against Trump (that seems to me to be what the merits of the case demand).

If there are midterms.

I think this whole thing is part of a plot to cause a war or protests in order to be able to declare a state of emergency allowing him to delay or cancel elections. If not the midterms, at least the next presidential election, because it is the only way he can stay in power.

The US had elections during its civil war in the 1800s, and it had elections during WW2; even major wars cannot legally stop US elections. That doesn't mean he won't try, but it's not something he can do, AFAIK.

Meanwhile I'm getting a 5,000-line PR with code that's all clearly AI generated.

It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...

The quality wasn't perfect before, but now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.

I feel that I can either just give in and stop caring about quality, or spend all of my time fixing everyone else's AI code.

I'm sure that my particular colleagues are all just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.

I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.

I just feel that as you close in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mind-blowing, improvement.


Agreed, I’m so exhausted reviewing AI-generated MRs.

In my line of work, I keep seeing it generate sloppy state machines with unreachable or superfluous states, bad floating-point arithmetic, and especially trying to do everything in the innermost loop of a nested iteration.

It also seems to love hallucinating Qt features that should exist but don’t, which I find mildly amusing.


I got my hands on Grandia because one of my friends' younger brothers thought it was Digimon and begged their mom to buy it.

Being 10 years old or so and not knowing much English yet (being Danes), we were pretty clueless about how to progress, but eventually we succeeded and got pretty far into the game. The game is about a great adventure, but for us it was also an adventure into the English language and a new type of game that we'd never tried before. I miss those experiences!

Later I went back to it and completed it in my teens.

The timing of this article is a bit funny, since I'm currently playing it for the third time with my son, translating it as we go. It's awesome to see my "friends" also becoming his friends, and the game is holding up quite well and keeping him interested.

Apart from the charming characters, the visual variety is really good, with each town having its own style. There's also hardly a boring moment (admittedly we're using fast forward for the battles, which otherwise get a bit repetitive later on); there's a new story beat every half hour or so to keep everything fresh.

The combat is also quite good, although easy if you have even the slightest experience with these kinds of games.


Play the game in an emulator that has a shortcut for fast-forward. It makes a world of difference when it comes to "enduring" overly long cut-scenes, load screens, repeated spell animations, endless combat encounters, etc.

I wish modern games would have the same feature!


Fair point, I was playing on MiSTer.


That single thing is great UX.

While I personally very much enjoy all of the things I can do on PC and Steam Deck, I can definitely understand why my wife - who's not as technically inclined - prefers the PS5.


Anyone who's read the law has known this for years.

The GDPR is incompatible with the Cloud Act, and so the only legal (or so it should be) way to use US companies is to treat them like unsafe third countries - no matter the data center location.

But everyone wants to continue like before. Having to ensure that Amazon and Azure never touch unencrypted personal data is hard. So one "compromise" after another has been tried - never solving the actual problem.

As an EU citizen I think it's entirely embarrassing. Either the EU should have the power to force European subsidiaries to be exempted from the Cloud Act, or everyone should be forced to abide by the law, which would greatly boost EU tech. Instead we are just rolling over.


It should not be normal that companies are trying to fool their customers. I may be wrong, but I feel that dark patterns have gotten worse and have become quite normalised.

I'm well aware that companies are not your friends and that they are only in it to earn as much money as possible, etc. But in an ideal world it should never be a consideration to willingly deceive your customers. If it is, something is wrong that needs fixing.


You can thank Friedman for that with the whole "The social responsibility of business is to increase profits" mindset and the Dodge vs. Ford court case that ruled Ford had to operate his company in the interests of its shareholders above all else.

We need to end shareholder primacy and have stronger antitrust enforcement.


> the Dodge vs. Ford court case that ruled Ford had to operate his company in the interests of its shareholders above all else.

That case is from 1919 and it doesn't say what most people think it says.

The problem there was that Ford was trying to claim he could do whatever he wants because he has the most votes, minority shareholders be damned. In practice what companies do now is that they do whatever they want and come up with some explanation for why it's in the interest of the shareholders, e.g. charitable donations are tax deductions and strengthen the company's brand with customers, instead of explicitly telling the other shareholders to eat sand.

The real problem with modern companies is diffuse ownership. You invest your retirement money in some fund, the fund is the thing that actually elects the board and what the fund wants is to increase profits, and typically short-term profits at that, so they elect a board to do it and that's what happens. It's not because the law requires them to do that, it's because that's the result of that incentive structure. And then all the companies that you own as a shareholder are out there screwing you over by double when you're their customer.

Whereas if you have a company owned and operated by the same people, then they can say "hey wait a minute, this is only going to increase short-term profits by a small amount and it's going to make everyone hate us, maybe we shouldn't do it?" Which is the thing that's missing from large publicly-traded companies.

> stronger antitrust enforcement

This is the other thing that's missing. Even if companies are trying to screw you, if they have a lot of competition then they can't, because you'd just switch to one that isn't. But now try that in a market where there are only two incumbents and they're both content to pick your pocket as long as the other one is doing the same.


> The real problem with modern companies is diffuse ownership.

And inheritance taxes and the hate directed at billionaires [1] make any other kind of ownership a rare exception. So every company is headed not by a person with a goal and a conscience, but by an amoral board that can agree on only one thing - make more money.

[1] Not specific bad things specific billionaires have done, but their existence in general.


The hate against billionaires wasn't nearly as staunch even a decade ago, let alone two or three. This has nothing to do with the reason why things ended up this way.


The billionaires thing really has the causation reversed. What made people into billionaires? They were the early shareholders of companies that became megacorps. So what caused those companies to become megacorps, instead of developing into competitive markets?


Friedman told people what they wanted to hear.

Unsurprisingly Friedman was lauded and rewarded for this behavior.


Leaving the markets uncontrolled is the problem. Fine the hell out of them for acting anti-consumer and they will quickly align themselves with the realities.


Or just lobby harder tbh


Better yet, pursue structural remedies. Break up or shut down bad actors.


Acting in the interests of the shareholders doesn't mean extracting all profit immediately.


This is ironic, as it's the perpetuation of this myth by people like you that sustains this mindset. And I get that you're not doing it intentionally at all; it comes from a place of misunderstanding. But it's incredibly harmful.

To be very clear:

Companies absolutely do not have any responsibility to maximize short-term profit.

They have a responsibility to not actively and intentionally destroy the company, and to not use the company's resources for purely personal gain in a way unrelated to the company.

That's it.

This is also why you never hear about any company getting sued for anything related to this (let alone successfully). Because it doesn't happen, as it's not a thing, and any lawyer would immediately tell you you don't have a case.


There's no accountability, either at the liability level - legal, prison - or as a personal duty to make sure you Do The Right Thing (when, of course, you have a family to feed).

Behavior like what some of the tech giants do (and I don't crusade against "big tech", but individual cases are ridiculous) wouldn't be justifiable if you, like, wrote it down on a piece of paper and showed it to them. But they get away with it because, from a distance of potentially hundreds of miles, they can just ignore all feedback and never actually answer support tickets (if you acted like that to my face, well, you wouldn't dare).

Some are worse than others; some legitimately just do not care how much evil they're pumping out into the world (https://news.ycombinator.com/item?id=1692122 https://news.ycombinator.com/item?id=42651178)


If your product is this bad and no one wants to buy it normally, maybe you should build a new product.


But it's so much more profitable for shareholders to force users to engage with the shitty product


It's much cheaper for execs to buy bundled "it can do everything for less!" junk for the peasants.

That, and they're paying for Excel anyway...


Literally the exact reason we ended up with MS Teams instead of Slack.


Even if you have a great product, you'll still get more money out of people if you apply some dark patterns like this. It's very hard for a company to resist that siren call.


Yea but Satya bet a lot of the company on AI, and if it fails he's fucked as CEO. So he's going to make damn well sure he's shoved AI down everyone's throats as much as possible, even if it alienates some percentage of their customer base.


Making new products is very hard. Just look at the innovation output of the tech giants. Compared to the resources they have it’s pretty pathetic. They are simply out of ideas.


I call it Marketing Driven Development. It's also responsible for a drop in software quality, as business people have to justify their jobs and push developers off maintenance tickets that are “low priority” items but still impact enough customers that it should be embarrassing.


Welcome to 2025 - Cyberpunk without the cool aesthetics but all the downsides.

The last time I was in a major city at night (I live in a village), I realised just how close we are: e-bikes whizzing around with youngish adults wearing corporate logos all over themselves while using e-cigs, gangs of others waiting outside each restaurant for a pickup.

Straight out of the opening of Snow Crash, but without the cool car.

We really did invent the Torment Nexus from the classic cautionary tale "Don't Create The Torment Nexus".

I love computers, I love programming (and have for 35 years), I really really am coming to detest larger and larger parts of the modern tech scene - consumer tech and the Microsoft/Meta/Googles of the world.


The things companies can get away with in America are insane. Amazon really feels like Weyland-Yutani.


I'm not in the U.S., but when I tried to cancel my Bitdefender subscription last week (I replaced Windows with Linux) - surprise: there isn't a cancel option anywhere on my account pages. No chatbot, no e-mail address, no phone number. I opened a ticket with them, and the answer I got was: cancel via snail mail with the service provider. I live in an 11th-century village of 200 inhabitants, and the nearest post office is 10 km away.

These practices have got to stop. We've got to regulate this away; it's borderline fraud.


Assuming it's a credit card, file a complaint with your credit card company and do a chargeback - or request a new card number so that the old one is retired. If you have to justify it to the bank, just tell them Bitdefender has no process for canceling a subscription once started. If they press further, or get pushback from Bitdefender, tell them the customer service rep suggested trying to send a letter to see if that might work.


I'm not in the US so I suspect some of it is slightly blunted by generally stronger worker protections but Amazon has had multiple issues here as well and we still have the "gig economy" stuff just the same.

It's not a good direction things are trending.


We thought computers were different. That freedom of information would throw off the shackles of the old order and usher in a new era of human flourishing.

Turns out computers weren't different at all, they just hadn't caught the full attention of government and business yet.


I think I became depressed because of this. I used to be so enthusiastic about computers. We had the freedom to do anything we wanted. Now they're locking everything down, destroying everything the word "hacker" ever stood for. I'm watching it happen in real time. It's heartbreaking.

Computers are world changing technology. They are so powerful they could defeat police, judges, governments, militaries. Left unchecked, they could wipe out entire segments of the global economy. They could literally reshape the world. The powers that be cannot tolerate it.


Computers are different, because of zero-cost copying. It's much easier to achieve a digital monopoly than with physical-world products. That should also mean that antitrust enforcement should be stronger on software companies, and the scope of enforcement should be broader.


So when is Johnny Silverhand gonna show up? He's over two years late by now...


The other Cyberpunk. Not that it's any better but we for sure won't have Judy there to save our asses.


Thank luck we aren’t in the Warhammer 40k universe yet.


If anything we'd be more likely to open a portal to hell for Argent Energy.

`Meta today announced a strategic partnership with Union Aerospace Corporation - the deal will give Meta access to UAC's energy network powering the next revolution in AI.`


Uber, Airbnb and DoorDash are the primary dark pattern users in the industry.

I am an executive design leader, and all hires from these three companies are screened in detail about the honesty of their designs, due to how many issues I have had with these companies training their workers to lie.

If you work for them, know that it’s a black mark on your record.

I have hired two from these companies who literally opened the interview with “I want to leave X because they literally are lying”


Considering their business model is exploitation of regulations (for hotels, for employment), no wonder they're using dark patterns too.

And it seems other companies see them and think "hey, can we do that as well?" (Like the issue of this article...)

Meta with its exploiting of children's (and adults') insecurities is probably worse though.


What are examples of their lies?


Progressive anti-disclosure of prices and fees.

Full-on fraudulent display of prices, then charging a different price.

Hiding service/worker fee splits

Global predatory pricing

Blatantly false revenue forecasts given to businesses or workers.

And much more.

These are all active UX designs I have seen presented.


> and have become quite normalised.

Enforcement agencies are asleep at the switch. Without any pressure to constrain them, these major corporations will stop at nothing.

> it should never be a consideration to willingly deceive your customers.

They don't see it that way. They just see it as a new profit stream that they're daring enough to capture.


Windows 11's OneDrive just deciding to back up files without consent was certainly daring.

Look, I am computer-savvy enough to "fix" Windows, so I can live with it, but I advised my mom to get an Apple laptop.


> Enforcement agencies are asleep at the switch.

They are not asleep. They were intentionally weakened, step by step.


Isn't it amazing that big corp is like the stereotypical rug salesman now...

I suppose since they're (they being Amazon, Meta, Google, Microsoft) helping pay for a ballroom for the biggest rug conman..


There aren’t enough opportunities to make the profits they need to keep the stock price up in an ethical manner. So they have to use dark patterns. It will keep getting worse with these trillion dollar behemoths having to maintain their growth rates. Ads everywhere. AI will become more and more of a tool for manipulation.


I built myself a Fedora CoreOS-based Nextcloud instance with encrypted backup to S3: https://github.com/jeppester/coreos-nextcloud

In short, you fill in the env files, then run Butane and Ignition. (I should improve the README some time.)

I love how it's all configuration. If it breaks I can set up another instance with the same secrets in minutes. It will then grab the latest backup and continue like nothing happened.
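
For anyone wondering what that looks like in practice, here's a minimal sketch of the usual Butane-to-Ignition flow (the file names and disk device are placeholders, not the actual layout of the repo above):

    # Render the Butane config (with the env-file secrets filled in) into Ignition JSON
    butane --pretty --strict config.bu --output config.ign

    # Install Fedora CoreOS with that Ignition file; on first boot the machine
    # provisions itself (containers, backup job, etc.) from the config alone
    sudo coreos-installer install /dev/sda --ignition-file config.ign

Because the entire machine is described by that one config, rebuilding is just a matter of re-running the same steps on fresh hardware.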


I also recommend inertia.

It really doesn't do much, which is very refreshing coming from Next.js.

It's simple tooling for backend-for-frontend-style APIs, taking care of basic routing, data fetching, and form submission.

While Inertia was invented for Laravel, I'd argue that it works even better with AdonisJS. Because AdonisJS is TypeScript, you can infer the types coming from the backend.
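
As a rough sketch of what that looks like with AdonisJS 6 and @adonisjs/inertia (the controller, page component, and file paths below are made-up examples, and I'm going from memory, so treat it as an illustration rather than the exact API):

    // app/controllers/posts_controller.ts
    import type { HttpContext } from '@adonisjs/core/http'

    export default class PostsController {
      // Instead of a JSON endpoint, the controller returns an Inertia "page":
      // a component name plus the props it should receive.
      async index({ inertia }: HttpContext) {
        const posts = [{ id: 1, title: 'Hello from the backend' }] // stand-in for a DB query
        return inertia.render('posts/index', { posts })
      }
    }

    // inertia/pages/posts/index.tsx (React flavour)
    import type { InferPageProps } from '@adonisjs/inertia/types'
    import type PostsController from '#controllers/posts_controller'

    // The props are inferred from the controller above, so the frontend stays
    // in sync with the backend without a hand-written API contract.
    export default function PostsIndex(props: InferPageProps<PostsController, 'index'>) {
      return (
        <ul>
          {props.posts.map((post) => (
            <li key={post.id}>{post.title}</li>
          ))}
        </ul>
      )
    }

There's no REST/GraphQL layer to design or keep in sync; the controller's return value is the frontend's props.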


+1 for this stack.

Adonis + Inertia works wonderfully well. It's the best of both worlds. And the simplicity of this architecture is something to admire. It's very easy to know what's going on at every point: It's just requests, responses, middlewares and routes/views. Simple and easy, while still super powerful.


In my company I feel that we're getting totally overrun with code that's 90% good, 10% broken, and almost exactly what was needed.

We are producing more code, but quality is definitely taking a hit now that no-one is able to keep up.

So instead of slowly inching towards the result we are getting 90% there in no time, and then spending lots and lots of time on getting to know the code and fixing and fine-tuning everything.

Maybe we ARE faster than before, but it wouldn't surprise me if the two approaches are closer than one might think.

What bothers me the most is that I much prefer to build stuff rather than fixing code I'm not intimately familiar with.


LLMs are amazing at producing boilerplate, which removes the incentive to get rid of it.

Boilerplate sucks to review. You just see a big mass of code and can't fully make sense of it when reviewing. Also, Github sucks for reviewing PRs with too many lines.

So junior/mid devs are just churning out boilerplate-rich code and don't really learn.

The only outcome here is code quality is gonna go down very very fast.


I envy the people working at mystical places where humans were, on average, writing code of high quality prior to LLMs. I'll never know you now.


I am working at one right now, and I have worked at such places in the past. One of the main tricks is to treat code reviews very seriously so people are not incentivized to write lazy code. You need to build a culture which cares about the quality of both product and code. You also need decent developers, but not necessarily great developers.


It's very easy to go from what you're describing to a place hamstrung by nitpicking, though. The code review becomes more important than the code itself and appearances start mattering more than results.


Oh, I understand what you need to do. It's like losing weight. It's fairly simple.

And at the same time it's borderline impossible proven by the fact that people can't do it, even though everyone understands and roughly everyone agrees on how it works.

So the actual "trick" turns out to be understanding what keeps people from doing the necessary things that they all agree on are important – like treating code reviews very seriously. And getting that part right turns out to be fairly hard.


Did you make an effort to find those places and get them to hire you?


Some of them will get hired to fix the oceans of boilerplate code.


> In my company I feel that we're getting totally overrun with code that's 90% good, 10% broken, and almost exactly what was needed.

This is painfully similar to what happens when a team grows from 3 developers to 10 developers. All of a sudden, there's a vast pile of code being written, you've never seen 75% of it, your architectural coherence is down, and you're relying a lot more on policy and CI.

Where LLMs differ is that you can't meaningfully mentor them, and you can't let them go after the 50th time they try to turn off the type checker or delete the unit tests to hide bugs.

Probably, the most effective way to use LLMs is to make the person driving the LLM 100% responsible for the consequences. Which would mean actually knowing the code that gets generated. But that's going to be complicated to ensure.


Have thorough code reviews and hold the developer using the LLM responsible for everything in the PR before it can be merged.


Perlis, epigram 7:

7. It is easier to write an incorrect program than understand a correct one.

Link: http://cs.yale.edu/homes/perlis-alan/quotes.html


"but quality is definitely taking a hit now that no-one is able to keep up."

And it's going to get worse! So please explain to me how, on net, you are going to be better off. You're not.

I think most people haven't taken a decent economics class and don't deeply understand the notion of trade offs and the fact there is no free lunch.


Technology has always helped people. Are you one of the people that say optimizing compilers are bad? Do you not use the intellisense? Or IDEs? Do you not use higher level languages? Why not write in assembly all the time? No free lunch right.

Yes, there are trade-offs, but at this point, if you haven’t found a way to significantly amplify and scale yourself using LLMs, and your plan is instead to pretend that they are somehow not useful, that uphill battle can only last so long. The genie is out of the bottle. Adapt to the times or you will be left behind. That’s just what I think.


Technology does not always help people, in fact often it creates new problems that didn't exist before.

Also, telling someone to "adapt to the times" is a bit silly. If it helped as much as it's claimed, there wouldn't be any need to try and convince people they should be using it.

A LOT of parallels with crypto, which is still trying to find its killer app 16 years later.


I don’t think anyone needs to be convinced at this point. Every developer is using LLMs, and I really can’t believe someone who has made a career out of automating things wouldn’t be immediately drawn to at least trying them. Every single company seems convinced and is using them too. The comparison to crypto makes no sense.


> Every developer is using LLMs

Citation needed. In my circles, senior engineers are not using them a lot, or only in very specific use cases. My company is blocking LLM use apart from a few pilots (which I am part of, and while Claude Code is cool, its effectiveness on a 10-year-old distributed codebase is pretty low).

You can't make sweeping statements like this, software engineering is a large field.

And I use Claude Code for my personal projects; I think it's really cool. But the code quality is still not there.


Stack Overflow recently published a survey in which something like 80% of developers were using AI and the rest “wants to soon”. By now I have trouble believing a competent developer is still convinced they shouldn’t use it at all, though a few Luddites perhaps might hold on for a bit longer.


Stack Overflow published a report about text editors and Emacs wasn’t part of the list. So I’m very sceptical about SO surveys.


I was also offended by that :D.


“Using AI” is a very broad term. Using AI to generate lorem ipsum is still “using AI”.


> You can't make sweeping statements like this, software engineering is a large field.

that goes both ways


I need to be convinced.

Go ahead, convince me. Please describe clearly and concisely in one or two sentences the clear economic value/advantage of LLMs.


Careful now, you will scare them away!!!!!

People love stuff that makes them feel like they are doing less work. Cognitive biases distort reality and rational thinking; we know this already from behavioural economics.


The company I work for uses LLMs for digital marketing; it has over 100M ARR selling products built on top of LLMs with real life measurable impact as measured by KPIs.


> real life measurable impact as measured by KPIs

This is making me even more skeptical of your claims. Individual metrics are often very poor at tracking reality.


Individual metrics are often very good at distorting reality, which is why corporate executives love them so much.


Digital marketing is old. What about LLMs gives an advantage to digital marketing?


Review responses, for example. Responding to reviews has been shown to have a positive impact on brands. Traditionally it’s been hard to respond to all the reviews for high-volume locations. Not anymore.

That’s one example; there are dozens of processes that are now relatively easy to automate due to LLMs.


It’s just plain mean to make the Emperor speak of the thread count of his “clothes”.


My parents could have said your first paragraph when I tried to teach them they could Google their questions and find answers.

Technology moves forward and productivity improves for those that move with it.


A few examples of technology that moved 'forward' but decreased productivity for those who moved with it from my 'lived' experience:

1) CASE tools (and UML driven development)

2) Wizard driven code.

3) Distributed objects

4) Microservices

These all really were the hot thing with massive pressure to adopt them just like now. The Microsoft demos of Access wizards generating a complete solution for your business had that same wow feeling as LLM code. That's not to say that LLM code won't succeed but it is to say that this statement is definitely false:

> Technology moves forward and productivity improves for those that move with it.


> Technology moves forward and productivity improves for those that move with it.

It does not; technology regresses just as often, and linear deterministic progress is a myth to begin with. There is no guarantee that technology will move forward and always make things better.

There are plenty of examples to be made where technology has made certain things worse.


I would say it as "technology tends to concentrate power to those who wield it."

That's not all it does but I think it's one of the more important fundamentals.


Why is productivity so important? When do regular people get to benefit from all this "progress?"


Being permitted to eat - is that not great benefit?


"But with Google is easier!" When you were trying to teach your folks about Google, were you taking into consideration dependence, enshittification, or the surveillance economy? No, you were retelling them the marketing.

Just by having lived longer, they might've had the chance to develop some intuition about the true cost of disruption, and about how whatever Google's doing is not a free lunch. Of course, neither they, nor you (nor I for that matter) had been taught the conceptual tools to analyze the workings of some Ivy League whiz kids that have been assigned to be "eating the world" this generation.

Instead we've been incentivized to teach ourselves how to be motivated by post-hoc rationalizations. And ones we have to produce at our own expense too. Yummy.

Didn't Saint Google end up enshittifying people's very idea of how much "all of the world's knowledge" is; gatekeeping it in terms of breadth, depth and availability to however much of it makes AdSense. Which is already a whole lot of new useful stuff at your fingertips, sure. But when they said "organizing all of the world's knowledge" were they making any claims to the representativeness of the selection? No, they made the sure bet that it's not something the user would measure.

In fact, with this overwhelming amount of convincing non-experientially-backed knowledge being made available to everyone - not to mention the whole mass surveillance thing lol (smile, their AI will remember you forever) - what happens first and foremost is the individual becomes eminently marketable-to, way more deeply than over Teletext. Thinking they're able to independently make sense of all the available information, but instead falling prey to the most appealing narrative, not unlike a day trader getting a haircut on market day. And then one has to deal with even more people whose life is something someone sold to them, a race to the bottom in the commoditized activity (in the case of AI: language-based meaning-making).

But you didn't warn your parents about any of that or sit down and have a conversation about where it means things are headed. (For that matter, neither did they, even though presumably they've had their lives altered by the technological revolutions of their own day.) Instead, here you find yourself stepping in for that conversation to not happen among the public, either! "B-but it's obvious! G-get with it or get left behind!" So kind of you to advise me. Thankfully it's just what someone's paid for you to think. And that someone probably felt very productive paying big money for making people think the correct things, too, but opinions don't actually produce things do they? Even the ones that don't cost money to hold.

So if it's not about the productivity but about the obtaining of money to live, why not go extract that value from where it is, instead of breathing its informational exhaust? Oh, just because, figuratively speaking, it's always the banks that have AIs that don't balk at "how to rob the bank"; and it's always we that don't. Figures, no? But they don't let you in the vault for being part of the firewall.


Paul Krugman (Nobel laureate in economics) said in 1998 that the internet would be no biggie. Many companies needed convincing to adopt the internet (heck, some still need convincing).

Would you say the same thing ("If it helped as much as it's claimed, there wouldn't be any need to try and convince people they should be using it.") about the internet?


I would, unironically.

Thing's called a self-fulfilling prophecy. The next level up from an MLM scheme: total bootstrap. Throwing shit at things is an innate primate activity; use money to shift people's attention to a given thing for long enough, and eventually they'll throw enough shit at the wall for something to stick. At which point it becomes something able to roll along with the market cycles.


"It is difficult to get a man to understand something when his salary depends upon his not understanding it"


It's also difficult to get a man to understand something if you stubbornly refuse to explain it.

Instead of this crypto-esque hand waving, maybe you can answer me now?


> Do you not use the intellisense?

Not your point, but I turned IntelliSense off years ago and haven't missed it. There's so much going on with IDE UIs now that having extra drop-downs while typing was just too much. And Copilot largely replaces the benefit of it anyway.


The big difference is that all of the other technologies you cite are deterministic making it easy to predict their behavior.

You have to inspect the output of LLMs much more carefully to make sure they are doing what you want.


"Technology has always helped people. Are you one of the people that say optimizing compilers are bad? Do you not use the intellisense? Or IDEs? Do you not use higher level languages? Why not write in assembly all the time? No free lunch right."

Actually I am not a software engineer for a living so I have zero vested interest or bias lmao. That said, I studied Comp Sci at a top institution and really loved defining algorithms and had no interest in actual coding as it didn't give my brain the variety it needs.

If you are employed as a software engineer, I am probably more open to realizing the problems that only become obvious to you later on.


Someone already pointed out that we're at the point where it's no longer possible to know if comments like the above are satire or not.


Yep, my strong feeling is that the net benefit of all of this will be zero. The time you have to spend holding the LLM hand is almost equal to how much time you would have spent writing it yourself. But then you've got yourself a codebase that you didn't write yourself, and we all know hunting bugs in someone else's code is way harder than code you had a part in designing/writing.

People are honestly just drunk on this thing at this point. The sunk cost fallacy has people pushing on (i.e. spending more time) when LLMs aren't getting it right. People are happy to trade everything else for convenience; just look at junk food, where people trade away flavour and their health. And ultimately we are in a time when nobody is building for the future, it's all get-rich-quick schemes: squeeze, then get out before anyone asks why the river ran dry. LLMs are like the perfect drug for our current society.

Just look at how technology has helped us in the past decades. Instead of launching us towards some kind of Star Trek utopia, most people now just work more for less!


Only when purely vibe coding. AI currently saves a LOT of time if you get it to generate boilerplate, diagnose bugs, or assist with sandboxed issues.

The proof is in the pudding. The work I do takes me half as long as it used to and is just as high in quality, even though I manage and carefully curate the output.


I don't write much boilerplate anyway. I long ago figured out ways to not do that (I use a computer to do repetitive tasks for me). So when people talk about boilerplate I feel like they're only just catching up to me, not surpassing me.

As for catching bugs, maybe, but I feel like it's pot luck. Sometimes it can find something, sometimes it's just complete rubbish. Sometimes worth giving it a spin but still not convinced it's saving that much. Then again I don't spend much time hunting down bugs in unfamiliar code bases.


Like any tool, it has use cases where it excels and use cases where it’s pretty useless.

Unfamiliar code bases is a great example, if it’s able to find the bug it could do so almost instantly, as opposed to a human trying to read through the code base for ages. But for someone who is intimately familiar with a code base, they’ll probably solve the problem way faster, especially if it’s subtle.

Also, say your job is taking image designs and building them in HTML/CSS: just feeding it an image, getting it to dump out an HTML/CSS skeleton, and then cleaning up the details will save you a lot of time. But on the flip side, if you need to make safety-critical software where every line matters, you’ll be way faster on your own.

People want to give a black and white “ai is bad” or “ai is great”, but the truth _as always_ is “it depends”. Humans aren’t very good at “it depends”.


I use AI for most of those things. And I think it probably saves me a bit of time.

But in that study that came out a few weeks ago where they actually looked at time saved, every single developer overestimated their time saved. To the point where even the ones who lost time thought they saved time.

LLMs are very good at making you feel like you’re saving time even when you aren’t. That doesn’t mean they can’t be a net productivity benefit.

But I’d be very very very surprised if you have real hard data to back up your feelings about your work taking you half as long and being equal quality.


That study predates Claude Code though.

I’m not surprised by the contents. I had the same feeling; I made some attempts at using LLMs for coding prior to CC, and with rare exceptions it never saved me any time.

CC changed that situation hugely, at least in my subjective view. It’s of course possible that it’s not as good as I feel it is, but I would at least want a new study.


I don’t believe that CC is so much better than cursor using Claude models that it moves the needle enough to flip the results of that study.

The key thing to look at is that even the participants that did objectively save time, overestimated time saved by a huge amount.

But also you’re always likely to be at least one model ahead of any studies that come out.


> That study predates Claude Code though.

Is there a study demonstrating Claude Code improves productivity?


I mean, I used to average 2 hours of intense work a day and now it’s 1 hour.


How are you tracking that? Are you keeping a log, or are you just guessing? Do you have a mostly objective definition of intense work or are you just basing it on how you feel? Is your situation at work otherwise exactly the same, or have you gotten into a better groove with your manager? Are you working on exactly the same thing? Have you leveled up with some more experience? Have you learned the domain better?

Is your work objectively the same quality? Is it possible that you are producing less but it’s still far above the minimum so no one has noticed? Is your work good enough for now, but a year from now when someone tries to change it, it will be a lot harder for them?

Based on the only real studies we have, humans grossly overestimate AI time savings. It’s highly likely you are too.


_sigh_. Really dude? Just because people overestimate them on average doesn’t mean every person does. In fact, you should be well versed enough in the statistics to understand that it will be a spectrum that is highly dependent on both a person’s role and how they use it.

For any given new tool, a range of usefulness that depends on many factors will affect people differently as individuals. Just because a carpenter doesn’t save much time because Microsoft excel exists doesn’t mean it’s not a hugely useful tool, and doesn’t mean it doesn’t save a lot of time for accountants, for example.

Instead of trying to tear apart my particular case, why not entertain the possibility that it’s more likely I’m reporting pretty accurately but it’s just I may be higher up that spectrum - with a good combo of having a perfect use case for the tool and also using the tool skilfully?


> _sigh_. Really dude? Just because people overestimate them on average doesn’t mean every person does.

In the study, every single person overestimated time saved on nearly every single task they measured.

Some people saved time, some didn’t. Some saved more time, some less. But every single person overestimated time saved by a large margin.

I’m not saying you aren’t saving time, but it’s very likely that, if you aren’t tracking things very carefully, you are overestimating.


I’ll admit it’s possible my estimates are off a bit. What isn’t up for debate though is that it’s made a huge difference in my life and saved me a ton of time.

The fact that people overestimate its usefulness is somewhat of a “shrug” for me. So long as it _is_ making big differences, that’s still great whether people overestimate it or not.


If people overestimate time saved by huge margins, we don’t know whether it’s making big differences or not. Or more specifically whether the boost is worth the cost (both monetary and otherwise).


Only if we’re only using people’s opinions as data. There are other ways to do this.


Sure, and if we look at data, the only independent studies we have show either small productivity gains or a reduction in productivity for everything but small greenfield projects.


Studies plural? Can you link them?


Google for the Stanford study by Yegor Denisov-Blanch. You might have to pay to access the paper, but you can watch the author’s synopsis on YouTube.

For low complexity greenfield projects (best case) they found a 30% to 40% productivity boost.

For high-complexity brownfield projects (worst case) they found a -5% to 10% productivity boost.

The METR study from a few weeks ago showed an average productivity drop around 20%.

That study also found that the average developer believed AI had made them 20% more productive. The difference in perception and reality was on average 40 percentage points.


The devil is always in the details with these studies. What did they measure, how did they measure it, are they counting learning the new tool as unproductive time, etc etc etc. I’ll have to read them myself. Regardless, I’ll be sad if it makes most people less productive on average if that’s the scientific truth, but it won’t change the fact that for my specific use case there is a clear time save.


Sure you need to read them yourself to know what conclusions to draw.

In my specific case I felt like I was maybe 30% faster on greenfield projects with AI (and maybe 10% on brownfield). Then I read the study showing a 40 percentage point overestimate on average.

I started tracking things and it’s pretty clear I’m not actually saving anywhere near 30%, and I’d estimate that long term I might be in the negative productivity realm.


My way of looking at this is simple.

What are people doing with this, quote-unquote, time that they have gained back? Working on new projects? OK, can you show me the impact on the financials (show me the money)? And then I usually get dead silence. And before someone mentions the layoffs - lmao, get real. It's offshoring 2.0, so that the large firms can increase their internal equity to keep funding this effort.

Most people are terrible at giving true informed opinions - they never dig deep enough to determine if what they are saying is proper.


Fast feedback is one benefit, given the 90% is releasable - even if only to a segment of users. This might be anathema to good engineering, but a benefit to user experience research and to organizations that want to test their market for demand.

Fast feedback is also great for improving release processes; when you have a feedback loop with Product, UX, Engineering, Security etc, being able to front load some % of a deliverable can help you make better decisions that may end up being a time saver net/net.


> And it's going to get worse!

That isn't clear given the fact that LLMs and, more importantly, LLM programming environments that manage context better are still improving.


> What bothers me the most is that I much prefer to build stuff rather than fixing code I'm not intimately familiar with.

Me too. But I think there's a split here. Some people love the new fast and loose way and rave about how they're experiencing more joy coding than ever before.

But I tried it briefly on a side project, and hated the feeling of disconnect. I started over, doing everything manually but boosted by AI and it's deeply satisfying. There is just one section of AI written code that I don't entirely understand, a complex SQL query I was having trouble writing myself. But at least with an SQL query it's very easy to verify the code does exactly what you want with no possibility of side effects.


I'd argue that this awareness is a good thing; it means you're measuring, analyzing, etc all the code.

Best practices in software development for forever have been to verify everything; CI, code reviews, unit tests, linters, etc. I'd argue that with LLM generated code, a software developer's job and/or that of an organization as a whole has shifted even more towards reviewing and verification.

If quality is taking a hit you need to stop; how important is quality to you? How do you define quality in your organization? And what steps do you take to ensure and improve quality before merging LLM generated code? Remember that you're still the boss and there is no excuse for merging substandard code.


Imagine someone adds 10 carefully devised unit tests, and someone notices they need one more during the PR.

Scenario B: you add 40 with an LLM that look good on paper but only cover 6 of the original 10. Besides, who's going to pay careful attention to a PR with 40 tests?

"Must be so thorough!".


As Fowler himself states, there's a need to learn to use these tools properly.

In any case poor work quality is a failure of tech leadership and culture, it's not AI's fault.


It’s funny how nothing seems to be AI’s fault.


That's because it's software / an application. I don't blame my editor for broken code either. You can't put blame on software itself, it just does what it's programmed to do.

But also, blameless culture is IMO important in software development. If a bug ends up in production, whose fault is it? The developer that wrote the code? The LLM that generated it? The reviewer that approved it? The product owner that decided a feature should be built? The tester that missed the bug? The engineering organization that has a gap in their CI?

As with the Therac-25 incident, it's never one cause: https://news.ycombinator.com/item?id=45036294


Blameless culture is important for a lot of reasons, but many of them are human. LLMs are just tools. If one of the issues identified in a post-mortem is "using this particular tool is causing us problems", there's not a blameless culture out there that would say "We can't blame the tool..."; the action item is "Figure out how to improve/replace/remove the tool so it no longer contributes to problems."


> You can't put blame on software itself, it just does what it's programmed to do.

This isn't what AI enthusiasts say about AI though, they only bring that up when they get defensive but then go around and say it will totally replace software engineers and is not just a tool.


Blame is purely social and purely human. “Blaming” a tool or process and root causing are functionally identical. Misattributing an outage to a single failure is certainly one way to fail to fix a process. Failing to identify faulty tools/ faulty applications is another way.

I was being flippant to say it’s never AI’s fault, but due to board/C-Suite pressure it’s harder than ever to point out the ways that AI makes processes more complex, harder to reason about, stochastic, and expensive. So we end up with problems that have to be attributed to something not AI.


If poor work gets merged, the responsibility lies with whoever wrote it, whoever merged it, and whoever allows such a culture.

The tools used do not hold responsibilities, they are tools.


"I got rid of that machine saw. Every so often it made a cut that was slightly off line but it was hard to see. I might not find out until much later and then have to redo everything."


How could a tool be at fault? If an airplane crashes is the plane at fault or the designers, engineers, and/or pilot?


Designers, engineers, and/or pilots aren't tools, so that's a strange rhetorical question.

At any rate, it depends on the crash. The NTSB will investigate and release findings that very well may assign fault to the design of the plane and/or pilot or even tools the pilot was using, and will make recommendations about how to avoid a similar crash in the future, which could include discontinuing the use of certain tools.


My point is that the tool (the airplane in this case) is not at fault, but rather the humans in the loop.


If your toaster burns your breakfast bread, do you ultimately blame "it"?

You get mad, swear at it, maybe even throw it at the wall in a fit of rage, but at the end of the day, deep inside you still know you screwed up.


Devices can be faulty and technology can be inappropriate.


If I bought an AI powered toaster that allows me to select a desired shade of toast, I select light golden brown, and it burns my toast, I certainly do blame “it”.

I wouldn’t throw it against a wall because I’m not a psychopath, but I would demand my money back.


No one seems to be able to grasp the possibility that AI is a failure.


> No one seems to be able to grasp the possibility that AI is a failure.

Do you think by the time GPT-9 comes, we'll say "That's it, AI is a failure, we'll just stop using it!"

Or do you speak in metaphorical/bigger picture/"butlerian jihad" terms?


I don't see the use-case now, maybe there will be one by GPT-9


Absence of your need isn't evidence of no need.


This is true, but I've never heard of a use case. To which you might reply, "doesn't mean there isn't one," which you would also be right about.

Maybe you know one.


I presume your definition of use case is something that doesn't include what people normally use it for. And I presume me using it for coding every day is disqualified as well.


I didn't mean to suggest it has no utility at all. That's obviously wrong (same for crypto). I meant a use case in line with the projections the companies have claimed (multiple trillions). Help with basic coding (of which efficiency gains are still speculative) is not a multi-trillion dollar business.


You've failed to figure out when and how to use it. It's not a binary failed/succeeded thing.


None of the copyright issues or suicide cases have been handled by the courts yet. There are many aspects.


Metaverse was...


“There’s no use for this thing!” - said the farmer about the computer

