hatefulheart's comments | Hacker News

There are so many comments along the lines of:

"What's the matter, just fork it when it goes bad?"

The problem is that uv in and of itself, whilst a great technical achievement, isn't sufficient. Astral run a massive DevOps pipeline that, just to give one example, packages the Python distributions.

Those who are saying that forking is an option are clearly not arguing it in good faith.


That's overstating things. The biggest piece of infra is PyPI, to which uv is only an interface. They do distribute Python binaries, but that's not very impressive.

So when Charlie Marsh goes on a podcast saying that the majority of the complications they face with their work are in DevOps, he's also overstating things?

But you know best it seems!


Overstating complexity justifies funding, and attracts attention.

False equivalency. The maintenance and expertise required to run the codebase you’ve generated still falls squarely on you. When you use a library or a framework, it’s normally domain experts who do that stuff.


I’m so glad we’ve got domain experts to write those tricky things like left-pad for us.

On a more serious note, I do think that the maintenance aspect is a differentiator, and that if it’s something that you end up committing to your codebase then ownership and accountability falls to you. Externally sourced libraries and frameworks ultimately have different owners.


I'm reminded of the recent "vibe coded" OCaml fiasco[1].

In particular, the PR author's response to this question:

> Here's my question: why did the files that you submitted name Mark Shinwell as the author?

> > Beats me. AI decided to do so and I didn't question it.

The same author submitted a similar PR to Julia as well. Both were closed in part due to the significant maintenance burden these entirely LLM-written PRs would create.

> This humongous amount of code is hard to review, and very lightly tested. (You are only testing that basic functionality works.) Inevitably the code will be full of problems, and we (the maintainers of the compiler) will have to pay the cost of fixing them. But maintaining large pieces of plausible-in-general-but-weird-in-the-details code is a large burden.

Setting aside the significant volume of code being committed at once (13K+ lines in the OCaml example), the maintainers would have to review code even the PR author didn't review - and would likely fall into the same trap many of us have found ourselves in while reviewing LLM-generated code... "Am I an idiot or is this code broken? I must be missing something obvious..." (followed by wasted time and effort).

The PR author even admitted they know little about compilers - making them unqualified to review the LLM-generated code.

[1] https://github.com/ocaml/ocaml/pull/14369


This exchange is so funny. That much time around LLMs and no human feedback really seems to have broken that guy's brain.


“Whatever he did or didn’t invent, he made a ton of invention possible.“

I think it’s time to pony up.

Where are your vibe coded databases that take on SQLite and Postgres?

Where are your vibe coded Operating Systems?

Where are your vibe coded browsers?

Where are your vibe coded literally anything?


My pony’s doing just fine.

At a friend’s birthday last year, I wrote in the space of 8 minutes - then performed - a 3-minute long verse about said friend and their puppy. I didn’t get the verse from ChatGPT. I had it help me find rhyming words that fit the rhythm, had it help me find synonyms, and find punchy ends to sentences.

I made a xylophone iPhone app way back in mid 2024 by copy-pasting code to Claude and errors from Xcode, just to show off what AI can do. Someone asked me to make it support dragging your finger across the screen to play lots of notes really fast - Claude did that in one shot. In mid 2024, 6 months before Claude Code.

I made a sorting hat for my sisters’ kids for Christmas a few weeks ago. I found a voice cloning website, had Claude write some fun dialogue, and vibe coded an app with the generated recordings of the sorting hat voice saying various virtues and Harry Potter house names. The cloned voice was so good, it sounded exactly like the actor in the movie. I loaded the app on my phone and hid a Bluetooth speaker in a Santa hat - tapping a button in the app would play a voice recording from the sorting hat AI voice. The kids unwrapped the hat and it declared itself as the sorting hat. Put the hat on a kid’s head, tap a button, hat talks! With a little sleight of hand, the kids really believed it was the hat talking all by itself. Laughing together with my whole family as the hat declared my cheeky niece “Slytherin!!!” was one of the most humanising things I’ve ever seen.

I’ve made event posters for my Burning Man camp. Zillions of dumb memes for group chats. You always have to do some inpainting or touch it up in an image editor, but it’s worth it for the lulz.

And right now I’m using Claude Code for my startup, ApprovIQ. Dario Amodei was right in a way: 99% of the code was written by Claude.

But sorry, no multi million line vibe coded codebases. For that my friend, you’ll be waiting until after the next AI winter.


With all due respect you are living in a different world. Not in a bad way, it’s just you haven’t experienced what maintenance on a large complicated code base is like.


The worst part of the new wave of vibe coders is their confidence.


Different worlds yes, but they both exist.


Sure, where one is ignorant of the other. That’s not a pro.


Small business owners not being aware of maintenance hell in large org codebases, yes, is that a problem?

I work for a large org and maintenance hell is my job, so I see both sides I think.


I’m a small business owner and solo developer on that business. Let’s just say I’d rather know the costs of my choices upfront. I’m sure there is not one small business owner in tech who would turn their nose up at that.


Good points. It works both ways though: you are splitting your time between two worlds, and don't have a fully clear view of the costs of bad choices in a small business.

To know this, you need to know what processes these businesses have been using for the past decade to run a real full-time business with full-time staff. For example, you don't know just how bad the prior systems were that the self-built systems replaced.

With all due respect, you don't have all the info to make the calculation on my world, just as I don't have it for yours.

The same tool that helped me build our systems is not going to be the same tool that helps you maintain your large code base. But my point is that I'm on the front line of change, and my guess is it's not going to be limited to my size of business. I don't know what your tool will look like, but I'd bet it's coming.


Fair enough, that is a good point.


A pro is someone who makes money doing their profession.


Pro as in pros and cons, not as in professional.


Maybe the problem is large complicated codebases?


I think there will be a transition period.


Your trainers clearly never read Starting Strength.


No idea, I certainly haven't. This was decades ago, though, so it's entirely possible that established best practice has changed.


These days you're better off not reading that, probably. Bunch of outdated and bad advice coming from that corner.


What do you recommend?


"Eat food, not too much, mostly plants."

Also probably move around a lot, doesn't matter how, ideally by finding something fun to do that involves moving around a lot.


I think there is a difference between automating “things” (as you put it) and getting to the point where people are on stage suggesting that the government becomes a “backstop” to their investments in automation.


Good question, it’s not. You're responding to just another Rust and/or LLM fanatic claiming they can predict the future. Dime a dozen on this board.


I will take the Rust fanatic mantle, thank you very much. Check your sources though [1]. I don't predict the future, I just listen to Linus.

[1] https://lwn.net/Articles/1049831/


Sorry to burst your bubble but it takes more than Linus to make Linux.


I suspect you wouldn't be saying that if he had agreed with you.


I wouldn’t be saying that it takes more than Linus to build Linux if Linus agreed with me?

What on earth are you talking about? Don’t quit your day job to become a detective.


Absolutely. If Linus had said "Rust for Linux might go away" then you would have said "See! The creator and leader of Linux says Rust is going to go away!" but because he said it is here to stay you're saying "Pff, Linus isn't that important."

I won't quit my day job to become a detective because actual detective work isn't this trivial.


Do you think everyone you speak to online is this misinformed or has these takes?

I’ve read and listened to enough of Linus to know he says this himself. The Linux kernel is nothing without the maintainers, this is easily observable and everyone knows it.

You see the reason you shouldn’t go into detective work is because you’d be terrible at it, not because it’s non-trivial.


Hey let’s not put the Rust fanatics in the same bucket as the LLM bros. One is a safe programming language, the other is an overgrown lorem ipsum text generator. We’re not the same :)


The fact that you’re both getting lumped in the same shill bucket together should give you pause.


Yes, I paused to comment and moved on


I’m confused, how does this prevent a CSRF attack?

SameSite or not is inconsequential to the check a backend does for a CSRF token in the POST.


The only reason CSRF is even possible is because the browser sends (or, well, used to send) cookies for a particular request even if that request was initiated from a different site. If the browser never did that (and most people would argue that was a design flaw from the get-go), CSRF attacks wouldn't even be possible. The SameSite attribute makes it so that cookies will only be sent if the request originates from the same site as the one that originally set the cookie.
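To make that concrete, here's a minimal standard-library sketch of what the server opts into: adding the SameSite attribute to its Set-Cookie header (the cookie name and value here are made up for illustration):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header with SameSite so the browser withholds the
# cookie on cross-site requests (http.cookies supports it since Python 3.8).
cookie = SimpleCookie()
cookie["session"] = "abc123"           # hypothetical session cookie
cookie["session"]["samesite"] = "Lax"  # or "Strict" to block all cross-site sends
cookie["session"]["httponly"] = True
cookie["session"]["secure"] = True

header = cookie["session"].OutputString()
print(header)  # includes "SameSite=Lax"
```

With "Lax", the cookie is still sent on top-level navigations (clicking a link to the site), but not on cross-site POSTs, which is exactly the CSRF case.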


I think I understand now: the cookie is just not present in the POST if a user clicked on, for example, a maliciously crafted post from a different origin?


Exactly.


Never needed the CSRF token and always assumed that cookies were SameSite, but I can see that it was introduced in 2016. I've just had the site name put into the value of the cookie since then, and never really needed to think about that.

It just feels like all these HTTP specs are super duct-taped together. I guess that is the only way to ensure mass adoption for new devs and now vibe coders.


I'm not sure I'm understanding your solution


If the domain name is in the cookie value then that can't be used when submitting another request from another domain. Yes, you can configure the DNS to bypass that, but at that point it is also pointless for CSRF.


Not to be rude, but from your comments you don't appear to understand what the CSRF vulnerability actually is, nor how attackers make use of it.

Cookies can still only be sent to the site that originally wrote them, and they can only be read by the originating site, and this was always the case. The problem, though, is that a Bad Guy site could submit a form post to Vulnerable Site, and originally the browser would still send any cookies of Vulnerable Site with the request. Your comment about "if the domain name is in the cookie value" doesn't change this, and the problem still exists. "Yes you can configure the dns to bypass that" also doesn't make any sense in this context. The issue is that if a user is logged into Vulnerable Site, and can somehow be convinced to visit Bad Guy site, then Bad Guy site can take an action as the logged-in user of Vulnerable Site, without the user's consent.


Given what was written, I'm not quite sure the author does either.


> Just had the sitename put into the value of the cookie since, and never really needed to think about that.

How would that help? This doesn't seem like a solution to the CSRF problem


No? The whole point of SameSite=(!none) is to prevent requests from unexpectedly carrying cookies, which is how CSRF attacks work.


What does this even mean?

I’m not being rude, but what does it mean to unexpectedly carry cookies? That’s not what I understand the risk of CSRF to be.

My understanding is that we want to ensure a POST came from our website, and we do so with a double-signed HMAC token that is present in both the form AND the cookie, which is also tied to the session.

What on earth is unexpectedly carrying cookies?


The "unexpected" part is that the browser automatically fills some headers on behalf of the user, that the (malicious) origin server does not have access to. For most headers it's not a problem, but cookies are more sensitive.

The core idea behind the token-based defense is to prove that the origin server had access to the value in the first place such that it could have sent it if the browser didn't add it automatically.

I tend to agree that the inclusion of cookies in cross-site requests is the wrong default. Using SameSite fixes the problem at the root.

The general recommendation I've seen is to have two cookies: one without SameSite for read operations, which allows you to gracefully handle users navigating to your site, and a second SameSite cookie for state-changing operations.
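The token-based defense described above can be sketched in a few lines of Python. This is a hypothetical illustration (function names, key handling, and session IDs are invented), not any particular framework's implementation:

```python
import hashlib
import hmac
import secrets

# Server-side secret key (assumption: generated once per deployment and kept private).
SECRET = secrets.token_bytes(32)

def make_csrf_token(session_id: str) -> str:
    # HMAC binds the token to the session, so the server can verify that
    # the origin server had access to the value without storing the token.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id: str, form_token: str) -> bool:
    # Constant-time comparison of the token submitted in the form body
    # against the one recomputed from the session.
    return hmac.compare_digest(make_csrf_token(session_id), form_token)

token = make_csrf_token("session-abc")
print(check_csrf("session-abc", token))   # True: token matches this session
print(check_csrf("session-evil", token))  # False: token bound to a different session
```

A cross-site attacker can force the browser to send the cookie (absent SameSite), but cannot read or forge a token tied to the victim's session, so the POST fails the check.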


Simon have you got to the point where you just don’t read the article?

Others have pointed out your interpretation of long task is not the same as the article.

Maybe this is the negative effects of excessive LLM usage that are spoken about.


They were right. I hadn't read enough of the article to understand what was meant by multi-hour tasks. I upvoted them for pointing that out.


>> Maybe this is the negative effects of excessive LLM usage that are spoken about.

> I upvoted them for pointing that out.

I'm also curious about what you think about the GP's question. TBH, responding after reading half an article was a common thing for most people pre-LLM anyway.


Yeah, show me a Hacker News user who's never posted a comment on a story without properly reading it (or even without clicking the link). LLMs have nothing to do with it.

If I had piped the article through an LLM first, I wouldn't have made the embarrassing mistake in that comment!


Your tone is kind of ridiculous.

It’s insane this has to be pointed out to you but here we go.

Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.


No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.

