"What's the matter, just fork it when it goes bad?"
The problem is that uv in and of itself, whilst a great technical achievement, isn't sufficient. Astral run a massive DevOps pipeline that, to give just one example, packages the Python distributions.
Those who are saying that forking is an option are clearly not arguing it in good faith.
That's overstating things. The biggest piece of infra is PyPI, to which uv is only an interface. They do distribute Python binaries, but that's not very impressive.
So when Charlie Marsh goes on a podcast saying that the majority of the complications they face in their work are in DevOps, he's also overstating things?
False equivalence. The maintenance of, and expertise required to run, the codebase you’ve generated still falls squarely on you. When you use a library or a framework, it’s normally domain experts who do that work.
I’m so glad we’ve got domain experts to write those tricky things like left-pad for us.
On a more serious note, I do think that the maintenance aspect is a differentiator, and that if it’s something that you end up committing to your codebase then ownership and accountability falls to you. Externally sourced libraries and frameworks ultimately have different owners.
I'm reminded of the recent "vibe coded" OCaml fiasco[1].
In particular, the PR author's response to this question:
> Here's my question: why did the files that you submitted name Mark Shinwell as the author?
> > Beats me. AI decided to do so and I didn't question it.
The same author submitted a similar PR to Julia as well. Both were closed in part due to the significant maintenance burden these entirely LLM-written PRs would create.
> This humongous amount of code is hard to review, and very lightly tested. (You are only testing that basic functionality works.) Inevitably the code will be full of problems, and we (the maintainers of the compiler) will have to pay the cost of fixing them. But maintaining large pieces of plausible-in-general-but-weird-in-the-details code is a large burden.
Setting aside the significant volume of code being committed at once (13K+ lines in the OCaml example), the maintainers would have to review code even the PR author didn't review - and would likely fall into the same trap many of us have found ourselves in while reviewing LLM-generated code... "Am I an idiot or is this code broken? I must be missing something obvious..." (followed by wasted time and effort).
The PR author even admitted they know little about compilers - making them unqualified to review the LLM-generated code.
At a friend’s birthday last year, I wrote in the space of 8 minutes - then performed - a 3-minute long verse about said friend and their puppy. I didn’t get the verse from ChatGPT. I had it help me find rhyming words that fit the rhythm, had it help me find synonyms, and find punchy ends to sentences.
I made a xylophone iPhone app way back in mid-2024 by copy-pasting code to Claude and errors from Xcode, just to show off what AI can do. Someone asked me to make it support dragging your finger across the screen to play lots of notes really fast, and Claude did that in one shot. In mid-2024, six months before Claude Code.
I made a sorting hat for my sisters’ kids for Christmas a few weeks ago. I found a voice cloning website, had Claude write some fun dialogue, and vibe coded an app with the generated recordings of the sorting hat voice saying various virtues and Harry Potter house names. The cloned voice was so good, it sounded exactly like the actor in the movie. I loaded the app on my phone and hid a Bluetooth speaker in a Santa hat - tapping a button in the app would play a voice recording from the sorting hat AI voice. The kids unwrapped the hat and it declared itself as the sorting hat. Put the hat on a kid’s head, tap a button, hat talks! With a little sleight of hand, the kids really believed it was the hat talking all by itself. Laughing together with my whole family as the hat declared my cheeky niece “Slytherin!!!” was one of the most humanising things I’ve ever seen.
I’ve made event posters for my Burning Man camp. Zillions of dumb memes for group chats. You always have to do some inpainting or touch it up in an image editor, but it’s worth it for the lulz.
And right now I’m using Claude Code for my startup, ApprovIQ. Dario Amodei was right in a way: 99% of the code was written by Claude.
But sorry, no multi million line vibe coded codebases. For that my friend, you’ll be waiting until after the next AI winter.
With all due respect, you are living in a different world. Not in a bad way; it’s just that you haven’t experienced what maintenance on a large, complicated code base is like.
I’m a small business owner and solo developer on that business. Let’s just say I’d rather know the costs of my choices upfront. I’m sure there is not one small business owner in tech who would turn their nose up at that.
Good points. It works both ways though: you are splitting your time between two worlds, and don't have a fully clear view of the costs of bad choices in a small business.
To know this you'd need to know what processes these businesses have been using for the past decade to run a real full-time business with full-time staff. For example, you don't know just how bad the prior systems were that the self-built systems replaced.
With all due respect, you don't have all the info to make the calculation on my world, just as I don't have it for yours.
The same tool that helped me build our systems is not going to be the same tool that helps you maintain your large code base. But my point is that I'm on the front line of change, and my guess is it's not going to be limited to businesses of my size. I don't know what your tool will look like, but I'd bet it's coming.
I think there is a difference between automating “things” (as you put it) and getting to the point where people are on stage suggesting that the government becomes a “backstop” to their investments in automation.
Absolutely. If Linus had said "Rust for Linux might go away" then you would have said "See! The creator and leader of Linux says Rust is going to go away!" but because he said it is here to stay you're saying "Pff, Linus isn't that important."
I won't quit my day job to become a detective because actual detective work isn't this trivial.
Do you think everyone you speak to online is this misinformed or has these takes?
I’ve read and listened to enough of Linus to know he says this himself. The Linux kernel is nothing without the maintainers; this is easily observable and everyone knows it.
You see the reason you shouldn’t go into detective work is because you’d be terrible at it, not because it’s non-trivial.
Hey let’s not put the Rust fanatics in the same bucket as the LLM bros. One is a safe programming language, the other is an overgrown lorem ipsum text generator. We’re not the same :)
The only reason CSRF is even possible is because the browser sends (or, well, used to send) cookies for a particular request even if that request was initiated from a different site. If the browser had never done that (and most people would argue it was a design flaw from the get-go), CSRF attacks wouldn't even be possible. The SameSite attribute makes it so that cookies are only sent when the request originates from the same site that originally set the cookie.
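To make that concrete, here's a minimal sketch of a server opting into SameSite protection when setting a session cookie, using Python's stdlib `http.cookies` (Python 3.8+ for the `samesite` attribute). The cookie name and value are illustrative:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"  # not attached to cross-site POSTs
cookie["session"]["httponly"] = True   # not readable from JavaScript
cookie["session"]["secure"] = True     # only sent over HTTPS

# Emit the attribute portion of the Set-Cookie header line
print(cookie["session"].OutputString())
```

With `SameSite=Lax`, the browser still sends the cookie on top-level navigations to the site, but withholds it on cross-site form POSTs, which is exactly the request CSRF abuses.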
I think I understand now: the cookie just isn't present in the POST if a user clicked on, for example, a maliciously crafted post from a different origin?
Never needed CSRF protection, and I assumed that cookies were always SameSite, but I can see that it was only introduced in 2016. I've just had the site name put into the value of the cookie, and never really needed to think about it.
It just feels like all these HTTP specs are super duct-taped together. I guess that's the only way to ensure mass adoption for new devs, and now vibe coders.
If the domain name is in the cookie value then that can't be used when submitting another request from another domain. Yes, you can configure the DNS to bypass that, but at that point it is also pointless for CSRF.
Not to be rude, but from your comments you don't appear to understand what the CSRF vulnerability actually is, nor how attackers make use of it.
Cookies can still only be sent to the site that originally wrote them, and they can only be read by the originating site, and this was always the case. The problem, though, is that a Bad Guy site could submit a form post to Vulnerable Site, and originally the browser would still send any cookies of Vulnerable Site with the request. Your comment about "if the domain name is in the cookie value" doesn't change this, and the problem still exists. "Yes you can configure the dns to bypass that" also doesn't make any sense in this context. The issue is that if a user is logged into Vulnerable Site, and can somehow be convinced to visit Bad Guy site, then Bad Guy site can take an action as the logged-in user of Vulnerable Site, without the user's consent.
I’m not being rude, but what does it mean to unexpectedly carry cookies? That’s not what I understand the risk of CSRF to be.
My understanding is that we want to ensure a POST came from our website, and we do so with a double-submitted HMAC token that is present in the form AND the cookie, and is also tied to the session.
The "unexpected" part is that the browser automatically fills some headers on behalf of the user, that the (malicious) origin server does not have access to. For most headers it's not a problem, but cookies are more sensitive.
The core idea behind the token-based defense is to prove that the origin server had access to the value in the first place such that it could have sent it if the browser didn't add it automatically.
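A minimal sketch of that token defense in Python's stdlib `hmac`: the server derives a token from the session id with a server-side secret, places it in both a cookie and a hidden form field, and on POST checks that both copies match the recomputed value. The names (`SECRET_KEY`, the session ids, the function names) are illustrative, not any particular framework's API:

```python
import hmac
import hashlib

# Server-side secret; never sent to the client.
SECRET_KEY = b"keep-this-on-the-server"

def mint_csrf_token(session_id: str) -> str:
    # Tie the token to the session so a token captured from one
    # session can't be replayed in another.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, form_token: str, cookie_token: str) -> bool:
    expected = mint_csrf_token(session_id)
    # compare_digest avoids leaking match position via timing.
    return (hmac.compare_digest(form_token, expected)
            and hmac.compare_digest(cookie_token, expected))
```

A cross-site attacker can cause the browser to attach our cookies to a forged POST, but can neither read the cookie token nor mint a valid form token without `SECRET_KEY`, so the check fails.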
I tend to agree that the inclusion of cookies in cross-site requests was the wrong default. Using SameSite fixes the problem at the root.
The general recommendation I've seen is to have two cookies: one without SameSite restrictions for read operations, which allows you to gracefully handle users navigating to your site, and a second SameSite cookie for state-changing operations.
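A sketch of that two-cookie split, again with stdlib `http.cookies` and illustrative names. Note one assumption: instead of a cookie with no SameSite attribute at all for reads, this uses `SameSite=Lax` (the modern browser default), which still accompanies top-level navigations to the site, while the write cookie is `Strict` and never rides along cross-site:

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Read cookie: sent on top-level navigation, so arriving users are
# recognised and land logged in.
cookies["session_read"] = "token-for-read-requests"
cookies["session_read"]["samesite"] = "Lax"
cookies["session_read"]["httponly"] = True

# Write cookie: required for state-changing requests; never sent on
# any cross-site request, which blocks CSRF on those endpoints.
cookies["session_write"] = "token-for-write-requests"
cookies["session_write"]["samesite"] = "Strict"
cookies["session_write"]["httponly"] = True

for morsel in cookies.values():
    print(morsel.OutputString())
```

The server then treats any state-changing request that arrives without a valid `session_write` cookie as unauthenticated, even if `session_read` is present.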
>> Maybe this is the negative effects of excessive LLM usage that are spoken about.
> I upvoted them for pointing that out.
I'm also curious about what you think of the GP's question. TBH, responding after reading half an article was a common thing for most people pre-LLM anyway.
Yeah, show me a Hacker News user who's never posted a comment on a story without properly reading it (or even without clicking the link). LLMs have nothing to do with it.
If I had piped the article through an LLM first, I wouldn't have made the embarrassing mistake in that comment!
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.