First I will say, I am very much against dark patterns and I believe servers should be paid a fair wage and not have to rely on tips.
But until then, I do tip for dine-in service. That said, I found the "buy me a coffee" link at the bottom of this to be far more ironic than it probably should have been.
It's also missing what I think is the worst dark pattern:
Having no option not to tip at all. Instead, the customer has to press "Custom" and manually enter "0.00".
> If the programmer's goal is to produce valuable software that works and is secure and easy to maintain then they will gravitate to LLM assisted programming.
Just this week alone I had the LLMs:
- Introduce a serious security flaw.
- Decide it was better to duplicate the same 5 lines of code 20 times instead of writing a function and calling that.
And that is actually just this week. And to be clear, I am not making that up to prove a point, I use AI day in and day out and it happens consistently. Which is fine, humans can do that too, the issue is when there is a whole new generation of "programmers" that have absolutely zero clue how to spot those issues when (not if) they come up.
And as AI gets better (which it will) it actually makes it more dangerous because people start blindly trusting the code it produces.
If that's happening then you're most likely not using the best tools (best model and IDE) for agentic coding and/or not using them right.
How an experienced developer uses LLMs to program is different than how a new developer should use LLMs to learn programming principles.
I don't have a CS degree. I never programmed in assembly. Before LLMs I could pump out functional, secure LAMP stack and JS web apps productively after years of practice. Some curmudgeonly CS expert might scrutinize my code for not being optimally efficient or engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked and the users got what they wanted.
If you're not using the best tools and you're not using them properly and then they produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or the tools?
Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
Whether the inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.
I'm not arguing that people that don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.
I have extremely compelling evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results, then all it proves is that you're ideologically opposed to it or not capable.
First, I'm using frontier models with Cursor agentic mode.
> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility yet they do it anyway and ship it to the user.
And on the human side, that is precisely why procedures like code review have been standard for a while.
But my main objection to the parent post was not that LLMs can't be powerful tools but that specifically the examples used of maintainability and security are (IMO) possibly the worst examples you can use. Since 70k line un-reviewable pull requests are not maintainable and probably also not secure (how would you know?).
Okay, I'm pretty sure we would heavily agree on a lot of this if we pulled it all apart.
It really boils down to who is using the LLM tool and how they are using it and what they want.
When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).
So we are mashing together a few dimensions; my GP comment was pointing out:
- A: competent developer wants software functionality produced that is secure and maintainable
- B: competent developer wants to produce software functionality that is secure and maintainable
The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use flaws as an excuse to do it themselves. Reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.
What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs; it's who is using them and how.
What just occurred to me though, and I suspect you will appreciate, is the fact that I'm only working with other very experienced devs. Experienced devs working with JR or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem and would probably be really frustrating to deal with. Reviewing a 70k line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that this is a good thing. Though surely there is some way to manage it, and figuring out how probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.
One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and JR developers in a way that's net constructive? If JR+LLM = 10x more garbage for SR to review, maybe that's the real reason why JR roles are harder to find?
> thousands of developers are using the tools productively,
There's at least one study suggesting that they are not in fact working more productively; they just feel that way.
Unfortunately for me personally, Claude Code on the latest models does not generally make me more productive, but it has absolutely led to several of my coworkers submitting absolutely trash-tier untested LLM code for review.
So until i personally see it give me output that meets my standards, or i see my coworkers do so, I'm not going to be convinced. Legions of anonymous HN commenters insisting they're 50 year veterans that have talked Claude into spitting out perfect code will never convince me.
(I spent over an hour working with Claude Code to write unit tests. I did eventually get code that met my standards, after dozens of rounds of feedback and many manual edits, and cleaning up quite a lot of hallucinatory code. Like most times I decide to "put in the effort" to get worthwhile results from Claude, I'm entirely certain I could have done it faster myself. I just didn't really feel like it at 4 on a Friday)
If you consider "skimpy outfits" pornographic, then both Facebook and X are worse than TikTok for me. I've seen a few pieces of content I had to report before, but not many.
X, on the other hand, has literal advertisements for adult products in my feed, and I get followed by "adult" bot accounts several times a week which, when I click through to block them, often show me literal porn. Same with spam Facebook friend requests.
I think it boils down to a simple fact that trying to police user-generated content is always going to be an up-hill battle and it doesn't necessarily reflect on the company itself.
> Global Witness claimed TikTok was in breach of the OSA, which requires tech companies to prevent children from encountering harmful content...
Ok, that is a noble goal, but I feel that the gap between "reasonable measures" and "prevent" is vast.
> I think it boils down to a simple fact that trying to police user-generated content is always going to be an up-hill battle and it doesn't necessarily reflect on the company itself.
I think it boils down to the simple fact that policing user-generated content is completely possible, it just requires identity verification, which is a very unpopular but completely effective idea. Almost like we rediscovered, for the internet, the same problems that need identity in other areas of life.
I think you will also see a push for it in the years ahead. Not necessarily because of some crazy new secret scheme, but because robots will be smart enough to beat most CAPTCHAs or other techniques, and AI will be too convincing, causing websites to be overrun. Reddit is already estimated to be somewhere between 20% and 40% robots. Reddit was also caught with their pants down by a recent study, in which an AI bot on r/changemyview racked up ridiculous amounts of karma undetected.
I'm not convinced that will fix the problem. Even in situations where identity is well known such as work or school, we commonly have bad actors.
It's also pretty unpopular for a good reason.
There is a chilling effect that would go along with it. Like it or not, a lot of people use these social platforms to be their true selves when they can't in their real life for safety reasons. Unfortunately for some people their "true self" is pretty trashy. But it's a slippery slope to put restrictions (like ID verification) on everyone just because of a few bad actors.
Granted I'm sure there's some way we could do that while maintaining moderate privacy but it's technologically challenging and I'm not alone in wanting tech companies to have less of my personal information not more.
I didn't get the sense the article singled out charter schools specifically; rather, it just lists them as an alternative place that funds get funneled to instead of neighborhood public schools.
Which brings me to:
> The main reason "private" (in their sense of the word) schools are gaining in popularity is precisely because they are seen as delivering a better education by an ever wider chunk of society.
If you accept that the article is talking about charter schools, then yes, perhaps the narrow focus of a charter could allow for a better education in its specialized area.
But, if you accept it as private schools as a whole, then I don't buy that argument fully. The administration has been very clear that the motivation is "anti-woke" and "traditional family values" and nothing to do with education quality. In fact, as someone who went to a religious school in a small town (granted 30+ years ago) I can vouch that my education (especially in science and math) was FAR worse than the public schools at the time and homeschooling quality varies wildly.
Edit: As far as
> More specifically the US currently spends more than the vast majority of the world per pupil
I also find this focus on spending per pupil very odd because it doesn't account for cost of living.
And if you dive into the fine print it says:
> Includes both government and private expenditures.
So what if (and this is a completely untested hypothesis) the reason we spend so much per pupil in that chart is that the figure is being exacerbated by the private school system?
Edit 2: after diving into it, that source provided is greatly inflated by private school spending including private colleges (which are insanely expensive). So that same data can also be used to argue the US is really spending too much on private schools not public ones.
Here [1] are the data on spending per student PPP adjusted. It doesn't really change it much at all. US is 6th in the world in spending per secondary pupil. They seem to lack data for primary, but it's not going to be some radically different story one way or the other. The initial link I gave (where US is 5th in the world) offers a breakdown of various spending - I was referencing the first table - which is elementary/secondary only. Also, religious schools in the US (Catholic at least) also substantially outperform public schools by a range that widens over time. [2]
In any case, private schools will always perform better than public schools because they can be selective about who they admit. A handful of very bad students can easily derail the education of an entire class, and in public schools it can be somewhat difficult to get rid of these kids. And so I do think things like education vouchers, tax rebates, and other incentives that give more middle and lower class families access to private education are a very good thing.
Lastly, on the woke stuff. Would you be happy if your child was taught creationism and intelligent design? Probably not. Why? Because it'd be ideologically motivated, rather than educationally motivated. If people want to teach their children that in their own time - more power to them, but it has no place in the classroom. And I'd feel exactly the same if my children were taught that e.g. math is racist, or the contemporary 'reimaginings' of history that mix critical theory and contemporary values, and retrofit them into the past in an antagonistic fashion. We went from a real problem of white washing history, to just inventing these sordid tales that are even further off base.
Thank you for presenting the research. I appreciate that.
To address your points though:
> A handful of very bad students can easily derail the education of an entire class
Private school had plenty of bad apples too. In fact, some kids I went to school with were explicitly there because they were troublemakers and their parents thought the nuns would break them (they didn't). In contrast, I've found my daughter's public school to be pretty zero tolerance when it comes to disruptors.
But even if you are right, that is also the strength of public schools. The same thing that makes them unable to turn down the bad apple is also what makes sure kids with special needs or low family means don't get left behind.
> math is racist, or the contemporary 'reimaginings' of history that mix critical theory and contemporary values, and retrofit them into the past in an antagonistic fashion.
Except every time one of those stories comes out and you dig deeper, it is almost never actually what the media says. It's usually either extremely isolated or taken entirely out of context for sensationalism.
For example, there have been several documented cases of public school teachers teaching creationism, or that the Civil War wasn't about slavery (despite slavery being specifically mentioned by multiple states when they joined the Confederacy), but I would never represent that as widespread and try to tear down the whole system over it.
Private schools are, of course, not homogeneous. Some schools will accept bad apples, most won't. Public schools have no choice and you generally cannot expel a child except for extremely serious issues. If you've found a public school without major disruptive issues then you probably live in a high income and/or less urban area which immediately works as an invisible filter on the student body. I went to public school system in an urban low income area - I will never put my own children in such a system, under any circumstance.
As for 'no child left behind' and the woke stuff. I can actually tie both of these together in California. [1] In an effort to increase equity they've essentially hamstrung their own education. They're making Algebra 1 a grade later (meaning less normal-path access to calculus), offering "alternatives" to Algebra 2, swapping from a focus on mastery to one on "big picture" understanding, keeping classes integrated regardless of student performance, and generally dumbing down the mathematical education across the board. They want to achieve equity in outcomes, and so they're taking the easy route: lower the ceiling, rather than raise the floor. It's nearly certain that outcomes in California will decline significantly over the next decade, but I expect there will also be better grades on average, laying a nice layer of paint on a building that's collapsing.
---
As for the Civil War, imagine the EU had a military and simply refused to accept Brexit, triggering a war. Would the cause of that war have been e.g. immigration (which was arguably the main factor leading to Brexit, and mentioned in numerous official documents relating to Brexit), or would it have been over the rights of EU member countries? Obviously without immigration you don't have Brexit and so you don't have a war. Yet similarly without our hypothetical effort of the EU to impose its will on member countries, you also don't have war. A key point to me is that one issue is variable, while one is fixed.
This whole thread is giving blockchain in 2015 vibes. People were using all sorts of quotes and anecdotes to tell skeptics why they were wrong and in 10 years the entire financial system will be running on blockchain. A certain amount of skepticism and cautious optimism is healthy.
Also, people seem to be missing that "AI Assisted" coding and "Vibe Coding" are not the same thing.
Personally I think the issue with vibe coding is two fold:
1. It is not good at solving problems that are uncommon.
2. It is not deterministic.
Yes, AI can do quality control and testing now. But anyone who has done TDD can tell you that just the mere presence of tests does not itself mean the code is effective or solving the right problem.
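A minimal sketch of that TDD point, using an entirely hypothetical function: a test can pass while the code still solves the wrong problem, because the test happens to probe an input where the wrong behavior and the right behavior coincide.

```python
# Hypothetical example: the requirement was "return the mean of the list",
# but the generated code returns the median instead.

def average(values):
    # Bug: returns the middle element of the sorted list, not the mean.
    return sorted(values)[len(values) // 2]

def test_average():
    # Passes anyway: for [2, 4, 6] the mean and the median are both 4.
    assert average([2, 4, 6]) == 4

test_average()
# For [1, 1, 10] the mean is 4, but average() returns 1 -- the test
# suite is green and the code is still wrong.
print(average([1, 1, 10]))
```

The test suite is green; the code is still wrong. Generated tests that only exercise "happy path" inputs give exactly this false confidence.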
Is it getting better? Yes. Do I trust any vibe coded apps built by people who don't know actual code and are treating it like a black box? Absolutely not.
And I say that as someone who has tried pretty much every IDE out there and uses AI assisted coding (on "agent" mode) heavily every single day.
Not OP, but there are many things that I know don't work without trying them. That's not a contradiction. It may or may not be true, but it's not a contradiction by itself. You can know reasonably well that something doesn't work by looking at other people who have tried it (sometimes even better if those people are experts and you are not).
When was the last time you actually priced them out?
When they first came out they were pricey, but unless you're talking about fancy smart bulbs with WiFi and color changing, they are not 10x the price. And they empirically last 5-20+ times longer.
So even before you consider that a huge portion of the energy put into an incandescent bulb is lost as heat (thereby making it cost MUCH more in electricity), they are still roughly the same price after accounting for lifespan.
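A back-of-envelope version of that arithmetic. All the prices, wattages, and the electricity rate below are illustrative assumptions, not quoted data:

```python
# Rough lifetime-cost comparison of one LED vs. the incandescents
# needed to cover the same hours. All figures are assumed/typical.
INCANDESCENT_PRICE = 1.50   # dollars per bulb (assumed)
LED_PRICE = 3.00            # dollars per bulb (assumed)
INCANDESCENT_LIFE = 1_000   # hours (typical rating)
LED_LIFE = 15_000           # hours (typical rating)
INCANDESCENT_WATTS = 60
LED_WATTS = 9               # roughly comparable brightness
RATE = 0.15                 # dollars per kWh (assumed)
HOURS = 15_000              # compare over one LED lifetime

def total_cost(price, life, watts):
    bulbs = HOURS / life                  # replacements needed
    energy = HOURS * watts / 1000 * RATE  # electricity cost, dollars
    return bulbs * price + energy

# 15 incandescents + 900 kWh vs. 1 LED + 135 kWh
print(f"incandescent: ${total_cost(INCANDESCENT_PRICE, INCANDESCENT_LIFE, INCANDESCENT_WATTS):.2f}")
print(f"LED:          ${total_cost(LED_PRICE, LED_LIFE, LED_WATTS):.2f}")
```

Under these assumed numbers the bulb-purchase costs alone are already in the same ballpark (15 cheap bulbs vs. one LED), and the electricity difference then dominates.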
Definitely very low resolution, but compared to sites that use a solid color this seems much better. And only requiring one variable is really nice.
The article seems very well thought through. Though for both the algorithm and the benchmark algorithm, the half blue / half green image with the lake shows the limitations of this technique. Still pretty good considering how lightweight it is.
I did deliberately pick some "bad" examples like the blue+green image, and other multicolor images.
I wanted to add an upload function so people could test any image, then I realised I'd have to implement the compression/hashing in the client. Maybe I should!
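For readers following along, the core "one variable per image" idea can be sketched in a few lines. This is a hypothetical illustration, not the article's actual implementation; it assumes the pixels have already been decoded into (r, g, b) tuples:

```python
# Hypothetical sketch: collapse an image's pixels to a single average
# RGB color and encode it as one hex string (the "one variable").

def average_color(pixels):
    """pixels: list of (r, g, b) tuples with 0-255 channel values."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return f"#{r:02x}{g:02x}{b:02x}"

# A half-blue / half-green image averages out to a murky teal,
# which illustrates the multicolor-image limitation.
pixels = [(0, 0, 255)] * 50 + [(0, 255, 0)] * 50
print(average_color(pixels))  # → #007f7f
```

Averaging is where multicolor images break down: two vivid colors blend into a third color that appears nowhere in the image.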
When I saw that link I thought maybe it was one of those: "add X to the recommended libraries list" PRs or something like that. But this is wild... it's literally an advertisement.