Yep, go ePub. Have been doing that for years now, after converting my entire Amazon purchase library to ePub thanks to an old loophole.

Apple figured the correct model out years ago with iTunes Music.


So is vaccine resistance.

Doesn't mean it's correct, or empirically-based.


Taken ultra-literally, that is true: "resistance" does not mean "reasonable resistance". But I reject your subtext: it's a terrible comparison.

We've had literal generations of experience with vaccines, tons of data with formal systems to collect it, and most of the "resistance" traces back to "I dun wanna" and hearsay.

In contrast, LLM prompt injection is an empirically proven issue, along with other problems like spurious correlations (both conventional ones like racism and inexplicable ones), self-bias among models, and humans generally deploying them in very irresponsible ways.


I work with Claude Max for hours a day.

I see a lot of speculation by people who do not.

I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.

Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.

It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.

My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests, including unit, integration, logging, microbenchmark, fuzzing, and memory-leak tests), self-assessments/code reviews, adverse AIs critiquing other AIs, etc., with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." ad nauseam

BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.


Do you not... remember? The US life expectancy is 79 years. 7.9 years ago was late May 2018. The best LLM was... wait, there weren't any. There was ELMo, an embedding model. It wasn't just not smart at agentic coding, it wasn't even just not smart at writing code snippets, it wasn't even just not smart at answering questions of any kind, it wasn't even just not good at producing a coherent output, it wasn't even just not good at producing coherent sentences, it was _not even at the point where people thought unconstrained text output was a thing machines did_.

No step along the ladder has remotely evidenced or supported the claim that the next step is going to be ten, twenty, a hundred times harder than the last one, yet there is a constant chorus of people singing at every moment, each moment wrong, that the next step is the one.


I mostly agree with your experience, but:

Every day I deal with bad judgement calls from humans (sometimes my own!), but I don't screenshot them because it's not polite.

I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.

Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.


I see people on here all the time saying this tool or that model regressed. It used to be better.

There are clearly some pressures to make it worse: it's expensive to run, and, unbelievably, it's somehow under-provisioned.

Could you have looked at early Myspace and declared social media would only get better? By some measures it was already at its peak.


Personally I don't think coding agents will regress significantly as long as there is competitive pressure and independent benchmarks. Regulation is a risk because coding may be equivalent to general reasoning, and that might be limited for political / "safety" reasons.

Social media "regressed" from the point of view of users because the success metric from the network's point of view was value extraction per eyeball-minute. As long as there continue to be strong financial incentives to have the strongest coding model I think we'll see progress.


>, but I don't screenshot them because it's not polite.

The Daily WTF has had that covered for two decades now. People do a lot of insane crap; it's surprising it's not more deadly for them.


Are these still accidents where the driver was not paying attention, though?

Of course. But the argument is that the nature of FSD causes them to not pay attention.

As far as LLM-produced correctness goes, it all comes down to the controls that have been put in place (how valid the tests are, does it have a microbenchmark suite, does it have memory leak detection, etc.)

There's much more to it than that. One unmentioned aspect is "Has the tooling actually tested the extruded code, or has it bypassed the tests and claimed compliance?". Another is "Has a human carefully gone over the extruded product to ensure that it's fit for purpose, contains no consequential bugs, and that the test suite tests all of the things that matter?".

There's also the matter of copyright laundering and the still-unsettled issue of license laundering, but I understand that a very vocal subset of programmers and tech management gives zero shit about those sorts of things. [0]

[0] I would argue that -most of the time- a program that you're not legally permitted to run (or distribute to others, if your intention was to distribute that program) is just as incorrect as one that produces the wrong output. If a program-extrusion tool intermittently produces programs that you're not permitted to distribute, then that tool is broken. [1]

[1] For those with sensitive knees: do note that I said "the still-unsettled issue of license laundering" in my last paragraph. Footnote zero is talking about a possible future where it is determined that the mere act of running gobs of code through an LLM does not mean that the output of that LLM is not a derived work of the code the tool was "trained" on. Perhaps license-washing will end up being legal, but I don't see Google, Microsoft, and other tech megacorps being very happy about the possibility of someone being totally free to run their cash cow codebases through an LLM, produce a good-enough "reimplementation", and stand up a competitor business on the cheap [2] by bypassing the squillions of dollars in R&D costs needed to produce those cash cow codebases.

[2] ...or simply release the code as Free Software...


> to minimize other damage

You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.

What's the ratio between "bodies of your own kids" and "other human bodies you have no other connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?

I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?

*meaning, with a ratio of 2, for example, you would require 2 non-family deaths to justify losing one of your own kids


Yeah, you also have to consider that your kids can be on either side of the equation too.

E.g., the other side of the equation could be your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.

> You mean deaths to multiple other people, do you not

I mean deaths the AI predicts for other people, yes

And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.


I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant, and moreover it's actually illegal to have that code in some situations.

You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.


> deaths the AI predicts for other people

Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing impact energy in a series of rapid-fire calls?


It was an entire media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.

There's plenty we could talk about: e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.

Instead we got "what if the car calculates the trolley problem against you?"

And, observationally, proof that a staggering number of people don't know their road rules (since every variant consists of concocting some scenario where slamming on the brakes happens far too late, yet you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).

I remember running some basic numbers on this in an argument, and you basically wind up at: assuming the AI is fast enough to detect the situation at all, either it's fast enough to stop the car with the brakes, or no level of aggressive manoeuvring would avoid the collision.

Which is of course what the road rules are: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake sooner and harder, if it's smart enough to even consider other options.


> Which is of course what the road rules are: you slam on the brakes.

Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".

You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you -- the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".

If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it will be clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65mph takes around 5 seconds to stop, so if it's halfway through its stopping distance, you've got around 2.5 seconds to maneuver out of the way).
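
Back-of-the-envelope, under the (assumed) simplification of constant deceleration, using the 65mph / 5 second figures above:

    import math

    v0 = 65 * 0.44704   # 65 mph in m/s (~29.1 m/s)
    T = 5.0             # assumed total stopping time
    a = v0 / T          # implied constant deceleration (~5.8 m/s^2)
    D = v0 * T / 2      # total stopping distance (~72.6 m)

    # Solve v0*t - (a/2)*t^2 = D/2 for the time t at which half the
    # stopping distance has been covered: t = T * (1 - sqrt(2)/2)
    t_half = T * (1 - math.sqrt(2) / 2)
    print(f"time left once halfway: {T - t_half:.1f} s")  # ~3.5 s

A bit more than the naive 2.5 seconds, actually: the first half of the distance is covered while the truck is still fast, so most of the stopping time remains.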

That does leave you all sorts of room to come up with realistic trolley problems.


> That does leave you all sorts of room to come up with realistic trolley problems

But all require a human (or malicious) driver on one hand. The more rule-following AVs on the road, the fewer the opportunities for such trolley problems.

And I'd still argue that debating these ex ante is, while philosophically fascinating, not a practical discussion. I'm not seeing a case where one would code anything further than collision avoidance and e.g. pre-activating restraints.


Yeah, realistically the problems almost never happen and hopefully become rarer over time.

The typical human preference WRT the trolley problem ("don't take an action which leads to deaths, even if it would save more lives") is also a reasonable -- maybe the only reasonable -- answer to these hypotheticals.

Ie, move against the light to avoid getting rear ended, but not if you're going to run over a pedestrian or cause an accident with another vehicle trying to do so. (Even if getting rear ended would push you into the pedestrian or other car.)


The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?

What is the lowest likelihood of your own death you'd find acceptable in this situation?


We can take the AI out of the question entirely and ask how many other humans you personally as a driver would be willing to mow down to avoid your own death—driving off a bridge, say.

I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.


Interestingly, I think that similar types of arguments are made against "agentic coding"

If you don't pay constant attention, you will never notice when it slips in a bug or security issue


Car crash deaths are better known than software-bug-caused deaths. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.

Today's car crash deaths are sometimes software-bug-caused deaths. Toyota failed their forensic audit of their drive-by-wire code back in 2013. https://capitolweekly.net/toyota-has-settled-hundreds-of-sud...

Sure, but you can do that in a diff after the event, rather than live.

I would prefer to understand why a paused or backgrounded game still manages to consume a ton of CPU or GPU

Like, you're still just churning away at the main game loop while literally nothing else is happening except for you waiting for me to unpause it?

Because THAT would be an actual achievement. Hell, I can suspend any process from the Unixy side of things by sending a SIGSTOP signal, for a far more perfect "pause".
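
For illustration, a minimal sketch (the PID is a placeholder; in practice you'd look it up with pgrep or similar):

    import os
    import signal

    game_pid = 12345  # hypothetical PID of the game process

    os.kill(game_pid, signal.SIGSTOP)  # freeze it: zero CPU from here on
    # ...later, to resume exactly where it left off:
    os.kill(game_pid, signal.SIGCONT)

(The catch: a stopped process can't repaint its window or respond to the unpause key, which hints at why engines keep ticking.)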

If I was a game dev, I would not settle for "looks like paused, but still burning down the environment and heating up my home as a side effect"


Because the engine is still running even in a paused state, the game still has to show something and process input. Sometimes there is a menu too, and sometimes the game is not completely frozen: flames may flicker, for instance.

In the article, there is a case where the game takes a screenshot and disables costly rendering, presumably to deal with this problem. But the thing is that most games are designed to be actively played, not paused for extended periods of time, and having an active pause for a couple of minutes isn't going to destroy the environment.

For backgrounding, it is common for games to run at a much slower framerate to save resources.


But if you were a game dev, you would understand why it's not as easy as it seems to outsiders. :)

I'm not saying it would be trivial, but I bet that once you figure out a workable pattern, you could replicate that on other games.

One idea that might be relatively easy to implement: slow the framerate down to something super slow instead of fully stopping.
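
As a rough sketch of what that could look like in a main loop (all names here are made up):

    import time

    ACTIVE_FPS = 60
    PAUSED_FPS = 2   # just enough to keep the pause menu responsive

    def main_loop(game):
        while game.running:
            frame_start = time.monotonic()
            game.poll_input()    # must still catch the unpause key
            if not game.paused:
                game.update()    # simulation only advances while active
            game.render()        # draw the menu (or a cached frame) when paused
            fps = PAUSED_FPS if game.paused else ACTIVE_FPS
            time.sleep(max(0.0, 1.0 / fps - (time.monotonic() - frame_start)))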


Ah, but now your pause menu feels like total garbage to use!

I once wrote an algorithm to generate a regex to find all matches from a given word with a levenshtein distance of 1 (I did not permute it beyond that though). Can link it if someone is curious.
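
Not that code exactly, but a minimal sketch of the idea (whole-string match, distance <= 1, with "." standing in for any single character):

    import re

    def lev1_regex(word):
        # One alternative per string reachable in at most one edit.
        alts = [re.escape(word)]  # distance 0
        for i in range(len(word)):
            pre, post = re.escape(word[:i]), re.escape(word[i + 1:])
            alts.append(pre + "." + post)  # substitute char i
            alts.append(pre + post)        # delete char i
        for i in range(len(word) + 1):
            alts.append(re.escape(word[:i]) + "." + re.escape(word[i:]))  # insert at i
        return "(?:" + "|".join(alts) + ")"

    print(bool(re.fullmatch(lev1_regex("cat"), "cart")))  # True: one insertion
    print(bool(re.fullmatch(lev1_regex("cat"), "dog")))   # False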

Good.

I think most of us know that their design failure here was a lack of backwards compatibility. But at least it's getting adopted.


Backward compatibility was never really the problem; the problem is that forward compatibility with ANY successor protocol (without modifying IPv4) is a fundamental impossibility.

But at least a reasonable facsimile eventually came out with NAT64.

(You can also do NAT46, but it requires one IPv4 address for every IPv6 destination you want to be reachable from the IPv4 Internet, so it doesn't scale very well.)
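
For what it's worth, the addressing half of NAT64 is pleasingly simple: per RFC 6052, the IPv4 address is embedded in the low 32 bits of a /96 prefix (64:ff9b::/96 is the well-known one). A quick sketch:

    import ipaddress

    NAT64 = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

    def to_nat64(v4):
        # OR the 32-bit IPv4 address into the low bits of the prefix.
        return ipaddress.IPv6Address(
            int(NAT64.network_address) | int(ipaddress.IPv4Address(v4))
        )

    print(to_nat64("192.0.2.33"))  # 64:ff9b::c000:221

The translator box does the hard part (rewriting headers and checksums on the fly); the addressing itself is just this embedding.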

