Veedrac's comments | Hacker News

Quote from bottom of page.

> We understand that some may wish to sensationalize a public crisis like this, but we would implore anybody to consider the ethics of the situation before publicizing this matter to a wider audience than is already exposed to it, or interjecting with prying questions.


Fundamentally, there are two ways of representing iteration pipelines: source-driven and drain-driven. This almost always maps onto the idea of _internal_ versus _external_ iteration, because the source is wrapped inside the transforms. Transducers are unusual in being source-driven but also external iterators.

Most imperative languages choose one of two things: internal iteration that doesn't support composable flow control, or external iteration that does. This is why you see pause/resume-style iteration in Python, Rust, Java, and even JavaScript. If that's your experience, transducers are a pretty novel place in the trade-off space: you keep most of the composability, but you get to drive it from things like event sources.

But the gap is a bit smaller than it might appear. Rust's iterators are conceptually external iterators, but they actually do support internal iteration through `try_fold`, and even in languages that don't, you can 'just' convert external to internal iterators.

Then all you have to do to recover what transducers give you is pass the object to the source, let it run `try_fold` whenever it has data, and check for early termination via `size_hint`. There's one more trick for the rare case of iterators with buffering, but you don't have to change the Iterator interface for that; you just need to pass one bit of shared state to the objects on construction.
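
To make that concrete, here's a minimal Rust sketch (the thresholds and chunking are invented, just to show the shape): an ordinary external iterator chain driven internally via `try_fold`, where each call resumes where the previous one stopped. That resumability is what lets an event source pump the same pipeline whenever it has data.

    use std::ops::ControlFlow;

    fn main() {
        // An ordinary external iterator pipeline.
        let mut pipeline = (1..).map(|x| x * 3).filter(|x| x % 2 == 1);

        // Driven internally: whoever owns the data calls try_fold, and the
        // closure signals early termination with Break. The break conditions
        // here stand in for "no more input available right now".
        let first = pipeline.try_fold(0, |acc, x| {
            if x > 20 { ControlFlow::Break(acc) } else { ControlFlow::Continue(acc + x) }
        });
        // A second call resumes where the first one stopped (the item that
        // triggered the Break is consumed).
        let second = pipeline.try_fold(0, |acc, x| {
            if x > 40 { ControlFlow::Break(acc) } else { ControlFlow::Continue(acc + x) }
        });
        println!("{first:?} {second:?}");
    }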

Not all Iterators are strictly valid to be source-driven, and while most are, not everything works nicely when iterated this way (e.g. Skip could handle this case correctly but doesn't, because it's not required to). Still, I don't think transducers can actually do anything this setup can't. It's just an API difference after that point.


> If that's your experience, transducers are a pretty novel place in the trade-off space

That is not my experience and TBH I don't know what a lot of your terminology specifically means.


I wasn't saying you would have that experience; I was saying that the reason people act like transducers are unique is that transducers are an unconventional place on well-worn ground.

Ultimately, yes, everything bottoms out; most special tricks seem less special the more you understand them, because it's programming, and Turing equivalence is the bedrock the whole field rests on. But the average person learning about transducers is not going to spot how closely related they are to other things that already exist.

I'm happy to elaborate on any part of the terminology if you're curious, but TBH I mostly wrote it for myself because I thought the framing was novel and wanted it noted down somewhere.


Do you not... remember? The US life expectancy is 79 years. 7.9 years ago was late May 2018. The best LLM was... wait, there weren't any. There was ELMo, an embedding model. It wasn't just not smart at agentic coding, it wasn't even just not smart at writing code snippets, it wasn't even just not smart at answering questions of any kind, it wasn't even just not good at producing a coherent output, it wasn't even just not good at producing coherent sentences, it was _not even at the point where people thought unconstrained text output was a thing machines did_.

There is no step along the ladder that has remotely evidenced or supported the idea that the next step is going to be ten, twenty, a hundred times harder than the last, yet there is a constant chorus of people singing at every moment, each moment wrong, that the next step is the one.


It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.


But longshot bettors have it easy. Society quickly forgets all the predictions that don't come true. It remembers the one that did, and treats the prognosticator as a prophet. In social terms, predicting doom is an asymmetrical strategy, because you only have to be right once.

Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing that doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.


The entire sick post was: "Hey, if you think I'm bad, look at Elon. I'm the one that tried to stop him having control."

Altman is a ghoul, and we can't be cowed into saying otherwise. He's also supported all the weakness in society that has led to sick people doing sick things.


We needn't be cowed into saying otherwise, but throwing a bomb at him is something else entirely. If you're convinced that wicked people are running the world, the response isn't to be wicked.


I'm sure they do believe they can successfully manipulate the market by lying to it. Elon Musk laid that groundwork a decade ago.

If you meant their "core mission", then every one of their actions betrays their complete panic over the obvious failure of their technology.


More than anything it's a supply limit. Solar is consistently scaling about as fast as any manufacturing industry scales. The TAM is just big.


I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so the driver circuitry becomes a much larger fraction of the power budget. Illustratively, if the panel's own draw falls from hundreds of milliwatts to tens, a few fixed milliwatts of circuitry goes from rounding error to a meaningful share.


Ohh like property tax on a vacant building


There's a very simple solution to this problem: instead of wink-wink-nudge-nudge implying that 100% is 'human baseline', calculate the median human score from the data you already have and put it on the chart.
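
A trivial sketch of that computation, assuming you have per-task human scores (the function name and numbers are invented):

    // Median of per-task human scores; assumes a non-empty, NaN-free input.
    // The result is what belongs on the chart as a horizontal baseline.
    fn human_baseline(mut scores: Vec<f64>) -> f64 {
        scores.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let n = scores.len();
        if n % 2 == 1 { scores[n / 2] } else { (scores[n / 2 - 1] + scores[n / 2]) / 2.0 }
    }

    fn main() {
        // Hypothetical per-task human scores on the benchmark's 0..100 scale.
        println!("{}", human_baseline(vec![0.0, 0.4, 1.1, 2.0]));
    }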


It's below 1% lmao


Where did you get this 1%?


Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.

This article is making some pilot-wave-like claim on top of quantum Darwinism: that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part where God gets out his frustum culler.


I think the claim is this: the wave function never collapses. However, the effect of the wave function on the environment quickly converges to only one of the two states. We cannot know the difference because we cannot directly observe the wave function; we only see the result as it is magnified onto a macro scale by our observation equipment (or, lacking that, our eyes, which themselves turn a tiny microscopic phenomenon into macro signals). Once that particular outcome has been 'selected' for, the probability of the other outcome becomes vanishingly small very fast. Thus, all future outcomes are that outcome, even though the underlying reality is still that fully entangled state.
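
For what it's worth, here's the textbook decoherence sketch of that story (standard notation, not taken from the article):

    % System S becomes entangled with its environment E:
    (\alpha|0\rangle_S + \beta|1\rangle_S)\otimes|E\rangle
      \;\longrightarrow\; \alpha|0\rangle_S|E_0\rangle + \beta|1\rangle_S|E_1\rangle

    % Tracing out E leaves S in the reduced state:
    \rho_S = |\alpha|^2|0\rangle\langle 0| + |\beta|^2|1\rangle\langle 1|
           + \alpha\beta^*\,\langle E_1|E_0\rangle\,|0\rangle\langle 1| + \text{h.c.}

    % As more of the environment gets involved, \langle E_1|E_0\rangle \to 0
    % very fast, so the interference terms vanish and each branch looks
    % classical from the inside, even though nothing ever collapsed.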

Photons (and other objects that seem to behave 'quantumly') do not seem subject to this (and thus we can use them to understand quantum behavior) because they have particular properties that keep their behavior from being affected by these macroscopic drop-offs quite as badly.


My confusion is that this is just Many Worlds / the Schrödinger equation, and Quantum Darwinism doesn't seem to add anything that wasn't already obvious by inspection. But after reading more, I think that's kind of the point? It's ultimately just an argument for why the Schrödinger equation produces these locally classical regions, plus a bunch of overly flowery prose and dressing up in invented jargon that can mostly be ignored. I think the article failed to ignore that second part and ended up confused.


Many worlds is not the Schrödinger equation. No, I don't think this is many worlds. The decision is made uniquely and then amplified.


Many worlds is just the claim that the Schrödinger equation holds in actuality.

I don't think QD makes decisions 'uniquely'. Take this quote:

> The step from the epistemic (“I have evidence of |π17〉”.) to ontic (“The system is in the state |π17〉”.) is then an extrapolation justified by the nature of ρ_Sℰ: Observers who detected evidence consistent with |π17〉 will continue to detect data consistent with |π17〉 when they intercept additional fragments of ℰ. So, while the other branches may be in principle present, observers will perceive only data consistent with the branch to which they got attached by the very first measurement. Other observers that have independently “looked at” S will agree.

https://pmc.ncbi.nlm.nih.gov/articles/PMC9689795/

Emphasis on "the other branches may be in principle present" — the claim at least in this paper can't be that all branches agree, just that they agree locally.


Without defining what 'actuality' is, there's no meaning to 'the Schrödinger equation holds in actuality'. In their own way, all interpretations of quantum mechanics claim the Schrödinger equation holds in 'actuality'. Some view probability and potential as a claim on 'actuality'. Others dismiss this and instead view probability skeptically and claim it must thus be true. This is an ontological argument, not a scientific one.


If you don't like the word 'actuality', I can rephrase. Many worlds is just the claim that physical reality materially evolves in correspondence with the Schrödinger equation.

If you want to quibble over what it means for something to be material, go ahead, but unless you can tie it to some specific claim being made about QD I don't really know what the exercise gets you.


This is missing the primary reasons insider trading is bad, which are that it creates an incentive to steal information from employers and, worse, an incentive for sabotage.


> From what I've seen, models have hit a plateau where code generation is pretty good...

> But it's not improving like it did the past few years.

As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?


Yes, a strange comment. Opus 4.5 is significantly better than before, and Opus 4.6 is even better. Same with the 5.2 and 5.3 Codex models.

If anything, the pace has increased.

This may be one of the most important graphs to keep an eye on: https://metr.org/. It tracks well with my anecdotal experience.

You can see the industry did hit a bit of a wall in 2024, where improvements dropped below the log trend. In 2025, however, the industry is significantly _above_ the trend line.


Are you seeing any meaningful improvements to anything you use, though? Like, have self-driving cars become really cheap and commonplace? Medicine improved? Is Netflix giving us an abundance of cheap, really good content to watch? How is your AI doctor?

The geeks are telling us the LLMs are great, but that's about it.

I'm seeing way more AI-generated YouTube thumbnails... I know you will say "give it time", but I'm pretty convinced the problems AI solves are not the hard problems required to boost an economy.


The wild thing is, that "plateau" link is from September 2025, aka two months before Opus 4.5.

Yeah, it's not a plateau.


I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.

There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.

It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.


Yeah, I feel that too. It'd be great if people acknowledged the progress without turning it into polarized movements and numerous discussions about how we all lag behind...


What I feel is that people are claiming progress is being made, but on what front?

The machines might be producing more code at a faster rate, but what has that actually amounted to?


I mean, if you compare now to a year ago, then a year ago to two years ago, and then two years ago to three years ago, would you see a plateau in effectiveness or not?

I still have several projects I developed in mid-2024 where I felt the AI was really close but not quite good enough for production, and almost two years in they haven't gotten appreciably better; I still couldn't release an actual application.

