rishabhaiover's comments | Hacker News

What are some valid sovereign alternatives the NHS can use?

They could always use Fujitsu/SCL *

* see the Post Office scandal


I love the fact that Steven Seagal made it.

That's true. The noise is being generated by people who are directly or indirectly incentivized to talk about it.

> coming up with the right projects and producing a vertically differentiated product to what already exists is.

Agreed, but not all engineers are involved in this aspect of the business, and the concern still applies to them.


For the longest time, the joy of creation in programming came from solving hard problems. The pursuit of a challenge meant something. Now, that pursuit seems to be short-circuited by an animated being racing ahead under a different set of incentives. I see a tsunami at the beach, and I’m not sure whether I can run fast enough.

I see it more like playing a text adventure game. You give it commands, and sometimes it works, and sometimes the results are unexpected.

Personally, I've never been interested in being a character in someone else's story.

But now you've got me thinking. Has anyone studied whether the programmers who are more enamored of AI are also into RPGs?


Not to mention many companies speedrunning systems of strange and/or perverse incentives with AI adoption.

That being said, Welch’s grape juice hasn’t put Napa valley out of business. Human taste is still the subjective filter that LLMs can only imitate, not replace.

I view LLM-assisted coding (on the sliding scale from vibe coding to fancy autocomplete) similarly to how Ableton and other DAW software have empowered good musicians who might not have made it otherwise due to a lack of connections or money, and yet the music industry hasn't collapsed completely.


In the music world, I would say that, rather than DAWs, LLM-assisted coding is more like LLM-assisted music creation.

Yep, DAWs aren't the comparison. People are not thinking deeply about what is going on: there is a big ongoing war to eradicate taste and make it systematic, to immensely benefit the few.

> I can run fast enough.

Can you do some code reviews while you're running?


(Inception scene) Here, a minute is seven hours.

I did an experiment on FlashAttention in Triton to measure the impact of caching tiles in shared memory. Surprisingly, performance had a non-monotonic relationship with how aggressively these tiles were prefetched, and the effect was kernel dependent: the attention kernel benefits from prefetching, while the MLP W1 kernel doesn't.
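Not the original kernels, but a minimal Triton sketch of the kind of knob involved (the toy QK row-sum kernel, shapes, and block size are my assumptions): num_stages controls how many tile loads Triton pipelines ahead through shared memory, so sweeping it is one way to probe the prefetching effect.

    import torch
    import triton
    import triton.language as tl

    # Toy kernel (not the linked GPT2-Kernel-Fusion code): each program computes
    # one row of sum(Q @ K^T) by streaming K in BLOCK_N-row tiles. num_stages sets
    # how many of those tile loads the compiler pipelines ahead through shared
    # memory, i.e. the prefetching depth being swept here.
    @triton.autotune(
        configs=[triton.Config({}, num_stages=s, num_warps=4) for s in (1, 2, 3, 4)],
        key=["N"],
    )
    @triton.jit
    def qk_rowsum_kernel(q_ptr, k_ptr, out_ptr, N,
                         D: tl.constexpr, BLOCK_N: tl.constexpr):
        row = tl.program_id(0)
        d = tl.arange(0, D)
        q = tl.load(q_ptr + row * D + d)                      # query row stays in registers
        acc = tl.zeros([BLOCK_N], dtype=tl.float32)
        for i in range(0, tl.cdiv(N, BLOCK_N)):
            n = i * BLOCK_N + tl.arange(0, BLOCK_N)
            k = tl.load(k_ptr + n[:, None] * D + d[None, :],  # one K tile per iteration
                        mask=n[:, None] < N, other=0.0)
            acc += tl.sum(q[None, :] * k, axis=1)             # partial QK^T scores for this tile
        tl.store(out_ptr + row, tl.sum(acc, axis=0))

    M, N, D = 1024, 4096, 64                                  # assumed shapes
    q = torch.randn(M, D, device="cuda")
    k = torch.randn(N, D, device="cuda")
    out = torch.empty(M, device="cuda")
    qk_rowsum_kernel[(M,)](q, k, out, N, D=D, BLOCK_N=128)
    print(qk_rowsum_kernel.best_config)                       # which num_stages won the sweep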

Very interesting, and I would love to see the experiments. Quick question: what do you mean by kernel dependent?

Sorry for not being clear. We had two different CUDA functions: one for attention and one for the MLP. Here's the kernel code: https://github.com/sankirthk/GPT2-Kernel-Fusion/blob/main/ke...

We saw different results from pipelining with the attention kernel vs. the MLP kernel (since MLP W1 has to project the attention results into a much higher dimension, the arithmetic intensity shifts toward compute-bound characteristics).
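A rough back-of-the-envelope for that arithmetic-intensity point, with assumed GPT-2-small-like sizes (d_model = 768, context 1024, fp16) rather than the linked kernels' exact setup; it ignores softmax and any shared-memory reuse on the attention side, so treat it as order-of-magnitude only.

    # FLOPs-per-byte estimate, per query token: streaming K/V for attention scores
    # vs. the MLP W1 projection. Sizes are assumptions, not the original experiment's.
    d_model, T, bytes_per_el = 768, 1024, 2

    # Attention (one query): Q.K^T over T keys plus the weighted sum over V.
    attn_flops = 2 * T * d_model * 2                          # two matmul-like passes, 2 FLOPs/MAC
    attn_bytes = (2 * T * d_model + d_model) * bytes_per_el   # stream K and V, read one q row

    # MLP W1 (one token): project d_model -> 4*d_model.
    mlp_flops = 2 * d_model * 4 * d_model
    mlp_bytes = (d_model * 4 * d_model / T + d_model + 4 * d_model) * bytes_per_el
    # ^ the big W1 matrix is shared across the ~T tokens in a tile, so its per-token
    #   byte cost is amortized; that amortization plus the 4x expansion is what pushes
    #   W1 toward compute-bound behavior.

    print(f"attention ~{attn_flops / attn_bytes:.1f} FLOPs/byte")
    print(f"MLP W1    ~{mlp_flops / mlp_bytes:.1f} FLOPs/byte")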


Agreed, this observation holds true for both decode and prefill. Thanks for sharing the code.

I've practiced a healthy skepticism of the recent boom, but I can't see a reason why the long-horizon task length wouldn't stretch to 8 hours, or a week's worth of effort, by next year. After Opus 4.5, governments and organizations should really figure out a path out of this storm, because we're in it now.

Doubling time has been 7 months for a while, so you should expect 8h not 1 week next year.

Predictions extrapolated from historical data in a landscape with fragile priors don't seem like a strong metric to me (a useful approximation at best).

It has significantly accelerated to 4 months since the beginning of 2025, which puts 1 week within reach if things stay on trend. But yes, 7 months is the more reliable long-term trend.
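For concreteness, the extrapolation being argued here is just horizon(t) = horizon_0 * 2^(t / doubling_time); a toy calculation with an assumed (not measured) current horizon of 3 hours:

    # Sketch of the horizon-length extrapolation above.
    # current_horizon_hours is a placeholder assumption, not a reported figure.
    current_horizon_hours = 3.0

    def horizon_after(months, doubling_time_months, h0=current_horizon_hours):
        # exponential trend: the horizon doubles every `doubling_time_months`
        return h0 * 2 ** (months / doubling_time_months)

    for doubling in (7, 4):
        h = horizon_after(12, doubling)
        print(f"{doubling}-month doubling -> ~{h:.0f}h after one year")
    # 7-month doubling -> ~10h (roughly the "8 hours" case)
    # 4-month doubling -> ~24h (a 40h work week needs ~15 months at this rate)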

Can we attribute the acceleration to something specific that might not actually sustain continued growth? For example, agentic coding and reasoning models seem to have made a huge leap in abilities, but that wouldn't translate to ongoing exponential growth.

There's a fair amount of uncertainty on this point. In general, it's unclear when or whether things will plateau (although there are again indications that the trend is accelerating, not decelerating).

That being said, if by "agentic coding" you are implying that a leap in capabilities is due to novel agentic frameworks/scaffolding that have appeared in 2025, I believe you are confusing cause and effect.

In particular, the agentic frameworks and scaffolding are by and large not responsible for the jump in capabilities. It is rather that the underlying models have improved sufficiently such that these frameworks and scaffolding work. None of the frameworks and scaffolding approaches of 2025 are new. All of them had been tried as early as 2023 (and indeed most of them had been tried in 2020 when GPT-3 came out). It's just that 2023-era models such as GPT-4 were far too weak to support them. Only in 2025 have models become sufficiently powerful to support these workflows.

Hence agentic frameworks and scaffolding are symptoms of ongoing exponential growth, not one-time boosts of growth.

Likewise, reasoning models do not seem to be a one-time boost to growth. In particular, reasoning models (or more accurately, RLVR) seem to be an ongoing source of new pretraining data (where the reasoning traces of models created during the process of RLVR serve as pretraining data for the next generation of models).

I remain uncertain, but I think there is a very real chance (>= 50%) that we are on an exponential curve that doesn't top out anytime soon (which gets really crazy really fast). If you want to do something about it, whether that's stopping the curve, flattening the curve, preparing yourself for the curve etc., you better do it now.


Well said. I don't think anybody's stopping anything. I wish I knew how to prepare for it.

After I saw Opus 4.5 search through zig's std io because it wasn't aware of a breaking change in the recent release, I fell in love with claude-code and I don't see a strong enough reason to switch to codex at the moment.


Exactly. They need to bring in Spotify-level caching of streamed music so that it just works when you're in a subway. Constant availability should be table stakes for them.


NATS be trippin, no CAP.


Underrated


It seems like the demise of the possibility of great art over the next 50 years. Maybe it's my bias, but I find everything made by Apple or Netflix almost perfect, and yet not it. Every moment is curated for maximum something, but not for the feeling I used to get, even with filler episodes in between.

