Hacker News | snovv_crash's comments

I agree, but I think the same logic could have been applied to the structure of the article. It could have all been 2 paragraphs.

Try enjoying reading for purposes other than spending as few brain tokens as possible to acquire maximum info. It takes time to understand another person's perspective. Sophie's Choice wouldn't be as good a movie if you watched the 30-second TL;DR.

I found it compelling throughout.


To each their own; I found it tedious and annoying. I quit reading maybe 1/4 of the way in. By then I already had loud alarms going off telling me I needed to read the comments, because I'm sure many of the points would be easy for a real expert to debunk - too much felt off.

Well, I found the text to be obviously inflated by AI, much more verbose than necessary, even if syntactically, grammatically, and structurally it was correct.

> He wasn’t following a plan. He was just that kind of person.

Because the article is AI slop, plain and simple.


This one definitely does not feel like AI to me. I could be wrong. But it has too much warmth.

I would write that prose. It's very powerful to use small sentences with small words to drive a point home. Like when you are in some drawn-out argument about the future with your spouse and your child comes in the room. She says quietly, "Please stop fighting, I'm hungry." How do you argue with that? You can't, it's just true.

Am I AI slop?


How many times would you use that structure in a single article?

> Am I AI slop?

This is the internet, you could be a dog for all anybody cares. If you write like AI though...


I'm more interested in whether it fixes CVEs faster than it introduces them.

That too. Honestly, if AI is the wonder-miracle people act like it is, I expect it should be able to spot complex back-doors that require multiple services - ones that look benign when red-teamed but, used in conjunction, provide the lowest CPU ring access - along with all the obfuscated undocumented CPU instructions and, of course, all the JTAG debugging functions of all the firmware.

Single-letter variables all the way. Then it's easy to tell what code is human-written. /s

My vote: AI-induced psychosis via sycophantic assurances that the results are real. Plus a heap of Dunning-Kruger, by allowing someone with just enough knowledge to be dangerous to get far enough to waste everyone's time.

I wonder if they also only want agents to read it, not people.

LLM prose is very bland and smooth, in the same way that bland white factory bread is bland and smooth. It also typically uses a lot of words to convey very simple ideas, simply because the output is typically based on a small prompt that the model tries to decompress. LLMs are capable of very good data transformation and good writing, but not when they are asked to write an article based on a single sentence.

That's true. I.e. it's not that they're not capable of doing better, it's just whoever's prompting them is typically too lazy to add an extra sentence or three (or a link) to steer it to a different region of the latent space. There's easily a couple dozen dimensions almost always left at their default values; it doesn't take much to alter them and nudge the model to sample from a more interesting subspace style-wise.

(Still, it makes sense to do it as a post-processing style-transfer step, as verbosity is a feature while the model is still processing the "main" request - each token produced is a unit of computation; the more terse the answer, the dumber it gets (these days it's somewhat mitigated by "thinking" and agentic loops).)


Capex vs. opex

Unless you're aware of hyperspectral image adapters for LLMs, they aren't capable of that either.

The real improvement will be when the software engineers get into the training loop. Then we can have MoE that use cache-friendly expert utilisation and maybe even learned prefetching for what the next experts will be.

> maybe even learned prefetching for what the next experts will be

Experts are predicted per layer, and the individual layer reads are quite small, so this is not really feasible. There's just not enough information to guide a prefetch.


It's feasible to put the expert routing logic in a previous layer. People have done it: https://arxiv.org/abs/2507.20984
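
The idea can be sketched roughly: if the router logits for layer l+1's experts are computed from layer l's hidden state, the top-k choice is known one layer early and the expert weights can be fetched while layer l+1's other work runs. This is only an illustrative NumPy sketch under that assumption - the router weights, dimensions, and prefetch hook are all hypothetical, not taken from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D, E, TOP_K = 16, 8, 2      # hidden dim, experts per layer, experts used
h = rng.normal(size=D)      # hidden state leaving layer l

# Hypothetical router for layer l+1, evaluated on layer l's output,
# so the expert choice is available before layer l+1 starts.
W_next = rng.normal(size=(E, D))

def top_k_experts(logits, k):
    """Indices of the k largest router logits, best first."""
    return np.argsort(logits)[-k:][::-1]

logits = W_next @ h
prefetch = top_k_experts(logits, TOP_K)
# While layer l+1's attention runs, these expert weights could be
# loaded from slow storage ahead of the MoE block.
print("experts to prefetch:", prefetch)
```

The training-side question the thread raises remains: the router has to be taught to make this early prediction accurate, it doesn't come for free.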

Manually no. It would have to be learned, and making the expert selection predictable would need to be a training metric to minimize.

Making the expert selection more predictable also means making it less effective. There's no real free lunch.

On CPU, with bigger K, you would put the centroids in a search tree to take advantage of the sparsity, while a GPU would compute the full NxK distance matrix. So, from my understanding, the bottleneck they are fixing doesn't show up on CPU.
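
A minimal sketch of the contrast, assuming SciPy's `cKDTree` for the tree-based side (the dataset and sizes are made up): the dense path materializes the full NxK distance matrix, while the tree path queries only the nearby centroids, which is what pays off on CPU as K grows in low dimensions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
N, K, D = 1000, 64, 3             # points, centroids, low dimension (e.g. xyz)
points = rng.normal(size=(N, D))
centroids = rng.normal(size=(K, D))

# GPU-style: full N x K squared-distance matrix, argmin per row.
d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
assign_dense = d2.argmin(axis=1)

# CPU-style: KD-tree over the centroids; each query visits only a
# small part of the tree instead of touching all K centroids.
tree = cKDTree(centroids)
_, assign_tree = tree.query(points, k=1)

print(np.array_equal(assign_dense, assign_tree))  # same assignments
```

As the reply below notes, this only works while D stays small; KD-trees degrade toward brute force in high dimensions.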


search trees tend not to scale well to higher dimensions though, right?

from what I've seen I had the impression that Yinyang k-means was the best way to take advantage of the sparsity.


Most data I've used has been geospatial with D<=4 (xyzt), so for me search trees worked great. But for things like descriptor or embedding clustering, yes, trees wouldn't be useful.
