Hacker News | solarkraft's comments

It’s not even a less powerful device. It has the same performance as the M1, which is still a beast.

Oh, awesome! I kind of missed something like this when I last made a userscript for a React app.

It’s still an insightful and well written comment, but the LLM-ness does make me wonder whether this part was actually human-intended or just LLM filler:

> The discipline to do it consistently enough that agents can actually retrieve and use it is what's missing, and structuring it for that purpose is genuinely underexplored territory

Because while I somewhat agree that the discipline may be missing, I don’t think it’s a groundbreaking revelation that it’s actually quite easy to tell the LLM to put the key reasoning you give it throughout the conversation into the commits and issues it works on.


Suppose you spend months deeply researching a niche topic. You make your own discoveries, structure your own insights, and feed all of this tightly curated, highly specific context into an LLM. You essentially build a custom knowledge base and train the model on your exact mental framework.

Is this fundamentally different from using a ghostwriter, an editor, or a highly advanced compiler? If I am doing the heavy lifting of context engineering and knowledge discovery, it feels restrictive to say I shouldn't utilize an LLM to structure the final output. Yet, the internet still largely views any AI-generated text as inherently "un-human" or low-effort.


I would ignore any HN content written by a ghost writer or editor. I guess I would flag compiler output but I’m not sure we’re talking about the same thing?

I’m on the internet for human beings. I already read a newspaper for editors and books for ghostwriters.

Not for long though, HN is dying. Just hanging around here waiting for the next thing, I guess…


Sorry man, the internet has died and is not being replaced by anything but an authoritarian nightmare.

My only guess is that if you want actual humans, you'll have to do this IRL. Of course we as humans have got used to the 24/7 availability and scale of the internet, so this is going to be a problem, as these meetings won't provide the hyperactive environment we want.

Any other digital system will be gamed in one way or another.


The problem is: the structure of LLM output generally makes everything sound profound. It’s very hard to tell quickly whether a comment has actual signal or is just well-written bullshit.

And because the cost of generating the comments is so low, there’s no longer an implicit stamp of approval from the author. It used to be the case that you could engage with a comment in good faith, because you knew somebody had spent effort creating it, so they must believe it’s worth your time. Even on a semi-anonymous forum like HN, that used to be a reliable signal.

So a lot of the old heuristics just don’t work on LLM-generated comments, and in my experience 99% of them turn out to be worthless. So the new heuristic is to avoid them and point them out to help others avoid them.

I would much rather just read the prompt.


I hadn't seen this articulated so eloquently about LLM text output before, but you're right: "LLMs make everything sound profound" and "well-written bullshit".

This has severe ramifications for internet communications in general on forums like HN and others, where it seems LLM-written comments are sneaking in pretty much everywhere.

It's also very, very dangerous :/ Because the structure of the writing falsely implies authority and trust where there shouldn't be any, or where it's not applicable.


How do you deal with the comments sometimes being relatively noisy for humans? I tend to be annoyed by comments overly referring to a past correction prompt and not really making sense by themselves, but then again this IS probably the highest value information because these are exactly the things the LLM will stumble on again.

    > How do you deal with the comments sometimes being relatively noisy for humans?
To an extent, that is a function of tweaking the prompt to get the desired level of detail and signal-to-noise ratio from the LLM, e.g. by constraining the word count it can use for comments.

We have a small team of approvers who review every PR. Since we can't see the original prompt and the flow of interactions with the agent, this approach lets us see that by proxy when reviewing the PR, which is immensely useful.

Even for things like enum values, for example. Why is this enum here? What is its use case? Is it needed? Having the reasoning dumped out allows us to understand what the LLM is "thinking".

(Of course, the biggest benefit is still that the LLM sees the reasoning from an earlier session again when reading the code weeks or months later).


Inline comments in function body: for humans.

Function docs: for AI, with clear trigger (“use when X or Y”) and usage examples.
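A minimal sketch of that convention, using a hypothetical Python function (the names and example are mine, not from the comment): the docstring carries the AI-facing "use when" trigger and a usage example, while the body keeps only terse, human-oriented inline comments.

```python
import os


def normalize_path(path: str) -> str:
    """Collapse redundant separators and up-level references in a path.

    Use when: comparing user-supplied paths or using them as cache keys.
    Avoid when: the value is a URL, or symlink segments must be preserved.

    Example:
        >>> normalize_path("./a//b/../c")
        'a/c'
    """
    # normpath is purely lexical; it never touches the filesystem.
    return os.path.normpath(path)
```

The split keeps the expensive-to-write, retrieval-oriented context (trigger conditions, examples) where an agent reading the function signature will see it first.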


I really hate its tendency to leave those comments as well. I seem to have coached it out with some claude.md instructions but they still happen on occasion.

From what I remember, this was for describing the project’s structure over letting the model discover it itself, no?

Because how else are you going to teach it your preferred style and behavior?


Just like any Google product then.

I personally do this and I can imagine a world in which it is popular with privacy/sovereignty enthusiasts. I have doubts that this share of people will be significant enough for many companies to cater their products to this model - but if anyone will, it will be Apple - and it would yield them a few extra Mac Studio sales and likely make much more profit than selling the same service.

It’s a big deal! Prompt processing was previously the Mac’s weak point. Sure, output generation matters for reciting files back in programming, but in general conversation I’d rather have it output a short answer anyway (after extensive processing by a smart model).

General conversation is already free with all the major providers (Claude, ChatGPT, etc.). That's not where the major gains in productivity lie.

> The damn thing _talks_. You can just _speak_ to it. You can just ask it to do what you want

I mean - yeah. So do humans. But it turns out that a lot of humans require considerable process to productively organize too. A pet thesis of mine is that we are just (re-)discovering the usefulness of process and protocol.


> It almost never ends with a follow up question

Oh my god. I hate this so much. Gemini’s Voice mode is trained to do this so hard that it can’t even really be prompted away. It completely derails my thought process and made me stop using it altogether.

