Hacker News | jrecyclebin's comments

Well, except that, in this case, Copilot really is for entertainment purposes only.


> There is no need for determinism to guarantee the job will be done identically every time if we only plan to do it once.

So can't you just save the conversation transcript and replay it with the tools? Seems a lot more efficient than regenerating the whole thing. And there's no risk of branching when a tool reply is slightly different. (Of course, errors can still occur on subsequent runs.)
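A minimal sketch of the replay idea, with a made-up transcript format and stand-in tools (none of this is any particular agent framework's API):

```python
import json

# Hypothetical transcript format: an ordered list of tool calls
# recorded during the original agent run.
transcript = json.loads("""
[
  {"tool": "read_file", "args": {"path": "notes.txt"}},
  {"tool": "append",    "args": {"path": "notes.txt", "text": "done"}}
]
""")

# Stand-in tool implementations; a real replay would invoke the same
# tools the agent used, in the same order, with the recorded args.
files = {"notes.txt": "hello"}

def read_file(path):
    return files[path]

def append(path, text):
    files[path] += text
    return "ok"

TOOLS = {"read_file": read_file, "append": append}

def replay(transcript):
    """Re-run the recorded tool calls without invoking the LLM at all."""
    results = []
    for step in transcript:
        results.append(TOOLS[step["tool"]](**step["args"]))
    return results

print(replay(transcript))  # -> ['hello', 'ok']
```

The LLM never runs during replay, so the result is exactly as deterministic as the tools themselves — which is where the "errors can still occur on subsequent runs" caveat comes in.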


An agent can still use "forgot password" on many accounts. Or a magic link.


Another approach to journal writing is basically the opposite: rather than treating it like a task with very rigid requirements to fulfill, find a notebook and pen that you'll enjoy spending time with. An easy start is a Midori Ruled A5 (a very simple, lay-flat notebook) and a Uniball Zento Signature (the most hyped pen in the world right now), and treat them basically like little friends you spend time with. If you really come to enjoy it, writing only two sentences is denying yourself quality time writing and reflecting at a leisurely pace.

I'd also think you're more likely to read back if writing time is a fond memory.


Journaling on paper produces loads of paper with private information that you either need to carry with you when you move or dispose of securely somehow. It's fine if you own your house and don't move, but if you do, it's a pain.


Digitize and burn.


Digest and excrete.


Kind of ironic - the moat is money...

At the same time, I see the appeal. I feel like 10% of the comments I read lately are "is this an AI response?" - would be nice to be free of that. Probably not possible tho.


The weakest part is the last one - and it's a big one. Personalsit.es is just a flat single-page directory (of thumbnails, even, not content - so the emphasis is on design). To be part of the conversation, you'd list your site there and hope someone comes along. Compare with Reddit, where you start commenting and you're close to an equal with every other commenter.

Webmentions do get you there - because they're a commenting system. But for finding the center of a community, it seems like you're still reliant on Bluesky or Mastodon or something. (Which doesn't "destroy all websites.") Love the sentiment ofc.


Great advertising for vidalias. I simply have to try one now.


They're really good. The apple thing is no joke. Vidalia and Walla Walla onions are top-tier alliums.


Author here. Our Vidalia season usually starts in late April, FYI. If you visit our website and submit your email there, I'll drop you a note when our order lines are open.


Good luck finding them anywhere right now


Skill descriptions get dumped into your system prompt - just like MCP tool definitions and agent descriptions before them. The more you have, the less the LLM can focus on any one piece of it. You don't want a bunch of irrelevant junk in there every time you prompt it.

Skills are nice because they offload all the detailed prompts to files that the LLM can ask for. It's getting even better with Anthropic's recent switchboard-operator approach (the tool search tool), which doesn't clutter the system prompt but instead cuts the tool list down to those the LLM will actually need.
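A minimal sketch of the offloading idea — hypothetical file layout and helper names, not Anthropic's actual implementation. Only the one-line descriptions go into the system prompt; the full SKILL.md is read on demand:

```python
from pathlib import Path

# Hypothetical layout: each skill lives in its own folder containing a
# SKILL.md whose first line is a one-sentence description.
SKILLS_DIR = Path("skills")

def skill_index():
    """Build the short listing that goes into the system prompt."""
    lines = []
    for md in sorted(SKILLS_DIR.glob("*/SKILL.md")):
        first_line = md.read_text().splitlines()[0]
        lines.append(f"- {md.parent.name}: {first_line}")
    return "\n".join(lines)

def load_skill(name):
    """Read the full detailed prompt only when the LLM asks for this skill."""
    return (SKILLS_DIR / name / "SKILL.md").read_text()
```

The system prompt stays small no matter how detailed the individual skill files get; the cost of a skill is one line until the model decides it's relevant.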


Can I organize skills hierarchically? If Claude Code loads all definitions into the prompt when many skills are defined, potentially diluting its ability to identify relevant ones, I'd like a system where only broad skill-group summaries load initially, with detailed descriptions loaded on demand when Claude detects that a matching skill group might be useful.


There's a mechanism for that built into skills already: a skill folder can also include additional reference markdown files, and the skill can tell the coding agent to selectively read those extra files only when that information is needed on top of the skill.

There's an instruction about that in the Codex CLI skills prompt: https://simonwillison.net/2025/Dec/13/openai-codex-cli/

  If SKILL.md points to extra folders such as references/, load only the specific files needed for the request; don't bulk-load everything.


Yes, but those aren't quite new skills, right?

Can those markdown files in the references also, in turn, tell the model to lazily load more references only if it deems them useful?


Yes, using regular English prompting:

  If you need to write tests that mock
  an HTTP endpoint, also go ahead and
  read the pytest-mock-httpx.md file


> Anthropic's recent switchboard operator

I don’t know what this is and Google isn’t finding anything. Can you clarify?



While I totally agree with you, I also can see a world where we just throw a ton of calls in the MCP and then wrap it in a subagent that has a short description listing every verb it has access to.


Absolutely. Remember these are just tools; how each one of us uses them is a different story. A lot can be leveraged as well by adding a couple of lines to CLAUDE.md on how it should use this memory solution, or not - it's totally up to anyone. You can also have a subagent responsible for project management that is in charge of managing memory, or have a coordinator. Again, a lot of testing needs to be done :)


Something of a logical leap here: if LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first? Sounds like companies that deserve to go away.


> If LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first?

We see companies lay off workers for all sorts of short-sighted reasons. They'll do mass layoffs to reduce labor costs for short-term profits and stock-price increases, so the execs and shareholders can cash out. AI is just the current reason the executive class has decided to use for the layoffs they were going to do regardless.


Further: business and management are exceptionally fad-driven, for numerous information-theoretic reasons.

Performance is difficult to measure and slow to materialise. At the same time, everyone, especially senior leadership and managers, is desperately competitive, even where that competition is on the perception rather than reality of performance. There's a very strong follow-the-herd / follow-the-leader(s) mentality, often itself driven by core investors and creditors.

A consequence is a tremendous amount of cargo-culting, in the sense of aping the manifest symbols of successful (or at least investor-favoured) firms and organisations, even where those policies and strategies end up incurring long-term harms.

Then there's the apparent winner-take-all aspect of AI, which if true would result in tremendous economic power, if not necessarily financial gains, to a very small number of incumbents. Look at the earlier fallout of the railroad, oil, automobile, and electronics industries for similar cases.

(I've found over the years various lists of companies which were either acquired or went belly-up in earlier booms, they're instructive.)

NB: you'll find fad-prone fields anywhere a similar information-theoretic environment exists: fashion, arts, academics, government, fine food, wine collecting, off the top of my head. Oh, and for some reason: software development.


Yep, those are the companies that would go away.


LLMs are just a stock price preserving excuse to do layoffs from previous overhiring.


Yes. A lot of these people should have been laid off anyway. The Musk Twitter massacre taught everybody a lesson, and layoffs were hot before AI was even the main concern.

Also, the DEI massacre is probably going to develop (or has developed) into a full scale HR/Social PR massacre. Instead of getting yelled at for doing the wrong thing, better to do nothing but make more money. And a side-benefit is that firing all of those people makes it even easier to fire more people. (Is that the singularity?)

I don't doubt that some industries are going to be nearly wiped out by AI, but they're going to be the ones where it makes sense. LLMs are basically a super Google Translate, and translators and maybe even language teachers are in deep trouble. In-betweeners and special-effects people might be in even more trouble than they already were. Probably a lot more stuff that we can't even foresee yet. But for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think, but it isn't thinking for you; it's a moron.


> for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think but it isn't thinking for you, it's a moron.

Beautiful description of AI. It’s the tech equivalent of the placebo effect. It does truly work for some, until you look closely and it’s actually a bunch of hot air.

Is a placebo worth a trillion dollars?


Yeah exactly. The question should always be - are these layoffs incremental because of AI? If not, then they should not count in this kind of analysis.


> The Musk Twitter massacre taught everybody a lesson

Well, depends on which lesson. "The company can still run" or "we actually won't build anything new for years".

Twitter released a couple of things that were being worked on before the acquisition, and then nothing else. (Grok comes from a different company that was later merged into it, but obviously had different employees.)


> The Musk Twitter massacre taught everybody a lesson

That companies can be kept in KTLO (keep-the-lights-on) mode with only a skeleton crew?

I think everybody knew that already. The hot takes that Twitter was going to disappear were always silly, probably from people butthurt that a service they liked was being fundamentally changed.


Or maybe companies are letting people go for other reasons and blaming it on AI?

