First thing I did here is a grep for "Skills" and no hits. Simon's posts are well upvoted here and Anthropic/Claude is a bit of HN darling, but I think they are playing the hype game a bit too well here.
3 months ago, Anthropic and Simon claimed that Skills were the next big thing and going to completely change the game. So far, from my exploration, I don't see any good examples out there, nor is there a big growing/active community of users.
Today, we are talking about Cowork. My prediction is that 3 months from now, there will be yet another new Anthropic positioning, followed up with a detailed blog from Simon, followed by HN discussing possibilities. Rinse and Repeat.
This is something I have experienced first hand participating in the Vim/Emacs/Ricing communities. The newbie spends hours installing and tuning workflows with the mental justification of long-term savings, only to throw it all away in a few weeks when they see a new, shinier thing. I have been there and done that. For many, many years.
The mature user configures and installs 1 or 2 shiny new things, possibly spending several hours even. Then he goes back to work. 6 months later, he reviews his workflow, decides what has worked and what hasn't, and looks for the new shiny things on the market. Because you need to use your tools in anger, in the ups and downs, to truly evaluate them in various real scenarios. Scenarios that won't show up until serious use.
My point is that Anthropic is incentivized to continuously move goalposts. Simon is incentivized to write new blogs every other day. But none of that is healthy for you and me.
They were only announced in October and they've already been ported to Codex and Gemini CLI and VS Code agents and ChatGPT itself (albeit still not publicly acknowledged there by OpenAI). They're also used in Cowork and are part of the internals in Fly's new Sprites. They're doing extremely well for an idea that's only three months old!
This particular post on Cowork isn't some of my best work - it was a first impression I posted within a couple of hours of release (I didn't have preview access to Cowork) just to try and explain what the thing was to people who don't have a $100+/month Claude Max subscription.
I don't think it's "unhealthy" for me to post things like this though! Did you see better coverage of Cowork than mine on day one?
I read that as: it's not healthy to constantly follow the day-one posts about every iteration of brand-new technology, trying to work out how to incorporate each one into a rapidly evolving workflow.
It's not an attack on your article or your habits; it's an accurate indictment of chronically consuming probably short-lived hype instead of practicing craft and using hardened tools. Much like watching certain programmers on YouTube to keep up with the latest frontend library instead of just working on something with versatile, generalizable, industry-relevant tools.
You made the right call. Skills were added to Antigravity and I immediately started creating and using them. I never used custom MCP servers, but skills were immediately obvious to me.
An example: I made a report_polisher skill that cleans up markdown formatting, checks image links, and then uses pandoc to convert the report to HTML. I asked the tool itself to create the skill, then I just tweaked it.
How is the fidelity of something like this? It seems like it would randomly fuck it up once in a blue moon. Is that not the case? For your use case I don't understand why you would want an AI involved at all.
Skills may have code attached to them, so in this case the formatting and converting is all code.
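To make that concrete: a report-polisher script attached to such a skill could be deterministic code end to end, with the LLM only deciding when to run it. A minimal sketch (the function names and the pandoc invocation are my own assumptions, not the actual skill described above):

```python
import re
import subprocess
from pathlib import Path

def clean_markdown(text: str) -> str:
    """Strip trailing whitespace and collapse runs of blank lines."""
    lines = [line.rstrip() for line in text.splitlines()]
    collapsed = re.sub(r"\n{3,}", "\n\n", "\n".join(lines))
    return collapsed.strip() + "\n"

def missing_images(text: str, base: Path) -> list[str]:
    """Return local image paths referenced as ![alt](path) that don't exist."""
    paths = re.findall(r"!\[[^\]]*\]\(([^)\s]+)\)", text)
    return [p for p in paths if not p.startswith("http") and not (base / p).is_file()]

def polish(report: Path) -> None:
    """Clean a markdown report in place, then convert it to HTML via pandoc."""
    text = clean_markdown(report.read_text())
    broken = missing_images(text, report.parent)
    if broken:
        raise SystemExit(f"missing images: {broken}")
    report.write_text(text)
    subprocess.run(
        ["pandoc", str(report), "-o", str(report.with_suffix(".html"))],
        check=True,
    )
```

Because every step is ordinary code, the LLM can't "randomly fuck it up"; its only job is noticing that the skill applies and invoking it.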
The value of skills is that they sit in an LLM's context for only a few tokens, and the LLM activates one when it decides it's relevant (bringing the full skill into context). It's a cheaper alternative to having a huge CLAUDE.md (or equivalent) file.
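Concretely, a skill is a directory with a SKILL.md whose frontmatter (name and description) is all that's preloaded; the body and any attached scripts are only read once the model activates the skill. A hypothetical sketch of the layout (the wording and file names here are my own, not an official example):

```markdown
---
name: report-polisher
description: Clean up a markdown report and convert it to HTML with pandoc.
---

When the user asks to polish a report:

1. Run scripts/polish.py on the report file to normalize formatting.
2. Confirm every local image link resolves before converting.
3. Convert with pandoc: `pandoc report.md -o report.html`.
```

Only the two frontmatter lines cost context up front; the rest is free until it's needed.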
Please do open-source your skill and blog about it. Also, would like to hear from your experience after a few months of use. Like - how many times did you use the skill, did you run into some problems later (due to some unexpected thing in the markdown), did the skill generalize - or do you have to make tweaks for particular inputs.
@brailsafe has accurately captured where I am coming from.
I want more blogs/discussion from the community about the existing tools.
In 3/6 months, how many skills have you written? How many times have you used each skill? Did you have to edit skills later due to unseen corner cases, or did they generalize? Are skills being used predominantly at the individual level, or are entire teams/orgs able to use a skill as-is? What are the use cases skills are not good at? What are the shortcomings?
(You being the metaphorical HN reader here of course.)
HN has always been a place of greater technical depth than other internet sites and I would like to see more of this sort of thing on the front page along with day one calls.
Anything that lets us compose smaller tasks into larger ones effectively is helpful. That’s because self-attention (ie context) is still a huge limiting factor.
As someone who uses these tools a lot, and who sits on the bleeding edge everyday, I agree with you.
MCP got a ton of use out of the gate. People were fawning over it for the first few months, and we can see how well that hype survived contact with hardcore engineers.
I really disagree; skills are really quite useful and there is a lot of usage + community - e.g. take a look at https://github.com/obra/superpowers which I know is used by a lot of people to smooth out their workflow with Claude, with great results (not forced spec-driven development, just better context use + better results). Just this week I used skills to help encapsulate a way to document legacy services ahead of a rewrite (given that my experience now is that rewriting becomes a valid path vs refactoring in many instances): https://github.com/cliftonc/unwind.
I looked at superpowers, but it felt way too generic. Thanks for sharing unwind. More discussion/blogs about these kind of skills is what I am looking for. I would encourage you to write a blog on unwind, explaining in detail how it has helped you. Even better if you do it after 3 months of use, explaining the journey/evolution of the skill.
I'm happy to bet that skills -- or "a set of instructions in markdown that get sucked into your context under certain conditions" -- will stick around. Similarly, I think that Claude Code/Cowork -- or "interactive prompt using shell commands on a local filesystem" -- will also stick around.
I fully anticipate there being a fair amount of thrashing on what exactly the right wrapper is around both of those concepts. I think the hard thing is to discriminate the learned constants (vim/emacs) from the attempts to re-jiggle or extend them (plugins, etc.); it's actually useful to get reviews of these experiments exactly so you don't have to install all of them to find out whether they add anything.
(On skills, I think that the reason why there "aren't good examples out there" is because most people just have a stack of impromptu local setups. It takes a bit of work to extract those to throw them out into the public, and right now it's difficult to see that kind of activity over lots of very-excitable hyping, as you rightly describe.)
The deal with skills and other piles of markdown is that they don't look, even from a short distance, like you can construct a business model for them, so I think they may well end up in the world of genuine open source sharing, which is a much smaller, but saner, place.
> (On skills, I think that the reason why there "aren't good examples out there" is because most people just have a stack of impromptu local setups. It takes a bit of work to extract those to throw them out into the public, and right now it's difficult to see that kind of activity over lots of very-excitable hyping, as you rightly describe.)
Very much this. All of my skills/subagents are highly tailored to my codebases and workflows, usually by asking Claude Code to write them and resuming the conversation any time I see some behavior I don't like. All the skills I've seen on Github are way too generic to be of any use.
I thought skills were supposed to be sharable, but (a) ones that are being shared openly are too generic and not useful, (b) people are writing super specific skills and not sharing them.
Would strongly encourage you to open-source/write blog posts on some concrete examples from your experience to bridge this gap.
To be fair, Cowork and similar things are just trying to take the agentic workflows and tools that developers are already accessing (eg most of us have already been working with files in Cursor/CC/Codex for a long time now, it's nothing new) and making them friendly for others.
> 3 months ago, Anthropic and Simon claimed that Skills were the next big thing and going to completely change the game. So far, from my exploration, I don't see any good examples out there, nor is there a big growing/active community of users.
Skills have become widely adopted since Anthropic's announcement. They've been implemented across major coding agents[0][1][2] and standardized as a spec[3]. I'm not sure what you mean by "next big thing" but they're certainly superior to MCP in ways, being much easier to implement and reducing context usage by being discoverable, hence their rapid adoption
I don't know if skills will necessarily stay relevant as the rest of the tooling and patterns evolves. But that's more because of huge capital investment around everything touching AI, very active research, and actual improvements in the state of the art, rather than simply "new, shinier things" for the sake of it.
2 days ago I built a skill to automate a manual workflow I was using: after Claude writes and commits some code, have Codex review that code, then have Claude go back and address what Codex finds. I used this process to implement a fairly complete Docusign-like service, and it did a startlingly good job right out of the gate; the bugs were fairly shallow. In my manual review of the Codex findings, it seems to be producing good results.
Claude Code largely built that skill for me.
Implemented as a skill and I've been using it for the last 2 days to implement a "retrospective meeting runner" web app. Having it as a skill completely automates the code->review->rework step.
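For anyone curious what such a cross-agent review skill might look like, here is a hypothetical SKILL.md sketch of that loop. The structure, names, and the `codex exec` invocation are my assumptions about how one could wire this up, not the actual skill described above:

```markdown
---
name: codex-review-loop
description: After committing code, have Codex review the change and address its findings.
---

After writing and committing a change:

1. Ask Codex to review the latest commit non-interactively, e.g.
   `codex exec "Review the diff in HEAD and list concrete problems"`,
   and capture the output to review.md.
2. Triage each finding in review.md: fix it, or note why it is a non-issue.
3. Commit the rework and repeat until Codex reports no substantive findings.
```

The interesting design choice is that the reviewing model is a different vendor's, so it doesn't share the writer's blind spots.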
I looked at the official repo of skills, but I found those very generic and artificial.
I would encourage you to write up a blog post of your experience and share a version of the skill you have built. And then follow up with a blog post after 3 months with analysis like how well the skill generalized for your daily use, whether you had to make some changes, what didn't work etc. This is the sort of content we need more of here.
I partially agree with you that things get abandoned by users when they are too complex, but I think skills are a big improvement compared to what we had before.
Skills + tool search tool (dynamic MCP loading) announced recently are way better than just using MCP tools. I see more adoption by the people around me compared to a few months ago.
Anthropic has great marketing. They get shit (and I do mean shit) to stick in a way that I don't think anyone else in the AI space could. MCP and skills were both obvious duds to people who understand the tech.
Simon is more influencer than engineer at this point, he's incentivized to ride waves to drive views, and I think the handwaving "this will be amazing" posts have been good to him, even if they turn out to be completely wrong.
I'm not really sure I understand this critique. Skills and cowork are not mutually exclusive. It sits in a gap between Chat and Claude Code.
In regular Chat, I struggle to get the agent to consistently traverse certain workflows that I have. This is something that I can trivially do in Claude Code - but Claude Code wants to code (so I'm often fighting its tendencies).
Cowork seems like it's going to allow me to use the best parts of Claude Code, without being forced to output everything to code.
It’s not quite at the same level but it reminds me of YouTubers who get products from companies for free for a “review” and then they say “no money exchanged hands”. The incentives are implicit wink-wink and everyone knows it except the audience.
In the case of Cowork I didn't even get preview access, I learned about it at the same moment as everyone else did. There was no incentive from Anthropic to write about it at all (and I expect they may have preferred me not to bang on about prompt injection risks or point out the bugs in their artifacts implementation.)
Honestly, constantly having to fend off accusations of being a shill is pretty tiring.