Hacker News: sukit's comments

Been trying to get into jj lately, but I rely a lot on VS Code's git gutter to review changes as I code. Doesn't look like jj has an equivalent in VS Code. Anyone got tool recommendations?

There are a number of jj plugins for VS Code. VisualJJ and Jujutsu Kaizen are probably the two most popular:

https://www.visualjj.com/

https://github.com/keanemind/jjk


> jjk

what's next, "oh! my gitess"? "chainsvn man"?


I just use the VS Code git integration with a jj colocated git repo. HEAD is @- and the changes in @ are treated as working copy changes. It works for everything I was using the VS Code integration for.

Same experience here

You should be able to use the normal git gutter as long as your repository is colocated.

jjk or jjview

I have a PR up for jjk that shows the full change as a review, and there's another user's PR that allows diffs over arbitrary ranges (i.e. for working out whether the commits that make up a PR are good as a whole rather than individually).


visualjj, it’s fantastic

Code can be logically separated, but my mind struggles to do the same. I guess this might require some training?


Using different models to supervise each other sounds reasonable. I'm curious which plans you're subscribed to for Claude and GPT?


I’m on the $100 Claude and $20 GPT plans. I almost never run out of weekly usage on Claude and occasionally blow my Codex allowance, but OpenAI lets you buy credits ad hoc (either $40 per 1,000, or just turn on and monitor the “top-off” option and set it to buy, say, $5 at a time). The one or two times I’ve run out, by the time the week reset I’d only spent an extra 5 or 10 bucks.


I haven’t tried that, does it introduce some kind of “black magic” that makes the agent hard to observe?


It's just a set of skills and Claude commands. You can install it via Claude plugins and read the prompts yourself.


Do you manage worktrees manually or leave it to the agent?


I leave it to the agent, tbh. I spend more time testing the worktrees, but even agents can do that for you. If you add the Playwright MCP, you can run TDD on both the frontend/backend and the e2e pipelines, then merge when the cases work and the tests are approved.


Fair question. I haven’t done a systematic benchmark yet, so I don’t have hard numbers to point to. Honestly I’ve mostly been iterating from actual use. The main test has been whether it helps me keep the good parts of brainstorming with the agent, recover context across longer multi PR or multi session work, and reduce friction overall. So right now the evidence is mostly qualitative and based on my own workflow, not a formal evaluation.


Thanks a lot! I’ll definitely give that a try.


That's a great point. I think there is some pattern to when it works well or not, but I’m not sure if that’s universal or just tied to how I use it. Different prompting styles or workflows might lead to very different outcomes.


I might be misunderstanding how it works, but from what I’ve seen, CLAUDE.md doesn’t seem to be automatically pulled into context. For example, I’ve explicitly written in CLAUDE.md to avoid using typing.Dict and prefer dict instead — but Claude still occasionally uses typing.Dict.

Do I need to explicitly tell Claude to read CLAUDE.md at the start of every session for it to consistently follow those preferences?


No, Claude Code will automatically read CLAUDE.md. LLMs are still hit or miss at following specific instructions. If you have a linter, you can put a rule there and tell Claude to use the linter.
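To illustrate the linter approach: assuming you use Ruff (an assumption, not stated in the thread), its rule UP006 flags `typing.Dict`/`typing.List` in favor of the builtin `dict`/`list`, which turns the CLAUDE.md preference into something enforceable:

```toml
# pyproject.toml -- sketch, assuming Ruff is your linter
[tool.ruff.lint]
# UP006 flags typing.Dict / typing.List in favor of builtin dict / list
extend-select = ["UP006"]
```

With that in place, "run the linter and fix what it reports" is an instruction Claude can verify mechanically, rather than a style preference it has to remember.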


That’s the problem — about 50% of the time, the result is so messy that cleaning it up takes more time than just writing it. So I wonder is there a better way to prompt or structure things so that I consistently get clean, usable code?


> That’s the problem — about 50% of the time, the result is so messy that cleaning it up takes more time than just writing it.

Are you using git? As in "git checkout ." or "git checkout -b claude-trial-run"?
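A minimal sketch of that idea (the branch name is hypothetical, and the repo setup lines here just stand in for your existing repository): give the agent a disposable branch so a messy run costs one command to discard.

```shell
set -e
# Stand-in for an existing repo (replace with: cd your-repo)
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "init"

git checkout -q -b claude-trial-run   # sandbox branch for the agent's edits
# ... let the agent work here ...
git diff @{-1}                        # review against the branch you came from
git checkout -q -                     # go back
git branch -q -D claude-trial-run     # discard the experiment if it was messy
```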


This is my experience as well, and that of many people I’ve talked to. The ones who breathlessly state how awesome it is seem to all be business people to me rather than engineers. It keeps throwing me into doubt.


I’ve seen so many people praise Claude Code so highly that my first instinct was to assume I must be using it wrong. I’ve tried quite a few different workflows and prompting styles — but still haven’t been able to get results anywhere near as good as what those people describe.

