nikcub's comments | Hacker News

I delayed adopting Conductor because I had my own worktree + PR wrappers around cc, but I tried it over the holidays and wow. The combination of Claude + Codex + Conductor + cc on the web and Claude in GitHub can be so insanely productive.

I spend most of my time updating the memory files and reviewing code, and just letting a ton of tasks run in parallel.


There have been multiple model generations now where Anthropic have proven that they're ahead of everyone at developing LLMs for coding - if anything the gap has widened with Opus 4.5.


Codex is, and has been, superior for some time (though it is slower).


What types of tasks do you find Codex superior at?



I meant more: how do they pay for all that bandwidth? I can download a 20 GB model in about two minutes.


> But I don’t know anyone non-technical who has ditched ChatGPT as their default LLM.

Google are giving away a year of Gemini Pro to students, which has driven a big shift. The FT reported today[0] that new Gemini app downloads are almost catching up to ChatGPT's.

[0] https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c...


The trio library has an excellent tutorial that explains all of these concepts[0]. Even if you don't use trio and stick to the core Python libraries, it's worth reading:

https://trio.readthedocs.io/en/stable/tutorial.html
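The core idea the tutorial teaches (trio's "nursery", i.e. structured concurrency) can be sketched with just the stdlib asyncio library too - the `fetch` coroutine and its delays below are made-up placeholders, not anything from the tutorial:

```python
import asyncio

# Start several child tasks and don't move on until every one of
# them has finished - the structured-concurrency idea that trio's
# nursery enforces, here approximated with asyncio.gather().
async def fetch(name: str, delay: float) -> str:
    # Placeholder for real I/O work.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # gather() waits for all children and returns their results
    # in call order.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

print(asyncio.run(main()))  # ['a done', 'b done']
```

trio's nursery goes further than gather (cancellation of siblings on error, no orphaned tasks), which is exactly what the tutorial walks through.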


https://github.com/ocaml/ocaml/pull/14369/files#diff-bc37d03...

Found this part hilarious - gitignoring all of the Claude planning MD files that it tends to spit out, and then including that in the PR.

Lazy AI-driven contributions like this are why so many open-source maintainers react negatively to any AI-generated code.


The AI should've told him that you can use a local gitignore (.git/info/exclude).


(keep on disk, don't commit)
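A quick sketch of that tip - the repo path and the CLAUDE*.md pattern are just hypothetical examples, any .gitignore-style pattern works:

```shell
# Throwaway clone for illustration.
git init -q scratch-repo && cd scratch-repo

# Patterns in .git/info/exclude behave like .gitignore entries but
# live only in this clone on disk - never committed, never shared.
echo 'CLAUDE*.md' >> .git/info/exclude

# Prints "ignored": the hypothetical planning file is now excluded.
git check-ignore -q CLAUDE_PLAN.md && echo ignored
```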


Don’t open time-wasting PRs, full stop, and give OSS maintainers a break - that's the better message to take home from this.


Don't use the same browser regardless - the key is to compartmentalise.


There is an entire business opportunity in just building better user and developer frontends to Google's AI products. It's so incredibly frustrating.


lol that’s our whole company, Nimstrata


They require the bot management config to update and propagate quickly in order to respond to attacks - but this seems like a case where updating a single instance first would have surfaced the panic and stopped the deploy.

I wonder why ClickHouse is used to store the feature flags here, as it has its own deduplication footguns[0] which could also easily have led to a query blowing up 2-3x in size. OLTP/sqlite seems more suited, but I'm sure they have their reasons.

[0] https://clickhouse.com/docs/guides/developer/deduplication


I don't think sqlite would come close to their requirements for permissions or resilience, to name a couple. It's not the solution for every database issue.

Also, the link you provided is for eventual deduplication at the storage layer, not deduplication at query time.


I think the idea is to ship the sqlite database around.

It’s not a terrible idea, in that you can test the exact database engine binary in CI, and it’s (by definition) not a single point of failure.


I think you're oversimplifying the problem they had, and I would encourage you to dive into the details in the article. There wasn't a problem with the database; it was with the query used to generate the configs. So if an analogous issue arose with a query against one of many ad-hoc replicated sqlite databases, you'd still have the failure.

I love sqlite for some things, but it's not The One True Database Solution.


They also have to find and sign large-scale, long-term power supply deals.

