Hacker News | GodelNumbering's comments

I saw the following from LinkedIn this morning:

> Update to our terms and data use: As of November 3, 2025, we are using some of your LinkedIn data to improve the content-generating AI that enhances your experience, unless you opt out in your settings. We also updated our terms. See what's new and how to manage your data.

Frankly, it is unacceptable to tell a user "oh we have been using your personal data for 5 months already and will continue to do so unless you explicitly opt out". Are there any transparent alternatives to LinkedIn (not the trust me bro variant)?


I am building corvi.careers; it's a job search engine rather than a social network, though.

No consensus, but a decent definition is: the ability to utilize resources to achieve outcomes.

Like kicking a ball to win a soccer match?

Yes, kicking a ball in the right way does require intelligence.

That sounds more like competency, which funnily enough is only loosely correlated with intelligence.

Hey, thanks for doing this! I will be looking into the GPT versions.

It doesn't support Gemini CLI because Google seems to ban users for using it; there was a big controversy about that some time ago, so I decided to leave it alone for now. Also, feel free to reach out to me if you want to discuss anything specific.


UPDATE: ChatGPT 5.5 is now fully supported, via both Codex and the API.

Neat! I'll give it a try. It'd be nice to mix GPT 5.5 with a local qwen3.6 to see whether the context and retrieval optimizations can alleviate the context limitations of running such a model on a consumer card (I have a single 3090).
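For a sense of why context is tight on a 24 GB card, here is a rough KV-cache estimate. The config values below are assumed placeholders for a GQA-style 32B-class model, not the actual qwen3.6 specs:

```python
def kv_cache_gib(layers, kv_heads, head_dim, seq_len, bytes_per=2):
    """Estimate KV-cache size in GiB: 2 tensors (K and V) per layer."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per / 2**30

# Assumed config: 64 layers, 8 KV heads (GQA), head dim 128, fp16 cache.
# At a 32k-token context the cache alone takes 8 GiB,
# before counting the weights themselves.
print(kv_cache_gib(64, 8, 128, 32_768))
```

With quantized weights for a 30B-class model already filling most of a 3090, the cache leaves little headroom, which is why offloading context work to a hosted model is attractive.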

Dirac doesn't use any of that, but the memory part may be something I explore in the future.

Protocol overhead

It supports LM Studio, or you can start a local endpoint and then run:

    OPENAI_COMPATIBLE_CUSTOM_KEY="xxx" dirac -y --provider "https://localhost/v1" --model <model_name> "hi..."
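As a concrete sketch, with LM Studio's local server (it serves an OpenAI-compatible endpoint on port 1234 by default; the model name and prompt below are placeholders):

```shell
# Start the server in LM Studio first (Developer -> Start Server).
# Local servers typically don't validate the key, so any value works.
export OPENAI_COMPATIBLE_CUSTOM_KEY="local"
dirac -y \
  --provider "http://localhost:1234/v1" \
  --model "qwen3-32b-instruct" \
  "explain the build errors in this repo"
```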


Currently, the 14 most popular languages (https://github.com/dirac-run/dirac/tree/master/src/services/...). It's easy to add more.

Good points.

1. I have been trying to benchmark open-weights models but keep running into timeouts due to slow inference (terminal-bench tasks have strict timeouts that you are not allowed to modify). I posted my frustration here: https://www.reddit.com/r/LocalLLaMA/comments/1stgt39/the_fru...

2. Done (updated the GitHub readme).

3. Yes, on average the times were shorter, but I did not benchmark it rigorously because the model outputs slow down at random times.

4. Added info on this too


1. Good point, I didn't know about the timeouts; that's rough for the benchmarks. Though IMO they don't necessarily need to be "SWE-official" to have value, if the only difference is disabling those timeouts.

3. Maybe you could instead report the output tokens used (including thinking), as that's a reasonable proxy for speed. I'd guess input tokens would be similar unless the AST usage, hashes, etc. increase them a lot, which seems unlikely.


> Web tools route through api.dirac.run

This is something that needs to be deprecated entirely. The web fetch tool is no longer used and no longer works; there is nothing even listening at api.dirac.run. This was the result of me stretching my capacity too thin while bulk-renaming cline.bot to dirac.run.

UPDATE (+1h): both the web search and web fetch tools are now nuked.


Thanks! Since it is a Cline fork, the telemetry mechanism is inherited. I left it in because it might help debug issues. There is no evil purpose behind it, nor does it create or store any PII.
