Hacker News | nacs's comments

I only installed it today for the first time, but yes, it does have a very prominent button to completely disable all AI.

Thanks, I might give it a spin on the weekend then and see how well it performs compared to Sublime Text. If what other people say here is true - that it uses considerable CPU and GPU resources while idle - then I'll know it's not a usable piece of software.

Just checked since you made me curious. With 1 PHP and 1 nodejs project in 2 windows, here's the usage (on Ubuntu Linux):

   Zed is using:
   - CPU: 4.7%
   - RSS: ~2.1 GB
The bulk of it is language servers (TypeScript/tsserver is adding ~600MB). Zed by itself is ~790MB RAM.
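
For anyone who wants to reproduce numbers like these, here's a rough sketch that sums RSS from /proc on Linux. The process-name matches ("zed", "tsserver") are just guesses for illustration, and shared pages get double-counted, so treat the output as a ballpark:

    # Rough per-process RSS tally on Linux via /proc (illustrative, not Zed tooling).
    import os

    def rss_mb(needle: str) -> float:
        # Sum VmRSS over every process whose command line contains `needle`.
        total_kb = 0
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/cmdline", "rb") as f:
                    cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
                if needle not in cmdline:
                    continue
                with open(f"/proc/{pid}/status") as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total_kb += int(line.split()[1])  # reported in kB
            except OSError:
                continue  # process exited mid-scan
        return total_kb / 1024

    for needle in ("zed", "tsserver"):
        print(f"{needle}: ~{rss_mb(needle):.0f} MB RSS")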

Is it constantly using that 4.7% CPU? (Depending on how many cores the processor has, that could amount to one or two full cores.)

For (what should be) an event-driven application, using any amount of CPU while sitting idle is a big no-no. Anyone on a laptop should pay attention.


Even Sonnet 4.6 is at a 9x multiplier (previously 1x)!

The only model I even used on Copilot was Sonnet, and now it's got a ridiculous multiplier.

At this point they might as well just charge per Million tokens like every other provider instead of having a subscription.
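
Back-of-the-envelope on what a 9x multiplier does to a fixed monthly allowance (the 300-request figure below is just an assumed number for illustration, not a quoted plan detail):

    monthly_allowance = 300   # assumed included "premium requests" per month (illustrative)
    multiplier = 9            # e.g. Sonnet at 9x, previously 1x

    print(monthly_allowance // multiplier)  # 33 usable Sonnet requests instead of 300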


They do for any new plan. Those multipliers are only for people who paid annually. After their subscription ends they'll move to token-based pricing like everyone else.

I understand it like this: the $10 is for handling the business record, and maybe also the harness; I get a few credits to kick the tires, but to use it for anything real it's pay-as-you-go at token list prices.

> At this point they might as well just charge per Million tokens like every other provider instead of having a subscription.

Pretty sure that's what they will eventually do


... that is exactly what they will do. Just click the link in this thread, or read the headline.

Why have the multipliers at all, then?

The multipliers are there only for current annual plan customers. After 2026 it's all tokens.

I thought I was smart for buying the annual plan after I graduated, lost my student plan, and then had GitHub take away the Copilot Pro I got for free for being an author of a popular OSS project. Turns out I'm being punished for making that year-long commitment to them. I like to think I'm only a moderate user of GHCP, so this is just terrible for me. I'm honestly thinking about cancelling and switching to alternatives while also looking at investing in a local LLM setup.

So they're changing, for the worse, the product that people already paid an annual subscription for. That's asking for legal complaints.

It's not just the price of the console itself, as mentioned in the article. Platforms like the PlayStation and Xbox require a *very* expensive SDK.

Playdate's SDK is free.


You should really look at the 2nd link; it's much worse than telemetry.

> opencode will proxy all requests internally to https://app.opencode.ai

> There is currently no option to change this behavior, no startup flag, nothing. You do not have the option to serve the web app locally, using `opencode web` just automatically opens the browser with the proxied web app, not a true locally served UI.

> https://github.com/anomalyco/opencode/blob/4d7cbdcbef92bb696...


That is the address of their hosted WebUI, which connects to an OpenCode server on your localhost. It would be nice if there were an option to self-host it, but it is nowhere near as bad as "proxying all requests".


It looks like the author has kept it updated since then.

They mention the "Qwen3.5 (35B)" model, for example, which was released around two weeks ago.


For some anecdata, I set up Qwen3.5 on an RX 7900 XTX last weekend. It runs fine; I did some simple coding prompts and got responses in 15-30 seconds. It's my first foray into running models locally, just to see what's possible, and I guess I'm pleasantly surprised so far.

Also, the entire setup was done through Codex. I asked Codex to figure out how to run models locally given my architecture (Ubuntu, AMD GPU). It told me which steps to apply and I hit zero snags.
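
For anyone curious what that kind of setup looks like once it's running, here's a minimal sketch against an Ollama-style local HTTP API on its default port. The model tag is a placeholder rather than my exact configuration:

    # Minimal local-inference call against an Ollama-style server on localhost.
    # The model tag and prompt are placeholders for illustration.
    import json
    import urllib.request

    payload = {
        "model": "qwen-coder",          # substitute whatever tag is actually installed
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])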


LosslessCut has both an HTTP API and a CLI, so it could be controlled via a lightweight TUI if someone wanted to build one.
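
Something like this could be the skeleton of that wrapper. The port and endpoint shape below are assumptions for illustration only - check LosslessCut's own docs for the actual HTTP API surface:

    # Hypothetical thin wrapper around LosslessCut's local HTTP API.
    # Port and endpoint path are assumptions, not the documented API.
    import urllib.request

    def send_action(action: str, port: int = 8080) -> None:
        req = urllib.request.Request(
            f"http://127.0.0.1:{port}/api/action/{action}", method="POST"
        )
        with urllib.request.urlopen(req) as resp:
            print(action, "->", resp.status)

    # A tiny "TUI": type an action name, send it to the running app.
    while True:
        action = input("action (blank to quit)> ").strip()
        if not action:
            break
        send_action(action)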


They may, but note that this isn't an official Newgrounds project - this is just a user ("Bill") posting on his own Newgrounds blog that he has made this (it's not Newgrounds' official blog).


I meant Newgrounds the community.


Yep, the email they sent out is terribly worded so it looks like the age requirement is for Zed itself.

Their actual blog ( https://zed.dev/blog/terms-update ) says the age requirement is only for their AI service (still not the best wording but a little clearer):

> Age requirement. You must be 18 or older to use Zed’s AI-enabled software-as-a-service offering (the “Service”).


This still sounds odd. Where is this restriction coming from?


It has binding arbitration. I assume/hope you must be an adult to sign away your right to sue.


Speculation I've seen is that whatever LLM they're reselling has this requirement itself and they need to pass it along.

I had expected this to be about their multi-user editing and chat features.


> I really hope more people realize that local LLMs are where it's at

No worries, the AI companies thought ahead - by sending GPU, RAM, and now even hard-drive prices through the roof, they've made sure you won't have a computer to run a local model on.


What model and hardware powers this?

Is this a Google T5-based model?


3-bit hard-wired Llama 3.1 8B ( https://taalas.com/the-path-to-ubiquitous-ai/ )


3-bit is a bit ridiculous. From that page I'm unclear whether the current model is 3- or 4-bit. If it's 4-bit… well, NVIDIA showed that a well-organized model can perform almost as well as 8-bit.
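
For a rough sense of what's at stake, the back-of-the-envelope weight-storage math for an 8B-parameter model (weights only, ignoring quantization scales, activations, and KV cache):

    params = 8e9
    for bits in (3, 4, 8, 16):
        gib = params * bits / 8 / 2**30
        print(f"{bits}-bit: ~{gib:.1f} GiB of weights")
    # 3-bit: ~2.8, 4-bit: ~3.7, 8-bit: ~7.5, 16-bit: ~14.9 GiB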

