You are not crazy, you are just waking up from the SaaS delusion. We somehow allowed the industry to convince us that paying $20/month to rent volatile compute, have our proprietary workflows surveilled, and get throttled mid-thought is an 'upgrade'. The pendulum is swinging violently back to local-native tools. Deterministic, privately owned, unmetered—buying your execution layer instead of renting it is the only way to build actual leverage.
If I could buy this to run it locally, what's that hardware even look like? What model would I even run on the hardware? What framework would I need to have it do the things Claude Code can do?
No one was tricked into spending money on this, and framing it that way is disingenuous. People rent models because (a) it moves the compute somewhere else and (b) the hosted models are simply higher quality.
This perfectly explains the trade-off. But from a pure UX perspective, freezing the input pipeline feels uniquely hostile. They could buffer the keystrokes invisibly in the background instead of locking the cursor, which creates the jarring perception that the site is actively fighting the user.
This is exactly why most devs just surrender and ship an 800MB Electron bundle for any cross-platform tool.
I finally got sick of that tradeoff. Ported a local video processing pipeline to Tauri v2 so it just uses the native macOS webview instead of fighting GTK or bundling Chromium. By hydrating heavy dependencies (like ffmpeg) via Rust at launch, the payload dropped to 30MB and idle RAM sits under 80MB.
Leaning on the native OS renderer is the only way cross-platform doesn't feel like a bloated compromise.