
You are not crazy; you are just waking up from the SaaS delusion. We somehow allowed the industry to convince us that paying $20/month to rent volatile compute, have our proprietary workflows surveilled, and get throttled mid-thought is an 'upgrade'. The pendulum is swinging back hard toward local-native tools. Deterministic, privately owned, unmetered: buying your execution layer instead of renting it is the only way to build real leverage.


I'm quite aware of my dependency, and I've been balancing it back and forth regularly over the last 10 years.

Owning is expensive. Not owning is also expensive.

Energy in Germany is at 35 cents/kWh, and it skyrocketed to 60 during the Russian gas crisis.

I'm planning to buy a farm and add cheap energy, but that investment will still take a bit of time. Until then, space is scarce.


If I could buy this to run it locally, what's that hardware even look like? What model would I even run on the hardware? What framework would I need to have it do the things Claude Code can do?


I don't use local LLMs. It's mostly the closed-source subscriptions that are not private; it really is a choice.

There are many cloud providers of zero-data-retention LLM APIs, some even with cryptographic attestation.

They are not throttled; you can negotiate an agreed rate limit.
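As an aside, if you do negotiate a fixed rate limit, the usual way to stay inside it client-side is a token bucket. A minimal sketch (not tied to any particular provider's API; the class and parameter names are illustrative):

```typescript
// Token-bucket limiter: refills `ratePerSec` tokens per second up to
// `burst`, and only admits a request when a full token is available.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private ratePerSec: number,
    private burst: number,
    now: number = Date.now(),
  ) {
    this.tokens = burst;
    this.last = now;
  }

  // Returns true if a request may proceed at time `now` (milliseconds).
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

You gate each outgoing API call on `tryAcquire()` and queue or retry when it returns false, so you never trip the provider's limit instead of reacting to 429s after the fact.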


Would you mind naming some of your favorite providers?


API: Fireworks

Fast: Inception Labs or Cerebras

Confidential: tinfoil.sh, Phala

TTS/STT: Groq

Routers: Vercel (or OpenRouter if they don't have the model).

Search: unsolved; I just can't get ZDR, locally hosted.


No one was convinced to spend money on the things you're describing; that's just disingenuous. People rent models because (a) it moves compute elsewhere and (b) the hosted models are higher quality.


(c) It's turnkey, instead of requiring months or years of custom development and ongoing maintenance.


This explains the trade-off perfectly. But from a pure UX perspective, freezing the input pipeline feels uniquely hostile. They could buffer the keystrokes invisibly in the background instead of locking the cursor, which creates the jarring perception that the site is actively fighting the user.
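The buffering idea is simple to sketch: queue keystrokes while the app is busy and replay them when it frees up, instead of disabling the field. A minimal illustration (class and handler names are mine, not from any specific site):

```typescript
// Instead of locking the input during heavy work, buffer keys and
// replay them once the busy flag clears, so the user never sees
// dropped or rejected keystrokes.
class KeystrokeBuffer {
  private queue: string[] = [];
  private busy = false;

  // `apply` is whatever actually inserts the key into the input.
  constructor(private apply: (key: string) => void) {}

  setBusy(busy: boolean): void {
    this.busy = busy;
    if (!busy) this.flush();
  }

  onKey(key: string): void {
    if (this.busy) {
      this.queue.push(key); // buffer invisibly instead of dropping input
    } else {
      this.apply(key);
    }
  }

  private flush(): void {
    for (const key of this.queue) this.apply(key);
    this.queue = [];
  }
}
```

In a real page you'd wire `onKey` to a `keydown` listener and `setBusy` around the expensive work; the point is that the user's typing survives the stall.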


This is exactly why most devs just surrender and ship an 800MB Electron bundle for any cross-platform tool.

I finally got sick of that tradeoff. Ported a local video processing pipeline to Tauri v2 so it just uses the native macOS webview instead of fighting GTK or bundling Chromium. Hydrating heavy dependencies (like ffmpeg) via Rust at launch dropped the payload to 30MB, and idle RAM sits under 80MB.

Leaning on the native OS renderer is the only way cross-platform doesn't feel like a bloated compromise.
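The "hydrate at launch" pattern is just: ship a tiny bundle and fetch the heavy binary on first run, skipping the download when it's already cached. A rough sketch with the IO injected so the logic is testable; in a real Tauri app the callbacks would wrap the Rust side, and all names here are hypothetical:

```typescript
// Lazy dependency hydration: download a large binary (e.g. ffmpeg)
// only if it isn't already on disk. IO is injected rather than
// hardcoded to any particular filesystem or HTTP API.
interface HydrationIO {
  exists(path: string): Promise<boolean>;
  download(url: string, dest: string): Promise<void>;
}

async function ensureBinary(
  io: HydrationIO,
  url: string,
  dest: string,
): Promise<"cached" | "downloaded"> {
  if (await io.exists(dest)) return "cached"; // fast path on every later launch
  await io.download(url, dest); // one-time cost on first run
  return "downloaded";
}
```

First launch pays the download once; every launch after that hits the cache, which is how the shipped payload stays small without giving up the heavy tooling.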


Hit this exact wall with desktop wrappers: I was shipping an 800MB Electron binary just to orchestrate a local video processing pipeline.

Moved the backend to Tauri v2 and decoupled heavy dependencies (like ffmpeg) so they hydrate via Rust at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.

Skipping the default Chromium bundle saves an absurd amount of overhead.

