Hacker News | sharms's comments

This is great - a MacBook Pro for Linux users: CNC-milled aluminum, a haptic trackpad, and 20+ hours of 4K video playback under Linux


The 20-hour figure is specifically for streaming 4K Netflix in the Windows app. As far as I know, Netflix doesn't even support 4K streaming on Linux.


Good callout - but people who already have it running Linux are reporting ~2.5 W power consumption at idle, so these numbers should hold up (comparable to the Dell XPS 14)
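As a back-of-the-envelope check on how a ~2.5 W idle draw translates to runtime: dividing battery capacity by average draw gives hours. The capacity and video-decode overhead below are hypothetical figures for illustration, not this laptop's actual specs.

```python
# Battery-life estimate from power draw. Capacity and the extra
# video-decode wattage are assumptions, not measured specs.
battery_wh = 60.0     # hypothetical battery capacity
idle_w = 2.5          # idle draw reported by early Linux users
video_w = 3.0         # assumed additional draw while decoding video

idle_hours = battery_wh / idle_w
video_hours = battery_wh / (idle_w + video_w)
print(f"{idle_hours:.0f} h at idle, {video_hours:.1f} h of video")
```

With those assumed numbers, 20+ hours of video playback is plausible as long as total draw stays near 3 W.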


This is because the "thinking" you see is a summary generated by a separate, highly quantized model - not the actual model's reasoning - in order to mask those tokens


This is awesome! Does it work on most G14s, or only specific years/models?


AWS actually maintains two libraries it uses instead: s2n-tls (https://github.com/aws/s2n-tls) and AWS-LC (https://github.com/aws/aws-lc)


The complaints about Apple come from decades of excellent design: they're about a pixel being off, or other small details that well-trained eyes spot. The problems with Windows are it forcing you to run OneDrive and then deleting your files


Yes, they have an Nvidia image. I used it on a 5080 just last weekend and it worked perfectly


Piggybacking on this: do all Nvidia cards have the same issue with Linux drivers, where the fan never goes below 30%? I have a 3090 on Ubuntu 24, and hours of googling netted nothing that worked.


FWIW, the fans on my 4070 Ti Super turn off at idle in Pop!_OS, Bazzite, and CachyOS (just like on Windows). I've definitely never seen them stuck at 30%.


That might just be your card idling at a high temperature; my 3070 Ti idles with 0% fan according to both nvtop and LACT.

Install GWE or LACT and you should be able to tweak the fan curves manually from Linux.
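A fan curve is just a mapping from GPU temperature to fan duty cycle, with linear interpolation between the points you set. A toy sketch of that interpolation (the curve points below are made up, not GWE's or LACT's defaults):

```python
# Hypothetical fan curve: (temperature °C, fan duty %). Below the first
# point the fan stays off, which is the "0% at idle" behavior described.
CURVE = [(40, 0), (60, 30), (80, 70), (90, 100)]

def fan_percent(temp_c, curve=CURVE):
    """Linearly interpolate fan duty for a given temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(35), fan_percent(70), fan_percent(95))
```

A card stuck at 30% would correspond to a curve whose lowest point never drops to zero; these tools let you pull that floor down.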


Created https://www.spreadcheer.net - a Christmas list app that stores data locally, isn't full of ads, requires no login, and should be pretty fast
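"Stores locally" just means the list is serialized on the user's own device rather than an account server. A minimal sketch of the idea; the file name and item shape are assumptions, not SpreadCheer's actual format (a web app would use browser storage rather than a file):

```python
# Persist a gift list locally as JSON: no server, no login required.
import json
from pathlib import Path

def save_list(path, items):
    Path(path).write_text(json.dumps(items))

def load_list(path):
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []

save_list("christmas.json", [{"gift": "socks", "for": "dad"}])
print(load_list("christmas.json"))
```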


FWIW, I bought the M4 Max with 128 GB, and it is useful for local LLMs doing OCR; I don't find it as useful for coding (à la Codex / Claude Code) with local LLMs. Even with GPT-5 / Claude Sonnet 4.5 my trust is low, and local LLMs lower it just enough to not be as useful. Heat is also a factor: Apple makes great hardware, but I don't believe it is designed for the kind of continuous load a desktop handles.


Thank you! PlanetScale has been great so far, especially the console, which shows query insights and the p95/p99 views. I've been able to build features without worrying about the database because it just works
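For anyone unfamiliar, a p95/p99 view reports the latency below which 95% (or 99%) of queries complete, which surfaces tail slowness that an average hides. A tiny sketch with made-up timings, using the nearest-rank method (dashboards differ in the exact interpolation they use):

```python
# Nearest-rank percentile: smallest sample with at least p% of the
# data at or below it. Latencies here are hypothetical, in ms.
def percentile(samples, p):
    s = sorted(samples)
    k = -(-len(s) * p // 100) - 1  # ceil(n * p / 100) - 1
    return s[max(0, int(k))]

latencies_ms = [12, 15, 11, 210, 14, 13, 16, 12, 95, 14]
print(percentile(latencies_ms, 50), percentile(latencies_ms, 95))
```

Note how the median looks healthy while p95 exposes the slow outliers, which is exactly why query-insight consoles lead with those views.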


I would imagine I'd have more karma if I were an AI

