Hacker News | SOLAR_FIELDS's comments

I was paying IIRC $85 USD a month to Spectrum for 300 down and 10 up. Google Fiber came to my neighborhood a year and a half ago and offered 1 Gb symmetrical for $70, so roughly 3x more down and 100x more up for less money.

I’ll actually be optimistic and say we will make it a year before the price hikes start


It does feel like with each new frontier model release the major improvement I notice is that the model is, in fact, getting better at reading your mind. And what I mean by that is that it gets better at understanding the nuance and the subtleties of the intent of what you are saying better, and teasing out the actual intent of what you want better. So it gets easier and easier for the model to build a world around less input. So in a significant way, yes, newer models are reading your mind in a way, because they are probabilistically figuring out better how most humans communicate in natural language and filling in the gaps.

Re writing code: most people find the writing of code to be a chore. For those who don't, I don't envy them, because that is the part that just got completely destroyed by AI. It's becoming pretty abundantly clear that if you enjoy hand-writing code, it will be a hobby rather than something you can do professionally while succeeding over people who aren't writing by hand.


I think that the skill of hand-writing software is still useful in 2026. The vast majority of programming is a module calling another API. This does not spark joy. Truly interesting classes of problems (application of algorithms, or applying complex arcane knowledge) are often not handled well by LLMs. Also, what the author wrote really strikes a chord. We should write the exceptionally difficult sections ourselves so we understand how the software operates.

This reminds me of the observation that Anthropic's unsupervised LLM-generated Rust implementation of sqlite3 was correct for the subset of features they chose, but thousands of times slower (wall clock). Of course, performance will be the next skill to be targeted by expert-led RLHF, but this is a hard problem with many tradeoffs. It may prove to be time-consuming to improve.


Yeah, they have more "common sense", though not as much as I'd like. I used to think Opus was big, but after using it a lot, I think it should actually be a lot bigger. The difference from Sonnet to Opus is really noticeable, but the difference from Opus to human (in common sense) is also massive. I expect that as the hardware improves, we'll see 3-10x bigger models become the default.

Small models are making great strides of course, and perhaps we will soon learn to distill common sense ;) but subtlety and nuance appear physically bound to parameter count...


> teasing out the actual intent of what you want better.

Do you mean they ask clarifying questions before generating a response?


Kind of. I mean that they have gotten way better at taking some braindead sentence like "trace the performance of this app" and deriving what you actually mean, which involves looking at your codebase, identifying your deployment scenario, identifying the steps required to pull the traces, writing the query to sample the traces, then correlating it all together. That's just one example: you say 5 words and it's able to figure out exactly what you want it to do. It might ask questions to clarify, but otherwise it's really good at figuring out what you actually need.

In the dark before times of 6 months ago, the thing would go completely off the rails and fuck it all up. In today’s world, 80% of the time it’s gonna get you pretty close to what you actually want with literally 5 words for simple tasks.

Complex tasks require more upfront work, but my anecdata suggests that complex tasks are showing similar relative reductions in the upfront planning and effort needed to succeed.


In this day and age when a natural language query can produce the most AbstractBeanFactoryFactoryBeanFactory boilerplate at the same rate as a much more concise equivalent, does verbosity matter as much?

If you want to understand what is going on at all, then yes, good abstraction layers do matter, and a lot at that. Hashtag cognitive debt.

Sure, but if I can summon up a summary of what's going on in those abstraction layers in a matter of seconds, I don't particularly care whether they were overly verbose or not. There's no world anymore where you can hand me a pile of code and expect me to comprehend it faster than an LLM can walk the stack. Even the most beautiful, pristine code that would make Linus Torvalds praise you is easier to have an LLM parse and explain to you than to do it yourself.

And the LLM doesn't care. You could hand it a pile of the best code ever and a pile of brainfuck, and the difference in comprehending one versus the other is probably seconds, if not milliseconds, of compute time.


This works only until it doesn't. The stochastic nature of LLMs will not go away. When you have to fix that bug, but the LLM's explanation is an incorrect (root) cause analysis and you have to dig into the code yourself, you will regret not having taken more care earlier. I have had numerous scenarios in my latest project in which the LLMs simply did not get on the right track when I asked them about an issue I saw with a widget, or about making a custom widget (Python, tkinter). I don't think it will fare much better when analyzing existing code, because ultimately it does not understand things.

Given the stochastic nature, if I am forced to dig into the code because the LLM couldn't figure it out perhaps one out of every 10 times, it's still a huge bonus. Probably it depends on what you are working on. Esoteric COBOL? Erlang? Yeah, good luck; you're probably hand-steering the thing while the frontier model providers figure out how to train it better. Vanilla-ish Python/Golang/Typescript/Java? I pretty much never have to do that nowadays for things the model is familiar with. If I do have to dig into the code, I've never regretted doing it this way, because 90% of my use cases worked just fine, and in those 90% of use cases I was able to produce working code at 20x the rate of hand-writing it, if not more. Feels like a huge win to me.

To me that's a winner if I'm paying for a SaaS. If I have to go through a procurement cycle to talk to you instead of busting out my card, I'm probably going to first look and make sure no one else offers something remotely equivalent with self service. If someone else gives rough feature parity and offers self service, they will always win. Even if the self service is more expensive, the convenience of me not having to talk to you outweighs that.

I don't need to chat with you while you run a Q&A to decide what the correct amount of money to extract out of me is. Price your service accurately and accordingly instead and you'll get my business.


agree 100%

I found DBGate to be a pretty good cross platform FOSS option

Is it though? If the way that I’m going to edit those files is by typing the same natural language command into Claude code, and the edit operation to maintain it takes 20 seconds instead of 10, to me that seems pretty materially the same

Yes, it is

How so?

Well yeah… why else would the company have bought them? It wasn't because they just love ORMs and wanted to make sure this one specifically survived because they happened to be using it. It doesn't work like that.

You can mostly just skip past whatever vague platitudes the announcement post makes toward stuff like this and instead just watch the product closely over the next few months to figure out how it’s gonna go. Trusting anything said in the announcement is a recipe for getting burned later


I don't use this specific ORM, but ORMs in general are trying to solve a very hard problem, and as such there are a lot of ways to mess it up. If you can be the least bad at it and create slightly fewer dumpster fires than everyone else, that's a huge thing.

Apple has historically never been good at multiple users on the same machine. Even macOS is still pretty bad at it. IMO incentives are not aligned here: they want everyone purchasing their own iPad, so I suspect their strategy is not to invest too much in profile management, as it risks cannibalizing their hardware sales.

Like 20 years ago, OS X Server had pretty great support for it.

I worked in a university lab and had an account on the lab server. I could walk up to any computer in the lab, log in, and get the exact same desktop experience with all my files and settings. The computing power was all on the local machine, but it basically mounted my user folder from the server.

That was the only time I worked anywhere with that setup on Macs, but it worked so well. Though it was admittedly not your standard office environment — there were frequent compelling reasons for me to be using different machines in different parts of the lab, and not a lot of compelling reasons for me to use that account from a computer on a remote network.


20 years ago, I would still have bought a Mac, nowadays they don't sell any hardware that I would pay for.

I don't pay extra to have fewer options than on PC hardware; my desktops and laptops can be upgraded at will and without gunpoint prices (setting aside the whole AI stuff that affects everyone anyway). Thus all my use of Apple hardware is project-specific and drawn from the company's hardware pool.


This is such a weird take, because it's pretty well established that if you just need an average computing device, Apple's cheapest options are often, dollar for dollar of compute, way better than the competition.

If you need anything other than a base configuration, that's no longer true, because Apple makes stupid money on its $200 upgrades from 8 GB of RAM. But if you are an everyday consumer who doesn't need anything beyond the base configuration, you would be hard pressed to convince me that the base models of their products are worse value than their non-Apple equivalents.


No they aren't, because Apple doesn't offer good gaming options, memory, or disk sizes at comparable PC prices. In tier 2, tier 3, etc. world economies, only the rich kids can afford Apple prices.

It is only well established in the minds of those earning US salaries, or living in countries with similar, G8-style economies.


> Even MacOS is still pretty bad at it.

What problems do you see with multiple users on macOS? I don't use it intensively, but I've never noticed issues.


As a very simple example, airdrop to macOS with multiple logged in users will frequently pop up the confirmation notification in the user account that is not active.

FaceTime too. I shared a laptop with my wife for about 2 years, so it was an OK experience, but we noticed those little things.

I wonder if this was a design choice, so if I’m on the computer and a call comes in for them, I can let them know and maybe hand it off?

The alternative would be they would have to answer on their phone (assuming they have an iPhone, which may not always be the case), then use handoff to get it on the Mac.


Could be, but I've never wanted that. I just answer it on my iPhone or my desktop Mac.

Perhaps I don't understand it, but the encryption security model for macOS/iPadOS/iOS currently doesn't allow a different encryption key per user. So any user can decrypt the whole drive, and while the OS does enforce user permissions, the security model can't support true multi-user isolation.

I actually don't know whether Windows or ChromeOS support this either, but it is certainly something Linux can do with LUKS et al.


Yep, on ChromeOS each user's home dir is separately encrypted with their own password.
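A minimal sketch of what that per-user model buys you, in Python's stdlib. This is illustrative only (the passwords, salts, and scrypt parameters are made up, not ChromeOS's actual scheme): each user's volume key is derived from their own password, so compromising one account yields nothing about another user's key.

```python
import hashlib
import os

def derive_user_key(password: str, salt: bytes) -> bytes:
    # Derive a 32-byte encryption key from a user's password with scrypt.
    # A different salt per user guarantees distinct keys even for equal passwords.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

alice_salt, bob_salt = os.urandom(16), os.urandom(16)
alice_key = derive_user_key("alice-password", alice_salt)
bob_key = derive_user_key("bob-password", bob_salt)

# Separate keys: neither user can decrypt the other's home directory.
assert alice_key != bob_key
assert len(alice_key) == 32
```

Contrast with a single-key whole-disk model, where any user who can unlock the disk can read every home directory and permission checks are the only barrier.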

The USB security prompt disappears when multiple macOS accounts are signed in.

Still a problem for me, and has been for years, but I may be holding it wrong. https://discussions.apple.com/thread/255929514?sortBy=rank

The solution posted in the discussion is not really secure.


For me, quitting Preview (or maybe it's Settings) resolves it.

Non-admins getting prompts for system and app upgrades is mildly annoying. The bigger one in a family setting is the clunky sharing: there's no good way to share a photo library or music library between users. The Unix approach of making a folder shared by a group doesn't usually work for Apple apps.
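The Unix approach referred to is a setgid group directory; a sketch of the pattern (the directory name is a placeholder, and in real use you'd also `chgrp` it to a shared group, which needs appropriate privileges; none of this fixes Apple apps that keep per-user library databases):

```python
import os
import stat
import tempfile

# A shared directory with mode 2775 (rwxrwsr-x): the setgid bit makes
# new files inherit the directory's group, and group members can write.
shared = os.path.join(tempfile.mkdtemp(), "family-photos")
os.makedirs(shared)
os.chmod(shared, 0o2775)

mode = os.stat(shared).st_mode
assert mode & stat.S_ISGID  # setgid bit set on the directory
assert mode & stat.S_IWGRP  # group has write permission
```

This works fine for plain files, but apps like Photos and Music manage a single-owner library bundle, which is why the group trick falls over there.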

Switching users while changing displays often results in an incorrect resolution. That's such a basic thing: different users have different preferences for their displays and for the keyboards attached to those displays. Yet this doesn't work reliably, as if at certain moments the login window just doesn't want to adjust resolutions.

As soon as I added a 2nd user, my Samba share totally broke and days later I still don't have it working. It was fine for over a year and now I'm close to deleting my 2nd user just so I can access my Mac Mini across the network again.

"Fast user switching" has been a feature since Mac OS X 10.3 Panther. It kind of requires an instructional video though.

Here's an early one I found: https://www.youtube.com/watch?v=nJKRgs2IUg4&t=7s ("18 years ago")


The rhetoric here around this stuff all feels pretty pointless. It's very obvious what happened here. The dictatorship wanted to weaponize Anthropic's AI at a frightening scale, and Anthropic said no. OpenAI said yes. All the discussion around what is lawful is pretty irrelevant, since this administration completely ignores what is lawful anyway.

Personally, I find it interesting. But I guess "more than you wanted to know" is accurate in your case.

If you read the post, "this administration completely ignores what is lawful anyway" is something they mention, in talking about how prior administrations have been doing similar things for quite a while.
