Hacker News | esyir's comments

I'll add an expansion here. It's more useful to you locally, since you have excess compute that's generally wasted. If you're serving multiple users and trying to max output, it might cost you some in that case.

I do think that this is natural. When you use LLM coding tools, you're becoming a lot more like an architect/staff engineer/manager, rather than the direct coder. You're setting out the spec, coming up with the design, and coming up with the high-level structure of the project.

However, this comes at the cost of losing track of the minute details of the implementation because you didn't write it yourself. I find it a bit analogous to code I've reviewed vs code I've written.

However, I've found using AI for code structure summary and questioning tends to be a good way to get around it. I might forget faster, but I also pick it up faster.


My impression, seeing the rising AI use rates everywhere, is that you are in a bubble.

Could be me too, but seeing China's general societal infatuation with AI outpace the US by orders of magnitude, I think that's a bit less likely.


If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.

A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.

These are pretty clear laws established by a democratic government with a pretty good record for rule of law.


Sure, then they can go demand said standards for social media platforms, including an expected incident rate per N posts, just as car companies are not expected to have car fatality rates of zero.

The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.


Do car companies sell cars without airbags or seat belts? What about cars that haven't been crash tested? What do you think happens to them if they don't do this?

Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?


We're talking about CSAM, right? Which all platforms remove proactively, build models to detect, and essentially always respond to when informed.

Demanding some perfect immediate magic response there is the equivalent of asking car manufacturers to prevent all deaths.


Do they remove it and respond really though?

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Here it's claimed that it's the users' fault. I disagree. Completely. Staying on topic, many of these companies have laid off the employees who tried to prevent things like this:

https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html

https://www.zdnet.com/article/us-ai-safety-institute-will-be...

https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...

The list of not even trying anymore goes on and on. Mechahitler was also fun.


When Ford dngaf with the Pinto, and GM with the Corvair (like tech companies do not gaf), they deservedly got this same level of contempt and demand for oversight. A dude named Ralph Nader went on a huge crusade about it, and carmakers got a ton more oversight, safety requirements, etc. put on them.

So yes, yes, let's do like we did with cars.


I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.

For a community full of engineers, I'm always surprised that people take absolutist views on minor technical decisions, rather than thinking through the tradeoffs that got them there.


The obvious tradeoff here is engineering effort vs. development cost, and when the tech support solution is "have you tried turning it off and on again?", we know which path was chosen.


Not the OP, but my interpretation here is that if you model the replies as some point in a vector space, assuming points from a given domain cluster close to each other, replies that span two domains need to "tunnel" between these two spaces.
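The intuition above can be sketched with a toy example (the vectors here are made up for illustration, not real embeddings): replies from one domain cluster near each other, and a reply spanning two domains lands between the clusters, closer to each than the clusters are to one another.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 2-D "embeddings" for two topic clusters.
cooking = np.array([1.0, 0.1])
physics = np.array([0.1, 1.0])

# A reply mixing both domains: roughly the midpoint between the clusters.
mixed = (cooking + physics) / 2

# The mixed reply is more similar to each cluster than
# the two clusters are to each other.
print(cosine(mixed, cooking), cosine(mixed, physics), cosine(cooking, physics))
```

In this picture, "tunneling" is just the observation that the mixed reply sits in the low-density region between the two clusters rather than inside either one.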


I think my ideal is basically DnD, but with an AI DM.

This is something that I'm hoping the current LLM and future AI work eventually get us to. If we can get persistent context and memory, or at least a simulacrum of that, we could get to truly dynamic, reactive worlds.


I'd say that the internet has also strongly lowered the barriers to external propaganda and influence, which is another major factor here. When you've got a huge swarm of "people" with no stake, or even a negative stake, in your country, that's a naturally destabilizing factor.


You mean like the countless western "safety", copyright and "PC" changes that've come through?

I'm no fan of the CCP, but it's not as though the US isn't hamstringing its own AI tech in a different direction. That's an area China can exploit by simply ignoring the burden of US media copyright.


Just a note: ChatGPT does retain a persistent memory of conversations. In the settings menu, there's a section that lets you tweak or clear this persistent memory.

