Hacker News | api's comments

I like that Zed has a disable all AI option. I use AI a little but prefer to use terminal based assistants or just cut/paste to a chat.

BTW Zed is great and I subscribed just to support them even though I don’t use their cloud. They should charge for it, even a little bit.

(I might try their AI features again but last time I found them less convenient than the other ways.)


What’s missing is the incentive. The budget and deficit increase regardless of who is in office because all the incentives are for it to increase.


Our family has EVs and rents a gas car for the occasional road trip. We did the math and it’s cheaper than buying and owning and doing the maintenance on a gas car.

If we did long road trips a lot we’d probably get rid of one EV and get an older gas car for that. It wouldn’t be the daily driver.


It would be the end of the US auto industry. Of course they kind of deserve it. The Japanese also beat them to a pulp for much the same reason: refusal to offer practical affordable efficient vehicles to the silent majority who just want a fucking car.


Remember that the simplest answer for this kind of thing is often: pork for politically connected tech and defense companies.

Not saying it couldn’t be for bigger things, just that there does not have to be a rational need other than handing out money.


Oh it’s Bluesky.

Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.

They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format are irredeemable.

AI is, if anything, a breath of fresh air by comparison.


You are wrong about AI "being a breath of fresh air" in comparison. For one, AI isn't something you use instead of a microblogging platform. LLMs push all sorts of utter trash in the guise of "information" for much the same reasons.

But I wanted to go out of my way to comment to agree with you wholeheartedly about your claims about the irredeemability of the "microblogging" format.

It is systemically structured to eschew nuance and encourage stupid hot takes that have no context or supporting documents.

Microblogging is such a terrible format in its own right that its inherent stupidity, and its consistent ability to viralize the stupidest takes, is essential to the Russian disinfo strategy. Those takes will nevertheless be consumed whole by the entire self-selecting group that thinks 140 characters is a good idea. They rely on it as a breeding ground for stupid takes that are still believable: thousands of rank morons puke up the worst possible narratives that can be constructed, but inevitably, in the chaos of human interaction, one will somehow be sticky and get some traction. Then they use specific booster accounts to get that narrative trending, and like clockwork all the people who believe there is value to arguing things out of context 140 characters at a time eat it up.

Even people who make great, nuanced and persuasive content on other platforms struggle to do anything but regress to the local customs on Twitter and BS.

The only exception to this has been Jon Bois, who is vocally progressive and pro labor and welfare policy, and often makes that opinion part of his wonderful pieces on sports history, journalism, and statistics. But his Twitter and Bluesky posts are low-context irreverent comedy and facetious sports comments.

The people who insisted Twitter was "good" or is now "good" have always just been overly online people with poor media literacy and a stark lack of judgement or recognition of tradeoffs.

That dumbass Russian person who insisted they had replicated the LK-99 "superconductor" (and that all the Western labs failed because the Soviets were best, or whatever) was constantly brought up here as proof of how great Twitter was at getting people information faster, when it actually was direct evidence of the gullibility of Twitter users who think microblogging is anything other than signal-free noise.

Here's a thing to think about: Which platform in your job gets you info that is more useful and accurate for long term thinking? Teams chats, emails, or the wiki page someone went out of their way to make?


AI has been a breath of fresh air to me, but I understand some of the problems with it.

Chatting with a bot and using it as a brainstorming or research assistant is the first time I’ve felt a sense of wonder since Web 1.0. It offers a way to search and interact with knowledge that is both more efficient and different from anything else.

One of the most mind-blowing uses to me is reverse idea search: “I heard the following idea once. Please tell me who may have said this.” Before LLMs this was utterly impossible.

But I also understand how these things work and that any fact or work that the LLM does must be checked. You can’t just mindlessly believe a chat bot. I can see how people who don’t keep that in mind could be led way out into lala land by these things.

I also see their potential for abuse, but that’s true of all tech. In prehistoric times I’m sure there were some guys sitting around a fire lamenting “maybe we should not have sharpened stick. Maybe we should not play god. Let stick be dull as god intended.”


The thing he’s actually angry about is the death of personal computing. Everything is rented in the cloud now.

I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.

His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.

It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.

It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.

There’s more but that’s the gist of it.

That being said, Google is one of the companies that helped kill personal computing long before AI.


You do not seem to be familiar with Rob Pike. He is known for major contributions to Unix, Plan 9, UTF-8, and modern systems programming, and he has this to say about his dream setup[0]:

> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.

[0]: https://usesthis.com/interviews/rob.pike/


I don't know his history, but he sounds like he grew up in the Unix world, where everything wanted to be offloaded to servers because it started in academic/government organizations.

Home Computer enthusiasts know better. Local storage is important to ownership and freedom.


Your data must be on local storage, or, if it's in the cloud, encrypted with keys only you control; otherwise it's not your data.
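As a toy sketch of the "keys only you control" idea (illustration of key custody only; XOR with a random pad is not something you'd ship, and real client-side encryption should use a vetted library such as libsodium or `cryptography`'s Fernet):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption": the same operation encrypts and decrypts.
    # The key must be at least as long as the data and never reused.
    # This only illustrates the key-custody point -- do NOT use for real data.
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(64)              # generated and kept locally
ciphertext = xor_cipher(b"my notes", key)  # this is all the cloud ever sees
recovered = xor_cipher(ciphertext, key)    # only the key holder can invert it
print(recovered)  # b'my notes'
```

The point is who holds `key`: if only you do, the cloud copy is opaque to the provider; if the provider holds it, the data is effectively theirs.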


We agree then? I'm not getting your point...


I wonder how 2012 Rob Pike would feel about the 2025 internet and resource allocation?


I do recognize his name and knew him as a creator of Go and a contributor to UNIX and Plan 9, but I didn’t know this quote.

In which case he’s got nothing to complain about, making this rant kind of silly.


This comment is the most "Connor, the human equivalent of a Toyota Accord" I've read in a while.


If you asked similar age groups this question in the 1990s you’d get stuff like rock star, actor/actress, and pro athlete.

Young kids usually have career aspirations that mirror what’s popular in their media world. It means little.


If you are putting something out for free for anyone to see and link and copy, why is LLM training on it a problem? How’s that different from someone archiving it in their RSS reader or it being archived by any number of archive sites?

If you don’t want to give it away openly, publish it as a book or an essay in a paid publication.


The problem is that LLM “summaries” do not cite sources. They furthermore don’t distinguish between making summaries and taking direct quotes; that “summary” is often directly lifting text that someone wrote. LLMs don’t cite in either case. It’s a clear case of plagiarism, but tech companies are being allowed to get away with it.

Publishing in a paid publication is not a solution because tech companies are scraping those too. It’s absolutely criminal. As an individual, I would be in clear violation of the law if I took text someone else wrote (even if that text was in the public domain) and presented it as my own without attribution.

From an academic perspective, LLM summaries also undermine the purpose of having clear and direct attribution for ideas. Citing sources not only makes clear who said what; it also allows the reader to know who is responsible for faulty knowledge. I’ve already seen this in my line of work, where LLMs have significantly boosted incorrect data. The average reader doesn’t know this data is incorrect and in fact can’t verify any of the data because there is no attribution. This could have serious consequences in areas like medicine.


It's important to consider others' perspectives, even if inaccurate. When I suggested "why not write a blog" to a relative who is into niche bug photos and collecting, they said they didn't want their writing, and especially their photos, to be trained on. Honestly they have valid points and an accurate framing of what will happen: it will likely get ingested eventually. I think they overestimate their work's importance a tad, but still, they seemed to have a pretty accurate gauge of likely outcomes. Let me flip the question: why should they not be able to choose "not for training uses" even if they put it up publicly?


> why should they not be able to choose "not for training uses" even if they put it up publicly?

I'm having trouble even parsing that question; "publicly" means that you put yourself out there, no? It sounds to me like that Barbra Streisand thing of building an ostentatious mansion and expecting no one to post photos of it.

I suppose you could try to publish things behind some sort of EULA, but that's expressly not public.


If you are having trouble understanding, just ask. Of course I'm talking about a website's terms of use.


As I understand it, terms of use on a publicly accessible page aren't enforceable. That's why it's legal to e.g. scrape pages of news sites regardless of any terms of use. If it's curlable, it's fair game (but it's fair for the site to try to block my scraping).
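As a sketch of the mechanics (not the law): the conventional opt-out signal for crawlers is robots.txt, which a well-behaved scraper checks before fetching. Python's stdlib can parse it; the `robots_txt` content and `example.com` URLs below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might publish to ask crawlers
# to stay out of /private/ -- advisory only, not enforceable.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite scraper consults the parsed rules before each fetch.
print(rp.can_fetch("*", "https://example.com/articles/1"))  # True
print(rp.can_fetch("*", "https://example.com/private/x"))   # False
```

Nothing stops a crawler from ignoring this file, which is exactly the enforceability gap being discussed: the site's only hard recourse is blocking at the server.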


This is not an answer to your question, but one issue is that if you write about some niche sort of thing (as you do, on a self-hosted blog) that no one else is really writing about, the LLM will take it as a sole source on the topic and serve up its take almost word for word.

That's clearly plagiarism, but it's also interesting to me because there's really no way for the user querying their favorite AI chatbot to tell if the answer has truthiness.

I can see a few ways this could be abused.


I don't see how this is different from the classic citogenesis process; no AI needed. If a novel claim is of sufficient interest, then someone will end up actually doing proper research and debunking of it, probably having fun and getting some internet fame.


> I don't see how this is different from the classic citogenesis process;

Lack of novelty doesn't remove it as a problem.


Agreed, it's definitely a problem, but I'm just saying that it's the basic problem of "people sometimes say bullshit that other people take at face value". It's not a technical problem. The most relevant approach to analyze this is probably https://en.wikipedia.org/wiki/Truth-default_theory


Are you suggesting that the AI chatbot have this built in? Because the chances that I, an amateur writing about a subject out of passion, have gotten something wrong approach 1 in most circumstances, and the chances that the person receiving the now-recycled information will perform these checks every time they query an AI chatbot are roughly 0.


These scrapers can bring a small website to its knees. Also, my "contribution" will be drowned in the mass, making me undiscoverable. Further, I can't help fearing a nightmare where someday I'm accused of using AI when I'm only plagiarizing myself.


I feel like Framework wasn’t for this customer. They would have been happier with a Lenovo or something, or a Mac.


I agree, although I do not think even Lenovo would be enough.

