Hacker News | opan's comments

Perhaps obvious to some, but this does not seem to be about learning in the traditional sense, nor a library in the book sense, unfortunately.

Real high framerate is one thing, but the TV setting fakes it with interpolation. There's not really a good reason to do this; it's trickery meant to deceive you. Recording a video at 60fps is fine, but that's just not what TV and movies actually do. No one is telling you to watch something at half the intended framerate, just at the actual framerate.

In principle, I agree with you.

I would vastly prefer original material at high frame rates instead of interpolation.

But I remember the backlash against “The Hobbit: An Unexpected Journey” because it was filmed at 48 fps, and that makes me think that people dislike high frame rate content no matter the source, so my comment also covers those cases.

Also, because of that public response, we don't have more content actually filmed at high frame rates =)


I wanted to like The Hobbit in 48, but it really didn't work for me. It made everything look fake, from the effects to the acting. I lost suspension of disbelief. If we want high frame rate to be a thing, then filmmakers need to figure out a way of directing that looks plausible at a more realistic speed, and that probably means less theatrics.


>and if I could avoid playing against MnK players while I’m on controller

If you can stand to move away from an Xbox controller (they're the only ones without gyro still) and learn gyro/flick stick, it levels the playing field a lot more. Flick stick's instant turns even give it some advantages over KB+M.

8BitDo's controllers are the best-supported on Steam at the moment, with gyro, analog triggers, and back buttons now all fully working at the same time in DInput mode. I use the Pro 2, but if you prefer the Xbox layout, you may want one of their Ultimate controllers. Don't buy the officially licensed Xbox models, as I believe those lack gyro. Many have Nintendo button labels, but Steam has a toggle to use the Xbox layout.


I really hate UI button mismatch in Steam. I grew up on PlayStation controllers; I couldn't even tell you the Xbox layout from memory. I was once forced to use a Switch controller (was traveling, forgot my main) while the on-screen UI was Xbox. I had so much trouble at first, especially since the Xbox and Switch controllers share button names (in swapped positions), and even the "x" made me think of the PlayStation x. Just some musings.

It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in that chat again vs reading 6 tabs and redoing your search seem to be the big killer feature. The main "new" info is often incorrect info.

If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be about as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
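The workflow described above can be sketched in a few dozen lines of Python. This is a hedged sketch, not a stable recipe: the `html.duckduckgo.com` endpoint, its `result__a` anchor class, and the User-Agent requirement are assumptions about DDG's current HTML frontend, which may change or block scripted clients, and the dump keeps raw page source rather than cleanly extracted text.

```python
# Sketch: grab the links from DuckDuckGo's HTML results page, fetch each
# linked page, and dump everything into one file to browse in vim/emacs.
# The endpoint URL and "result__a" class name are assumptions, not an API.
from html.parser import HTMLParser
import urllib.parse
import urllib.request


class ResultLinks(HTMLParser):
    """Collect href attributes from <a class="result__a"> result anchors."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "result__a" in a.get("class", ""):
            self.links.append(a.get("href", ""))


def extract_links(page_html):
    parser = ResultLinks()
    parser.feed(page_html)
    return parser.links


def dump_results(query, path, limit=10):
    """Fetch the first page of results and each result's raw text into one file."""
    q = urllib.parse.urlencode({"q": query})
    req = urllib.request.Request(
        "https://html.duckduckgo.com/html/?" + q,
        headers={"User-Agent": "Mozilla/5.0"},  # assumed: default UA gets rejected
    )
    with urllib.request.urlopen(req) as resp:
        links = extract_links(resp.read().decode("utf-8", "replace"))
    with open(path, "w") as out:
        for url in links[:limit]:
            try:
                with urllib.request.urlopen(url, timeout=10) as page:
                    out.write(url + "\n" + page.read().decode("utf-8", "replace") + "\n\n")
            except OSError:
                pass  # dead or unreachable link; skip it


# Usage (not run here):
#   dump_results("some question", "results.txt")
#   then: vim results.txt
```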

It has no real human counterpart, in the sense that humans still go to the library (or a search engine) when they don't know something; we don't have the contents of all the books (or articles/websites) stored in our heads.


> I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.

If they do, you’ll be in good company. That post is about the exact opposite of what people usually link it for. I’ll let Dan explain:

https://news.ycombinator.com/item?id=27067281


Dan makes a case for being charitable to the commenter and how lame it is to neener-neener into the past, not that it has some opposite meaning everyone is missing out on.

Dan clearly references how people misunderstand not only the comment (“he didn't mean the software. He meant their YC application”) but also the whole interaction (“He wasn't being a petty nitpicker—he was earnestly trying to help, and you can see in how sweetly he replied to Drew there that he genuinely wanted them to succeed”).

So yes, it is the opposite of why people link to it (a judgement I'm making; I'm not arguing Dan has that exact sentiment), which is to mock an attitude of hubris and of not understanding what makes a good product, an attitude that wasn't there.


The comment isn't infamous because it was petty or nitpicking. It's infamous because it was so poorly communicated and because the author was so profoundly out of touch with the average person that they had lost all perspective.

It's why it caught the zeitgeist at the time and why it's still apropos in this conversation now.


> It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.

None of those things are true. Which is the point I’m making. Go read the original conversation. All of it.

https://news.ycombinator.com/item?id=9224

Don’t skip Brandon’s reply.

https://news.ycombinator.com/item?id=9479

It is absurd to claim that someone who quickly understood the explanation, learned from it, and conceded where they were wrong is somehow “profoundly out-of-touch” and “lost all perspective”. It’s the exact opposite.

I agree with Dan that we’d be lucky if all conversations were like that.


I think you should take your own advice and re-read the conversation without your pre-conceived conclusion.

Ironically your own overly verbose and aggressive comments here fall into the same trap.


> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.

Curiously, literally nobody on earth uses this workflow.

People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.


> The accuracy isn’t perfect

The reason people don't use LLMs to "trivially save hours or days of work" is that LLMs don't do that. People would use a tool that works. This should be evidence that the tools provide no exceptional benefit; why do you think that is not true?


The only way LLM search engines save time is if you take what they say at face value. Otherwise you still have to fact-check whatever they spew out, which is the actual time-consuming part of doing proper research.

Frankly I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything it says.


Of course you have to fact check - but verification is much faster and easier than searching from scratch.

How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.

Because it gives you an answer and all you have to do is check its source. Often you don’t have to do that since you have jogged your memory.

Versus finding the answer by clicking into the first few search results links and scanning text that might not have the answer.


As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources along with each piece of knowledge. At best they can plagiarize a list of references from the same sources as the rest of the text, which will by coincidence be somewhat accurate.

Pretty much every major LLM client has web search built in. They aren't just using what's in their weights to generate the answers.

When it gives you a link, it literally takes you to the part of the page that it got its answer from. That's how we can quickly validate.


LLMs provide sources every time I ask them.

They do it by going out and searching, not by storing a list of sources in their corpus.


Have you ever tried examining the sources? They often just invent "sources" when asked to provide them.

When talking about LLMs as search engine replacements, I think the stark difference in utility people see stems from the use case. Are you perhaps talking about using it for more "deep research"?

Because when I ask chatgpt/perplexity things like "can I microwave a whole chicken" or "is Australia bigger than the moon" it will happily google for the answers and give me links to the sites it pulled from for me to verify for myself.

On the other hand, if you ask it to summarize the state of the art in quantum computing or something, it's much more likely to speak "off the top of its head", and even when it pulls in knowledge from web searches it'll rely much more on its own "internal corpus" to put together an answer, which is definitely likely to contain hallucinations and obviously has no "source" aside from it just knowing (which it's discouraged from saying, so it makes up sources if you ask for them).


I haven't had a source invented in quite some time now.

If anything, I have the opposite problem. The sources are the best part. I have such a mountain of papers to read from my LLM deep searches that the challenge is in figuring out how to get through and organize all the information.


For most things, no it isn’t. The reason it can work well at all for software is that it’s often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it’s actually very hard to verify the results without doing all the work over again.

> People must be in complete denial

That seems to be a big part of it, yes. I think in part it’s a reaction to perceived competition.


>The CUDA Tile IR project is under the Apache License v2.0 with LLVM Exceptions

GP's LKML link is very recent, unlike your two links, implying something could've changed.

I have no insight into the Asahi project, but the LKML link goes to an email from James Calligeros containing code written by Hector Martin and Sven Peter. The code may have been written a long time ago.

I've never gotten along too well with virtualization, but would second the ThinkPad idea, or something similar. An old/cheap machine* for tinkering is a good way to ease in, and I think bare metal feels more friendly.

I'd probably recommend against dual booting, but I understand it's controversial. I like to equate it to having two computers, but having to fully power one off to do anything on the other one. Torrents stop, your music collection may be inaccessible depending on how you stored it, familiar programs may not be around anymore. I dual booted for a few years in the past and found it miserable. People who expected me to reboot to play a game with them didn't seem to understand how big of an ask that really was. Eventually things boiled over and I took the Windows HDD out of that PC entirely. Much more peaceful. (Proton solves that particular issue these days also.)

That being said, I've had at least two friends who had a dual boot due to my influence (pushing GNU/Linux) who ended up with some sort of broken Windows install later on and were happy to already have Ubuntu as an emergency backup to keep the machine usable.

*Too old might be a problem these days, with major distros no longer shipping 32-bit ISOs


I went 100% Bazzite back in April/May, no Windows, and I couldn’t be happier. The PC I built is basically 90% gaming/movies/hanging with friends, 10% browser tasks. Very easy to live this life if you don’t have particular professional needs IMO. When I was doing more freelance editing this really would not have been an option, as Resolve Studio does not work well on Linux.

I had working IPv6 in the past, but currently I seem to have no working IPv6. Using Xfinity. I have access to some servers at a friend's place in another city, pretty sure he also doesn't have IPv6. Maybe some phone calls would sort it out, but when "everything" still works (with IPv4), it's hard to care.


That is really bizarre, because I have Comcast and I find their IPv6 support excellent. The only complaints I have are that I wish you could get bigger than a /60 prefix (a /56 would be nice), and that I wish it was feasible to get a static prefix as a residential customer. Granted you said you don't really care to fix it, but if that ever changes I do think you could get them to fix it pretty easily. IPv6 is one of the things they generally do right.
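For concreteness on the /60 vs. /56 difference: each bit shaved off the delegated prefix doubles the number of /64 (SLAAC-capable) subnets it contains, so a /60 yields 2^(64-60) = 16 and a /56 yields 2^(64-56) = 256. A quick check with Python's standard `ipaddress` module (the 2001:db8::/32 documentation prefix is just a placeholder):

```python
# Count the /64 subnets available under a given IPv6 prefix delegation.
import ipaddress

def count_64s(prefix):
    net = ipaddress.ip_network(prefix)
    return 2 ** (64 - net.prefixlen)

print(count_64s("2001:db8::/60"))  # 16
print(count_64s("2001:db8::/56"))  # 256
```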


Curious what you’re doing that requires more than 16 SLAAC-enabled subnets (or a lot more non-SLAAC enabled subnets)


They just posted a progress report this month. Seems very much alive.

