Hacker News | ablob's comments

I think multi-cursors can be seen as an extension of macros: instead of defining the macro first and then navigating to the relevant places, you navigate first and then execute the commands interactively (in essence skipping the recording step). As a side effect, you also don't need to be as concerned about recovering from a mistake. I've had some pretty nasty string-wrangling with the substitute command that could have been avoided by just using a macro, and the other way around. I'd argue these tools complement each other and there is no need to restrict yourself arbitrarily: having a feature and not using it is better than needing it and not having it. I can recall countless times where multi-cursor would have been just the sweet spot I needed.

P.S.: Multi-cursor is not about moving around the code base, so forgoing lessons about navigation has no bearing on this matter.


> If the biggest flaw of a OS is the border radius of its windows, you've got yourself a pretty decent OS!

This argument would also make Windows 11 a pretty decent OS, by extension via "If the biggest flaw of an OS is the position of the start menu, you've got yourself a pretty decent OS."

In general, I could use any minor nuisance as proof of decency, or, as a manufacturer, inject one on purpose to set up this argument.

People don't like it when their environment changes in minor, unsolicited ways. There's always going to be fuss about these things, which means the fuss itself can't be used to make any strong argument whatsoever.


I think people are complaining more about Windows crashing on updates, Microsoft putting ads everywhere, or OneDrive being forced on them.

That’s way more than just the “position of the start menu”


For Windows, you also have an ad, an AI, or both appearing in every other app.

On the specific issue of window corner roundedness, Windows 11 is great IMO. The corners are rounded when the window is floating free, but change to square when it's maximized or snapped to a side of the screen. The perfect design.

The perfect design is no rounded corners. Anywhere. Ever.

How do you tell a snapped window from a free-floating window in that case?

I've never really needed that, or I don't understand the need for there to be a difference. I typically tile windows into each corner based on how large I need them. When I need more than 4, I'll manually place them.

What I do notice is the space wasted by the entire window border to accommodate rounded corners, and how annoying it is to grab a window handle in e.g. Ubuntu with GNOME, because you're clicking/touching where the corner would be (but isn't, because it's rounded).


You might have an application for which speed is not important most of the time: only one or two processes might require allocation-free code. In such a case, why would you burden all of the other code with the additional complexity? And calling out to a different language may come with baggage you'd rather avoid.

A project might also grow into these requirements. I can easily imagine something that wasn't problematic for a long time emerging as an issue later on. At that point you wouldn't want to migrate the whole codebase to a better-suited language anymore.
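The idea of confining allocation-free discipline to a hot path can be sketched in C. This is a toy example with made-up names, not a real codebase: the cold setup code allocates freely, while the hot function only touches a buffer handed to it and never calls the allocator.

```c
#include <assert.h>
#include <stdlib.h>

/* Cold path: runs once at startup, free to allocate. */
static float *make_scratch(size_t n) {
    return malloc(n * sizeof(float));
}

/* Hot path: called per block; no malloc/free anywhere in here,
 * it only writes into the preallocated scratch buffer. */
static void process_block(float *scratch, const float *in, size_t n, float gain) {
    for (size_t i = 0; i < n; i++)
        scratch[i] = in[i] * gain;
}
```

Only `process_block` carries the allocation-free burden; the rest of the program keeps its conveniences, which is exactly the split the comment argues for.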


Common video codecs are often hardware accelerated, and that hardware quite often sits on the CPU/SoC side, since there are a lot of systems without dedicated GPUs that still play video, like notebooks and smartphones. So in the end it's less about whether decoding is parallelizable and more about whether it beats dedicated hardware, to which the answer should almost always be no.

P.S.: In video decoding, speed is only relevant up to a certain point, namely: "Can I decode the next frame(s) in time to show it/them without stuttering?" Once that has been achieved, other factors such as power draw become more important.
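The deadline framing above can be put in numbers; this is a toy sketch with invented function names, not any real decoder API. At 60 fps each frame must be ready within roughly 16.7 ms, and any speed beyond that buys power savings, not smoother playback.

```c
#include <assert.h>

/* Milliseconds available to decode one frame at a given frame rate. */
static double frame_budget_ms(double fps) {
    return 1000.0 / fps;
}

/* A decoder "keeps up" if each frame is ready before its deadline;
 * headroom past that point no longer affects smoothness. */
static int keeps_up(double decode_ms, double fps) {
    return decode_ms <= frame_budget_ms(fps);
}
```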


It is my understanding that hardware-accelerated video encoders (as in the fixed-function ones built into consumer GPUs) produce lower-quality output than software-based encoders. They're really only there for on-the-fly encoding, like streaming to Twitch or recording security camera footage. But if you're encoding your precious family memories or backing up your DVD collection, you want to use software encoders. Therefore, if a hypothetical software H.264 encoder could be faster on the GPU, it would have value for anyone doing not-on-the-fly encoding of video where they care about the quality.

One source for the software encoder quality claim is the "transcoding" section of this article: https://chipsandcheese.com/i/138977355/transcoding


> ... That being: "Can I decode the next frame(s) in time to show it/them without stuttering".

Except when you are editing video or rendering output. When you have multiple streams of very-high-definition input, you definitely need much more than realtime decoding speed for a single video.

And you would want to scrub around the video(s), jumping to any timecode and getting the target frame on screen preferably by the next monitor refresh.


Both the curl and SQLite projects have been overburdened by AI-generated bug reports. Unless the Google engineers take great care to review each potential bug for validity, the same fate might apply here. There has been a lot of news about open source projects being stuffed to the brim with low-effort, high-cost merge requests and issues. You just don't see all the work this causes unless you have to deal with the fallout...

This project has nothing to do with bug reports... it's an opt-in tool for reviewing proposed changes that kernel developers can decide to use (if they find it useful).

This is the first time I've heard about Emacs trying to look nice.


I once saw a visualization that basically partitioned decisions on a 2D plane. From that perspective, decision trees might just be a fancy word for k-d trees partitioning the possibility space and attaching an action to the resulting volumes.

Given that assumption, the nebulous decision-making could stem from experts' decisions being more nuanced in the granularity of the surface separating two distinct actions. It might be a rough technique, but it should nonetheless be able to yield some pretty good approximations.
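The partitioning analogy can be made concrete with a toy tree over the plane; the thresholds and action labels here are made up for illustration. Each internal node splits the space with an axis-aligned threshold, exactly like a k-d tree, and each leaf (rectangular region) carries an action:

```c
#include <assert.h>

typedef struct { double x, y; } point;

/* A two-level decision tree: every path from root to leaf carves out
 * an axis-aligned rectangle and assigns it an action (0..3). */
static int decide(point p) {
    if (p.x < 0.5)                   /* split on x */
        return p.y < 0.3 ? 0 : 1;    /* then split on y */
    else
        return p.y < 0.7 ? 2 : 3;
}
```

Refining the tree adds finer rectangles near the boundary between two actions, which is the nuance in the separating surface the comment speculates experts provide.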


You have this a little backwards, to the point that it is unintentionally hilarious.

Decision trees predate k-d trees by a decade.

Both use recursive partitioning of a function's domain, a fundamental and old idea.


Who cares who had it first; what matters is who has it, and who doesn't...


Apparently some do, hence my reply.


The short answer is yes. The post literally has an example of coroutines (think C-style: possible, but ugly). The difference here is how easy it is to write. I'd wager the question is not whether it can be achieved, but for which use cases it can be made ergonomic.
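Since the comment points at C-style coroutines being "possible, but ugly", here is a minimal sketch of the classic switch-based resume trick (the idea behind protothreads and Simon Tatham's coroutine article); the struct and names are my own. The function saves a state tag before returning, and a `case` label placed inside the loop lets the next call jump back to exactly where it left off:

```c
#include <assert.h>

typedef struct { int state; int i; } counter_co;

/* Yields 10, 11, 12, ... one value per call, resuming mid-loop. */
static int counter_next(counter_co *c) {
    switch (c->state) {
    case 0:
        for (c->i = 10; ; c->i++) {
            c->state = 1;
            return c->i;      /* "yield" */
    case 1:;                  /* resume point inside the loop body */
        }
    }
    return -1;                /* unreachable */
}
```

It works, but every local that must survive a yield has to be hoisted into the state struct by hand, which is exactly the ergonomics gap the post's nicer construct closes.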


The blog poster wasn't happy with the issue being closed, so I doubt that opening a new issue and referencing this one would have yielded a different result from what we got.

