sapiogram's comments

Haskell is far more dangerous. It allows you to simply destructure the `Just` variant without a branch for the `Nothing` case, causing a runtime error if that case ever occurs.


I thought this problem was Wayland's reason for existing?


Nah mate, it's all about the Wayland Trust model. No keylogging, consent-based screen recording, and no window spying. Isolation.


https://www.x.org/releases/X11R7.6/doc/xextproto/security.ht...

Notice the date... 1996

(For those who didn't click the link: that is the X11 security extension, which addresses all of that, and it was published ~30 years ago.)


As long as you're willing to stay on some old LTS distro, you'll be fine for at least another 10 years. X isn't going anywhere.


Rust's standard library hasn't received any major additions since 1.0 in 2015, back when nobody was writing web services in Rust so no one needed logging.


This is patently false. The majority of the "features" that get added to Rust in every release are additions to the standard library.


Depends on your definition of "major". Small utility functions are added on nearly every release, but they definitely aren't major.
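
As one concrete example of the kind of small utility being described (my pick, not one named in this thread), `Option::is_some_and` was stabilized in Rust 1.70, if memory serves:

    fn main() {
        let port: Option<u16> = Some(8080);
        // Checks a predicate against the contained value without a full match;
        // roughly equivalent to port.map_or(false, |p| p > 1024).
        assert!(port.is_some_and(|p| p > 1024));
    }

Useful, but hardly a "major" addition.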


> Or, for $300, you can buy an RTX 5060 that is better than the best GPU from just 6 years ago. It's even faster than the top supercomputer in the world in 2003, one that cost $500 million to build.

RTX 5060 is slower than the RTX 2080 Ti, released September 2018. Digital Foundry found it to be 4% slower in 1080p, 13% slower in 1440p: https://www.youtube.com/watch?v=57Ob40dZ3JU


Kill threads at will?


That requires some explanation. Basically, I think runtimes should be abort-safe and have some defined thing that happens when a thread is aborted. Antiquated 70s blocking APIs do not, or do not consistently.

It's a minor gripe compared to the heaviness of threads and making every programmer hand-roll fibers by way of async.


Rust solves this at compile time with move semantics, with no runtime overhead. This feature is arguably why Rust exists; it's really useful.
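
A minimal sketch of what "solved at compile time" means here (my own illustrative example): ownership transfers on move, and the compiler rejects any later use of the moved-from variable, so no runtime check is needed.

    // Ownership moves at compile time; there is no refcount, GC, or runtime check.
    fn consume(s: String) {
        println!("{s}");
    }

    fn main() {
        let greeting = String::from("hello");
        consume(greeting);          // ownership moves into `consume`
        // println!("{greeting}");  // uncommenting this fails to compile:
        //                          // "borrow of moved value: `greeting`"
    }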


Rust moves are a memcpy where the source becomes effectively uninitialized after the move (that is to say, accessing the source after the move is undefined). The copies are often optimized out by the compiler, but that isn't guaranteed.

This actually caused some issues with Rust in the kernel, because moving large structs could cause you to run out of the small amount of stack space available on kernel threads (they only allocate 8-16KB of stack, compared to a typical 8MB for a userspace thread). The pinned-init crate is how they ended up solving this [1].

[1] https://crates.io/crates/pinned-init
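
A minimal sketch of the failure mode being described (not the kernel's actual code, and deliberately much smaller than a real offender): building a large value and then moving it into a Box semantically constructs it on the stack first.

    // A struct far larger than a kernel thread's 8-16KB stack would be a problem;
    // 64KB is used here just to keep the example runnable in userspace.
    struct Big {
        buf: [u8; 64 * 1024],
    }

    fn main() {
        // Semantically, `Big` is built on the stack and then memcpy'd into the
        // heap allocation. The copy is usually elided in release builds, but the
        // language doesn't guarantee it, which is what in-place initialization
        // (the pinned-init approach) is meant to avoid.
        let b = Box::new(Big { buf: [0u8; 64 * 1024] });
        println!("{} bytes", b.buf.len());
    }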


If you can always move the data, that's the sweet spot for async: you just pass it down the stack and nothing matters.

All of the complexity comes in when more than one part of the code is interested in the state at the same time, which is what this thread is about.
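
A hedged sketch of that contrast, using plain threads instead of any particular async runtime to keep it dependency-free: moving data to a single owner is trivial, while sharing it concurrently forces Arc/Mutex-style coordination.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Easy case: one interested party, so just move the data in.
        let owned = vec![1, 2, 3];
        let t = thread::spawn(move || owned.iter().sum::<i32>());
        println!("sum = {}", t.join().unwrap());

        // Hard case: two parts of the code want the state at the same time,
        // so ownership alone no longer works and synchronization appears.
        let shared = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..2)
            .map(|_| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || *shared.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("count = {}", *shared.lock().unwrap());
    }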


Do you monitor your product closely enough to know that there weren't other brief outages? E.g. something on the scale of unscheduled server restarts, and minute-long network outages?


I personally do, through status monitors at larger cloud providers, at 30-second resolution, and have never noticed any downtime. They will sometimes drop ICMP though, even though the host is alive and kicking.


Surprised they allow ICMP at all


why does this surprise you?

actually, why do people block ICMP? I remember in 1997-1998 there were some Cisco ICMP vulnerabilities and people started blocking ICMP then and mostly never stopped, and I never understood why. ICMP is so valuable for troubleshooting in certain situations.


Security through obscurity, mostly. I don't know who continues to push the advice to block ICMP without a valid technical reason, since at best, if you tilt your head and squint your eyes, you could almost maybe see a (very new) script kiddie being defeated by it.

I've rarely actually seen that advice anywhere, more so 20 years ago than now but people are still clearly getting it from circles I don't run in.


I don't disagree. I am used to highly regulated industries where ping is blocked across the WAN.


I do. Routers, switches, and power redundancy are solved problems in datacenter hardware. Network outages rarely occur because of these systems, and if any component goes down, there's usually an automatic failover. The only thing you might notice is TCP connections resetting and reconnecting, which typically lasts just a few seconds.


Of course. It's a production SaaS, after all. But I don't monitor with sub-minute resolution.


I do, and have for some time now, on a scale of around 20 hosts in their cloud offering. No restarts or network outages. I do see "migrations" from time to time (the VM migrating to different hardware, I presume), but without impact on metrics.


Having run bare-metal servers for a client + plenty of VMs pre-cloud, you'd be surprised how bloody obvious that sort of thing is when it happens.

All sorts of monitoring get flipped.

And no, there generally aren't brief outages in normal servers unless you did it.

I did have someone accidentally shut down one of the servers once though.


To stick to the above point, this wasn't a minute-long outage. If you care about seconds- or minutes-long outages, you monitor. Running on AWS, Hetzner, OVH, or a Raspberry Pi in a shoebox makes no difference.


Idk about subpixel font rendering, but font rendering on Linux looks massively better after a patch last week: https://github.com/zed-industries/zed/issues/7992#issuecomme...


I'm glad there's finally some progress in that direction. If they actually implement subpixel RGB anti-aliasing, it would definitely be worth considering as an alternative. It's been surprising to see so many people praise Zed when its text rendering (of all things) has been in such a state for so long.


Tbh though, is subpixel text rendering really all that important anymore, when high-resolution monitors are common now and low-DPI is the exception?


You should get outside your major metropolis and highly paid Western job once in a while. High-DPI monitors are the exception for most of the world.


It doesn't take being outside of the West for this to be relevant. Two places I currently frequent, A) the software development offices of a Fortune 500 company, and B) the entire office and general spaces (classrooms, computer labs, etc.) of a sizeable university, have 1080p monitors for >80% of their entire monitor deployment.


Even then... my eyesight is pretty bad, so earlier this year I upgraded to 45" 3440x1440 monitors, and even then I'm viewing at 125%, so subpixel fonts help a lot in terms of readability, even if I cannot pick out the native pixels well.

They aren't high-DPI though, just big and still zoomed. On the plus side, it's a very similar experience to two 4:3 monitors glued together... side-by-side apps on half the screen is a pretty great experience. On the downside, RDP sessions suck; I may need to see if I can find a scaling RDP app.


Most people I know are on 1920x1080 LCDs. Over half of PC gamers seem to be on that resolution, for example: https://store.steampowered.com/hwsurvey


My gaming PC is also connected to a 1080p display because tbh for gaming that's good enough, but I don't whine about application text quality on that setup, since it looks pretty bad with or without ClearType compared to a high-DPI display ;)


Yea, I tried to give it a go on Fedora, but the terrible text rendering made it an insta-delete for me.


OK, for what reason do we need subpixel RGB anti-aliasing here? Do we run a game engine for code??


Subpixel antialiasing of fonts has been a pretty standard feature for a few decades. Without it, text can look fuzzy on certain displays.

> Do we run a game engine for code??

Zed literally does this; they render their UI using a graphics library, just like a video game.


It's fun to see "GPU accelerated" and "like game engine" when literally every application is rendered the same way with the same APIs.


Last I checked I don't create a GL context to make a WPF app.


That "after" image is still rendered with greyscale AA rather than subpixel, but whatever they changed did make it more legible at least.


And that was already impressive. High-end gaming computers with dual-channel DDR5 only reach ~100GB/s of CPU memory bandwidth.
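
Rough arithmetic behind that ~100GB/s figure, assuming DDR5-6400 as the speed grade (my assumption): 6400 MT/s per channel, 8 bytes per transfer, two channels.

    fn main() {
        // Peak theoretical bandwidth for dual-channel DDR5-6400 (assumed grade).
        let transfers_per_s: u64 = 6_400_000_000; // 6400 MT/s, per channel
        let bytes_per_transfer: u64 = 8;          // 64-bit channel width
        let channels: u64 = 2;
        let gb_per_s = transfers_per_s * bytes_per_transfer * channels / 1_000_000_000;
        println!("~{gb_per_s} GB/s peak"); // prints ~102 GB/s
    }

Real-world throughput is a bit lower than the theoretical peak, which lines up with the ~100GB/s figure.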


High-end gaming computers have far more memory bandwidth in the GPU, though. The CPU doesn't need more memory bandwidth for most non-LLM tasks, especially as gaming computers commonly use AMD chips with a giant cache on the CPU.

The advantage of the unified architecture is that you can use all of the memory on the GPU. The unified memory architecture wins where your dataset exceeds what you can fit in a GPU, but a high-end gaming GPU is far faster if the data fits in VRAM.


The other advantage is you don't have to transfer assets across slow buses to get them into that high-speed VRAM.


Right, but high-end gaming GPUs exceed 1000 GB/s, and that's what you should be comparing to if you're interested in any kind of non-CPU compute (tensor ops, GPU).


And you can find high-end (PC) laptops using LPDDR5x running at 8533 MT/s or higher which gives you more bandwidth than DDR5.

