Haskell is far more dangerous. It allows you to simply destructure the `Just` variant without a branch for the empty case, causing a runtime error if that case ever occurs.
Rust's standard library hasn't received any major additions since 1.0 in 2015, back when nobody was writing web services in Rust so no one needed logging.
> Or, for $300, you can buy an RTX 5060 that is better than the best GPU from just 6 years ago. It's even faster than the top supercomputer in the world in 2003, one that cost $500 million to build.
The RTX 5060 is slower than the RTX 2080 Ti, released in September 2018. Digital Foundry found it to be 4% slower at 1080p and 13% slower at 1440p: https://www.youtube.com/watch?v=57Ob40dZ3JU
That requires some explanation. Basically, I think runtimes should be abort-safe and have some defined behavior when a thread is aborted. Antiquated '70s blocking APIs do not, or do not consistently.
It’s a minor gripe compared to the heaviness of threads and making every programmer hand roll fibers by way of async.
Rust moves are a memcpy where the source becomes effectively uninitialized after the move (that is to say, it is invalid to access it after the move). The copies are often optimized out by the compiler, but that isn't guaranteed.
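A minimal sketch of what that looks like in practice (the `Big` struct, its size, and `take` are made up for illustration, not from any real codebase):

```rust
// Moving a non-Copy struct is conceptually a bitwise copy of its bytes,
// after which the source is treated as uninitialized: the compiler
// rejects any further use of it.
struct Big {
    payload: [u8; 4096], // large enough that the copy isn't free
}

fn take(b: Big) -> usize {
    b.payload.len()
}

fn main() {
    let a = Big { payload: [0u8; 4096] };
    let b = a;        // move: bytes copied, `a` is now logically uninitialized
    let _n = take(b); // another move, potentially another 4 KiB copy
    // println!("{}", a.payload[0]); // error[E0382]: use of moved value
}
```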
This actually caused some issues with Rust in the kernel, because moving large structs could cause you to run out of the small amount of stack space available on kernel threads (they only get 8-16 KB of stack, compared to a typical 8 MB for a userspace thread). The pinned-init crate is how they ended up solving this [1].
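A sketch of why that bites (the `DeviceState` struct and its size are illustrative, and this is not the pinned-init API, just the shape of the problem it solves):

```rust
// A struct bigger than a kernel thread's whole stack.
struct DeviceState {
    buffer: [u8; 16 * 1024], // 16 KiB, vs. an 8-16 KB kernel stack
}

fn main() {
    // Without guaranteed in-place construction, this builds DeviceState as a
    // temporary on the stack and then moves (memcpys) it into the heap
    // allocation; the temporary alone can overflow a small kernel stack.
    // In userspace with an 8 MB stack this runs fine, which is why the
    // problem only shows up in the kernel.
    let state = Box::new(DeviceState { buffer: [0; 16 * 1024] });

    // In-place initialization (the approach pinned-init provides) instead
    // allocates first and writes the fields directly into the heap memory,
    // so no oversized temporary ever lands on the stack.
    let _ = state;
}
```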
Do you monitor your product closely enough to know that there weren't other brief outages? E.g. something on the scale of unscheduled server restarts or minute-long network outages?
I personally do, through status monitors at larger cloud providers with 30-second resolution, and I've never noticed any downtime. They will sometimes drop ICMP, though, even when the host is alive and kicking.
actually, why do people block ICMP? I remember in 1997-1998 there were some Cisco ICMP vulnerabilities and people started blocking ICMP then and mostly never stopped, and I never understood why. ICMP is so valuable for troubleshooting in certain situations.
Security through obscurity, mostly. I don't know who continues to push the advice to block ICMP without a valid technical reason, since at best, if you tilt your head and squint your eyes, you could almost maybe see a (very new) script kiddie being defeated by it.
I've rarely actually seen that advice anywhere (more so 20 years ago than now), but people are clearly still getting it from circles I don't run in.
I do. Routers, switches, and power redundancy are solved problems in datacenter hardware. Network outages rarely occur because of these systems, and if any component goes down, there's usually an automatic failover. The only thing you might notice is TCP connections resetting and reconnecting, which typically lasts just a few seconds.
I have for some time now, on the scale of around 20 hosts in their cloud offering. No restarts or network outages. I do see "migrations" from time to time (the VM migrating to different hardware, I presume), but without any impact on metrics.
To stick to the above point, this wasn't a minute-long outage.
If you care about seconds- or minutes-long outages, you monitor. Running on AWS, Hetzner, OVH, or a Raspberry Pi in a shoe box makes no difference.
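For what it's worth, that kind of monitoring doesn't take much. A minimal sketch (the target host, port, and 30-second interval are placeholders, and a bare TCP connect stands in for a real health check):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::thread::sleep;
use std::time::{Duration, Instant};

fn main() {
    let target = "example.com:443";         // hypothetical host to watch
    let timeout = Duration::from_secs(2);   // give up on a probe after 2 s
    let interval = Duration::from_secs(30); // probe resolution

    loop {
        let started = Instant::now();
        // Resolve and attempt a TCP connect; treat DNS failure as "down" too.
        let up = target
            .to_socket_addrs()
            .ok()
            .and_then(|mut addrs| addrs.next())
            .map(|addr| TcpStream::connect_timeout(&addr, timeout).is_ok())
            .unwrap_or(false);
        println!("{} up={} probe_time={:?}", target, up, started.elapsed());
        sleep(interval);
    }
}
```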
I'm glad there's finally some progress in that direction. If they actually implement subpixel RGB anti-aliasing, it would definitely be worth considering as an alternative. It's been surprising to see so many people praise Zed when its text rendering (of all things) has been in such a state for so long.
It doesn't take being outside of the West for this to be relevant. Two places I currently frequent, A) the software development offices of a Fortune 500 company, and B) the entire office & general spaces (classrooms, computer labs, etc.) of a sizeable university, have 1080p monitors for >80% of their entire monitor deployment.
Even then... my vision is pretty bad, so earlier this year I upgraded to 45" 3440x1440 monitors, and I'm still viewing at 125%, so subpixel font rendering helps a lot in terms of readability, even if I can't pick out the native pixels well.
They aren't high-DPI though, just big and still zoomed. On the plus side, it's a very similar experience to two 4:3 monitors glued together... side-by-side apps on half the screen is a pretty great experience... on the downside, RDP sessions suck; I may need to see if I can find a scaling RDP app.
My gaming PC is also connected to a 1080p display because tbh for gaming that's good enough, but I don't whine about application text quality on that setup since it looks pretty bad with or without ClearType compared to a high-DPI display ;)
High end gaming computers have far more memory bandwidth in the GPU, though. The CPU doesn’t need more memory bandwidth for most non-LLM tasks. Especially as gaming computers commonly use AMD chips with giant cache on the CPU.
The advantage of a unified memory architecture is that the GPU can use all of the system memory. It wins when your dataset exceeds what you can fit on a discrete GPU, but a high-end gaming GPU is far faster if the data fits in VRAM.
Right, but high-end gaming GPUs exceed 1000GB/s and that's what you should be comparing to if you're interested in any kind of non-CPU compute (tensor ops, GPU).