That is actually memory safe, as null will always trigger an access violation.
Anyway, safety-checked modes are sufficient for many programs. This article claims otherwise, but then contradicts itself by showing that most issues were caught using... safety-checked modes.
As a fun example, I worked on a safety-critical system where accessing all-bits-zero pointers would trigger an IRQ that jumped back to PC + 4, leaving the register/variable uninitialized. Great fun was had any time there was LR corruption and the CPU started executing whatever happened to be next in memory after the function returned.
I recently had a less wild but similarly baffling experience on an embedded-but-not-small device. Address 0 was actually a valid address. We were getting a HardFault because a device driver was dereferencing a pointer to an invalid but not-null address. Working backwards, I found that it was getting that invalid address not from 0x0 but rather from 0xC, because the pointer was stored in the third field of a struct and our pointer to that struct was null.
foo->bar->baz->zap
foo = 0, &foo->bar = 0xC, baz = invalid address, and *baz to get zap is what blew up.
The problem is not nullopt, but that client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, as the other commenter mentioned above, is that you cannot make any claims about what will happen when you do so, because the standard just says "UB". Other languages like Haskell also have escape hatches like fromJust, but at least the behaviour is well-defined when the value is Nothing.
In my experience you absolutely must have type checking for anything that prints, because eventually some never-previously-triggered log/assertion statement is hit, attempts to print, and has an incorrect format string.
I would not use iostreams, but neither would I use printf.
At the very least, if you can't use std::format, wrap your printf in a macro that parses the format string with a constexpr function and verifies it matches the arguments.
_Any_ code that was never previously exercised could be wrong. printf() calls are typically type-checked, and if you write wrappers you can have the compiler type-check those too, at least with GCC. printf() code is quite low risk. That's not to say I've never passed the wrong arguments; it has happened, but a very low number of times. There is much riskier code.
So such a strong "at the very least" is misapplied. All this template crap, I've done it before. All but the thinnest template abstraction layers typically end up in the garbage can after trying to use them for anything serious.
My understanding is that Rust compiles each crate the same way C++ compiles each .cpp file, so if you stick everything in a single crate you get horrible compile times.
This does seem like a poor design decision by Rust to me, forcing people to break things into arbitrary crates just to get reasonable compile times.
It also seems like a disaster for incremental compilation.
> This does seem like a poor design decision by Rust to me, forcing people to break things into arbitrary crates just to get reasonable compile times.
Not necessarily, since codegen units can introduce intra-crate parallelism during compilation.
But in any case, as with basically everything, there are tradeoffs involved in choosing compilation unit size. Making your compilation unit crate-sized also means that you have some more flexibility in your code organization (e.g., stuff can mutually depend on each other, which is itself potentially useful and/or harmful) and you don't run into other potential issues like the orphan rule. There are also potential impacts on optimization, though LTO muddies the picture. Codegen units are just the cherry on top.
There are almost certainly other things I'm forgetting and/or not knowledgeable about as well.
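For what it's worth, codegen units are tunable per profile in Cargo.toml; Cargo's defaults are 256 for dev builds and 16 for release:

```toml
# Cargo.toml -- split each crate into more codegen units so rustc can
# run backend codegen in parallel, at some cost to optimization quality.
[profile.release]
codegen-units = 16   # the default; 1 optimizes best but compiles slowest

[profile.dev]
codegen-units = 256  # the default for dev builds, favoring compile speed
```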
OpenAI appears to have bought the DRAM not to use it (they are apparently buying it in unfinished form) but explicitly to take it off the market, causing this massive price increase and squashing competition.
I would call that market manipulation (or market failure, if you wish); in a just society Sam Altman would be heading to prison.
FEX is a CPU JIT, so your GPU settings are irrelevant to it; the graphics side is translated too, but not by FEX, and there is no real perf hit for the GPU.
The old games don't really matter with regard to FEX perf, so the only relevant bit is the semi-newer games at 30/40 fps, which seems very slow to me given that you are only running at 1080p/Medium; you likely have a CPU bottleneck there.
I managed to find some statistics on hull losses per million departures [1, p. 13]. It seems like MD-11s do indeed have a highish rate of incidents by that metric compared to other types, even if they are not catastrophically less safe than other planes. That metric stacks the statistics a bit against cargo planes, which most (all?) MD-11s now are. These planes tend to fly longer haul instead of short hops, so you get more flight time/miles but fewer departures. There are also likely some other confounding factors, like mostly night operations (visibility and crew fatigue) and the tendency to write off older planes instead of returning them to service after an incident. Plus, these aircraft have been in operation long enough that improvements in procedures and training would impact them less than more modern types, as in they already had more accidents before these improvements.
The DC-10 had a number of other problems, but the MD-11 has always had a reputation of being an unforgiving aircraft especially when compared to the DC-10. It's less about training and more that the MD-11 was simply too many design compromises piled on to an old design.
The MD-11 had a pretty short service life as a passenger aircraft because it simply wasn't very fuel efficient compared to the competition; safety wasn't really the motivating factor. However, fuel consumption was behind some of the poor design choices McDonnell Douglas made. In broad strokes: they shrank the control surfaces to improve fuel consumption, "necessitating" poorly designed software to mask the dodgy handling and higher landing speeds. This exacerbated a DC-10 design "quirk" where hard landings got out of hand very quickly and main landing gear failure would tend to flip the plane.
Yeah you can train around this but when something else goes tits up you've got a lot less leeway to actually recover safely.
I am surprised Intel's server chips can only do 2 AVX-512 ops per cycle; that is rather sad given how long they have supported it in server chips, and I hope it isn't a sign of things to come with Nova Lake.