Initially I was like, "Oh cool, a new layer on top of Nix to make it more accessible!" And then I read on to see it's basically just the NixCpp fork from save-nix-together, now with Meson. I guess gradually moving to Rust is a cool idea, but tvix already exists in that space as a greenfield Rust implementation.
Navi[1] is perfect for this! It's a fuzzy finder over a personal collection of commands, and its template syntax is flexible enough to let you build "command builders."
I have a blog post on doing exactly this for a subset of strace[2].
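For anyone who hasn't seen navi's cheat format, here's a minimal sketch of what a "command builder" cheat can look like (the strace details here are illustrative, not taken from the linked post):

```sh
% strace, debugging

# Trace a category of syscalls made by a command
strace -f -e trace=<trace_group> <command>

$ trace_group: echo -e "file\nnetwork\nprocess\nsignal\nmemory"
```

The `$ trace_group:` line runs a shell command to produce the candidate values, so picking the placeholder becomes a second fuzzy-find step instead of something you have to remember.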
GPS satellites don't "send" your coordinates to your receiver. Your receiver is just listening to the broadcast signal from several (usually 4+) satellites and, based on the strength of that signal, determining how far it is from each of those satellites. Which means the receiver is able to triangulate its own position.
Strength of signal isn't used because strength is an unreliable measure of distance. The amount of atmosphere the signal passes through, reflection/refraction, and all manner of weather effects will modify a signal's strength. So the sats transmit a pulse at a pre-agreed time, and the receivers use the time at which they receive the signal as the measure of distance.
Pre-agreed time? Don't the satellites pulse a "The current time is x" signal?
With signals from 4 satellites one can triangulate oneself in 3D space, with 5 signals, in 4D! (3D + time). I once did the math and astounded myself that it worked.
If there is no agreed time then you don't know when the signals were sent and cannot make any sense out of the signal. They all have to send either in unison or according to a predetermined schedule. The synced clocks set that schedule.
The clocks are synced between satellites, but if your receiver is cold-booting in the middle of a forest, how does it know what time it is? It will receive a signal from 1 satellite that will say "I sent out this signal at time X", but you still don't know what time it is because you don't know how many nanoseconds it took for the signal to get to you, you can only be sure it's currently some time after X.
It will get another signal from another satellite, which could have the timestamp before X, because it left earlier and took longer to get to your receiver.
As I said, if you do the math, with 5 signals you can then determine your location in 3D space, and time!
> Your receiver is just listening to the broadcast signal from several (usually 4+) satellites and based on the strength of that signal determining how far it is from each of those satellites.
GPS doesn't use the strength of the signal at all. Instead, each signal contains precise information about the current time at the highly-accurate atomic clocks onboard the corresponding satellite (plus some important metadata about each satellite, including things like their orbit parameters). If the receiver already knew the precise time, it could calculate the distance to each satellite from the difference between the true time and the received time (and the speed of light), and 3 satellites would be enough to triangulate its position. Since the receiver usually doesn't know the precise time, it needs an extra satellite because there are now 4 unknowns (3 for its position plus 1 for the current time).
(Obviously, that's a very simplified explanation, there are plenty of other things which complicate the calculations.)
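To make the "4 unknowns" concrete, the textbook pseudorange model looks roughly like this (a simplified sketch that ignores atmospheric delay, relativistic corrections, and satellite clock error):

```latex
% For each satellite $i$ with known position $\mathbf{s}_i$ and broadcast time $t_i$,
% the receiver measures a pseudorange
\rho_i = c\,(t_{\mathrm{rx}} - t_i) = \lVert \mathbf{x} - \mathbf{s}_i \rVert + c\,\delta t
% where $\mathbf{x}$ is the receiver position (3 unknowns) and $\delta t$ is the
% receiver's clock bias (1 unknown). Four satellites give four such equations,
% enough to solve for all four unknowns.
```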
By the time the signal reaches your GPS receiver, it is below the thermal noise floor of even amazing receivers. But each GPS satellite has a unique pseudo-random code (called a PRN) that is within the signal. Receivers that listen long enough can pick out the PRN and thus the GPS signal.
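A toy sketch of why correlating against a known code recovers a signal buried in noise. The "PRN" here is just a deterministic pseudo-random ±1 sequence from a hand-rolled generator, not a real GPS C/A code, and the noise model is a crude stand-in:

```rust
// Simple linear congruential generator mapped to [-1.0, 1.0),
// used both to build a fake "PRN" code and as a noise source.
fn lcg(state: &mut u64) -> f64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    ((*state >> 11) as f64 / (1u64 << 53) as f64) * 2.0 - 1.0
}

// A deterministic +/-1 chip sequence (stand-in for a real PRN).
fn prn(seed: u64, len: usize) -> Vec<f64> {
    let mut s = seed;
    (0..len)
        .map(|_| if lcg(&mut s) >= 0.0 { 1.0 } else { -1.0 })
        .collect()
}

fn correlate(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let n = 4096;
    let sat_code = prn(1, n);
    let other_code = prn(2, n);

    // Received signal: our satellite's code at one tenth the
    // amplitude of the noise, i.e. well below the noise floor.
    let mut noise_state = 42u64;
    let signal: Vec<f64> = sat_code
        .iter()
        .map(|c| 0.1 * c + lcg(&mut noise_state))
        .collect();

    // Correlating with the right code sums coherently across all
    // chips; the wrong code (and the noise) mostly cancels out.
    let hit = correlate(&signal, &sat_code);
    let miss = correlate(&signal, &other_code);
    println!("correct code: {hit:.1}, wrong code: {miss:.1}");
}
```

Real receivers also have to search over code phase and Doppler shift, but the coherent-sum-versus-cancellation idea is the same.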
I'm no GPS expert; I've read some of the theory and had enough of a working understanding to deal with tactical navigation systems, but that was in my past. I remember using El-Rabbany's "Introduction to GPS" text.
An explanation not only helps the ignorant, it reinforces the idea within your own thoughts and perhaps seeds new ideas that are derivative; it even teaches the otherwise uncaring who may happen upon the comment.
What you did wasn't that -- but I would just like to point out that simple, concise explanations help the community as a whole; it's not just the ignorant who lose out.
Yes, I know it's likely not your job to educate, and maybe it's a bother when someone acts the expert on something they clearly aren't -- but those who care to educate serve everyone in the context of an online forum, not just the naive or ignorant.
In fact, I believe that's exactly where this bug lies. You're effectively able to trigger a case in which passing `&'a &'b` without providing any correlation (`where 'a: 'b`) that one would normally be required to provide makes the compiler behave as if those correlations were passed, albeit inferred incorrectly.
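For readers unfamiliar with the implied-bounds machinery being discussed: a nested reference type like `&'a &'b T` is only well-formed if `'b` outlives `'a`, so the compiler lets you assume that bound without writing it. A minimal sketch of the sound, intended behavior (not a reproduction of the bug itself):

```rust
// `&'a &'b u32` is only well-formed when 'b outlives 'a, so the
// compiler treats `'b: 'a` as an implied bound here -- no `where`
// clause is needed to return the inner reference at the outer lifetime.
fn flatten<'a, 'b>(x: &'a &'b u32) -> &'a u32 {
    *x
}

fn main() {
    let inner = 5u32;
    let outer = &inner;
    let r = flatten(&outer);
    println!("{r}");
}
```

The bug class being described is what happens when the compiler acts as if such a bound were supplied in a context where it actually wasn't.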
hyperfine is such a great tool that it's one of the first I reach for when doing any sort of benchmarking.
I encourage anyone who's tried hyperfine and enjoyed it to also look at sharkdp's other utilities; they're all amazing in their own right, with fd[1] being the one that perhaps gets the most daily use for me and has totally replaced my use of find(1).
Rust can re-use an allocation, but if the new item is smaller than the previous it doesn't automatically remove (free) the "wasted" memory left over from the previous allocation. I think this is categorically not a memory leak as the memory was absolutely accounted for and able to be freed (as evidenced by the `shrink_to_fit()`), but I can see how the author was initially confused by this optimization.
The 2x-versus-200x confusion, IMO, is that the OP was conflating this with the fact that Vec doubles in size when it needs more space, so they assumed memory should only ever have been 2x in the worst case of the new size. And because the new type's size was smaller than the previous one's, it looked like a massive over-allocation.
Imagine you had a `Vec<Vec<u16>>` and, to keep it simple, there were only 2 elements in both the inner and outer Vecs, which, if we assume Rust doubled each Vec's allocation, would be 4x4 "slots" of 2 bytes per slot (or 32 bytes total allocated...in reality it'd be a little different, but to keep it simple let's just assume).
Now imagine you replace that allocation with a `Vec<Vec<u8>>` which even with the same doubling of the allocation size would be a maximum of 4x4 slots of 1 byte per slot (16 bytes total allocation required). Well we already have a 32 byte allocation and we only need 16, so Rust just re-uses it, and now it looks like we have 16 bytes of "waste."
Now the author was expecting at most 16 bytes (remember, 2x the new size) but was seeing 32 bytes because Rust just re-used the allocation and didn't free the "extra" 16 bytes. Further, when they ran `Vec::shrink_to_fit()` it shrank down to only the used space, which in our example would be a total of 4 bytes (2x2 of 1 byte slots actually used).
Meaning the author was comparing an observed 32 byte allocation, to an expectation of at most 16 bytes, and a properly sized allocation of 4 bytes. Factored out to their real world data I can see how they'd see numbers greater than "at most 2x."
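The allocation reuse is observable directly through `Vec::capacity`. A small sketch (whether the allocation is actually reused depends on the compiler's in-place-collect specialization, so treat the leftover capacity as possible rather than guaranteed):

```rust
fn main() {
    // 1000 u16s: 2000 bytes of payload.
    let wide: Vec<u16> = vec![0u16; 1000];

    // Collecting an owning iterator into a smaller element type may
    // reuse the original allocation, leaving capacity > len.
    let mut narrow: Vec<u8> = wide.into_iter().map(|x| x as u8).collect();
    println!("len = {}, capacity = {}", narrow.len(), narrow.capacity());

    // shrink_to_fit releases the leftover space (formally it's only a
    // request, but in practice capacity drops to len).
    narrow.shrink_to_fit();
    println!("after shrink: capacity = {}", narrow.capacity());
}
```

None of that leftover space is leaked; it's all still owned by the Vec and freed (or trimmed) normally.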
TBF, just because a 1.0 is released doesn't mean it's viable right away for things like this. It takes a certain level of market adoption and ecosystem buy-in first.
Also, Zig still isn't 1.0 so if we're measuring languages from when they first became public, I believe those others in your list are much older as well.
I'm not a fan of trying to put hard numbers on unknowns like this because it biases against uncertainty, but if they shaved ~74 minutes off their CI time, and assuming it runs multiple times a day, that very quickly adds up to a small team's worth of cost savings over a year.
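Back-of-the-envelope, with everything beyond the ~74 minutes being an assumption for illustration:

```rust
fn main() {
    // Only the ~74 minutes comes from the article; the run count and
    // workday count are assumed values for illustration.
    let minutes_saved_per_run = 74.0;
    let runs_per_day = 5.0; // assumed CI frequency
    let workdays_per_year = 260.0;

    let hours_per_year =
        minutes_saved_per_run * runs_per_day * workdays_per_year / 60.0;
    println!("~{hours_per_year:.0} hours of CI wall-clock saved per year");
}
```

At those assumed numbers it's on the order of 1,600 hours a year, which is why the savings compound so quickly even before the intangibles.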
However, I think trying to find the actual numbers is dumb because there are also intangibles, such as the marketing and brand-recognition bump from doing this, both for the company and the individuals involved.
That's not to say all greenfield endeavors should be actioned, but ones with substantial gains like this seem fine given the company is big enough to absorb the upfront cost of development.
I've had significantly fewer issues with `cargo [b]install`ed compiled Rust programs than `npm install`ed ones. Getting nodejs/npm installed (and at an appropriate version) is not always trivial, especially when programs require different versions.
OTOH, precompiled Rust binaries have the libc version issue only if you're distributing binaries to unknown/all distributions, but that's pretty trivially solved by just compiling against an old glibc (or MUSL). Whereas `cargo install` (and targeting specific distributions) does the actual compiling and uses the current glibc, so it's not an issue.
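For the MUSL route, the standard rustup/cargo incantation looks like this (assuming an x86_64 Linux host; swap in the target triple for your architecture):

```sh
# Add the statically-linked musl target and build against it,
# sidestepping glibc version mismatches on older distros.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```

The resulting binary in `target/x86_64-unknown-linux-musl/release/` has no glibc dependency at all, so it runs on basically any Linux of the same architecture.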