This is exactly the opposite of what he’s saying. Using Arc everywhere is hacking around the borrow checker; a seasoned Rust developer will structure their code in a way that works with the borrow checker. Arc has a very specific use case, and a seasoned Rust developer will rarely use it.
These extreme generalizations are not accurate, in my experience.
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage-collected languages it's more tempting to reach for the Arc types to try to write code as if it were garbage collected.
Arc<T> is all over the place if you're writing async code unfortunately. IMO Tokio using a work-stealing threaded scheduler by default and peppering literally everything with Send + Sync constraints was a huge misstep.
I mostly wind up using Arc a lot while using async streams. This tends to occur when emulating a Unix-pipeline-like architecture that also supports concurrency. Basically, "pipelines where we can process up to N items in parallel."
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
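A minimal sketch of that pattern (assuming the futures crate for StreamExt and tokio for the runtime; the Config and process names are just placeholders for illustration):

```rust
use std::sync::Arc;
use futures::stream::{self, StreamExt};

// Hypothetical read-only state shared by every worker in the pipeline.
struct Config {
    multiplier: u64,
}

async fn process(item: u64, cfg: Arc<Config>) -> u64 {
    item * cfg.multiplier
}

#[tokio::main]
async fn main() {
    let cfg = Arc::new(Config { multiplier: 10 });

    // "Pipeline where we can process up to N items in parallel":
    // each in-flight future holds its own clone of the Arc, and the
    // Config is reclaimed once the last worker finishes with it.
    let results: Vec<u64> = stream::iter(0..100u64)
        .map(|item| process(item, Arc::clone(&cfg)))
        .buffer_unordered(8) // at most 8 items in flight at once
        .collect()
        .await;

    assert_eq!(results.len(), 100);
}
```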
Arc + a work-stealing scheduler is common. But work-stealing schedulers themselves are common (e.g. libdispatch popularized them). I believe the only alternative is thread-per-core, but those runtimes aren’t very common/popular. For what it’s worth, Zig would look very similar, except their novel injectable I/O design isn’t compatible with work stealing.
Even then: while I’d agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it’s used everywhere, or that you can really do anything else if you want to leverage all your cores with minimal effort and without having to build your application specifically to deal with that.
Being possible with minimal effort doesn't mean it has to be the default. The issue I have is that huge portions of Tokio's (and other async libs') APIs have a Send + Sync constraint that destroys the benefit of LocalSet / spawn_local. You can't build an application with the specialized thread-per-core or single-threaded runtime approach even if you wanted to, because of pervasive incidental complexity.
I don't care that they have a good work-stealing event loop; I care that it's the default, that their APIs all expect the work-stealing implementation, and that they unnecessarily constrain cases where you don't use that implementation. It's frustrating, and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too, due to the aforementioned defaults.
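For context, a minimal sketch of the kind of code LocalSet / spawn_local enables (assuming tokio's rt feature): it spawns a !Send future, holding an Rc, on a current-thread runtime.

```rust
use std::rc::Rc;
use tokio::task::LocalSet;

fn main() {
    // A current-thread runtime plus a LocalSet lets you spawn !Send futures.
    let rt = tokio::runtime::Builder::new_current_thread()
        .build()
        .unwrap();

    let local = LocalSet::new();
    local.block_on(&rt, async {
        // Rc is !Send, so this future is !Send; spawn_local accepts it anyway.
        let shared = Rc::new(42);
        let shared2 = Rc::clone(&shared);

        tokio::task::spawn_local(async move {
            println!("local task sees {shared2}");
        })
        .await
        .unwrap();

        println!("still have {shared}");
    });
}
```

The complaint above is that as soon as a library API takes `F: Future + Send`, none of this can flow through it.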
> You can't build an application with the specialized thread-per-core or single-threaded runtime approach even if you wanted to, because of pervasive incidental complexity. [...] It's frustrating, and I go out of my way to avoid Tokio because of it.
At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
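For anyone curious, a rough sketch of that shape (pinning shown with the core_affinity crate purely as an illustration, and assuming tokio's rt plus I/O and timer features for enable_all):

```rust
use std::thread;

fn main() {
    // One OS thread per core, each pinned and driving its own
    // single-threaded tokio runtime. Tasks never migrate across threads,
    // so code inside can freely use Rc, RefCell, and other !Send types.
    let cores = core_affinity::get_core_ids().unwrap();

    let handles: Vec<_> = cores
        .into_iter()
        .map(|core| {
            thread::spawn(move || {
                core_affinity::set_for_current(core);

                let rt = tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .unwrap();

                rt.block_on(async {
                    // a per-core accept loop / message handling would live here
                });
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```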
I might be misremembering and the overbearing constraints might be in Axum (which is still a Tokio project). External libs are a huge problem in this area in general, yeah.
Single-threaded runtime doesn't require Send+Sync for spawned futures. AFAIK Tokio doesn't have a thread-per-core backend and as a sibling intimated you could build it yourself (or use something more suited for thread-per-core like Monoio or Glommio).
I've not explored every program domain, but in general I see two kinds of program memory access patterns.
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is a very good fit for some shared (im)mutable state, like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. E.g. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on.
Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while that can be overkill, the underlying DOD (data-oriented design) principles can still be very helpful.
Treat your data as what it is: data. Use indices/keys instead of pointers to represent relations. Keep it simple.
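A tiny sketch of what that looks like: relations stored as indices into a plain Vec rather than as Rc/Arc pointers (Entity and World are illustrative names, not from any particular ECS crate):

```rust
#[derive(Debug)]
struct Entity {
    hp: u32,
    // Relation expressed as an index, not a reference-counted pointer
    // to the other entity.
    target: Option<usize>,
}

struct World {
    entities: Vec<Entity>,
}

impl World {
    // state + input -> new state: mutate in place through a plain &mut borrow.
    fn apply_attacks(&mut self) {
        for i in 0..self.entities.len() {
            if let Some(t) = self.entities[i].target {
                self.entities[t].hp = self.entities[t].hp.saturating_sub(10);
            }
        }
    }
}

fn main() {
    let mut world = World {
        entities: vec![
            Entity { hp: 100, target: Some(1) },
            Entity { hp: 100, target: None },
        ],
    };
    world.apply_attacks();
    println!("{:?}", world.entities);
}
```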
This is something I have noticed too. While I'm by no means seasoned enough to consider myself even mid-level, some of my colleagues are, and what they tend to do is plan ahead much better, or "pedantically" as they put it; the worst thing you can end up doing is trying to change an architectural decision later on.
Interesting. I would have thought that leak-free is part of the premise, since you can very well write C or C++ with a guarantee of no use-after-free, at least, assuming you don't care about memory leaks.
The difference is that memory safety of any kind (including by leaking everything) in C/C++ requires discipline, whereas in Rust the compiler enforces it. And yes, leaking is not part of that guarantee, because leaks cannot cause corruption or undefined behavior.
With that said, while Rust does not guarantee it, it does have much better automatic memory cleanup compared to C++, because every value has only one owner, and the owner automatically drops/destructs it at the end of its scope.
Leaks are still possible with things like Box::leak or ref-count cycles, but in practice leaking tends to be explicit, rather than the result of the programmer forgetting to do something.
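For example, both of the leak routes mentioned are things you write deliberately rather than forget:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    // A cycle of strong Rc pointers keeps both nodes alive forever.
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // 1. Explicit leak: Box::leak trades the allocation for a &'static reference.
    let leaked: &'static mut u32 = Box::leak(Box::new(42));
    *leaked += 1;

    // 2. Reference-count cycle: a -> b -> a, so neither count ever hits zero.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // Both leaks are safe (no use-after-free), but the memory is never reclaimed.
    // Everything else in this function is dropped automatically at end of scope.
}
```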
So in this case it is still a form of safety that’s well-defined in Rust: cancel safety. io_uring-based libraries don’t have the same cancel-safety guarantees that everyone is used to from epoll-based libraries. In Tokio, the cancel safety of `accept` is well documented even though it works the way you’d expect, but in monoio it’s literally just documented as `Accept`, with no mention of the cancel-safety issues when using that function.
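For example, tokio documents `TcpListener::accept` as cancel-safe, which is what makes patterns like this sound (a minimal sketch, assuming tokio with the net, signal, and macros features); with a completion-based io_uring accept, the losing branch may already hold a completed connection that gets silently dropped:

```rust
use tokio::net::TcpListener;
use tokio::signal;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        // `accept` is cancel-safe in tokio: if the shutdown branch wins
        // the race, no pending connection is lost.
        tokio::select! {
            res = listener.accept() => {
                let (socket, addr) = res?;
                println!("connection from {addr}");
                drop(socket);
            }
            _ = signal::ctrl_c() => {
                println!("shutting down");
                break;
            }
        }
    }
    Ok(())
}
```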
as someone who has been working to make Nix my main system for several months now, there are some very clear areas of improvement that would make things easier:
- [any] documentation: the majority of Nix modules are undocumented, so the only way to figure out what settings are available is to find and read the source code (and even then it could be using a module to convert the definitions to the target config format, in which case there’s even more guessing, but at least the official documentation of that package gives you something)
- coding standards: lots of modules use different variations of camel case, snake case, adjective-noun, noun-adjective, etc., so it’s not clear what the correct format of a setting is for an arbitrary package
- flakes just need to become both an official feature and the default way to interact with Nix; they have too many upsides
- better errors: the current errors are just horrible to read, and you end up picking out one or two relevant spots in several paragraphs of irrelevant code snippets and stack traces
- up-to-date resources: since the official docs aren’t very beginner-friendly, third-party resources end up being the way people learn (vimjoyer has been a godsend), but half the time when you try to use those resources they’re out of date and lead to broken Nix configs; solid, updated official documentation to help new users get started would go a long way here
Seriously… flakes are still “experimental”… come on… this is starting to look bad at this point… I have been refusing to get onboard while the entire community’s preferred methods are “experimental”… I don’t want to base my entire OS install on experimental features…
The flakes situation felt like the sort of thing that the next major release would have finally solved… it’s absolutely baffling to me that this is still not official and enabled by default… I haven’t seen any decent new documentation in over a year that doesn’t use flakes.
I'd recommend taking a look at Guix as a kind of alternative implementation of the idea of Nix. It has its own usability issues, especially surrounding nonfree software and a much smaller community, but it addresses some of the issues you comment on.
Running `nix flake update` and then `nixos-rebuild switch --flake` and hitting an error is nightmare fuel. There is always a moment right after the error occurs where it feels like you'll never figure it out. I've always managed to up to this point, but man...
Incorrect: the case was appealed to the Supreme Court and the appeal was denied, so the lower court ruling held.
What the Supreme Court ruled was that Google's use of the API (which had already been determined to be copyrightable) fell under fair use in copyright law.
> Incorrect: the case was appealed to the Supreme Court and the appeal was denied, so the lower court ruling held.
Kind of; the appeal that was denied was an interlocutory appeal (an appeal before final judgment), so the lower court ruling was left in place until final resolution of the case, potentially to be revisited in any final appeal.
However, while copyrightability of APIs was raised on the final appeal, the Supreme Court sidestepped it, ruling that because Google’s use was fair use even if the API was copyrightable, it was unnecessary to decide the copyrightability question at all. So the Federal Circuit decision remains in place on copyrightability.
On the gripping hand, though, that decision really doesn’t matter much, because Federal Circuit decisions on issues outside those for which it has exclusive appellate responsibility aren’t binding precedent on trial courts (it is supposed to apply the case law of the geographic circuit that would otherwise apply, but its rulings don’t have the precedential effect that rulings of that circuit would have).
So, basically, as far as appellate case law, Oracle v. Google provides no binding precedent on copyrightability of APIs, but precedent that at least one pattern of API copying is fair use if APIs are copyrightable.
Which isn’t encouraging for anyone looking to protect an API with copyrights.
Almost all of these are non-commercial; all but one of the commercial incidents from the past 10 days are due to bird strikes in flight, and none involve airport taxiing.
> I found some trucks that were literally priced $10,000-$15,000 over MSRP, and I encountered many of the shady business practices that the FTC is now trying to ban.
Because it's not ± a few hundred dollars, it's thousands of dollars, and because they claim one price online and charge a lot more once you get there.
Yeah, that's kind of cheating... it runs Rust by compiling it to WASM and running that inside a JS runtime! Anyone thinking of using that should definitely consider just using WASI instead, since Rust, like Zig, can compile to WASM/WASI (for those who don't know, WASI is like POSIX for WASM, so WASM-compiled programs have access to a POSIX-like API and therefore don't depend on Web APIs the way the Rust-compiled-to-worker solution does).
No, because you're framing it as if they're blocking or challenging every user that comes to their site. Instead, when 90% of your malicious traffic comes from VPNs/Tor, it makes way more sense to just block or challenge those specifically, even if it causes an inconvenience for the people who use those services in a non-malicious way.