> You cannot even use Nvidia/AMD/Intel DGPUs with AS Macs
afaik you technically can, except that m1/m2 force pcie bars to be mapped as device memory (forbids unaligned r/w), so most gpu software (and drivers) that just issue memcpys to vram and expect the fabric to behave sanely will sigbus. it's possible to work around this, and some people indeed have with amdgpu, but it'd absolutely destroy performance to fix in the general case
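for the curious, a rough sketch of the shape of the workaround (not anyone's actual driver code; `dst` is assumed to already point into a mapped BAR, setup not shown):

```rust
use std::ptr;

// replace memcpy with strictly aligned, word-sized volatile stores so a
// device-memory mapping never sees an unaligned or vectorized access
unsafe fn copy_to_bar_u32(dst: *mut u32, src: &[u32]) {
    for (i, &word) in src.iter().enumerate() {
        // one aligned 32-bit store at a time; a plain memcpy may emit
        // unaligned accesses that fault (sigbus) on device memory
        unsafe { ptr::write_volatile(dst.add(i), word) };
    }
}
```

doing every transfer a word at a time like this is exactly why fixing it in the general case tanks performance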
so it doesn't really have anything to do with apple themselves blocking it; it's rather a niche implementation detail of the AS platform that's essentially an erratum
don't see why they would care to put out docs on it considering macos doesn't even permit kexts anymore; there'd be no third-party gpu drivers anyways. i figured it was obvious we're talking in the context of running linux on these things, given the parent topic.
> There's also an Apple VP saying unified memory on AS doesn't leave room for DGPUs and separate VRAM
can you link to this? my intuition is that they're speaking about whether apple would include dgpus inside AS systems like they used to with nvidia and amd chips in macbooks, which i agree wouldn't make much sense atp
There were uses for DGPUs in macOS before AS; those uses could have continued, but Apple left no choice. It's weird how continuity wasn't as important to Apple in this case.
Whatever kernel or hardware facilities are available to Apple's own GPU driver should be available to other GPU drivers. Anything else is myopic thinking, which I realize might be common for Apple, but it is also monopolistic.
Linux also allows userspace drivers, I think, at some performance cost, but I don't think DGPU use can be made performant because of AS hardware choices.
I am predisposed to ignoring link requests for things that can be easily googled.
> Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.
Where are the “subtle linguistic distinctions”? These types do two completely different things. And neither is even capable of being used in a multithreaded context, due to `!Sync` (and `!Send` for `Rc` and the `Ref` guards).
I did say "runtime borrow checking", i.e. using them together. Example: `Rc::new(RefCell::new(value))`; overlapping `borrow_mut()` calls on it will panic at runtime. Maybe I should have used the phrase "dynamic borrowing"?
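A minimal sketch of what I mean (made-up value, any type works):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    let value = Rc::new(RefCell::new(0));
    let _first = value.borrow_mut();
    // a second mutable borrow while the first guard is still alive is
    // the runtime analogue of a compile-time borrow error:
    let _second = value.borrow_mut(); // panics: "already borrowed: BorrowMutError"
}
```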
You don't need different threads. I said concurrency, not multithreading. Interleaving tasks within the same thread (in an event loop, for example) can cause panics.
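A contrived single-threaded sketch, with a plain closure standing in for an interleaved task (no async runtime needed to show the point):

```rust
use std::cell::RefCell;

fn main() {
    let log = RefCell::new(Vec::<&str>::new());
    // "another task" that happens to touch the same cell
    let callback = || log.borrow_mut().push("interleaved");
    let snapshot = log.borrow(); // a shared borrow is still live here...
    callback(); // ...so the re-entrant borrow_mut panics at runtime
    drop(snapshot);
}
```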
I understand what you meant (but note that allocating an `Rc` isn’t necessary; `&RefCell` would work just fine). I just didn’t see the “subtle linguistic distinctions”, and still don’t… maybe you could point them out for me?
imo if you're sprinkling around `unsafe` in your codebase "liberally", you're holding it wrong. In general it's really not that hard to encapsulate most unsafety into a wide-contract abstraction; I’d argue where Rust really shines is when you take advantage of the type system and static analyzer to automatically uphold invariants for you
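a sketch of the kind of encapsulation i mean (`AsciiString` is a made-up example type): the single `unsafe` line lives behind a checked constructor that establishes the invariant, so callers never write `unsafe` themselves

```rust
pub struct AsciiString(Vec<u8>);

impl AsciiString {
    /// the invariant (every byte is ASCII) is established exactly once, here
    pub fn new(bytes: Vec<u8>) -> Option<Self> {
        if bytes.iter().all(u8::is_ascii) {
            Some(AsciiString(bytes))
        } else {
            None
        }
    }

    /// sound: ASCII is always valid UTF-8, and `new` checked every byte
    pub fn as_str(&self) -> &str {
        unsafe { std::str::from_utf8_unchecked(&self.0) }
    }
}
```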
Glancing at the Cargo.toml, the package doesn't define any features anyways. `cargo b --no-default-features` only applies to the packages you're building, not their dependencies; anything else would lead to very unpredictable behavior.
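if you do want a dependency built without its default features, afaik the supported way is to opt out in your own manifest (crate and feature names here are made up):

```toml
# disabling a *dependency's* default features happens in the dependent
# crate's Cargo.toml, not via the --no-default-features flag
[dependencies]
some-crate = { version = "1", default-features = false, features = ["minimal"] }
```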
You've managed to miss the entire point of using a union: the value is either a success payload or an error value, never both.
You can't encode that mutual exclusivity if you return a std::pair or std::tuple. That's exactly why std::expected, std::variant, and Rust enums exist: to make that constraint explicit in the type system.
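A sketch in Rust terms, since it's the same idea as std::expected (`Outcome` is a made-up type):

```rust
// sum type: exactly one variant exists, and the compiler enforces it
enum Outcome {
    Success(u32),
    Failure(String),
}

fn use_outcome(o: Outcome) -> u32 {
    match o {
        Outcome::Success(v) => v,
        Outcome::Failure(e) => panic!("{e}"),
        // no third arm: (payload AND error) is unrepresentable
    }
}

// product type: both fields always exist, and nothing stops a caller
// from reading the payload while never looking at the error
fn use_pair(p: (u32, Option<String>)) -> u32 {
    p.0
}
```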
Yeah that makes sense. My assertion was definitely incorrect, but also not really what I was trying to describe. My argument is that they are not conceptually different. The implementation is different, in that a union occupies the same memory for either value, but whether they occupy the same memory or not, you have to check the value to determine whether it is an error. The compiler can force you to handle multiple return values the same way it can force you to check a variant.
> My argument is that they are not conceptually different.
Your argument is nonsensical on its face; it could not be a more different way to compose types.
> The implementation is different
It's also largely irrelevant; the commenter above makes no mention of the memory state, because that's not the point. The point is that the states are exclusive.
> The compiler can force you to handle multiple return values the same way it can force you to check a variant.
No it cannot, specifically because it's a product type (or it would be if it were a type at all), and there is no relation between the fields. At most you can apply some mangy heuristics and hope they don't fuck up too often.
A sum type tells you in no uncertain terms that you can have only one of the types at any given time.
And that's not accounting for the fact that a result is a reified value, which an MRV is not: it can be passed around and manipulated like any other value, and operated on as a thing of its own.
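A quick sketch of what "reified" buys you (`parse` here is just a made-up wrapper):

```rust
fn parse(s: &str) -> Result<i32, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    // collect the results themselves, errors and all; an MRV pair
    // would have dissolved at the call boundary instead
    let results: Vec<_> = ["1", "x", "3"].iter().map(|s| parse(s)).collect();
    // then operate on them as values: keep the successes, double them
    let doubled: Vec<i32> = results.into_iter().flatten().map(|n| n * 2).collect();
    println!("{doubled:?}"); // [2, 6]
}
```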
I don't know that there's whining about "having to handle errors" in principle; it's pretty clearly a complaint about the syntax and semantics of doing so.
Some languages even make omitting error handling impossible! (e.g. Result sum types). None have anywhere near the amount of "whining" Go seems to attract
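For instance, a minimal Rust sketch (`might_fail` is made up): you can't touch the success value without going through the error case, and dropping the Result on the floor at least trips `#[must_use]`:

```rust
fn might_fail() -> Result<u32, String> {
    Ok(42)
}

fn main() {
    // let n = might_fail() + 1; // does not compile: a Result is not a u32
    match might_fail() {
        Ok(n) => println!("got {n}"),
        Err(e) => eprintln!("had to handle this: {e}"),
    }
}
```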
Codex-like agents are cool, but as someone with even just a passing interest in compilers, I absolutely hate this attempt at appropriating the word "codegen".