I reduced (incremental) Rust compile times by up to 40% (coderemote.dev)
51 points by davikr on March 24, 2024 | 17 comments


Maybe I’m not reading carefully enough, but if you have to specify non-pure macros yourself, how do you know when you didn’t do that correctly?


I think that you'd either need a way to programmatically recognise the ones that are pure, or to make it opt-in by each crate.


I was thinking more like hazard detection, where it spins off side jobs re-running the things that are in the cache and propagates the new output if it doesn't match. It would require a lot more processing, but would hopefully still allow for faster builds.

Having marked pure functions would be ideal though, no doubt.


In another comment I mention an alternative to "believing" the proc macro author.

https://news.ycombinator.com/item?id=39808525


Hope this is upstreamed at some point. Good to see there are still a lot of gains to be made on Rust compile times.


Wouldn't this be quite difficult to upstream to users in the state presented?

At the least you'd want to make caching opt-in per macro unless something special was enabled. Forcing the end developer to blacklist incompatible macros feels like an anti-pattern.


Man, I did a bunch of work getting a whole extra crate working, just for one proc macro to extract nested function ASTs from Rust files,

and on the 2nd day of work, after everything finally worked, it complained because I wanted to return the AST instead of a token stream.

Joke's on me because I didn't know you have to use proc macros to transform code, and I wound up just rewriting it as a regular function. Hopefully that doesn't screw up what I want to do, which is basically compile-time correctness checking, but it probably will…

Feels like every time I try to build compile-time functionality in Rust, I wind up getting sucked into endless type-level shenanigans, and it works, as long as you can tolerate nigh-unreadable error messages.

WTB “comptime” Rust
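
For what it's worth, compile-time correctness checks like the parent describes can often be done with a plain const fn instead of a proc macro, since panicking during const evaluation fails the build. A minimal sketch, with a made-up rule (not the parent's actual code):

    // Hypothetical rule: reject identifiers longer than 8 bytes at compile time.
    const fn check_ident(s: &str) -> &str {
        if s.len() > 8 {
            panic!("identifier too long"); // aborts compilation when hit in const evaluation
        }
        s
    }

    // Evaluating in a `const` item forces the check to run at compile time,
    // so a bad input becomes a build error rather than a runtime panic.
    const SHORT: &str = check_ident("short_id");

    fn main() {
        println!("{SHORT}");
    }

It obviously doesn't cover everything a proc macro can do, but for simple invariant checks it sidesteps the token-stream plumbing entirely.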


Given the conclusion that a painfully large amount of time is spent re-evaluating procedural macros, and that this was largely mitigated by adding a cache, the linked discussion about "const" macros on the Rust language design forum seems to be misjudging the near-term case for such a feature. They are concentrating on security use cases (which frankly seem a bit dubious to me) and on changing where macros are packaged (an argument I hadn't heard before, but it is in the thread; to those of us in the peanut gallery wearing our separate-compilation hats and prototype fan club hoodies, that seems like a far-too-entrenched mistake in Rust's design to bother fixing at this level). Being able to safely and automatically know which macros can be cached and which can't--particularly as I'd pray that the big non-const use case listed in the article, including a file as bytes, either already isn't slow or could trivially be made not slow--seems to be the real banner feature of standardizing const macros.


Reading the article made me realize that we could do something that might or might not be a net positive: assume that proc macros are idempotent and immediately use the cached version, while still performing the compilation and expansion as we do now. If the result of the new run matches, we mark the macro as "likely idempotent". If not, we mark it to never attempt caching for that crate version again (globally on a dev's machine, even!) and invalidate all of the units that assumed idempotency. This would pessimize incremental compilation performance for the first compilation where the "optimization" fails, but would allow all other cases to compile faster (with no resource utilization improvements). This approach doesn't require people to annotate their crates in any way. As an improvement, we could add annotations for known non-idempotent crates so they skip this behaviour entirely. All of this would get around the issue of "what if a crate is incorrectly marked as idempotent?".
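
Here is a minimal sketch of the bookkeeping that strategy implies, as a standalone structure rather than anything wired into rustc (the names, and using String as a stand-in for a token stream, are mine):

    use std::collections::{HashMap, HashSet};

    // Speculative expansion cache: reuse the last output, re-run the macro anyway,
    // and permanently stop caching any macro whose output ever diverges.
    #[derive(Default)]
    struct SpeculativeCache {
        cached: HashMap<String, String>, // macro id -> last expansion (String stands in for a token stream)
        non_idempotent: HashSet<String>, // macros proven to produce differing output
    }

    impl SpeculativeCache {
        fn expand(&mut self, id: &str, run_macro: impl Fn() -> String) -> String {
            if self.non_idempotent.contains(id) {
                return run_macro(); // proven non-idempotent: never speculate again
            }
            // In rustc the cached output would be handed out immediately and this
            // re-run would happen off the critical path; here we only model the bookkeeping.
            let fresh = run_macro();
            match self.cached.insert(id.to_string(), fresh.clone()) {
                Some(old) if old == fresh => {} // outputs match: "likely idempotent"
                Some(_) => {
                    // Divergence: stop caching this macro for this crate version, and
                    // invalidate every unit that was compiled from the stale output.
                    self.non_idempotent.insert(id.to_string());
                    self.cached.remove(id);
                }
                None => {} // first expansion: nothing to compare against yet
            }
            fresh
        }
    }

    fn main() {
        let mut cache = SpeculativeCache::default();
        let out1 = cache.expand("serde_derive 1.0::Serialize", || "expanded tokens".to_string());
        let out2 = cache.expand("serde_derive 1.0::Serialize", || "expanded tokens".to_string());
        assert_eq!(out1, out2); // deterministic macro: the speculation pays off
    }

The first compilation after a divergence is indeed pessimized, exactly as described, but every deterministic macro gets the fast path with zero annotations.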


Has anyone built a tool that suggests how to split a crate into sub-crates (or does it automatically) to help mitigate compile times?


I don't think so, and I'm convinced that this could be done automatically as part of rustc, with the negative side effect that small changes affecting the splitting behaviour would cause wild compilation time swings that are hard to debug.


That's what heuristics are for. Just because re-splitting could improve incremental compilation doesn't mean it should happen now.

---

That said, a one-off tool is way lower friction than modifying rustc / building something everyone is subject to.

I think I'd personally prefer it to behave like a linter.


I think that linting behaviour could eventually be landed in rustc itself, but as it happens I am working on a linting tool, so I might just prototype it there first!

The benefit of telling users "you can split this section off into its own crate" is that such a "break" can never be silent.


Awesome. Is there any chance of this change, or something like it, getting merged into the rustc compiler?


Marketing talk

12s down to 6s is still abysmally slow for incremental

You change a value 10 times and you've already lost a minute.


The path from 12 seconds to below 6 seconds has to cross through the 6-second mark first.


People have lost the meaning of incremental compilation, sad day for modern computing



