shouldn't a serious heatmap (or any comparative graph for that matter) normalize the stat being displayed versus the baseline population in that bucket?
In other words, plot the percentage or average metric, not the absolute metric.
e.g. number of lotto winners per thousand people living in that grid, percentage of starred repos as a percentage of all repos, per capita alcohol consumption, average screen-time etc.
Edit: unless of course the point of the heatmap is to show the population distribution itself, in which case the metric would be number of people per square kilometer or some such.
Still would just show where people live. If nobody lives there, you've got a null (or divide by zero) spot on the map ... so you just show where people live.
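To make the normalization point concrete, here's a minimal sketch (my own illustration, not from any linked article) of per-capita bucketing, where empty buckets yield `None` rather than a misleading zero:

```python
def per_capita(counts, populations, per=1000):
    """Rate per `per` residents for each heatmap bucket; buckets with
    no population become None so they render as null cells on the map,
    not as zeros."""
    return [None if pop == 0 else count / (pop / per)
            for count, pop in zip(counts, populations)]

# e.g. lotto winners per thousand residents in each grid cell
print(per_capita([3, 0, 12, 5], [1500, 0, 40000, 2500]))
# → [2.0, None, 0.3, 2.0]
```

The `None` cells are exactly the divide-by-zero spots mentioned above: a renderer can paint them as blanks instead of cold spots.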
They even conclude directly at the end of the article that algorithmic improvements matter more than the choice of language:
> Algorithmic complexity improvements dominate language-level optimisations. Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.
Yet they still chose to put the “Rust rewrite” part in the title. I almost think it's clickbait.
The system could be set up to refund automatically if your PR wasn't reviewed within $AVERAGE_TIME_TO_FIRST_REVIEW$ days. That variable would be project-specific, could be recalculated regularly, and could even be parameterized by PR size.
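A minimal sketch of that refund rule, under my own assumed policy (names and the size-scaling heuristic are hypothetical, not from any real system):

```python
from datetime import datetime, timedelta

def review_deadline_days(base_days, pr_lines_changed, lines_per_extra_day=500):
    """Hypothetical scaling: larger PRs get proportionally more review time."""
    return base_days + pr_lines_changed // lines_per_extra_day

def refund_due(pr_opened, first_review, deadline_days):
    """Refund the contributor's fee if the PR got no review before the
    project-specific deadline. `first_review` is None if never reviewed."""
    deadline = pr_opened + timedelta(days=deadline_days)
    reviewed_in_time = first_review is not None and first_review <= deadline
    return not reviewed_in_time

# A 1200-line PR against a project averaging 7 days to first review:
opened = datetime(2024, 1, 1)
days = review_deadline_days(7, 1200)   # 7 + 2 extra days
print(refund_due(opened, None, days))  # never reviewed → True
```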
I don't think you heard what I said: I don't want to pay money to contribute to someone else's project. If I fixed your bug, I'm not paying you money for you to ignore my PR for _any_ amount of time, I'm simply not going to contribute back.
I love how the landing page is straight to the point and has zero marketing BS. It achieves the opposite of AI-written text, while still being polished.
> The contribution of this work lies in its move from critique to measurement. It proposes concrete methods: recursive summarization chains, metaphor stress-tests, resonance surveys, and noise-infused retrieval experiments. These allow researchers to track how meaning erodes over time. By integrating these methods, it outlines a pathway toward fidelity-centered benchmarks that complement existing accuracy metrics.
To me, starting to solve the problem by meticulously measuring it is a sign of a good solution.