> I suspect the error in variance from assuming it IS normally distributed is far less than you're suggesting. [...] Like even if I'm off by a factor of 2 [...]
You would be deeply mistaken. Robust statistics texts (e.g. Wilcox) are full of examples of distributions that have zero skew and are nearly indistinguishable by eye from a Gaussian, but where the differences in variance, and thus in the conclusions drawn, are profound. Heck, a sample from a Cauchy distribution can look fairly innocuous, but its variance is not even defined (effectively infinite, and thus meaningless).
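To make the Cauchy point concrete, here is a small sketch (my own illustration, not from Wilcox): the sample variance of Cauchy draws never settles down as the sample grows, while a robust scale estimate like the IQR converges to the true value of 2.0 for a standard Cauchy.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (1_000, 100_000):
    cauchy = rng.standard_cauchy(n)
    # Sample variance is dominated by a few extreme draws and keeps
    # jumping around as n grows: the population variance is undefined.
    var = cauchy.var()
    # A quantile-based scale estimate converges: the true IQR of a
    # standard Cauchy is exactly 2.0 (quartiles at -1 and +1).
    q75, q25 = np.percentile(cauchy, [75, 25])
    print(f"n={n:>7}: sample variance = {var:14.1f}, IQR = {q75 - q25:.3f}")
```

Run it a few times with different seeds: the IQR column barely moves, the variance column swings over orders of magnitude.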
And even if you have enough data that statistical issues are not a concern, the problem is that most summary metrics (like effect sizes, heritability, etc.) are developed under the assumptions of near-normality AND minimal skew, so that the effect size can be interpreted as something about the overlap and/or positioning of the bulks of the distributions. But when skew and long tails are involved, the bulk itself is what gets distorted, making most such metrics largely uninterpretable.
I.e. it isn't just that variance is hard to measure accurately here, it is that, even if measured accurately, variance isn't actually a meaningful metric here.
The few metrics that do remain interpretable in such cases tend to be those like the highest posterior density interval (HPDI) in Bayesian methods, which look at actual distribution shapes and try to quantify a bulk in a sensible location. Likewise, meaningful effect sizes for skewed and long-tailed data need to actually take distribution overlap in meaningful regions into account. Heritability does not do this, as it is an explained-variance metric.
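One overlap-based effect size of the kind described is the probability of superiority, P(B > A), which is defined directly from the two distributions and stays interpretable under skew. A sketch of my own, comparing it against a variance-based effect size (Cohen's d) on two right-skewed log-normal groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two heavily right-skewed "groups": same shape, shifted on the log scale.
a = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)
b = rng.lognormal(mean=0.5, sigma=1.0, size=50_000)

# Cohen's d: a variance-based effect size. The long right tail inflates
# the pooled SD, so d says little about where the bulks actually sit.
pooled_sd = np.sqrt((a.var() + b.var()) / 2)
d = (b.mean() - a.mean()) / pooled_sd

# Probability of superiority P(B > A): estimated here by pairing draws.
# It is a direct statement about overlap, regardless of distribution shape.
ps = (b[:10_000] > a[:10_000]).mean()

print(f"Cohen's d = {d:.2f}, P(B > A) = {ps:.2f}")
```

The point of the sketch: P(B > A) has the same plain reading ("a random member of B exceeds a random member of A this often") no matter how skewed the data are, while d inherits all the problems of variance discussed above.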