> The typical assumption is that we get a normal distribution, and so we should therefore report the average time.
That "assumption" is asking a lot and is not justified. Actually, the main source of normal distributions is from the central limit theorem where get the distribution from adding lots of random variables.
> ... therefore report the average time
There are good reasons to consider averages for any distribution, "normal" or not.
Of course, as in a classic paper by Halmos and Savage, there is the topic of sufficient statistics, which for the normal distribution is a bit amazing. Gee, maybe the OP (original post) was thinking about sufficient statistics. For the normal distribution, the sufficient statistics are the pair of sample average and sample standard deviation.
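For what the sufficiency buys you, here is a small illustrative sketch (Python with NumPy; my own construction, not from the cited paper): the normal log-likelihood of the full sample can be recovered from the sample mean and sample variance alone, via the identity sum (x_i - mu)^2 = n s^2 + n (xbar - mu)^2.

```python
# Sketch: for the normal family, the log-likelihood of the whole sample
# depends on the data only through the sample mean and sample variance.
# Illustrative numerics, not a proof of sufficiency.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(3.0, 2.0, size=500)
mu, sigma = 2.5, 1.8                    # any candidate parameters

n = len(x)
xbar, s2 = x.mean(), x.var()            # biased variance, divisor n

full = (-0.5 * n * np.log(2 * np.pi * sigma**2)
        - ((x - mu)**2).sum() / (2 * sigma**2))
from_stats = (-0.5 * n * np.log(2 * np.pi * sigma**2)
              - n * (s2 + (xbar - mu)**2) / (2 * sigma**2))

print(np.isclose(full, from_stats))     # True: (xbar, s2) carry it all
```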
> If you have normal distributions, you can mathematically increase the accuracy of the measure by taking more samples.
This is justified by the law of large numbers, both the strong and weak versions, where we need not assume a normal distribution. Texts where the law of large numbers is proven in great detail and generality include those by Loève, Chung, Neveu, and Breiman, among others.
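A quick sketch of the law of large numbers at work on a distinctly non-normal distribution (Python with NumPy; the choice of the exponential is just for illustration): the sample averages tighten around the true mean as the sample size grows.

```python
# Sketch of the law of large numbers: averages of exponential draws
# (a skewed, non-normal distribution) converge to the true mean.
import numpy as np

rng = np.random.default_rng(2)
true_mean = 1.0                         # exponential with scale 1

for n in (10, 100, 1_000, 10_000, 100_000):
    avgs = rng.exponential(scale=true_mean, size=(200, n)).mean(axis=1)
    # Worst error over 200 independent runs shrinks as n grows.
    print(n, np.abs(avgs - true_mean).max())
```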
> It is not possible, in a normal distribution, to be multiple times the standard deviation away from the mean.
Sorry, with the normal distribution, there is positive probability of samples, positive or negative, with finite absolute value as large as you please.
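A short sketch making the point numerically (Python with SciPy assumed): the two-sided tail probability at k standard deviations is tiny for large k but strictly positive for every finite k.

```python
# Sketch: normal tails are thin but never zero. The probability of
# landing k or more standard deviations from the mean stays positive.
from scipy.stats import norm

for k in (1, 2, 3, 5, 10):
    p = 2 * norm.sf(k)                  # two-sided tail P(|Z| >= k)
    print(f"{k} sigma: {p:.3e}")        # small, but never exactly 0
```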