I would like to see an independent forecast comparison tool similar to Forecast Advisor that evaluates numerical weather models. However, obtaining reliable ground-truth data on a global scale is a challenge.
Since Open-Meteo continuously downloads every weather model run, the most recent hours of the resulting time series closely resemble assimilated gridded data: each new run is initialized from freshly assimilated observations, so its first hours are effectively an analysis. GraphCast relies on the same assimilated data to initialize each run. By comparing older forecasts for a given timestamp against the assimilated data that becomes available later, we can measure how far a weather model deviates from the "truth", without needing weather station data for comparison. The same principle is used to validate GraphCast itself.
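As a rough sketch of this verification idea, the snippet below scores stored forecasts against the hour-0 values of later runs, which stand in for the assimilated "truth". All data here is synthetic, and every name (`runs`, `truth`, `analysis`) is invented for illustration; real input would be archived Open-Meteo model runs for one location.

```python
import numpy as np

# Hypothetical setup: for one location, `runs` maps each model run's
# initialization hour (as an integer offset) to its hourly temperature
# forecast. Real data would come from stored Open-Meteo model runs.
rng = np.random.default_rng(42)
truth = 15 + 8 * np.sin(np.arange(240) * 2 * np.pi / 24)  # synthetic "weather"
runs = {}
for init in range(0, 192, 6):  # a new run every 6 hours
    lead = np.arange(48)
    # Forecast error grows with lead time (synthetic stand-in for a real model)
    runs[init] = truth[init:init + 48] + rng.normal(0, 0.1 * (1 + lead / 12))

# The "assimilated" reference: hour 0 of each run closely tracks the analysis,
# so stitch those together as ground truth instead of station observations.
analysis = {init: fc[0] for init, fc in runs.items()}

# Score every run against analysis values that only became available later
rmse_per_lead = {}
for lead in range(0, 48, 6):
    errors = [
        fc[lead] - analysis[init + lead]
        for init, fc in runs.items()
        if (init + lead) in analysis
    ]
    rmse_per_lead[lead] = float(np.sqrt(np.mean(np.square(errors))))

for lead, rmse in rmse_per_lead.items():
    print(f"+{lead:2d} h lead time: RMSE = {rmse:.2f} °C")
```

Because every run contributes both forecasts and a near-analysis anchor point, the same archive serves as model output and reference at once, which is exactly what makes a station-free comparison possible.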
Moreover, storing past weather model runs can improve forecasts. For instance, if a model consistently predicts temperatures that are too high under a specific large-scale weather pattern, a machine learning model (or a simple multilinear regression) can be trained to correct that bias. Such a correction can be computed for a single location with minimal computational effort.
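A minimal sketch of such a multilinear bias correction, assuming past forecasts and the later assimilated values are already available for one location; the predictors, coefficients, and the synthetic bias below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data for one location: past forecasts plus a
# large-scale predictor (here, mean sea level pressure as a crude stand-in
# for the weather pattern), and the later "truth" from assimilated runs.
n = 1000
forecast_t2m = rng.normal(15, 8, n)   # model's 2 m temperature forecast
mslp = rng.normal(1013, 10, n)        # large-scale pattern proxy
# Synthetic bias: this toy model runs 1.5 °C too warm, more so at high pressure
truth_t2m = forecast_t2m - 1.5 - 0.08 * (mslp - 1013) + rng.normal(0, 1, n)

# Multilinear regression: truth ≈ w0 + w1 * forecast + w2 * mslp
X = np.column_stack([np.ones(n), forecast_t2m, mslp])
coef, *_ = np.linalg.lstsq(X, truth_t2m, rcond=None)

def corrected(t2m: float, pressure: float) -> float:
    """Apply the learned bias correction to a new forecast value."""
    return float(coef @ np.array([1.0, t2m, pressure]))

print(f"raw 20.0 °C at 1030 hPa -> corrected {corrected(20.0, 1030.0):.1f} °C")
```

Fitting three coefficients per location is cheap enough to rerun after every model upgrade, which matters because bias patterns can change whenever the underlying model does.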