Regular readers will be familiar with the Weather Test project that BBC environment analyst Roger Harrabin is trying to construct. This blog has previously speculated on the Weather Test and asked how likely it is to be impartial when the key players behind it have commercial and academic partnerships with each other.
But there is another question to ask about the Weather Test, and that is how likely it is to provide any value at all. After discussions with some meteorologists, a scenario has emerged that could render the whole project worthless.
In the UK there are typically around four or five major weather events per year. The problem with a project like Weather Test (if it ever sees the light of day) is how to weight the forecasts appropriately. If a competing forecaster were able to post a respectable accuracy rate across, say, 75% of the days in the test period when no major weather event occurs, yet completely miss the major events themselves, how would the results be weighted to show that, when it came to the forecasts that really matter, their accuracy was found wanting?
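A toy calculation makes the point. All of the figures below are invented purely for illustration (neither forecaster nor the hit rates correspond to any real data): one forecaster is right on most ordinary days but misses every major event, the other is somewhat worse day to day but catches most of the big ones. Which of them "wins" depends entirely on the weight the test designers choose for major-event days.

```python
# Hypothetical illustration: 365 forecast days, 5 of them "major events".
# Forecaster A: right on 90% of ordinary days (324/360), 0/5 major events.
# Forecaster B: right on 80% of ordinary days (288/360), 4/5 major events.
# All numbers are made up; the point is that the chosen weight decides the winner.

ORDINARY_DAYS = 360
MAJOR_DAYS = 5

def weighted_accuracy(ordinary_hits, major_hits, major_weight):
    """Score = (ordinary hits + weight * major-event hits) / total weighted days."""
    total = ORDINARY_DAYS + major_weight * MAJOR_DAYS
    return (ordinary_hits + major_weight * major_hits) / total

a_ordinary, a_major = 324, 0
b_ordinary, b_major = 288, 4

for w in (1, 10, 50):
    a = weighted_accuracy(a_ordinary, a_major, w)
    b = weighted_accuracy(b_ordinary, b_major, w)
    print(f"weight={w:>2}: A={a:.3f}  B={b:.3f}  winner={'A' if a > b else 'B'}")
```

With a weight of 1 (every day counted equally) forecaster A tops the table; raise the weight on major-event days to 10 or beyond and forecaster B overtakes. No weight is self-evidently the "right" one, which is exactly the difficulty.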
Choosing those weights before the test commenced would by definition be arbitrary – a bit like the adjustments and smoothing applied to temperature readings, which always seem to increase the recorded temperature. So what is the real value of such a project?
Perhaps a more effective guide to the relative accuracy of forecasters would be to turn our eyes to the commercial sector and see who retains business because of their accuracy and who loses business through an inability to pinpoint, in good time, what really matters – namely those major events that have the most bearing on commercial customers. Is there really any value to Harrabin’s little endeavour?